
2013 Global: Global Evaluation Report Oversight System (GEROS) Global Meta-Evaluation Report of 2012 Evaluation Reports



Author: Universalia and UNICEF

Executive summary

Background:

UNICEF’s Global Evaluation Quality Assurance System, which ensures that evaluations managed or commissioned by UNICEF meet high quality standards, has its roots in the Global Evaluation Compact endorsed in 2009, which commits the Evaluation Office (EO) and the Regional Offices (ROs) to collaborate in strengthening the evaluation function within UNICEF. The Evaluation Policy and Executive Directive set out clear roles and responsibilities, as well as management measures, to strengthen the evaluation function across the organisation.

Because of the decentralized nature of UNICEF, the majority of evaluations supported by the agency are managed at the Country Office level. While this ensures that the evidence generated is relevant to local contexts – and therefore more likely to inform national policies for children – such decentralization makes it harder to establish a system that ensures consistently good quality, credibility, and utility across the organisation. To address these variations across regions and Country Offices in how M&E functions are carried out, the Global Evaluation Report Oversight System (GEROS) was developed to provide greater oversight of the evaluation function within UNICEF. Following three years of implementation, GEROS underwent a further review in 2012 involving the Regional Offices and the EO.

Purpose/Objective:

The main objectives of GEROS are to provide senior managers with an independent assessment of the quality of their evaluation reports; to strengthen internal evaluation capacity and thereby improve the quality of evaluations; to contribute to corporate knowledge management and organizational learning; and to report to senior management on the quality of evaluation reports, generating key performance indicators that point to the improvements, strengths and weaknesses of evaluation reports overall.

Methodology:

The quality review process and this meta-evaluation cover all of the 2012 reports submitted to the Global Evaluation Database before May 2013. Each evaluation was reviewed using a tool based on the UNEG Evaluation Reports Standards, adapted for UNICEF. The tool contains 6 sections and 22 sub-sections, comprising a total of 58 guiding questions. Each section in the review tool was rated on a four-point performance scale: outstanding/best practice; highly satisfactory; mostly satisfactory; and unsatisfactory. Each report was also given an overall rating on the same scale.
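
The report does not specify how individual ratings are combined into the percentage figures quoted in the findings below. Purely as an illustrative sketch, the following Python snippet shows one way overall report ratings on the four-point scale could be tallied into percentage shares; the function name and the sample ratings are hypothetical and are not drawn from the GEROS tool itself.

    from collections import Counter

    # The four levels of the GEROS performance scale described above.
    SCALE = [
        "outstanding/best practice",
        "highly satisfactory",
        "mostly satisfactory",
        "unsatisfactory",
    ]

    def summarise_ratings(overall_ratings):
        """Tally overall report ratings and return the share of reports
        at each level of the scale, as percentages."""
        counts = Counter(overall_ratings)
        total = len(overall_ratings)
        return {level: round(100 * counts[level] / total, 1) for level in SCALE}

    # Hypothetical ratings for five reports, for illustration only.
    print(summarise_ratings([
        "highly satisfactory",
        "mostly satisfactory",
        "highly satisfactory",
        "unsatisfactory",
        "outstanding/best practice",
    ]))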

Findings and Conclusions:

The quality of reports increased dramatically between 2011 and 2012. The share of good quality reports - rated either outstanding/best practice or highly satisfactory - increased by 20 percentage points, from 42% to 62%. In 2012 only 4 reports (8%) were rated unsatisfactory, while a larger number (30%) were rated mostly satisfactory. For the most part, the lower ratings were attributed to reports that failed to address a large portion of the required GEROS criteria and questions.

Evidence drawn from this and previous review cycles shows an unmistakable trend towards improved report submissions. The quality of reports did not vary dramatically across sections of the review tool, although some sections came out stronger than others.

Recommendations:

There are 3 major recommendations:

1. UNICEF should continue to systematically communicate GEROS results as part of its effort to incentivize managers to engage with the system, and should also communicate the specific GEROS criteria to evaluators.

2. UNICEF's internal learning systems around evaluation should continue to be strengthened; the GEROS system can play a role in informing the continuous improvement of that learning system.

3. UNICEF should continue to review and improve the standards used in the GEROS process, even if this risks compromising the comparability of GEROS data from year to year.

Lessons Learned:

According to UNICEF, lessons learned are contributions to general knowledge that refine or add to common understanding, and should not be merely a repetition of common knowledge.

1. The general characteristics of a strong evaluation report include clear and direct treatment of the evaluation criteria, good structure, and logical linkages threaded throughout. Thus, while content is important, the presentation of that content is just as important.

2. Monitoring the quality of evaluations through a GEROS-type system improves the quality of evaluations.

3. The more that UNICEF makes clear to evaluators the priorities and foci of its evaluation system, the more likely it is that evaluation reports will meet those standards.

4. Evaluators usually attempt to satisfy the ToRs and adhere to UNICEF’s evaluation standards. If evaluators are aware that they will be judged according to the GEROS standards and know what these standards specify, they will strive to meet these criteria and thereby produce better evaluation reports.

5. Strong evaluation reports depend upon appropriate time being allocated to analysis and writing.


