
2012 Global: 2012 Meta-Evaluation: UNICEF Global Evaluation Report Oversight System - Quality Review of 2011 Evaluation Reports



Authors: Joseph Barnes, Hatty Dinsmore, Sadie Watson, Annalize Struwig

Executive summary

Background:

UNICEF holds a long-standing commitment to independent assessment of the quality of evaluation reports produced by its country and regional offices around the world, as well as its headquarters divisions. This is the third report to use the current methodology to assess the quality of evaluation reports against UNICEF standards.
This quality review process covered all 2011 evaluation reports submitted to the UNICEF Global Evaluation Database by the cut-off date of April 2012. Reports are assessed against the UNICEF adaptation of the United Nations Evaluation Group (UNEG) evaluation report standards.

Purpose/Objective:

Specific objectives of the review are to: 1) review and rate (with justifications) the quality of the main elements of evaluation reports; 2) provide constructive feedback to improve future evaluations; 3) provide a global analysis of key trends; and 4) provide actionable conclusions and recommendations to improve the evaluation function.

Methodology:

Reports were selected where they met the UNICEF standard definition of an evaluation, resulting in 86 full reviews. Each review was undertaken by an expert familiar with the UNICEF evaluation function who had completed a dedicated induction process. Three levels of quality assurance were applied: a basic completeness check; sampled peer review; and a right-to-challenge option exercised by the UNICEF Evaluation Office.
The full review tool is presented in the Annexes. It was originally co-designed by UNICEF and IOD PARC in 2010 and redesigned in 2011 based on experience gained from implementing the approach. Each of the 58 questions, each of the 6 sections, and the overall report are given a rating of either 'very confident', 'confident', 'almost confident', or 'not confident'. In addition to ratings, commentary is provided for each section and sub-section, suggestions for future improvement are provided for each section, and executive feedback is provided for each section and for the overall report.
The review process generated an extensive dataset to inform the trend analysis, including 5,829 quantitative ratings and 3,654 sections of qualitative text. To distil the key findings from these data, a multi-stage analysis was adopted, consistent with the previous two meta-evaluations.
Time constraints on the depth of data analysis were mitigated as far as possible by triangulating quantitative and qualitative patterns in the data. The approach assesses only the evaluation report, not the evaluation process itself. It is also limited to identifying 'headline' findings; more nuanced or infrequently occurring issues may remain for individual readers to find within the reviews themselves.
For the first time, evaluation reports submitted by the Evaluation Office were considered separately from those submitted by other corporate departments and offices. Only one country-led evaluation was included in the sample frame.

Findings and Conclusions:

Overall, the review found a year-on-year improvement in performance: more reports were rated as meeting UNICEF Evaluation Standards (42% in 2011 versus 40% in 2010) and fewer reports were identified as having fundamental problems (23% in 2011 versus 30% in 2010). At the top end of the scale, four reports were rated as very confident overall, and ten reports had at least one individual section rated as very confident. These data reflect a positive trend across the three-year period.

Conclusions were developed by analysing the findings for trends in underlying factors that contributed to the performance of evaluation reports. This analysis was grounded in the concept of ‘confidence’.

Recommendations:

RECOMMENDATIONS FOR THE EVALUATION OFFICE
The recommendations were generated through the analysis of the core evaluation team. See the report for more detail.
- Report quality is improving: focus on accelerating this through synthesising lessons learned and cross-fertilising these insights between regions
- Provide quick reference guidance on methodologies, limitations and ethics
- Investigate what institutional conditions are driving inconsistent quality within and across evaluation reports

RECOMMENDATIONS FOR REGIONS AND COUNTRIES
- Efforts to improve quality are showing signs of working in some places: find out what is working well and keep doing it
- Increase the proportion of reports rated as confident by going the last mile with structure, language and presentation

Lessons Learned:

Lessons learned were generated through the analysis of the core evaluation team and revised based on comments from UNICEF.
Clear language and structure are as important as quality content. Qualitative analysis of feedback from the reviews suggests that the structure and language of reports can have a large impact on their confidence rating: good content only has value to the reader if it can be understood in context. Quantitative analysis suggests a strong correlation between, for instance, the structure of a report, the quality of its executive summary, and its overall rating.
Large numbers of evaluations do not preclude quality reports. It appears that the argument for fewer evaluations made in previous meta-evaluations was wrong: the experience of ESARO suggests that it is possible to deliver both quantity and quality. The lesson for the meta-evaluation process is the value of longitudinal trends in developing insights and recommendations.
- Experiment with ways to reduce the number of small sub-national and output-level evaluations



Full report in PDF


Report information

Year: 2012
Region: Global
Type: Meta-Evaluation
Language: English