
Evaluation report

2006 Global: UNICEF Evaluation Report Quality Review 2006



Author: Alexandra Roth, UNICEF Evaluation Office

Executive summary

Background
The UNICEF Evaluation Office (EO) released the Evaluation Report Standards in September 2004.  The 22 Evaluation Report Standards are to be included with every Terms of Reference for an evaluation and describe what UNICEF requires of every evaluation report commissioned by the organization.  The Standards also established a rating system that provides a transparent and objective tool for monitoring and measuring evaluation report quality.  An annual quality review describes the quality status of Country Office (CO)-commissioned evaluation reports. This is the third Annual Review.

Purpose/Objective
The Annual Review 2006 provides preliminary results on recent initiatives to improve the quality of Country Office-commissioned evaluations and assists offices in adjusting their work plans accordingly. The report also serves as an annual update to the Evaluation Committee.

This review identifies areas of strength and weakness in general categories.  It mainly examines evaluation reports as a whole, with additional analysis by region, topic, and key individual standards.

Methodology
The criteria by which quality is judged are the UNICEF Evaluation Report Standards. Each evaluation report is rated on a five-point scale against each of the 22 standards, and a weighted average of these ratings is computed for an overall rating. The typical DAC evaluation criteria of efficiency, effectiveness, impact, sustainability and relevance are not considered, as they apply to programme or project evaluations rather than to meta-evaluations of this type.
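
The overall-rating computation described above is a simple weighted average. The following minimal Python sketch illustrates it; the equal weights, the example scores, and the function name are illustrative assumptions, as the actual weighting scheme is not given in this summary.

def overall_rating(ratings, weights):
    # Weighted average of per-standard ratings on a five-point scale.
    # ratings: 22 scores, one per standard, each between 1 and 5
    # weights: 22 non-negative weights (assumed equal here; the actual EO weights are not given)
    assert len(ratings) == len(weights) == 22
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Example: with equal weights the overall rating reduces to a plain average.
ratings = [4, 3, 5, 4, 4, 3, 2, 4, 5, 3, 4, 4, 3, 5, 4, 3, 4, 4, 3, 4, 5, 4]
weights = [1.0] * 22
print(round(overall_rating(ratings, weights), 2))  # prints 3.82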

As of December 2006, UNICEF Country Offices had sponsored or conducted 1622 evaluations in the years 2000-2006.  The Annual Quality Review focuses on Country Office evaluation quality only, since it is the COs that implement most evaluations (93% of all submissions to the Evaluation and Research Database (ERD) were from COs as of December 2006). In addition, only CO-led evaluations exist in sufficient regional and thematic diversity to permit the analysis featured in this report. Finally, it is ultimately CO evaluation quality that best reflects whether capacity building for evaluation has succeeded.

Sample
Of the 1622 evaluations from 2000-2006, 601 reports (37%) had been submitted to the ERD as of December 12, 2006. Of these submitted evaluations, 313 reports were rated, conducted by a Country Office, and written in English (up to 2006 the EO only had capacity to quality-review English-language reports). These 313 evaluations constitute the sample for this review, representing roughly 20% of all 1622 CO evaluations sponsored or conducted in the period 2000-2006.

Findings
Overall, ratings for CO, Regional Office (RO), and HQ evaluations from 2000-2006 range from Very Good to Poor. The majority (58%) of reports fell in the Satisfactory range, while 15% were rated Very Good and 26% Poor.

Looking at the trend in ratings since 2000, the share of evaluations rated Very Good rose remarkably, from single-digit percentages (2000-2003) to 59% in 2005. At the same time, the share of Poor-rated reports shrank. This increase in evaluation quality correlates with the introduction of the UNICEF Evaluation Report Standards in 2004.

Beyond the rise in Very Good ratings and the decline in Poor ratings, the average rating of evaluation reports also moved from the low end towards the high end of the ‘Satisfactory’ range.

Conclusions/ Recommendations
The improvement in report quality can be attributed largely to the combination of the introduction of the Evaluation Report Standards and activities such as stronger advocacy for evaluation quality by the EO, better recruitment of M&E staff, and the issuance of the TOR technical note. A critical role, however, remains with the ERD’s inherent quality review mechanism: the knowledge that reports will be rated clearly makes COs more attentive to quality. The development since the introduction of the ERD raises hope that Very Good or even Excellent quality will soon be reached across all UNICEF evaluations.

Several steps can be taken to further increase both the quality and the quantity of accessible evaluations. The main recommendations include:

Quality of CO evaluations
COs should ensure that all necessary descriptive content is included in the evaluation report, with special focus on:
- Evaluation methodology, including a description of ethical considerations and safeguards
- HRBAP, gender analysis/data disaggregation, and RBM
- The role and contributions of partners, including their role in the evaluation itself
- Key standards
- Comprehensiveness of information and format

COs are to use the UNICEF Evaluation Report Standards and make sure the evaluation team/consultant is provided with a copy during the TOR design and recruitment process. Training on the Standards can be provided for COs/ROs, and questions and suggestions are always welcome.

Quantity
Organizational learning relies on increased evaluation submission rates. A representative sample is important not only for learning from the successes and failures of CO projects and programmes but also for creating an evidence base for UNICEF activities and strategies. COs should pay special attention to sharing information and ensure that reports are sent to NYHQ as soon as they are finalized, using the Evaluation Report Submission Website, so that everyone can access them.




Report information

Date:
2006

Region:
Global

Country:
New York

Type:
Evaluation

Theme:
Evaluation

Partners:

PIDB:

Follow Up:

Language:
English

Sequence Number:
2006/805
