Author: Stoddard, A.; UNICEF NYHQ
This external review, commissioned by UNICEF’s Evaluation Office, examines UNICEF’s track record in evaluating its humanitarian-related programming. The Evaluation Office is currently completing the final phase of the Country Programme Evaluation (CPE) Methodology and Guidance Development Project, funded by DFID, and requested this review to inform and support the project with an eye toward determining whether and how the CPE model might be employed for the particular conditions, issues, and challenges of humanitarian response and transitions. The research comprised a survey of 82 evaluations and reports submitted to the Evaluation Office by field offices between January 2000 and June 2005, a document review of internal policy and general literature on evaluation of humanitarian action, and interviews with 16 UNICEF staff members from 10 Country Offices, the Regional Office for South Asia and at UNICEF Headquarters.
The interviews yielded anecdotal evidence that the evaluation function in UNICEF is on an upward trend, and signaled the seriousness with which staff intend to continue the improvement. Nevertheless, a more systematic investigation suggests that there remains a considerable way to go to meet UNICEF’s stated goals in this area. Overall, the review found shortcomings in both the quality and the quantity of evaluations of humanitarian action, with the largest shortfalls evident in evaluations at the country-wide and systemic levels. These findings suggest that the evaluation function currently plays only a limited role in shaping the strategic direction of the organization’s country programmes and in supporting its position as a key actor in the international humanitarian system.
Key findings of the review of UNICEF-supported evaluations of humanitarian action:
Uneven quality, skewing toward the lower end: Although UNICEF’s humanitarian-specific evaluations produced a higher proportion of reports deemed “very good” than UNICEF evaluations overall (according to the 2003 Meta-evaluation), the majority still fell in the middle to poor range. Most of the rated studies fell at the lower end of “Satisfactory” according to UNICEF’s own standards, which are based on generally accepted criteria and best practice, including the OECD/DAC guidelines.
Low level of evaluation activity and predominantly narrow focus: Fewer than 60 field-level evaluations of humanitarian response activities from the past five years were available for the review. This low number is surprising given UNICEF’s prominent presence in humanitarian response and the number of UNICEF programme countries deemed to be in an emergency or transitional status during that period. The figure represents only those humanitarian assistance evaluations that field offices submitted to the central database maintained by the Evaluation Office; however, staff members from UNICEF country offices and the Evaluation Office interviewed for this review estimated that it comes close to the total number of humanitarian-related evaluations actually undertaken during the review period.
Of perhaps greater significance is that the vast majority of these evaluations were at the level of individual projects and programmes. This is despite the fact that UNICEF frequently facilitates and even leads sectoral and inter-agency coordination in humanitarian contexts and the emphasis at UNICEF (and the UN generally) on country-level and inter-agency evaluations. Less than a fifth of the evaluations examined UNICEF’s country-wide humanitarian responses or assessed the relevance and appropriateness of UNICEF programming beyond individual projects, sectors, and themes. Moreover, only a tiny fraction focused at the systemic level of UN common country programming and multi-agency emergency response.
Limited strategic intent and use: The source of the demand for the majority of evaluations is unclear, and anecdotal evidence suggests that many are driven more by funding and project cycles than by any strategic rationale. The tendency of reports to lack attribution, authorship, and/or terms of reference makes it virtually impossible to track the use and impact of their recommendations and lessons learned. In general, evaluation in UNICEF seems heavily oriented toward planning and fairly weak on follow-up.
These findings suggest that, in the humanitarian sphere in particular, UNICEF is falling short of its stated goals for the evaluation function.
The reasons for the shortcomings found in the evaluation function are linked to several organizational factors.
The deficiencies and points of tension found in this review are neither unique to UNICEF, nor do they necessarily signal deep-seated organizational pathologies that will be especially difficult to overcome. They do, however, suggest a need for additional mechanisms to increase demand for, capacity to conduct, and follow-up on evaluations that examine the organization’s strategic-level approach and the policy and positioning of its programming. In this regard, the review examined the potential role and added value of the Country Programme Evaluation – an evaluation framework that looks broadly at the performance of UNICEF’s Country Programme as a whole and informs strategic decisions – as a driver for evaluating the policy and positioning of UNICEF humanitarian action, and as an adaptive and flexible mechanism for doing so under evolving conditions.
Recommendations:
Organizational action to address strategic shortcomings in evaluation (Headquarters and Regional Office)
Accountability, oversight, and support issues (Headquarters and Regional Office)
Technical and management issues (Country Office)