2005 Global: Review of UNICEF's Evaluation of Humanitarian Action
Author: Stoddard, A.; UNICEF NYHQ
This external review, commissioned by UNICEF’s Evaluation Office, examines UNICEF’s track record in evaluating its humanitarian-related programming. The Evaluation Office is currently completing the final phase of the Country Programme Evaluation (CPE) Methodology and Guidance Development Project, funded by DFID, and requested this review to inform and support the project with an eye toward determining whether and how the CPE model might be employed for the particular conditions, issues, and challenges of humanitarian response and transitions. The research comprised a survey of 82 evaluations and reports submitted to the Evaluation Office by field offices between January 2000 and June 2005, a document review of internal policy and general literature on evaluation of humanitarian action, and interviews with 16 UNICEF staff members from 10 Country Offices, the Regional Office for South Asia and at UNICEF Headquarters.
The interviews yielded anecdotal evidence that the evaluation function in UNICEF is on an upward trend, and signaled the seriousness with which staff intended to continue the improvement. Nevertheless, a more systematic investigation suggests that there remains a considerable way to go to meet UNICEF’s stated goals in this area. Overall, the review found shortcomings in both the quality and the number of evaluations of humanitarian action, with the largest shortfalls evident in evaluations focused at country-wide and systemic levels. These findings suggest that the evaluation function currently plays a limited role in the strategic direction of the organization’s country programmes and in its role as a key actor in the international humanitarian system.
Key findings of the review of UNICEF-supported evaluation of humanitarian actions:
Uneven quality, skewing toward the lower end: Although UNICEF’s humanitarian-specific evaluations produced a higher number of reports deemed “very good” than UNICEF evaluations overall (according to the 2003 Meta-evaluation), the majority still fell in the middle to poor range. The quality of the rated studies falls mostly within the lower end of “Satisfactory” according to UNICEF’s own standards, which are based on generally accepted criteria and best practice including the OECD/DAC guidelines.
Low level of evaluation activity and predominantly narrow focus: Fewer than 60 field-level evaluations of humanitarian response activities from the past five years were available for the review. This number is surprisingly low given UNICEF’s prominent presence in humanitarian response and the number of UNICEF programme countries deemed to be in an emergency or transitional status during that period. The figure represents only those humanitarian assistance evaluations submitted by field offices to the central database maintained by the Evaluation Office; however, staff members from UNICEF country offices and the Evaluation Office interviewed for this review estimate it to be close to the total number of humanitarian-related evaluations undertaken during the review period.
Of perhaps greater significance is that the vast majority of these evaluations were at the level of individual projects and programmes. This is despite the fact that UNICEF frequently facilitates and even leads sectoral and inter-agency coordination in humanitarian contexts and the emphasis at UNICEF (and the UN generally) on country-level and inter-agency evaluations. Less than a fifth of the evaluations examined UNICEF’s country-wide humanitarian responses or assessed the relevance and appropriateness of UNICEF programming beyond individual projects, sectors, and themes. Moreover, only a tiny fraction focused at the systemic level of UN common country programming and multi-agency emergency response.
Limited strategic intent and use: The source of the demand for the majority of evaluations is unclear, and anecdotal evidence suggests that many are driven more by funding and project cycles than by any strategic rationale. Because many reports lack attribution, authorship, and/or terms of reference, it is virtually impossible to track the use and impact of their recommendations and lessons learned. In general, evaluation in UNICEF seems heavily oriented toward planning and fairly weak on follow-up.
These findings suggest that, in the humanitarian sphere in particular, UNICEF is falling short of its stated goals to:
- Raise the level of quality and “strategic value” of the evaluations (Executive Board decision 2004/9);
- Use evaluations as management tools to “influence decision-making” (Programme Policy and Procedure Manual);
- Focus evaluations “more on country programme level (lessons learned) and on strategic governance of the organization as a whole” (Medium Term Strategic Plan 2002-2005); and
- Strengthen evaluation “within the United Nations system and with other partners” (Medium Term Strategic Plan 2006-2009), in line with the UN operational and reform goals of the 2004 Triennial Comprehensive Policy Review, which include “the systematic use of monitoring and evaluation approaches at the system-wide level and the promotion of collaborative approaches to evaluation, including joint evaluations.”
Reasons for the shortcomings found in the evaluation function are linked to the following organizational factors.
- The demand for evaluations emanates almost solely from the country office and sub-country office levels, which tend to focus more narrowly on projects and programmes. The demand gap for policy-level and country programme evaluations is related to the uneven and often underutilized links between the regional and country offices in the initiation, prioritization/coordination, and management of evaluations.
- Associating evaluation with a development model that requires a long lead-time, close government partnership, and stable conditions inhibits evaluation in humanitarian and transitional contexts.
- There is a continuing lack of new, quick and flexible tools for evaluations in humanitarian contexts. Additionally, evaluative tools for important aspects of humanitarian response, such as protection, children in armed conflict, prevention of sexual exploitation, and rights advocacy are undeveloped or still in their infancy.
The deficiencies and points of tension found in this review are neither unique to UNICEF, nor do they necessarily signal deep-seated organizational pathologies that will be especially difficult to overcome. They do, however, suggest a need for additional mechanisms to increase the demand, capacity, and follow-up of evaluations that examine the organization’s strategic-level approach and the policy and positioning of its programming. In this regard, the review examined the potential role and the added value of the Country Programme Evaluation – an evaluation framework that looks broadly at the performance of UNICEF’s Country Programme as a whole and informs strategic decisions – to establish a driver for evaluating the policy and positioning of UNICEF humanitarian action, and an adaptive and flexible mechanism for doing so in situations of evolving conditions.
Organizational action to address strategic shortcomings in evaluation (Headquarters and Regional Office)
- Systematize and regularize evaluations at the country programme level that examine UNICEF humanitarian action from a strategic and policy standpoint, particularly vis-à-vis the organization’s role in the broader humanitarian response.
- Strengthen the responsibility and capacity of Regional Offices to initiate, coordinate, and manage support for these evaluations.
- Establish guidance and criteria for selecting the appropriate methodology for strategic level evaluations according to the context.
- Consider policies that would counsel the use of real time evaluation during the acute emergency phase and CPE during longer-term emergency and transitional phases.
- Identify and programme additional resources to increase M&E staffing at both regional and country levels.
Accountability, oversight, and support issues (Headquarters and Regional Office)
- Enhance the transparency and accountability of country offices for follow-through on accepted recommendations and lessons learned through the audit mechanism, and expand the evaluation submission process to include plans/steps taken for follow-through. (Evaluation Office)
- Establish and maintain links with government, universities, think tanks, non-governmental organizations and other appropriate sources in order to be able to provide country offices with updates and suggestions for staffing potential evaluation assignments. (Regional Office Monitoring & Evaluation staff)
Technical and management issues (Country Office)
- Feed all major evaluations and studies of acceptable quality into the centrally managed Evaluation Database.
- Encourage evaluation teams to include, as part of the evaluation approach and methodology, a review of previous evaluations and studies relevant to the project, programme, or thematic subject being evaluated.
- Ensure all evaluation consultants are provided with and understand their responsibility for meeting UNICEF’s Evaluation Report Standards. (Monitoring & Evaluation staff)