Evaluation database

Evaluation report

2006 Global: Peer review of evaluation function at UNICEF



Author: Independent Peer Panel

Executive summary

Background
The UNICEF Peer Review is the second effort to apply a new assessment approach designed under the auspices of the Evaluation Network of the Development Assistance Committee (DAC) of the Organization for Economic Co-operation and Development (OECD).  The approach aims to enhance multilateral agencies’ own evaluation capacity and performance by reviewing an agency’s evaluation systems and processes. The conclusions and recommendations in the report reflect the Panel’s judgment. However, the Panel recognizes that UNICEF must decide which approach is best suited to the particularities of the organization.

Purpose of the Review
The purpose of the review was to determine whether UNICEF’s evaluation function and its products are independent, credible, and useful for learning and accountability purposes, as assessed against UNEG norms and standards by a panel of evaluation peers.

Methodology
The three crucial aspects of evaluation – independence, credibility and usefulness – were assessed against defined and agreed-upon international benchmarks and best practices, articulated in the Norms and Standards for Evaluation in the UN System approved by the United Nations Evaluation Group (UNEG) in April 2005.

The UNICEF Peer Review was able to draw from, and build on, the experience of the UNDP review completed in December 2005. The UNICEF Peer Review followed the same general methodology, but the Panel made some adjustments to reflect the particularities of UNICEF’s evaluation system, including the use of Ghana as a country reference case and the role of partner countries as stakeholders and users of evaluation. Fostering evaluation capacity building in member countries, facilitating stakeholder participation in evaluation, and mainstreaming gender in evaluation were issues of interest to UNICEF that were assessed against UNEG standards.

Limitations of the Review
The Peer Panel was not able to draw strong conclusions about the decentralized elements because of limitations in the data collected from the regional and country levels. Challenges faced by the Panel included the limitations posed by the OECD-DAC assessment approach as well as by the UNEG Norms and Standards. The Panel generally followed the ‘sorting’ approach used for the UNDP Peer Review normative framework.

Overall assessment
The primary purposes of UNICEF’s evaluation function are coherent with the UNEG Norms and Standards and serve to: inform decision-making by identifying and understanding results and their impacts; identify lessons in order to facilitate improvements in on-going or future operations; and provide information for accountability purposes.

Secondary purposes for evaluation important to UNICEF included using participatory processes to expand ownership of the evaluation, and using evaluation results as “impartial and credible evidence” to advocate for children’s and women’s rights in global and national policies and programmes. The review found UNICEF’s emphasis in evaluation to be more on learning to inform decision-making and planning than on accountability. The Panel believes that the organization will have to enhance its systems for planning and performance measurement (Results-Based Management).

Central Evaluation Office
The central Evaluation Office has strengthened the role and performance of the evaluation function in UNICEF over the past five years. It demonstrates a high level of independence and professional credibility. Evaluation’s contribution to management and decision-making for both programmes and policies is considered by the Panel to be strong, timely and useful. The EO has played an important leadership role in UN harmonization through the United Nations Evaluation Group (UNEG). However, the Panel agrees with the EO’s self-assessment that improvements are needed in the areas of (1) strengthening evaluation capacity at the decentralized levels (regional/country offices, partner countries), and (2) disseminating evaluation results and lessons more effectively.

Decentralized Evaluation System
The majority of UNICEF evaluations (97%) are undertaken at the country level. The Panel recognizes that a decentralized system of evaluation is well suited to the operational nature of the organization, given UNICEF’s intent to act as an authoritative voice on children’s issues in the many countries where it works and the necessity to reflect the differences and particularities of each country and region. However, the systems, capacities and outputs of evaluation at the regional and country levels exhibit critical gaps that must be addressed in order to ensure that the evaluation function serves the Organization effectively.  The Panel notes that evaluation at the regional and country level serves learning and decision-making purposes well but it is less useful for accountability purposes at those levels. In addition, evaluation results are not yet being aggregated from the country level to the regional or Headquarters level to provide information on overall organizational performance.  

Resources for Evaluation
The Panel notes that there are limitations in the level and predictability of core resources for evaluation, especially for the Evaluation Office. The EO’s core budget from Regular Resources provides assured funding for approximately two corporate evaluations per year. The EO is heavily dependent on Other Resources, which generally come from donors and may be designated for specific evaluations (e.g. Tsunami, Real Time evaluations). The EO may also manage evaluations for other Headquarters Divisions if requested to do so. These evaluations are generally identified and funded by the Division.

No funding has been allocated by UNICEF for activities related to evaluation capacity development at the country and regional levels or for Country Programme Evaluations. The EO Director has been authorized to seek funding from donors for these activities, estimated to be 64% of the EO budget for 2006-2007.

The Panel acknowledges UNICEF’s intention to allocate 2-5% of country programme funding to monitoring, evaluation and research. However, the present UNICEF financial management system does not disaggregate commitments and expenditures for M&E, and it is not possible to verify whether the targets are being met.

It was reported that country-level evaluations are most often undertaken in response to donor requests, although the frequency of this practice varies between countries and regions.

The Panel believes that the limited core budget for evaluation and the heavy reliance on Other Resources have an impact on planning, prioritization and evaluation coverage at all levels. The capacity to identify and carry out evaluations of strategic importance is reduced when evaluation is funded on a project-by-project basis.

UNICEF has an on-going need for credible and independent assessment of results to demonstrate that the organization is meeting its mandate and is accountable to all stakeholders, including partner governments and beneficiaries. Evaluation is an essential tool to demonstrate impact and sustainability. In the Panel’s view, evaluation should be considered a core function and should be provided with a predictable and adequate budget.  

Results-Based Management
The Panel’s mandate did not include a comprehensive analysis of UNICEF’s system for Results-Based Management. However, in the course of data collection and interviews it became apparent that weaknesses in the organization’s RBM systems have an impact on the quality of evaluations, and their credibility, particularly at the country level. These weaknesses are not unique to UNICEF; the challenges are the same for other development cooperation agencies and for bilateral donors.  As UNICEF endeavours to focus more on policy advocacy and joint programming, it becomes harder to define results, measure progress and determine attribution.

UNICEF has made progress since 2002 in creating a stronger organizational framework for results-based management, as demonstrated in the Integrated Monitoring and Evaluation Framework that accompanies the current corporate plan (MTSP 2006-2009), the requirements at the country level for IMEPs and a summary results matrix in the Country Programme Document (CPD).

UNICEF’s participation in the UNDAF process at the country level is also placing greater emphasis on results-oriented planning as “the UNDAF Results Matrix describes the results to be collaboratively achieved”. 

The Panel concluded that the EO has contributed towards strengthening UNICEF’s Results-Based Management systems, most notably through its contribution to development of the integrated monitoring and evaluation framework and detailed performance indicators for the MTSP 2006-2009. However, there is a gap between high-level, organization-wide indicators and the systems used for planning and performance assessment at the programme/project level.

Evaluation Policy
The Panel concluded that the culture and practice of independent evaluation seems well established at UNICEF, but it is not supported by an up-to-date and comprehensive evaluation policy that reflects the Norms and Standards for Evaluation in the UN System. The Panel believes that the independence, credibility and usefulness of the evaluation function would be strengthened by updating the current policy statements into a comprehensive policy document that provides a clearer framework for implementation of the evaluation function.

Independence
The Panel considers that UNICEF’s Evaluation Office is meeting the UNEG Norms and Standards related to independence, including: fostering an enabling environment for evaluation; independence and impartiality of evaluators; ensuring access to information required for evaluations; and EO’s freedom to report to the appropriate level of decision-making on evaluation findings.

The Panel believes that independence of the evaluation function should be formalized in an updated evaluation policy document that is approved by the Executive Board and disseminated and implemented throughout the organization by way of an Executive Directive.

Clarifying the EO’s reporting line and responsibilities would provide assurance against any infringement on independence, real or perceived. The Panel recommends that the Director of the Evaluation Office should report directly to the Executive Director.

The Panel considered the option of a direct reporting line to the Executive Board but concluded that such an arrangement would not significantly increase the EO’s independence. Board members have not identified a direct reporting relationship as a priority and it would be inconsistent with the reporting lines for other elements of the decentralized system. Frequent rotation of Board members and lack of evaluation experience were identified as potential barriers to ensuring strong oversight for an Evaluation Office that reported to the Board.  

Engaging Executive Board members in a discussion of an updated evaluation policy document would afford an opportunity to explore ways in which the evaluation function could make a stronger contribution to the Board’s decision-making. In particular, the Board could consider: commissioning evaluations on specific subjects; greater use of evaluation (including Country Programme evaluations) to validate results of self-assessments undertaken at the country level; requesting aggregation of evaluation information to assess performance at the organizational level.

It is important to note that, in the Panel’s view, independence of the evaluation function does not mean isolation. Evaluation has intrinsic links to all stages of the project/programme cycle. It provides essential information to determine whether results are being achieved, the impact of those results, the need for change, and the potential for a project/programme to be sustainable. Evaluation is a key management tool for learning and for performance accountability. In fact, it has been argued that “rigorous program evaluations are the lifeblood of good governance.” In this respect, the Panel considers evaluation as a core function that should have a predictable and adequate budget to ensure credible and independent information to assess whether UNICEF is fulfilling its mandate.

The Panel considers the ability to budget for evaluation as a key element of independence. Having limited Regular Resources in the EO’s core budget and having to negotiate with other Divisions for evaluation funding restricts the EO’s capacity to choose evaluation topics that it considers strategically important for accountability. Similarly, having to raise almost two-thirds of its budget from Other Resources makes the EO potentially vulnerable to donor demands.

Relation between Evaluation and Audit – The Panel notes that UNICEF intends to review the mandates of the Audit and Evaluation functions. This is timely in light of the current discussions within the UN system about co-locating these functions. The Panel discussed the relation between the two functions but did not undertake a review of options for locating the evaluation function within various organizational structures. The UNEG Norms and Standards indicate that the EO Director should report either to the Board or to the Head of the organization to ensure independence of the evaluation function. The consensus of the Panel was not to make a specific recommendation on structure, but instead, to encourage UNICEF to ensure that evaluation remains a strong, independent and credible function that addresses programme effectiveness, value and impact results.

Credibility: The Panel considers that UNICEF’s Evaluation Office is meeting the UNEG Norms and Standards related to credibility as follows: setting quality standards and providing guidance on key aspects of evaluation; highly competent and credible professional staff; transparency in selection and management processes for EO evaluations; impartiality of EO evaluations; participation of country governments and other partners in EO-led evaluative activities; and building evaluation capacity in member countries, especially through CPE methodology and the facilitation and support of evaluation networks. The Panel noted that UNICEF’s approach to evaluation at the country level fosters partnership and builds ownership for development results. This process of mutual accountability enhances UNICEF’s overall credibility with its partners.

Weaknesses were noted in the following areas, especially related to country-level evaluative activities:

  • Lack of clear organizational criteria for the selection of evaluations;
  • Inconsistencies in applying guidance provided by the EO to ensure that all evaluations, and evaluation reports, meet the required quality standards;
  • No clear separation of responsibilities for evaluation, monitoring, programming, fund raising and advocacy functions at the country level;
  • Uneven participation by stakeholders, including beneficiaries, in roles other than information sources;
  • Inconsistent assessment of gender issues, especially analysis of the impact of results for women/ girls and men/boys;
  • Inconsistent assessment/ analysis of how the human-rights-based approach was applied;
  • No mandatory use of end-of-project evaluations for pilot projects;
  • Limited capacity to aggregate information on results in order to assess performance at the organizational level.

Budget limitations have reduced the EO’s ability to strengthen UNICEF’s internal evaluation capacity at the decentralized levels, in spite of the Executive Board’s having identified this as a priority focus. The Panel notes that approximately half of UNICEF’s 126 country offices do not have a level 3 M&E officer (level 3 is the desired minimum level to ensure competence). The EO reports that these offices are less able to consistently deliver high quality evaluations. 

Poor quality of country level evaluations was first identified as a problem following a meta-evaluation commissioned by the Evaluation Office in 2004.  Since then, the EO has provided guidance for Terms of Reference, and quality standards for conducting evaluations and reporting on them. The EO carries out an annual quality review of evaluation reports submitted from all levels (HQ, region, country). The EO’s latest Evaluation Report Quality Review indicates that there has been some improvement in the quality of evaluation reports submitted for review over the past two years, but the low number of reports submitted suggests that training on the standards or other support is still needed.

Usefulness of Evaluation Evidence: The Panel considers that UNICEF’s Evaluation Office is meeting the UNEG Norms and Standards related to usefulness of evaluation evidence as follows: intentionality by the Executive Board and senior management to use evaluations to inform decision-making; transparency of the evaluation process, disclosure policy and public accessibility of reports; and contribution to strengthening UNICEF’s Results-Based Management systems, to policy making, organizational effectiveness and development effectiveness, and to harmonization in evaluation and humanitarian assistance.

Timeliness – Evaluations are generally well-timed to feed into the planning cycle for country programmes and for decision-making at the Board level. Evaluation’s contribution to management and decision-making for both programmes and policies is considered by the Panel to be strong at all levels. There is also evidence that evaluation is contributing to improving the development effectiveness of UNICEF interventions.

Learning – Evaluation’s contribution to learning is stressed at all levels of the organization and there are good indications that evaluation findings are used to improve programming and policies. At the same time, however, the Panel notes that organizational systems for knowledge sharing and institutional learning are not yet adequately developed.

Contribution to UN harmonization – Senior managers and other agencies recognize the EO’s leadership within the United Nations Evaluation Group (UNEG) to create professional Norms and Standards for implementation of evaluation across the UN system.

UNICEF’s active role in promoting the improvement of best practices across UN agencies has also been recognized. The EO is presently providing leadership for three UNEG task forces:

  • Country Level Evaluation – intended to build strategies for joint evaluations at the national level and to undertake case studies on joint evaluations;
  • Evaluation Capacity Development – which will contribute to the professionalization of evaluation in the UN system by developing generic competencies for Evaluation Officers and a curriculum for evaluation training tailored to the needs and specifications of the UN system;
  • Evaluation Practice Exchange – in which agencies will share ‘better practice’ using examples of (a) proven and transferable experience, and (b) innovations with potential for wider application.

UNICEF’s participation in the area of humanitarian assistance has increased significantly in the past few years. The EO has made a contribution to developing more effective methodology for evaluation in disaster and crisis situations. EO-led evaluations of Iraq, Darfur, Liberia, Tsunami-affected countries, and two major evaluations of humanitarian capacity building, have helped set a new agenda to improve humanitarian response. The Darfur evaluation was used as an illustrative case by the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) to promote discussion and learning through its network of organizations that provide humanitarian assistance.

Information provided to the Executive Board – The Panel notes that the Executive Board has repeatedly requested more results-oriented reporting from UNICEF. During this Review, some Executive Board members expressed the view that the information provided on evaluation is still not adequately substantive or analytical. Some also indicated that the time available for discussion of evaluations is too limited.  Some members indicated that a management response should be included with evaluation reports.

Tracking System – The Panel commends the recently undertaken initiative to track management response to global/ corporate evaluations. Management response and implementation of evaluation recommendations are fundamental indicators of the importance of an evaluation function to an organization. In addition to the new tracking system at Headquarters, efforts should also be made to strengthen tracking of management response at the field level. 

The complete summary and the report are attached.  See also the Management Response to the Peer Review.




Report information

Date:
2006

Region:
Global

Country:
Inter-regional

Type:
Evaluation

Theme:
Evaluation

Partners:
African Development Bank, CIDA, Institute for Policy Alternatives (Ghana), Irish Aid, Norwegian Agency for Development Cooperation, UN Evaluation Group

Language:
English
