Evaluation database

Evaluation report

2001 Global: Country Program Evaluations in Selected UN Organizations and International Banks



Author: Cossee, O.

Executive summary

Background

Since the 1980s, many multilateral aid organizations have had to reform their modes of operation, often at the pressing request of their respective governing bodies. As it became apparent that UN interventions at the country level were often scattered and lacked a coherent frame of reference, one idea that gained currency in the mid-1980s was country programming. The idea is essentially to draft a frame of reference, or master plan of operation, or country programme, to ensure that the aid provided by a given organization to a given country is relevant to its needs and opportunities, and to provide structure, strategic vision and coherence to what had so far been a mere portfolio (collection) of independent projects. In many organizations, country programming was linked to an increased decentralization of management authority to country or regional offices. Country programme evaluations are a direct consequence of country programmes: half a decade after pushing successfully for country programming, Executive Boards asked for evaluations that would follow the same structure and account for CP resources.

UNICEF was one of the first UN organizations to implement Country Programmes and Country Programme Evaluations. The first pilots, undertaken in the mid-1990s, did not lead to the broad use of CPEs as conceived by the Evaluation and Research Office; a lighter procedure, the end-of-cycle review, took over the CP accountability function. Recently, the Division of Evaluation, Policy and Planning (EPP) has embarked on a revision of its modus operandi, and the CPE is a formula that might be revisited.

Purpose / Objective

To review the Country Programme Evaluation (CPE) experience of selected UN agencies and International Finance Organizations in order to extract preliminary lessons and best practices.

Methodology

Review of CPE-related documentation and interviews with staff in central evaluation offices in New York, Washington and Rome. The proceedings of the DAC Vienna Workshop were also a source of information and inspiration for the findings presented below.

Key Findings and Conclusions

Almost all agencies like CPEs: the only exception is FAO, which seems to favor thematic evaluations. But this may also reflect the size of its country-level resources, since even FAO is trying the format in its emergency programmes. Agencies that use country programmes to plan their assistance have some sort of CPE exercise at the core of their evaluation strategy.

Role of operational units and country offices: their role has been weaker than it should have been. Evaluation offices are moving towards greater demand orientation and greater involvement of operational units in order to build buy-in and make evaluations more useful. The issue of who should own or manage the CPE procedure in house (operational units or the evaluation office) has been raised in many organizations. The best solution is perhaps shared ownership, but central evaluation offices clearly have a strong role to play in ensuring accountability, especially for large CPs.

Role of the government: the agency/government working relationship seems to be the core of the matter as far as CPEs are concerned. As explained above, the fact that the government is a key partner in and determinant of international aid provides the strongest rationale for CPEs. Issues such as national ownership, government performance and the policy environment are central to the exercise. In terms of government ownership of, or participation in, CPEs themselves, the prevalent practice has been to treat the government more as the evaluated party than as the evaluator. But things are changing, and the value of government adherence to evaluation findings and recommendations is slowly being recognized.

Levels of analysis: the CP level is the focus, but results can only be documented by looking at projects and non-project activities in some detail, so the project level should also be analyzed and reported in summary fashion. A review of the strengths and weaknesses of the portfolio would be useful to the formulation of the next CP. The linkages between CP and project goals remain somewhat fuzzy for some agencies.

Topical span: relevance and positioning issues are the most extensively covered, followed by processes and short-term results. Agencies hold modest expectations about the contribution CPEs can make to impact assessment. There was some discussion of this point in Vienna, but impact is often considered out of scope. The banks (WB and IADB) and UNDP compile in their CPEs whatever impact information has already been collected through project evaluations or studies.

Size of the exercise: there is wide agreement that flexibility in defining the evaluation framework is key to keeping CPEs manageable. The exercise cannot be done cheaply: four or five man-months seem to represent a minimum for any serious evaluative work at the country level.

Joint CPEs: multi-donor CPEs are not in the cards for now, although they would go in the same direction as the CCA and UNDAF. Most agencies are still experimenting, and donor sensitivities come into play. A few pilots may be launched in the coming years.

Implicit and explicit CPs: the EU has made an interesting distinction between the explicit CP (as written) and the implicit CP (as implemented and/or as favored by the country), and used a comparison of spending against plan as a quick way to highlight the discrepancies between the two. The concept can be useful in cases of systematic discrepancies between the planned and implemented CP, i.e. when one specific type of project is consistently under-implemented while others are not.

Ratings: the WB has pioneered project ratings, a process of breaking down complex issues into elementary variables rated by evaluators, but acknowledges evaluators' subjectivity and its effect on ratings. Ratings can only come as a complement to a detailed description. Comparability within a country is adequate, but not across countries.

Recommendations

One of the recurrent problems faced by CPEs and the people who manage them is timing: one needs to strike a balance between the two main evaluation objectives of CPEs, accountability and contribution to planning. A CPE coming late in the planning cycle would be strong on accountability but would probably come too late to contribute significantly to the planning of the next CP, which has to be designed and approved before the current CP comes to an end. A CPE coming early in the cycle (e.g. at mid-term) would come early enough to help shape the future CP, but would find it difficult to document the results of the ongoing CP.
--The only practical solution seems to be to evaluate a longer period than the ongoing CP alone, i.e. to include in the review the end of the previous CP and the transition between the previous and the current CP.

CPEs are expensive evaluation exercises. As currently practiced in UN agencies and IFIs, and depending on its scope, a country review or evaluation can cost between $100,000 and $500,000 and requires from 3 to 15 man-months. CPEs lend themselves to an all-embracing scope, often at the expense of depth.
--A common and workable solution to this problem is to draw a sample of projects and non-project activities to be evaluated. Such sampling is more often done implicitly, based on the perceived financial or strategic importance of projects, than explicitly, i.e. by applying documented selection criteria to a list of the entire portfolio.

Most agencies rely on external evaluators to conduct CPEs, usually for reasons of greater objectivity and credibility vis-à-vis the Board. However, the level of ownership or buy-in from country management teams has been a recurrent issue in many agencies. The consultant is aware of quite a number of cases where operational units and COs rejected CPE findings and recommendations altogether.
--Best practices to ensure local buy-in seem to include the following:
- Hire external evaluators to constitute the core of the evaluation team, and enroll CO staff members and government representatives (preferably staff with an M&E role) as real participants, with responsibility for writing parts of the report.
- The process should be viewed as fair. A systematic, standardized process applied across countries is more likely to be perceived as fair than a looser, more intuitive one.
- Involve the CO in every step of the evaluation (desk review, TORs, project sampling, feedback on results, etc.), for example through a small evaluation management team composed of key CO staff. Obviously, the evaluation team should present its preliminary findings and recommendations at an in-country debriefing meeting, with the CO, government and other stakeholders participating as appropriate. Feedback obtained in the debriefing meeting should be reflected in the final report.

In terms of CPEs' contribution to impact assessment, UN agencies hold modest expectations. The general view is that, in an evaluation of such broad scope, it is too difficult to dig deep into each activity and gather sufficient evidence to make impact statements. CPEs could potentially become a data-gathering cluster or echelon in a system-wide information management system: collecting already documented impact statements (from evaluation reports, studies, etc.) and discussing their merit based on the evidence provided is a worthy task in itself. Some agencies, however, mentioned the lack of data as a reason for not assessing impact, which would indicate that many of their project monitoring systems and evaluations do not provide sufficient or credible impact information.
--CPEs should not attempt to collect primary impact data other than stakeholders' opinions. Regular project-level M&E practices and surveys should be used to gather information on the final results of external assistance at the country level. This data can then be pulled together periodically through CPEs to pass judgment on the impact of at least the largest projects in the CP.

Some agencies (UNICEF, UNDP, WFP) have tried or developed standard TORs for CP evaluations or reviews. In general, the issues listed in standard CPE TORs reflect classic, project-level evaluation frameworks: relevance, efficacy, cost-efficiency, results and sustainability are the major headings. Classic evaluation issues such as design quality, coherence of objectives and quality of partnerships can be applied to the CP level mutatis mutandis.
--To avoid consultants defining their own focus when faced with very broad TORs, standard TORs should be streamlined and treated as a menu of questions. The evaluation office and operational units should make the effort to select from this menu a short list of issues, defining an evaluation agenda particularly relevant to the CP at hand.




Report information

Date: 2001

Region: Global

Country: Inter-regional

Type: Evaluation

Theme: Evaluation

Partners: UNDP, FAO, WFP, World Bank, IADB

Language: English
