Evaluation database

Evaluation report

2003 Global: Desk Review of Real-time Evaluation Experience



Author: Sandison, P.

Executive summary

Background:

As part of an organisation-wide process undertaken by the United Nations Children's Fund (UNICEF) to strengthen its humanitarian response capacity, the organisation’s Evaluation Office (EO), in collaboration with the Office of Internal Audit (OIA), is pursuing a number of strategies to strengthen monitoring and evaluation (M&E) at field level. One of these strategies is to establish some form of real-time evaluation (RTE).

A real-time evaluation is one carried out whilst a programme is in full implementation, feeding its findings back to the programme almost simultaneously for immediate use. The common characteristics of the RTEs examined in this study are that they correspond to standard definitions and characteristics of evaluation; are carried out in the early stages of an emergency, typically two to three months after the crisis; and are ideally, though not necessarily, repeated during the project cycle. The evaluators are either internal but not directly involved with or responsible for the programme in question, or external but highly familiar with the agency's work. The approach emphasises participation by agency staff, and the reporting method prioritises accessibility, particularly rapid dissemination to and participation of the implementing staff. Hence, findings and recommendations are delivered briefly in verbal and written form, typically before the team leaves the field, and even final reports are kept short.

Purpose/Objective:

UNICEF commissioned this desk study with the objective of identifying key lessons on real-time evaluation (RTE) through drawing from the experience of other agencies and organisations in this area. The findings are intended to provide the basis for UNICEF to develop and test a real-time review or evaluation methodology as a component of its evaluation system.

Methodology:

What little "literature" exists on RTE essentially consists of Jamal and Crisp's paper on frequently asked questions for UNHCR and Broughton's concept paper for WFP. These, together with RTE terms of reference (TORs) and reports from UNHCR, WFP, IFRC, HAP, DFID, ETC and the DEC, have been used in combination with semi-structured interviews to examine the rationale for RTE, its purpose, design, advantages and disadvantages, and, where possible, to identify examples of good practice, challenges and pitfalls. Interviews were carried out, mostly by telephone, with 20 individuals with experience of conducting RTEs. Interviews by telephone and e-mail were also carried out with five programme staff from three different agencies who have participated in an RTE.

Findings and Conclusions:

Agencies interviewed saw real additional value in RTE compared with internal reviews or standard evaluations, mainly because of its timing and rapid feedback, and its combination of proximity to the actual programme and people with evaluative distance and objectivity. Its recommendations can be checked for appropriateness in the field prior to departure and fed straight back into the programme, making an immediate difference and enhancing accountability. At the same time, its contact with staff early in an emergency, its first-hand view of policy in practice and its privileged access to process and other unrecorded information can capture unique material that enhances subsequent standard evaluations and, potentially, organisation-wide learning. It facilitates in situ learning, which may be a highly appropriate form of learning for field staff. The participative process of RTE can facilitate team building and resolve tensions by bridging the gap between the country office, regional office and headquarters, bringing up-to-date information from one to another and acting as "a voice" or advocate when appropriate. An RTE team can also be a resource, encouraging and advising on appropriate baseline surveys and monitoring systems.

Many of the shortcomings of RTE are associated with it being done badly rather than with it being done at all. Nonetheless, it carries inherent risks, primarily associated with its timing. Although its recommendations may save money, it is likely to increase the evaluation budget of a programme and may be difficult to fund and to recruit for, particularly as a high calibre of evaluators is emphasised and there is only a short window in which to implement it, after which the opportunity to be genuinely real-time is lost. An RTE may over-emphasise what is essentially a snapshot of a fast-moving situation, lending too much weight to ephemera and producing a picture that is hard to shake off subsequently. Negative, published reports could damage the agency's reputation and relationships, undermining its ability to function during an emergency. Programme staff may be under too much pressure to absorb lessons, and the evaluation team could distract them from life-saving activities. A greater reliance than usual on interviews, and the need for the evaluator(s) to establish good relationships and act as a resource within the team, could compromise the evaluator's objectivity and distance from management.

The benefits of an RTE are increased by a full process of consultation with key (internal) stakeholders in developing the TOR, and by clarity and careful planning prior to implementation regarding its purpose: for example, the extent to which it should serve external or exclusively internal accountability, and hence the role of the evaluator(s), the types of reporting and the mechanisms for follow-up of recommendations. The process of the RTE is important in terms of maximising participation and ownership of the findings and recommendations, and enabling rapid feedback and implementation.

Recommendations:

It is suggested that UNICEF consider the following steps:

  • Carry out a simple, brief stakeholder analysis of the potential "users" of an RTE
  • Plan a pilot in complete collaboration with key stakeholders, particularly a Regional Office (RO), to establish the overall purpose and design
  • Create a generic TOR
  • Decide, prior to implementation, whether the RTE reports would be published, and design the versions of reports appropriately, giving brief guidelines to the evaluators
  • Obtain high-level support for RTE before initiating even a pilot. This would greatly facilitate support for tackling weaknesses and institutional problems beyond the capacity of the deep field to influence
  • Agree on decision-making authority prior to implementation and build it into the TOR
  • Agree on the mechanisms for collaboration between the evaluation office and programme management, including exactly what role the evaluation office would play in an RTE
  • Draw up a roster of real-time evaluators in collaboration with the regional offices
  • Obtain funding, or at least an agreement on funding, for a pilot RTE
  • Agree, with the highest level of authority possible, on a clear and transparent mechanism of upwards accountability. This would greatly enhance the reception and perceived value of an RTE in the country offices
  • Consider benchmarks that will facilitate the inclusion of UNICEF's partners, such as the Sphere standards likely to be used by partner NGOs
  • Build the results of a final RTE into existing mechanisms, such as the mid-term review


Full report in PDF


Report information

Date:
2003

Region:
Global

Country:
Inter-regional

Type:
Evaluation

Theme:
Emergency

Partners:

PIDB:

Follow-up:

Language:
English

Sequence Number:
2003/803
