Evaluation database

Evaluation report

Global 1999: Evaluation of UNICEF Multiple Indicator Cluster Surveys



Authors: Stoeckel, J.; Hong, S.; Hereward, M.; Thorpe, I.; Cerulli, A.; UNICEF NYHQ

Executive summary

Background

The Multiple Indicator Cluster Survey (MICS) was developed as an affordable, fast and reliable household survey system that would fill gaps in knowledge and update available data. It was designed in collaboration with the World Health Organization (WHO), the United Nations Statistical Division, the London School of Hygiene and Tropical Medicine, and the United States Centers for Disease Control (CDC). By 1996, more than 60 countries had carried out a stand-alone MICS, and another 40 had incorporated some of its modules into other surveys.

An evaluation of MICS was planned for two reasons. First, there is a political commitment to achieve World Summit for Children goals and to adhere to the Convention on the Rights of the Child, and a cost-effective way of monitoring progress is important to sustain this commitment. It is, therefore, necessary to assess the future role of MICS in this activity. Second, considerable resources have been invested in the implementation of MICS by various governments and by UNICEF. National statistical offices and other national and international agencies and organizations are considering or already using MICS methodology and want to know its strengths and weaknesses.

Purpose / Objective

The objective was to assess whether and how the MICS could be adapted for future monitoring of the World Summit goals and implementation of the Convention on the Rights of the Child. Specifically, the evaluation assessed the management, implementation, outputs, use of results, capacity-building effects and future monitoring needs of MICS and made recommendations in each of the areas under review.

Methodology

Three complementary methodologies were used to cover both the breadth and depth of MICS experiences: a desktop review of existing documentation; a questionnaire-based survey of all UNICEF offices regardless of whether they conducted a MICS, some other kind of survey or no survey at all; and key informant interviews through field visits to six countries and face-to-face or telephone interviews with UNICEF and other actors at the global and regional levels.

Fifty per cent of the MICS countries responded to the questionnaire survey. However, the number of non-MICS countries that responded was too small to permit an analysis of the differences in implementation, use and outputs between MICS and non-MICS data.

Key Findings and Conclusions

The evaluation revealed a high level of satisfaction with the quality of the MICS data and with its ability to produce information more quickly and economically than other methods of data collection. MICS exceeded expectations in helping even the most data-poor countries monitor their progress towards achieving the mid-decade goals. Other salient points and recommendations include:

Management:
The extent to which governments were involved in the MICS process varied greatly; in some countries, local ownership appeared strong, while, in others, MICS was seen as a UNICEF activity. To increase government participation in the future, all sectors with an interest in the results should be actively involved in planning the survey.

Implementation:
Most countries trained survey personnel in interviewing, data processing and analysis, and report writing. Insufficient training caused problems in some cases, and both training periods and methods should be expanded. In addition, the MICS manual should put greater emphasis on sampling and, in particular, on the use of probability methods.

Output:
Although the level of satisfaction with the quality of the data obtained was high, there were some concerns about inconsistencies between MICS results and those from other surveys.

Use of results:
Almost all countries that carried out a MICS used the data in some way, whether for advocacy, planning, health programs, supplementing or confirming data from routine reporting systems, or as a baseline against which to compare follow-up MICS and other surveys. However, of those countries that had performed a MICS, only half used the data to report on progress towards the goals. The reason most frequently cited was that the results were not ready in time, but in some cases, government officials did not accept the data and used other sources instead. In the future, discussions should be held with relevant government officials before MICS results are disseminated.

Capacity-building:
The country offices that undertook a MICS believe that the exercise improved the skills and capacity of local government staff as well as of UNICEF staff. However, the evaluation notes that many of the respondents who did not conduct a MICS cited insufficient technical expertise as their reason. Helping them overcome such shortcomings is vital.

Future monitoring needs:
Most of the countries performing a MICS agreed that the surveys should be continued, with a follow-up administered about every five years. Indicators should be modified to monitor the World Summit Goals and other Convention indicators as they develop. With decentralization occurring in many countries, surveys will be used increasingly for monitoring and planning at provincial, state and regional levels, which will require more staff and technical assistance.

Costs:
Among the countries for which cost data are available, the MICS exercise cost on average about US $80,000, excluding the costs of UNICEF and government staff. Compared to estimates of $250,000 to $500,000 for other major surveys, MICS proved to be extremely cost-effective.

Recommendations

Management:
- As discussed in the MICS manual, in order to increase government ownership, all government sectors having an interest in MICS results should be involved in the process of planning the MICS, and in the dissemination and use of data. This should include: health, education and statistical offices, representatives of provinces, states or districts, and, when appropriate, national institutions/coordinating bodies on children's rights.
- The MICS manual should be revised to include more information on sample design and alternatives when reliable sampling frames are not available.
- The period for training interviewers and supervisors together should allow sufficient time for training on the selection of the sample, extensive role playing for practice interviews, and the observation and assessment of interviewers in the field by supervisors.

Implementation:
- As the MICS manual suggests, if the indicator can be reported from existing sources, a MICS need not be used to collect it.
- In addition to the global goals, the selection of modules or other questions to be included in a MICS should also be based on the relevance to the country of the goal that the module or questions measure.
- Additional training and technical assistance should be provided through regional workshops on the areas of data analysis and report writing. If the EPI-INFO software program is to be used, training should include methods to revise the special program or create a new one when changes are made to the questionnaire.
- The MICS manual section on sampling should place greater emphasis on the use of probability methods. Non-probability methods, such as random-walk or the purposive sampling sometimes used in sentinel community surveillance, should be discouraged. Further sampling recommendations to include in the MICS manual are: using only two stages of sample selection; avoiding area units that are very large or very small; using probability-proportionate-to-size (PPS) selection methods properly; adjusting for changes in the measure of size; weighting, where necessary; avoiding biases; and calculating sampling errors only when they are valid (see the sketch after this list).
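
The sampling recommendations above can be made concrete with a small illustration. The following is a minimal, hypothetical sketch (not taken from the MICS manual) of two-stage selection with probability proportionate to size and the design weights it implies; the cluster names, measures of size and sample sizes are invented for illustration.

    import random

    # Hypothetical stage-1 frame: area units (clusters) and their measures of
    # size, e.g. household counts from the most recent census.
    clusters = {"A": 1200, "B": 800, "C": 400, "D": 1600, "E": 1000}
    n_select = 2   # clusters drawn at stage 1
    take = 20      # households interviewed in each selected cluster (stage 2)

    total = sum(clusters.values())
    interval = total / n_select

    # Systematic PPS selection: lay the clusters along a cumulative-size line
    # and take every `interval` units, starting from a random point.
    start = random.uniform(0, interval)
    points = [start + k * interval for k in range(n_select)]

    selected, cum = [], 0
    for name, size in clusters.items():
        low, cum = cum, cum + size
        if any(low <= p < cum for p in points):
            selected.append(name)

    # Design weight for a household in a selected cluster:
    #   stage 1: P(cluster i selected)             = n_select * size_i / total
    #   stage 2: P(household selected | cluster i) = take / size_i
    # Their product is constant, so a PPS design with a fixed take per cluster
    # is approximately self-weighting when the measures of size are accurate.
    for name in selected:
        size = clusters[name]
        weight = 1 / ((n_select * size / total) * (take / size))
        print(f"cluster {name}: design weight = {weight:.1f}")

Because the weight is constant only when the measures of size are accurate, clusters that have grown or shrunk since the frame was built require the weight adjustment referred to above; and if non-probability shortcuts such as random-walk are used instead, the selection probabilities are unknown and sampling errors cannot be validly calculated.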

Outputs:
The MICS questionnaire should be back-translated, and the back-translation compared with the original questionnaire to check for inconsistencies, rather than being used only as a reference. A pilot test should precede the training of interviewers and the finalization of the questionnaire.

Use of results:
Dissemination workshops should be held upon completion of each MICS with concerned government officials and relevant agencies. The purpose of the workshops would be to present the MICS results and assess their reliability and utility compared to other data sources for reporting on the end-decade goals. This could contribute to more informed decisions on the part of government officials regarding the data that would be used for the end-decade goals.

Capacity-Building:
Countries that did not conduct a MICS because they lacked technical expertise should be targeted for appropriate training and technical assistance to build the capacity required to conduct one. This would require developing a formal management plan for assistance in specific areas of need and establishing a set of criteria for determining which countries are eligible for such assistance.



Full report in PDF




Report information

Date: 1999
Region: Global
Country: Inter-regional
Type: Evaluation
Theme: Evaluation
Partners:
PIDB:
Follow Up:
Language: English
Sequence Number:
