DANIDA'S

EVALUATION POLICY 2006

 

1. Introduction

 

Evaluation has been used systematically since the early days of Danish development cooperation as a tool for improving methods and results. In 1982 Danida established a special unit responsible for evaluation. The use of evaluations has developed successively in four main stages:

 

Prior to 1982, evaluations focussed essentially on individual projects and programmes. Most were mid-term or phase evaluations conducted as a project moved from one phase to the next. Only a few end-evaluations were conducted, and only occasionally were ex-post evaluations carried out to study the long-term effects of projects.

 

In the period 1982-87, after Danida’s Evaluation Unit was established, it was agreed to use evaluations for more systematic studies to improve the quality of Danida-financed development activities. In this period, too, most evaluations were mid-term or phase evaluations of individual projects, but the trend was to replace mid-term evaluations with internal reviews and to increase the number of end-evaluations. The use of evaluations was more systematic in the sense that it was guided by an annual evaluation programme to ensure that the sample of evaluated projects and programmes was representative of Danish bilateral cooperation.

 

During 1987-97 the number of individual project evaluations was reduced and the number of thematic and sector evaluations increased. As a principle, all evaluation reports were made public. In 1992, informing the public was included as an essential goal for evaluation in accordance with DAC principles. In this period evaluations became more experimental and included a number of impact evaluations as well as use of participatory methods. All evaluations were conducted by external, independent consultants.

 

In 1997 an evaluation policy was formulated and the Evaluation Secretariat was established as a separate, independent entity within the Ministry of Foreign Affairs (in 2003 the name was changed to the Evaluation Department). A Review of Evaluation in Danida, conducted by external international consultants in 2003, found that Danida’s evaluation system is basically sound compared with general international standards. Based on a public hearing held in 2004, the institutional arrangement of the Evaluation Department was confirmed.

 

While the policy has provided the overall framework for specific Danida evaluations, practice has developed considerably since 1997. In particular, the move towards sector-wide approaches has required donor evaluation departments and partners to work together to conduct evaluations jointly. Danida has been at the forefront of this move towards joint evaluations. In recent years about half of all evaluations have been conducted jointly with partners.

 

The Rome, Marrakech, and Paris Declarations on ownership, alignment, harmonisation and managing for results, as well as the Good Humanitarian Donorship Principles, have reinforced this trend. Fortunately, the donor evaluation community has had a good basis for harmonising evaluation work: the DAC Principles for Evaluation, which have provided a common frame of reference for evaluation across donors and countries.

 

This updated version of the evaluation policy captures these changes and makes collaboration and partnership central principles of Danida’s evaluation work.

 

 

2. Objectives

 

An evaluation is an assessment, as systematic and objective as possible, of on-going or completed development activities, their design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, developmental efficiency, effectiveness, impact and sustainability[1].

 

Evaluations are carried out to generate knowledge and accountability information about development interventions:

 

-         Evaluations shall contribute to the improvement of development cooperation through the collation, analysis and dissemination of experience from current and completed development activities. They shall seek the causes of and explanations for why activities succeed or fail, and produce information to help improve the relevance and effectiveness of future activities. The target group is Danida's management and staff, government authorities and other concerned parties in partner countries, stakeholders of a particular organisation under evaluation, and Danish and foreign development professionals.

 

-         Further, evaluations shall provide parliamentarians and the general public in Denmark and partner countries with professional documentation of the use and results of development cooperation. They shall also contribute to a better understanding of development cooperation and of its potential and limitations as an instrument of economic and social change.

 

Evaluations are distinct from reviews, which constitute a management tool to monitor whether an activity or programme is on track and produces the immediate outputs agreed upon.

 

 


3. Guiding Principles

 

The following principles reflect the core values of Danida’s evaluation work. They are interdependent and mutually reinforcing and, as such, they form an overarching frame of reference with which all aspects of an evaluation must be consistent.

 

The guiding principles articulate fundamental expectations of evaluators, of evaluation processes and products, of methodology and of those managing the evaluation.

 

Danida’s guiding principles draw on, and are consistent with, the DAC Principles for Aid Evaluation as well as with the DAC Evaluation Standards[2].

 

All Danida evaluations must be consistent with the principles of:

 

Independence – the evaluator’s judgments are not influenced by pressure or conflicts of interest. Members of evaluation teams must not themselves have been engaged in the activities to be evaluated. Companies involved in the preparation or implementation of the activities to be evaluated cannot act as evaluators of those activities.

 

Impartiality – the personal preferences of the evaluator do not affect the evaluation. Evaluations must give a balanced presentation of strengths and weaknesses. Although evaluators are responsible for all conclusions, different views of interested parties should be reflected in the evaluation report.

 

Objectivity – the evaluation rests on verifiable findings.  Judgments must be clearly separated from factual statements.

 

Transparency – features, issues and decisions significant to the evaluation are identified and explained clearly. Relevant parties in Denmark and the partner country must be consulted during the preparation and implementation of evaluations, drafting of Terms of Reference and discussion of the draft report.

 

Partnership – in conformity with the Rome and Paris declarations on harmonisation and partnership, as well as the Good Humanitarian Donorship Principles, evaluations should to the extent possible be undertaken in partnership with stakeholders in partner countries and other development partners.

 

Feasibility – the appropriate methodology and resources required by the evaluation are available.

 

Propriety – the evaluation does not harm individuals.

 

Cost-efficiency – the evaluation is realised at the least possible cost.

 

Accuracy – the data do not contain errors of significance to the evaluation.

 

Fairness – evaluations give a balanced presentation of strengths, weaknesses and different views.

 

Credibility – the evaluation is conducted in such a way that the results are credible.

 

Usefulness – users and stakeholders make use of the evaluative process and the information it produces to improve development cooperation.

 

 

4. The Evaluation Department

 

The Evaluation Department is responsible for evaluating the performance of development activities to which Denmark has contributed. It provides feedback to the Ministry of Foreign Affairs about development cooperation processes and results, as well as accountability to Parliament and the public for the results of Danish development assistance.

 

The Evaluation Department is an independent department in the Ministry of Foreign Affairs. The Head of the Evaluation Department reports to the Head of the South Group – the State Secretary. While evaluation is part of the Performance Management Framework, the Evaluation Department holds no responsibility for the daily administration, implementation and monitoring of development cooperation.

 

Core responsibilities and duties of the Evaluation Department include:

 

·        Programming, formulating and managing evaluations of development activities funded or co-funded by Denmark, including multilateral and NGO activities, to the extent possible in the form of joint evaluations with national authorities in partner countries and development partners.

 

·        Contributing to learning processes within the Ministry of Foreign Affairs and in partner countries by providing feedback to operational departments and management about relevance, impact and operational performance of development activities. Participation in the Programme Committee is a key vehicle for feeding back information to operational departments and embassies.

 

·        Contributing to increased accountability of Danish development assistance by disseminating evaluation results to the Danish public, Parliament, the Ministry’s management and staff, the Danida Board, partner countries, development partners and other interested parties.

 

·        Developing and improving evaluation methodology and guidelines, as well as methods for disseminating results.

 

·        Participating in international co-operation on evaluation, principally in DAC, EU and Nordic contexts.

 

·        Analysing Programme and Project Completion Reports (PCRs), maintaining the filing system for the PCRs, and facilitating more efficient use of lessons learned from the PCRs in the design and development of future development cooperation.

 

·        Contributing to the development of evaluation capacity in partner countries through bilateral and multilateral co-operation.

 

·        Contributing to the development of evaluation capacity in NGOs and the Danish resource base.

 

 

5. Evaluation Programming

 

The Evaluation Department is responsible for preparing two-year rolling programmes on the basis of the strategic and operational needs of Danida and partners. The programme is prepared in consultation with relevant stakeholders – the Ministry of Foreign Affairs, partner authorities and other donors – with a particular view to carrying out evaluations jointly with these partners.

 

The evaluation programme must achieve suitable coverage of geographical areas, large and small partner countries, and thematic areas over time. Evaluations may cover:

 

·        Danish development cooperation with a particular country, or one or more sectors in a partner country, principally in partnership with relevant stakeholders.

 

·        Cross cutting issues (gender equality, environment, human rights and democracy) and priority themes (HIV/AIDS, children and young people, private sector involvement, sexual and reproductive health, etc.) as well as other themes such as planning and implementation of policies, strategies and programmes.

 

·        Development cooperation instruments (programme and project assistance, assistance via NGOs, mixed credits, budget support, humanitarian assistance, research support, fellowship programmes).

 

·        Multilateral development cooperation, principally in co-operation with other donors. Joint donor assessments of the evaluation capacity of multilateral organisations will gradually replace evaluations of multilateral organisations.

 

·        Impact evaluations of ongoing or completed projects and programmes to which Denmark has contributed.

 

The evaluation programme shall contain a brief rationale for each evaluation: the primary objective and the particular features the evaluation shall cover. It will be presented to the Board of Danida and to Parliament for information and comment.

 

6. The Evaluation Process

 

6.1 Preparation

The management and staff of Danida as well as the relevant partners (the country, other development partners, NGOs or multilateral organisations) should be involved in the preparation of evaluations by participating, to the greatest possible degree, in the formulation of Terms of Reference with a view to focusing the evaluation on relevant subjects. Before an evaluation is set in motion, the Evaluation Department prepares, in consultation with relevant interested partners, a description of the evaluation that covers:

 

·        The main objective of the evaluation.

 

·        The scope of the evaluation: whether the evaluation shall cover all or part of a programme or project, a sector, an instrument of development cooperation, or a particular theme (in one or more countries).

 

·        Specific issues or features to be covered by the evaluation.

 

·        Prospective approach and methodology.

 

·        Time schedule.

 

On this basis, the actual Terms of Reference are prepared. The Terms of Reference form the basis for the selection of the evaluation team and the conduct of the evaluation, and shall outline the specific questions the evaluation shall seek to answer.

 

An evaluation should normally consider the following issues:

 

Relevance – Are the development interventions relevant to Danish and partners’ development policies, goals and strategies, as well as to global priorities: poverty reduction, a sustainable environment, gender equality, and democratisation and human rights? Is the activity relevant to the needs and priorities of the intended beneficiaries?

 

Effectiveness – Achievement of objectives: Have the primary objectives identified for the activity been achieved? Have the planned or expected results been achieved?

 

Efficiency – How economically have resources/inputs (funds, expertise, time, etc.) been converted to results? Are the investment and recurrent costs justified? Could the same results have been achieved with fewer resources?

 

Impact – What positive and negative, primary and secondary long-term effects have been produced by a development intervention, directly or indirectly, intended or unintended?

 

Sustainability – What is the probability of long-term benefits? Will the intended benefits continue when development cooperation is terminated? Is local ownership established?

 

6.2 Selection of Evaluation Team

Evaluation teams are selected through international competitive bidding in accordance with prevailing regulations.

 

The criteria for the selection of the evaluation team are professional competence, experience relevant to the task, independence (no conflict of interest) and the quality of the evaluator’s proposal. The team must cover the relevant professional expertise, and professional expertise from the partner country shall as far as possible be represented on the team.

 

6.3 Implementation

Evaluations are typically carried out in three phases. In the first phase the evaluator prepares an operational evaluation plan consistent with the Terms of Reference. The operational evaluation plan details the specific questions, assessment criteria, approach, design, data collection methods, analytical framework, preliminary findings based on documentary review and interviews, provisional conclusions, report outline, and a detailed work plan for the second phase, which may cover further documentation studies, interviews, primary data collection, field observations, etc. Any changes from the Terms of Reference are justified and agreed with the Evaluation Department.

 

Upon approval of the detailed operational plan by the Evaluation Department or, in the case of joint evaluations, the steering committee, the evaluator proceeds to the second phase, which includes collecting, consolidating and analysing data, establishing and clearing findings of fact, formulating conclusions and recommendations, and preparing and clearing the draft report. The third phase entails finalising the report.

 

 

6.4 Independent Reporting

Danida favours a participatory approach to the conduct of evaluations. Danida management and staff, as well as relevant representatives of partners, should be involved in the implementation of evaluations to the greatest extent possible, especially at (but not limited to) key stages of the evaluation process: preparation and planning, clearance of factual findings, and discussion of conclusions and recommendations. This involvement takes place through regular and systematic communication with the Evaluation Department and stakeholders, through meetings, workshops and seminars as needed over the course of the evaluation.

 

The evaluation team has the final responsibility for the contents of the report. Any disagreements within the evaluation team, or between the evaluation team, Danida and relevant partners, that are significant to conclusions and recommendations must be reflected in the report, either as comments in the text, as footnotes, or in a special section.

 

Evaluation reports must be brief and concise, and the presentation must be clear and adjusted to the target group. The normal language is English; where relevant, reports are translated into French, Spanish or Portuguese.

 

 

 


7. Dissemination

 

All evaluations to which Danida is a partner are published in the form of printed reports and summaries, as well as electronically on the Evaluation Department’s website.

 

For every evaluation a brief summary in Danish, English and other relevant languages must be compiled with a view to publication. The summary must cover the most important observations and conclusions of the evaluation. The presentation must be made in language accessible to non-professionals.

 

If an evaluation is thought to be of interest to a broader audience, Danish and local-language versions will be produced (possibly abbreviated) and edited to communicate effectively to readers who are not professional specialists.

 

The Evaluation Department contributes actively to disseminating the evaluation experience of Danida as well as other development organisations via workshops and seminars for staff and partners. Furthermore, the Evaluation Department assists Danida's Centre for Competence Development in the dissemination of evaluation experience.

 

The Evaluation Department will contribute to the incorporation of evaluation experience in policies, strategies, guidelines, etc. Participation in the Programme Committee is a key vehicle for this.

 

The Evaluation Department reports annually directly to the Board of Danida and to the Foreign Affairs Committee of Parliament about the activities of the Evaluation Department, the findings of evaluations, and the follow up on previous years’ evaluations.

 

In addition, the Evaluation Department contributes to Danida’s Annual Report and the Annual Performance Report.

 

8. Follow-up

 

At the conclusion of an evaluation, a follow-up memo is prepared by the relevant department/embassy. The memo records Danida's position on the conclusions and recommendations and identifies which departments are responsible for the agreed follow-up activities. The follow-up memo is discussed in the Programme Committee and signed off by the State Secretary. The Evaluation Department monitors the implementation of the follow-up activities at regular intervals.

 

9. Monitoring Implementation of the Policy

 

Implementation of the Evaluation Policy is guided by Danida’s Evaluation Guidelines[3], which specify in greater detail how evaluations are conducted, the quality standards required and the codes of conduct for parties to an evaluation.

 

Implementation of the policy will be reported in conjunction with the annual report on evaluation activities submitted to the Board of Danida and to the Foreign Affairs Committee of Parliament. Implementation of the evaluation programme will be part of the annual results contract between the Evaluation Department and Management.

 

The Evaluation Policy should be assessed and, if needed, revised after five years.

 

 

 

 



[1] DAC Principles for Aid Evaluation, OECD-DAC, 1991. (The definition was reconfirmed in DAC’s Glossary of Key Terms in Evaluation and Results Based Management, 2002.)

 

[2] DAC Evaluation Standards, OECD/DAC, April 2006

[3] The current guidelines from 1999 will be revised to reflect the updated Evaluation Policy.