FY2020 MOFA ODA Evaluation Results
Review of Past ODA Evaluations (Country Assistance Evaluations) and Study of Country Assistance Evaluation Methodologies
Chief Evaluator | Prof. HAYASHI Kaoru, Faculty of International Studies, Bunkyo University
Consultant | Foundation for Advanced Studies on International Development
Study Target Period | FY2005 to FY2019
Study Implementation Period | October 2020 to March 2021
Background, Objectives, and Scope of Study
Every year, MOFA implements ODA evaluations, focused mainly on the policy level, in order to improve ODA management and ensure public accountability. Third-party evaluations are commissioned externally to further enhance transparency and objectivity. This study examines the Country Assistance Evaluations implemented over the past 15 years from the perspective of applying their results to future ODA policymaking. Its objectives were (1) to organize recommendations that recurred across evaluations or were common to certain regions or attributes (such as small and/or island countries) so that they can be compiled into useful lessons applicable to other countries as well, and (2) to obtain proposals for evaluation methodologies better suited to Country Assistance Evaluations, keeping in mind the methodologies prescribed in the current ODA Evaluation Guidelines, along with recommendations for a standard evaluation framework that can be used for all Country Assistance Evaluations.
Summary of Evaluation Results
(1) Review of Recommendations and Lessons
The reports on the Country Assistance Evaluations implemented from FY2005 to FY2019 (56 reports) contained a total of 442 recommendations and lessons. A comparison in five-year increments shows that the number of recommendations decreased from FY2015 onward and that the recommendations presented in the reports became more concise and easier to understand. Also from FY2015 onward, a growing number of evaluations extracted “Lessons Learned.” However, many of these lessons were rewordings of the recommendations, and the information was not organized in line with the definition of “Lessons.”
(2) Pattern Classification of Recommendations/Lessons
The recommendations and lessons contained in the Country Assistance Evaluation Reports from FY2014 to FY2019 (122 in total) were classified into five categories, subdivided into 24 subcategories, and compared by evaluation implementation period. Although the category-level comparison showed no differences between the implementation periods (FY2003-2013 and FY2014-2019), the subcategory-level comparison revealed differences, such as changes in recommendations relating to “strategies/priorities.”
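As a rough illustration of the kind of cross-tabulation described above, the following Python sketch tallies recommendations by category and implementation period. The records and category names below are hypothetical placeholders, not data from the actual reports.

```python
from collections import Counter

# Hypothetical records of (implementation period, category, subcategory);
# the actual study classified 122 recommendations/lessons into 5 categories
# and 24 subcategories.
records = [
    ("FY2003-2013", "strategies/priorities", "selection of priority areas"),
    ("FY2003-2013", "implementation structure", "coordination among actors"),
    ("FY2014-2019", "strategies/priorities", "alignment with partner plans"),
    ("FY2014-2019", "evaluation methods", "indicator setting"),
]

# Category-level tally for comparing the two implementation periods.
by_category = Counter((period, category) for period, category, _ in records)

# Subcategory-level tally, where finer-grained shifts (e.g., under
# "strategies/priorities") become visible.
by_subcategory = Counter((period, sub) for period, _, sub in records)

for (period, category), count in sorted(by_category.items()):
    print(f"{period}  {category}: {count}")
```

Applying the same tabulation at both levels reflects the finding above: differences between the periods may be invisible at the category level yet apparent at the subcategory level.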
(3) Useful Lessons for ODA Policies
Based on the results of the analysis in (2), recommendations and lessons that were not limited to a specific country or region but were applicable to other countries and regions were extracted into a compilation of lessons. For the evaluations from FY2014 to FY2019, the subcategories containing numerous recommendations and lessons were analyzed with a focus on four aspects: (i) concrete recommendations/countermeasures, (ii) the factors and events that gave rise to the recommendations, (iii) the objectives of the recommendations/countermeasures, and (iv) content “that could serve as a reference for other countries/regions.” From this analysis, seven themes yielding lessons were identified and compiled.
(4) Evaluation Framework and Rating Results
Thirteen Country Assistance Evaluations that implemented ratings were analyzed in terms of the verification items, verification content, and derivation of ratings for each of the three evaluation criteria (Relevance of Policies, Effectiveness of Results, and Appropriateness of Processes). The results of the analysis revealed the following points.
- In many cases, the same evaluator adopted the same pattern of verification items across multiple evaluations. Rather than being set according to the situation of the target country or region, the verification items tended to reflect the approach of the evaluator (the commissioned consultant).
- Although several of the Country Assistance Evaluations implemented sub-ratings, the method of sub-rating varied by evaluation, and the criteria for the sub-ratings were unclear.
- Of the five verification items indicated in the ODA Evaluation Guidelines for “Relevance of Policies,” three were set in all of the evaluations. In some of the evaluations, however, the remaining two were not set, or two or three verification items were condensed into one.
- For “Effectiveness of Results,” the verification method varied more by evaluator than it did for the other evaluation criteria. Furthermore, when it came to verification beyond the outcome level, there were cases in which the objectives stated in the assistance policy documents were not described clearly, and none of the evaluations was found to have verified the achievement of the results of Japan’s assistance against those “Objectives.”
- For “Appropriateness of Processes,” the number of verification items set varied greatly by evaluation, from two to nine, and the evaluations set a total of 22 distinct verification items.
- In some evaluation reports, the actual verification items did not match those indicated in the “Evaluation Framework” established at the start of the evaluation with the agreement of stakeholders.
(5) Timing of Evaluation Implementation and Reflection in ODA Policies
Of the nine Country Assistance Evaluations whose results were effectively utilized in assistance policies, four were followed by the formulation of a Country Development Cooperation Policy (formerly Country Assistance Policy) within two years of the evaluation’s implementation. Bringing the timing of the two closer together would be expected to make it easier to reflect the results of Country Assistance Evaluations in assistance policies.
Recommendations Based on Evaluation Results
(1) Recommendations for Further Utilization of Country Assistance Evaluations in ODA Policymaking
(a) Extraction of Useful Lessons for ODA Policymaking
Information applicable to other countries or regions should be organized and presented in the evaluation reports as “Lessons Learned.” Accordingly, it would be desirable for the ODA Evaluation Guidelines to explain the types of information that should be included as lessons, just as they do for “Recommendations.”
(b) Country Assistance Evaluation Implementation Planning (Timing of Implementation of Evaluations and Selection of Targets)
Instituting a policy of incorporating Country Assistance Evaluations into the revision process for Country Development Cooperation Policies would make the scope of the evaluations more explicit and would be desirable for both evaluators and evaluation users in facilitating a shared understanding. From the viewpoint of strengthening the PDCA cycle, the timing of the Country Assistance Evaluation relative to the revision of the Country Development Cooperation Policy should also be considered when selecting target countries.
(2) Recommendations for More Effective Implementation of Country Assistance Evaluations
(a) Reconsideration of Sub-Ratings
The criteria for sub-ratings need to be defined so that the evaluations rest on clearer evidence and are more convincing to readers.
(b) Clarification of Verification Items
In parallel with (a) above, the items to be verified in Country Assistance Evaluations should be organized, and standard verification items should be proposed in the Guidelines. Clarifying these items and indicating them in the Guidelines would prevent the verification items from varying by evaluator.
(c) Clarification of Objectives and Intervening Logic When Planning Evaluations
With regard to “Effectiveness of Results,” a “road map for achievement of results” should be organized for the purpose of objective verification. The road map should cover the changes aimed at by the Basic Policy of Assistance (primary objective) and the Priority Areas (secondary objectives) indicated in the Country Development Cooperation Policy, as well as the Development Issues (tertiary objectives) indicated in the Rolling Plan; how those objectives can be accomplished; and the external factors that influence them. This process would facilitate the identification of survey items, enable judgment of the results of information gathering and of goal achievement, and support the specification of contributing and constraining factors, thereby clarifying the verification of effectiveness. It could also be expected to yield proposals for realigning the objectives ahead of the revision of the Country Development Cooperation Policy. A simple illustrative model of such a road map follows.
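As one way to make such a road map concrete, the sketch below represents the objective hierarchy as a simple tree in Python. The objective statements and external factors are hypothetical placeholders, not content from any actual Country Development Cooperation Policy, and this is an illustrative model rather than a format prescribed by the Guidelines.

```python
from dataclasses import dataclass, field

@dataclass
class Objective:
    """One node in the results road map."""
    level: str                                        # "primary", "secondary", or "tertiary"
    statement: str                                    # the change aimed at
    external_factors: list[str] = field(default_factory=list)
    children: list["Objective"] = field(default_factory=list)

# Hypothetical road map: Basic Policy -> Priority Area -> Development Issue.
roadmap = Objective(
    level="primary",
    statement="Sustainable and inclusive economic growth",
    children=[
        Objective(
            level="secondary",
            statement="Improved transport infrastructure",
            external_factors=["counterpart budget allocation"],
            children=[
                Objective(
                    level="tertiary",
                    statement="Reduced travel time on trunk roads",
                ),
            ],
        ),
    ],
)

def walk(obj: Objective, depth: int = 0) -> None:
    """Print the hierarchy so survey items can be mapped to each level."""
    print("  " * depth + f"[{obj.level}] {obj.statement}")
    for child in obj.children:
        walk(child, depth + 1)

walk(roadmap)
```

Making the hierarchy explicit in this way makes it straightforward to attach survey items, achievement judgments, and contributing or constraining factors to each level of objectives.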
(d) Management of Evaluation Surveys by the ODA Evaluation Division of MOFA
When the ODA Evaluation Division, which manages the evaluations, checks Country Assistance Evaluation Reports, it should make sure that the actual verification items match those indicated in the evaluation framework. If the verification items were changed from those in the evaluation framework, the reasons for the changes should be confirmed with the evaluation team and explained in the report.
(3) Presentation Format of Evaluation Results Consistent with the Objectives of Country Assistance Evaluations
The desired presentation format of evaluation results varies with the scope of evaluation. For Country Assistance Evaluations, therefore, the presentation format of the results (including whether or not to use ratings) should be determined after the scope of evaluation has been clarified. Although alphabetical or numeric ratings make the results easier to understand, they could also prevent the subjects of evaluation from receiving the results in a constructive manner. The presentation format should accordingly be determined with both of these factors in mind.