2016 Federal Index

Millennium Challenge Corporation


Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY16?

  • MCC’s Monitoring and Evaluation (M&E) Division, which falls within the Department of Policy and Evaluation (DPE), has a staff of 23 and an FY16 budget of $20.6 million in due diligence (DD) funds, used directly to measure high-level outcomes and impacts and assess the effects of MCC’s programs and activities. Departments throughout the agency have a total of $75 million in DD funds in FY16. The M&E Managing Director and the Departmental Vice President have the authority to execute M&E’s budget and inform policy decisions affecting independent evaluations. The M&E Managing Director participates in technical reviews of proposed investments and in regular monitoring meetings in order to inform policy and investment decisions. The Vice President sits on the agency’s Investment Management Committee, which examines the evidence base for each investment before it is approved by the MCC Board and exercises regular oversight of the compact (i.e., grant program) development process.
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and research/ learning agenda(s) and did it publicly release the findings of all completed evaluations in FY16?

  • MCC has developed a Policy for Monitoring and Evaluation of Compacts and Threshold Programs to ensure that all programs develop and follow comprehensive Monitoring & Evaluation (M&E) plans that adhere to MCC’s standards. The monitoring component of the M&E Plan lays out the methodology and process for assessing progress towards Compact (i.e., grant) objectives. It identifies indicators, establishes performance targets, and details the data collection and reporting plan to track progress against targets on a regular basis. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Pursuant to MCC’s M&E policy, every project must undergo an independent evaluation and analysis to assess MCC’s impact. Once evaluation reports are finalized, they are published on the MCC Evaluation Catalog. To date, fifty-three interim and final reports have been publicly released, with several additional evaluations expected to be completed and released in the coming months. MCC also produces periodic reports for internal and external consumption on results and learning, and holds agency-wide sessions that help translate evaluation results into lessons learned for future compact development. Finally, in February 2016, MCC launched “NEXT: A Strategy for MCC’s Future,” which outlines new strategic directions, including investing more in strengthening feedback systems to harness this learning for ongoing adaptation of design and implementation, both for MCC’s own effectiveness and for the benefit of country partners and others in the development community. NEXT is designed to be a five-year strategic plan for MCC, but it also serves as MCC’s learning agenda by incorporating agency-wide learning and knowledge goals to be pursued within that timeframe.

Did the agency invest at least 1% of program funds in evaluations in FY16? (Note: Meeting this criterion requires both Agency and Congressional action.)

  • In FY15, M&E invested over $17.1 million in monitoring and evaluation of Compact projects, which amounted to roughly 3% of Compact spending for FY15 ($570.7 million). Calculations are still ongoing for FY16, but MCC expects to disburse similar amounts: Q1 and Q2 FY16 figures (as of March 30) already show M&E investments of $7.5 million.
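As a quick arithmetic check of the 1% benchmark, the ratio implied by the figures above can be computed directly. This is a minimal sketch using the FY15 amounts cited in this section; the function name is ours, not MCC's:

```python
def evaluation_share(me_spending_m: float, program_spending_m: float) -> float:
    """Return M&E spending as a percentage of program spending (both in $ millions)."""
    return 100.0 * me_spending_m / program_spending_m

# FY15 figures cited above: $17.1M in M&E spending vs. $570.7M in Compact spending.
share = evaluation_share(17.1, 570.7)
print(f"M&E share of Compact spending: {share:.1f}%")
```

At roughly 3%, FY15 M&E spending comfortably exceeds the 1% threshold the question asks about.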
Performance Management / Continuous Improvement

Did the agency implement a performance management system with clear and prioritized outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY16?

  • MCC monitors progress towards compact results on a quarterly basis using performance indicators that are specified in the Compact M&E Plans. The M&E Plans specify indicators at all levels (process, output, and outcome) so that progress towards final results can be tracked. Every quarter each country partner submits an Indicator Tracking Table (ITT) that shows actual performance of each indicator relative to the baseline level that was established before the activity began and the performance targets that were established in the M&E Plan. Some of the key performance indicators and their accompanying data by country are publicly available. MCC reviews this data every quarter to assess whether results are being achieved and integrates this information into project management decisions.
  • MCC also supports the creation of multidisciplinary “compact development teams” to manage the development and implementation of each Compact program. Teams usually include a coordinator, an economist, a private sector development specialist, a social inclusion and gender integration specialist, technical specialists (project-specific), an M&E specialist, an environmental and social performance specialist, and legal, financial management, and procurement specialists. From the earliest stages, these teams develop project logics and M&E frameworks supported by data and evidence, and use them to inform the development of the projects within each Compact. Teams meet frequently to gather evidence, discuss progress, make project design decisions, and solve problems, and they are encouraged to use the lessons from completed evaluations to inform their work going forward.
  • MCC hosts regular “colleges” in which MCC counterparts from partnering countries are invited to a weeklong set of meetings and workshops to discuss best practices, strengthen collaboration, and improve strategies for effectively implementing projects.
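The quarterly Indicator Tracking Table comparison described above — actual performance against a pre-activity baseline and an M&E Plan target — can be sketched as follows. The indicator names and figures here are hypothetical, and MCC's actual ITT format is more detailed:

```python
# Each ITT row compares an indicator's actual value to its baseline and target.
# (Hypothetical indicators and values, for illustration only.)
itt = [
    # (indicator, baseline, target, actual)
    ("Households connected to grid", 1000, 5000, 3200),
    ("Km of road rehabilitated",        0,  250,  190),
]

for name, baseline, target, actual in itt:
    # Progress is measured as the share of the baseline-to-target gap achieved.
    progress = 100.0 * (actual - baseline) / (target - baseline)
    print(f"{name}: {progress:.0f}% of target achieved")
```

Tracking the gap achieved, rather than the raw value, keeps indicators with very different starting points comparable in a single quarterly review.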

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) federal, state, and local programs in FY16?

  • MCC’s M&E Division oversees the upload of anonymized evaluation data to MCC’s public Evaluation Catalog. There, partner countries, as well as the general public, can access spreadsheets that show economic rates of return calculations, performance indicator tracking tables, results of independent evaluations for MCC-funded projects, and public use versions of the data used in those evaluations. All evaluation data is meticulously reviewed by MCC’s internal Disclosure Review Board prior to posting to ensure that respondents’ privacy is protected.
  • As part of its Data2x commitment, MCC and other donors are increasing the amount of gender data released and helping to improve international data transparency standards.
  • MCC is also a founding partner of the Governance Data Alliance, a collaborative effort by governance data producers, consumers, and funders to improve the quality, availability, breadth, and use of governance data.
  • MCC also has a partnership with the President’s Emergency Plan for AIDS Relief (PEPFAR) which is helping to increase the availability and quality of development-related data in selected countries. MCC partnered with PEPFAR to create local data hubs that would engage stakeholders around the availability, accessibility and analysis of data. The data hubs have a local board drawn from partner country governments, the private sector and civil society. The hubs will comprise both a physical space for data analysts and other staff and virtual engagement among such stakeholders as donors, foundations, researchers, and NGOs.
  • MCC also hosted a publicly available webinar, “Monitoring and Evaluation in the Water Sector,” in which a presentation was given on MCC’s rigorous evidence-based approach to monitoring and evaluation, followed by a closer look at lessons learned in the water sector and a discussion of ways in which monitoring and evaluation can contribute to aid effectiveness.
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY16?

  • MCC uses common, rigorous, evidence-based selection criteria to ensure objectivity in country selection for grant awards. To be eligible for selection, countries must first pass the MCC scorecard – a collection of 20 independent, third-party developed indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in its people, and ruling justly. The criteria for passing the scorecard are applied universally to all candidate countries. MCC’s Board of Directors then considers three key factors for selecting countries: 1) a country’s performance on the scorecard; 2) the opportunity to reduce poverty and generate economic growth; and 3) the availability of funds. An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report.
  • MCC’s model is based on a set of core principles essential for development to take place and for development assistance to be effective, including good governance, country ownership, and a focus on results. In pursuing these principles, MCC has created a Principles into Practice series that describes how to make them operational. Finally, all of MCC’s evaluations are published on the MCC Evaluation Catalog. Associated data, upon which evaluations are based, are published when confidentiality concerns are adequately addressed.
  • MCC is also developing an enhanced consolidated results framework that will assist it in telling the full picture of the impact of its programs and enrich programmatic learning. Currently in draft form, the framework will help MCC consolidate impacts across projects, compacts and sectors to assess an overall impact at an organizational level.
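The scorecard logic described above can be sketched in code. The pass rule shown here — above the peer-group median on at least half of the indicators, including the corruption indicator — is a deliberate simplification for illustration; the indicator names and scores are invented, and MCC's published Selection Criteria and Methodology report governs the actual rules:

```python
from statistics import median

def passes_scorecard(scores: dict[str, float],
                     peer_scores: dict[str, list[float]]) -> bool:
    """Simplified sketch of a scorecard check: a country 'passes' an indicator
    if it scores above the median of its income peer group, and passes the
    scorecard if it passes at least half of all indicators, including
    'control_of_corruption'. (Illustrative only; MCC's actual methodology
    includes additional hard hurdles.)"""
    passed = {ind: score > median(peer_scores[ind]) for ind, score in scores.items()}
    return passed.get("control_of_corruption", False) and \
           sum(passed.values()) >= len(passed) / 2

# Hypothetical country with three indicators (the real scorecard uses 20).
peers = {"control_of_corruption": [0.2, 0.4, 0.6],
         "fiscal_policy":         [1.0, 2.0, 3.0],
         "girls_education":       [50.0, 60.0, 70.0]}
country = {"control_of_corruption": 0.5, "fiscal_policy": 1.5, "girls_education": 65.0}
print(passes_scorecard(country, peers))
```

Making the corruption indicator a hard gate mirrors the idea that some criteria cannot be offset by strong performance elsewhere.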

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY16?

  • In September 2014, MCC’s Monitoring and Evaluation division launched the agency’s first Open Data Challenge, a call to action for master’s and PhD students in economics, public policy, international development, and related fields who were interested in exploring how to use publicly available, MCC-financed primary data for policy-relevant analysis. The Challenge was intended to facilitate broader use of MCC’s US-taxpayer-funded data. Due to the success of the first Open Data Challenge, a second was launched in February 2016 to encourage innovative ideas and maximize the use of the data that MCC finances for its independent evaluations.
  • MCC is launching a gender data competition in Côte d’Ivoire in partnership with the Data2x initiative of the UN Foundation and the World Wide Web Foundation. The competition and larger partnership will spur interest in, creative use of, and new learning from data related to women and girls.
  • In 2014, MCC launched an internal “Solutions Lab” that was designed to encourage innovation by engaging staff to come up with creative solutions to some of the biggest challenges MCC faces.
  • MCC is conducting an “Innovation Grant Program” in Zambia in order to encourage local innovation in pro-poor service delivery in the water sector through grants to community-based organizations, civil society and/or private sector entities.
  • MCC regularly implements pilot projects as part of its overall Compact programs. A few examples include: 1) in Morocco, an innovative pay-for-results (PFR) mechanism to replicate or expand proven programs that provide integrated support, including short-term (one to six months) job readiness skills training, technical training, job matching, follow-up to ensure longevity, and other services; 2) a “call for ideas” in Benin in 2015 that invited interested companies and organizations from around the world to submit information on potential projects that would expand access to renewable off-grid electrical power in Benin; and 3) a regulatory strengthening project in Sierra Leone that includes funding for a results-based financing system designed to strengthen the regulator’s role, incentivize performance by the utilities, and enhance accountability.
Use of Evidence in 5 Largest Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY16?

  • MCC awards all of its agency funds through two competitive grant programs: the Compact and Threshold programs (whose FY16 budgets were $667 million and $30 million, respectively). Both require demonstrable, objective evidence to support the likelihood of success in order for funds to be awarded. For country partner selection, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These indicators (see MCC’s FY2016 Guide to the Indicators) are collected by independent third parties. When considering granting a second compact, MCC weighs 1) the degree to which there is evidence of strong political will and management capacity; 2) the degree to which the country has exhibited commitment and capacity to achieve program results; and 3) the degree to which the country has implemented the compact in accordance with MCC’s core policies and standards.
  • Following country selection, MCC conducts a constraints analysis (CA) to identify the most binding constraints to private investment and entrepreneurship that hold back economic development. The results of this analysis enable the country, in partnership with MCC, to select compact or threshold activities most likely to contribute to sustainable, poverty-reducing growth. Due diligence, including feasibility studies where applicable, is conducted for each potential investment. MCC also performs a cost-benefit analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR). MCC projects generally have an ERR above 10% at project inception, and MCC recalculates ERRs at compact closeout in order to test original assumptions and assess the cost effectiveness of MCC programs. In connection with the ERR, MCC conducts a Beneficiary Analysis, which seeks to describe precisely which segments of society will realize the project benefits. It is most commonly used to assess the impact of projects on the poor, but it has broader applicability that allows for the estimation of impact on populations of particular interest, such as women, the aged, children, and regional or ethnic sub-populations. In line with MCC’s M&E policy, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets. MCC also requires independent evaluations of every project to assess progress in achieving outputs and outcomes throughout the lifetime of the project and beyond.
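The Economic Rate of Return mentioned above is, conceptually, the discount rate at which a project's stream of net benefits sums to zero. A minimal sketch using a bisection search over a hypothetical net-benefit stream (MCC's actual economic models are far more detailed):

```python
def npv(rate: float, flows: list[float]) -> float:
    """Net present value of yearly net-benefit flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def err(flows: list[float], lo: float = 0.0, hi: float = 1.0, tol: float = 1e-6) -> float:
    """Find the rate where NPV crosses zero via bisection.
    Assumes NPV is positive at `lo` and negative at `hi`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical project: $10M up-front cost, then $2M/year in net benefits for 10 years.
flows = [-10.0] + [2.0] * 10
rate = err(flows)
print(f"ERR: {rate:.1%}")  # compare against MCC's 10% hurdle at project inception
```

Recalculating with the actual (rather than projected) flows at compact closeout, as the bullet describes, amounts to re-running the same computation on observed data.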
Use of Evidence in 5 Largest Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest non-competitive grant programs in FY16? (Note: Meeting this criterion requires both Agency and Congressional action.)

  • MCC does not administer non-competitive grant programs.
Repurpose for Results

In FY16, did the agency shift funds away from any practice, policy, or program which consistently failed to achieve desired outcomes? (Note: Meeting this criterion requires both Agency and Congressional action.)

  • MCC has established a Policy on Suspension and Termination that describes the process and procedures for suspending and terminating MCC assistance when partner countries are not living up to their commitments. MCC has suspended or terminated a compact partnership, in part or in full, seven times out of the 33 compacts approved to date, and has suspended partner country eligibility to develop a compact an additional four times (most recently with the suspension of Tanzania in March 2016). In 2012, MCC suspended Malawi’s Compact due to a pattern of actions by the Government of Malawi that was inconsistent with the democratic governance criteria that MCC uses for selection. However, the Government of Malawi took a number of decisive steps to improve the human rights environment and to ensure that laws and institutions support democratic rights and processes. These steps and the resumption of sound economic policy led to the reinstatement of Malawi’s Compact later in 2012.
  • MCC also consistently monitors the progress of Compact programs, and makes changes as necessary. For example, an activity in the Philippines, the Electronic Tax Information System (eTIS), an activity under the Revenue Administration Reform Project, was reduced in scope in FY15 due to time and completion risks. This proactive approach allowed MCC to judiciously reallocate funds to finance additional subprojects under the Kalahi-CIDSS Community-Driven Development Project (K-C) and further maximize the project’s benefits.

Visit Results4America.org