2016 Federal Index
U.S. Agency for International Development
Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY16?
- USAID’s Office of Learning, Evaluation and Research (LER) in the Bureau for Policy, Planning, and Learning (PPL) provides guidance, tools, and technical assistance to USAID staff and partners to support monitoring, evaluation, and learning practices, some of which can be found online. The LER Director oversaw approximately 20 staff and a $17.5 million budget in FY15. (The FY16 budget is estimated to be close to the same level as in FY15.)
- LER holds several contracts that USAID missions and offices can use for building staff capacity in monitoring, evaluation, and learning, and for commissioning evaluations and monitoring services. For example, LER manages the Monitoring and Evaluation Services Indefinite Delivery Indefinite Quantity (EVAL-ME IDIQ) contract, which allows missions, using their own funds, to competitively bid statements of work among 14 pre-approved companies selected for their monitoring and evaluation capabilities, shortening and simplifying the process for contracting an independent evaluation team. LER also manages a classroom training program in monitoring and evaluation for USAID staff.
- The LER Director participates in the USAID Administrator’s Leadership Council (ALC), a senior level bi-weekly meeting chaired by the USAID Administrator and attended by Assistant Administrators and select Agency Senior Staff, when the agenda includes issues related to evaluation. The LER Director also informs policy decisions across the agency by providing input into working groups and reviewing statements, draft memos and other policy products.
- One of LER’s primary objectives is to build USAID’s capacity in the field of Monitoring, Evaluation and Learning.
- For example, under a contract to build Monitoring and Evaluation capacity at USAID (MECap), individual USAID Offices and Missions can access Monitoring & Evaluation Fellows and Learning Fellows. These fellows work with a specific mission or office for six months to two years. MECap can also field experts for short-term technical assistance for a specific monitoring or evaluation-related task, like evaluation design or developing a mission-wide performance management plan. Another contract held by LER, LEARN, provides support to missions to more intentionally learn from monitoring, evaluation, and experience and to apply that learning. To build staff capacity in designing or commissioning impact evaluations funded by missions or offices, LER has hosted clinics on Impact Evaluation to provide USAID field Missions with tools, resources, and hands-on support to design an impact evaluation for a future program activity. In addition to providing general capacity-building services in the form of training, clinics, technical assistance, and fellowships, LER staff occasionally manage evaluations directly or participate on evaluation teams for evaluations funded by LER or by other parts of the Agency. LER also coordinates several cross-agency working groups organized to support Learning champions and monitoring and evaluation specialists throughout the Agency.
Evaluation & Research
Did the agency have an evaluation policy, evaluation plan, and research/ learning agenda(s) and did it publicly release the findings of all completed evaluations in FY16?
- USAID has an agency-wide Evaluation Policy. The agency just released a report to mark the five-year anniversary of the policy.
- USAID field missions are required to have an evaluation plan, and all USAID missions and offices provide an internal report on an annual basis on completed, ongoing, and planned evaluations, including evaluations planned to start anytime in the next three fiscal years. USAID provides a Performance Management Plan (PMP) Toolkit to assist missions worldwide.
- Given USAID’s decentralized structure, individual programs, offices, bureaus, and missions may develop learning agendas, which several have done, including USAID’s Bureau for Food Security for the US government’s Feed the Future initiative and USAID’s Democracy, Human Rights, and Governance (DRG) Center. All Washington Bureaus have annual evaluation action plans that look at quality and use and identify challenges and priorities for the year ahead.
- All final USAID evaluation reports are available on the Development Experience Clearinghouse except for approximately five percent of evaluations completed each year that are not public due to principled exceptions to the presumption in favor of openness guided by OMB Bulletin 12-01 Guidance on Collection of U.S. Foreign Assistance Data.
- USAID is currently updating its operational policy for planning and implementing country programs. A key change in the policy is that missions will include a learning plan as part of their five-year strategic plan, also known as the CDCS. The plan will outline how missions will incorporate learning into their programming, including activities like regular portfolio reviews, evaluation tracking and dissemination plans, and other analytic processes to better understand the dynamics of their programs and their country contexts.
Did the agency invest at least 1% of program funds in evaluations in FY16? (Note: Meeting this criteria requires both Agency and Congressional action.)
- In FY15, USAID missions and offices reported completing 244 evaluations with resources totaling approximately $69.3 million and managing another 251 ongoing evaluations, many that span more than one year, with total ongoing budgets estimated to reach $168.9 million. Overall spending on evaluations completed or ongoing in FY15 ($238.2 million) represents about 1.1% of USAID’s $21.1 billion FY15 program budget.
- This amount does not include the Office of Learning, Evaluation, and Research budget which primarily focuses on evaluation capacity building and technical assistance ($17.5 million FY15) or the investment in the Demographic and Health Surveys (DHS) ($189 million total in FY13-FY18) or surveys funded by other sector programs that often make up some of the underlying data used in many evaluations.
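The 1.1% figure reported above can be checked directly from the dollar amounts stated in this section; a quick back-of-the-envelope calculation (all values taken from the text above):

```python
# Verify that FY15 evaluation spending is roughly 1.1% of program funds,
# using the figures reported in this section.
completed = 69.3e6       # evaluations completed in FY15
ongoing = 168.9e6        # budgets of ongoing evaluations
program_budget = 21.1e9  # USAID FY15 program budget

total = completed + ongoing      # $238.2 million
share = total / program_budget   # fraction of program funds

print(f"${total / 1e6:.1f} million is {share:.1%} of the program budget")
```

Running this prints a share of about 1.1%, matching the figure in the text.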
Performance Management / Continuous Improvement
Did the agency implement a performance management system with clear and prioritized outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY16?
- USAID partners with the U.S. Department of State to jointly develop and implement clear strategic goals and objectives. USAID’s Performance Improvement Officer (PIO) leads Agency efforts to use data for decision-making and improve performance and operational efficiency and effectiveness. The Assistant Administrator for the Management Bureau, Angelique M. Crumbly, also serves as the Performance Improvement Officer. The Office of Management and Budget’s circular A-11 “Preparation, Submission, and Execution of the Budget,” Part Six describes the role of the PIO. Specifically, the PIO coordinates tracking of Cross Agency Priority (CAP) and Agency Priority Goal (APG) progress; leverages stat reviews, such as PortfolioStat, HRStat, and CyberStat, to conduct deep-dives into evidence; and oversees business process reviews and other assessments to ensure that the Agency more efficiently and effectively achieves its mission and goals.
- USAID’s strategic plan, annual performance plan and report, and other performance reports are publicly available:
- USAID reports on three Agency Priority Goals and nine Cross Agency Priority Goals on performance.gov. These goals help the Agency improve performance and cut costs, while holding the Agency accountable to the public. USAID assesses progress and challenges toward meeting the goals annually during data-driven reviews with Agency leadership. USAID also measures progress toward its USAID Forward reform agenda through eight public indicators, which help the Agency adapt business processes to improve performance.
- USAID field missions develop Country Development Cooperation Strategies (CDCS) with clear goals and objectives and a performance management plan that identifies expected results, performance indicators to measure those results, plans for data collection and analysis, and periodic review of performance measures to use data and evidence to adapt programs for improved outcomes.
- In addition to measuring program performance, USAID measures operations performance management to ensure that the Agency achieves its development objectives; aligns resources with priorities; and institutionalizes USAID Forward reforms.
Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) federal, state, and local programs in FY16?
- USAID has an open data policy which:
- Establishes the Development Data Library (DDL) as the Agency’s repository of USAID-funded, machine readable data created or collected by the Agency and its implementing partners;
- Requires USAID staff and implementing partners (via associated changes to procurement instruments) to submit datasets generated with USAID funding to the DDL in machine-readable, non-proprietary formats;
- Implements a data tagging protocol in keeping with the President’s Executive Order and Office of Management and Budget policy on Open Data;
- Defines a data clearance process to ensure that USAID makes as much data publicly available as possible, while still affording all protections for individual privacy, operational and national security, and other considerations allowable by law; and
- Ensures data is updated quarterly, at minimum.
- In November 2011, the United States became a signatory to the International Aid Transparency Initiative (IATI). IATI developed a standard for publishing foreign assistance spending data that allows for comparison across donors. Publish What You Fund (PWYF), a United Kingdom-based nongovernmental organization advocating for greater aid transparency, assesses 60+ bilateral and multilateral donors’ overall commitment to aid transparency and the information they publish in an annual Aid Transparency Index (ATI). In 2014, USAID ranked 31st out of 68 donors and was at the bottom of the “Fair” category. In July 2015, USAID produced a cost management plan (CMP) in order to improve its reporting to IATI and, thereby, improve the Agency’s score in the ATI. The plan elaborates on the necessary requirements (for example, political movement/discussions, technical work, system upgrades) and the estimated timeline for implementation to advance in these areas. Recognizing that the level of effort involved with the improvements varies greatly, the CMP outlines a four-phased approach. USAID is already seeing results: its score in PWYF’s 2015 Aid Transparency Review jumped by more than 20 points, propelling USAID to the “Good” category.
- USAID continues to expand the data it publishes on ForeignAssistance.gov (The Foreign Assistance Dashboard) and the International Aid Transparency Initiative. USAID recently launched the Foreign Aid Explorer which shares 40 years of data through an easy to navigate website. USAID publishes its core datasets, as well as program specific data, in application program interface (API) formats. In 2014, USAID also began publicly sharing data files and its open data plan through its new Open Government website as part of the U.S. Government’s open data initiative.
- The USAID GeoCenter uses data and analytics to improve the effectiveness of USAID’s development programs by geographically assessing where resources will maximize impact. The GeoCenter team works directly with field missions and Washington-based bureaus to integrate geographic analysis into the strategic planning, design, monitoring, and evaluation of USAID’s development programs. To date, the GeoCenter has leveraged $32 million worth of high-resolution satellite imagery for development projects, at no cost to the Agency.
- USAID’s Economic Analysis and Data Services (EADS) unit has a public website to share data and also provides data analysis upon request. In particular, the International Data and Economic Analysis part of EADS provides USAID staff, partners, and the public with analytical products and a platform for querying data.
- USAID uses data to inform policy formulation, strategic planning, project design, project management and adaptation, program monitoring and evaluation, and learning what works. The Program Cycle is USAID’s particular framing and terminology to describe this set of processes and the use of data and evidence to inform decisions is a key part of the process.
- USAID’s Monitoring Country Progress (MCP) system is an empirical analytical system which tracks and analyzes country progress along five dimensions: (1) economic reforms; (2) governing justly and democratically; (3) macro-economic performance; (4) investing in people; and (5) peace and security. It is used to facilitate country strategic planning including country graduation from USG foreign assistance programs.
- USAID has also begun publishing funding data alongside program results on the Dollars to Results page of the USAID website. Dollars to Results provides information on USAID’s impact around the world by linking annual spending (inputs) to results (outputs and outcomes) in some of the more than 100 developing countries where we work. There are plans to expand Dollars to Results in the future. Due to the nature of foreign assistance programs, it is difficult to directly link Fiscal Year disbursements to Fiscal Year results. There is often a time lag between when a dollar is disbursed and when a result is achieved from that investment. For example, if USAID builds a school, most of the spending takes place in the first several years of the project as construction begins. However, results may not be achieved until years later when the school opens and classes begin. Results shown on the website give a snapshot of the types of results achieved by USAID.
- To help inform the U.S. Government’s aid transparency agenda, USAID conducted three aid transparency country pilot studies in Zambia (May 2014), Ghana (June 2014), and Bangladesh (September 2014). The country pilots assessed the demand for and relevance of information that the U.S. Government is making available, as well as the capacity of different groups to use it. The final report summarizes findings from the three pilots and provides recommendations to help improve the transmission of foreign assistance data to ensure that the transparency efforts of the U.S. Government create development impact.
Common Evidence Standards / What Works Designations
Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY16?
- USAID has a scientific research policy that sets out quality standards for research. USAID’s Program Cycle guidance includes specific evidence standards for strategic planning, project design, monitoring, and evaluation. For example, USAID has guidance that requires evidence and data to assess the development context, challenges, and opportunities in all of USAID’s country strategies. Similarly, all USAID projects must include a detailed analytical phase in the Project Appraisal Document.
- USAID does most of its Agency-wide engagement around evidence and frameworks for “what works” through its board membership and funding (along with other donors) of the International Initiative for Impact Evaluations (3ie), which funds impact evaluations and systematic reviews that generate evidence on what works in development programs. Rather than creating a separate “what works” clearinghouse, USAID has chosen to work with 3ie and other development partners to support 3ie’s database of impact evaluations relevant to development topics (over 2,500 entries to date), knowledge gap maps, and systematic reviews that pull the most rigorous evidence and data from across donors. 3ie also houses a collection of policy briefs that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence.
- USAID technical bureaus provide guidance based on evidence of “what works” by sector that applies to all relevant Agency programs. USAID’s Bureau for Democracy, Conflict and Humanitarian Assistance (DCHA), for example, includes the Center of Excellence on Democracy, Rights, and Governance, which publishes evidence-based standards for what works in this field. The DRG Center convenes leading scholars from a range of fields to work with USAID to study, analyze, and assess the effectiveness of its initiatives and programs in DRG, using this data to shape programming. In addition, USAID established the Evaluating Democracy and Governance Effectiveness (EDGE) Initiative, with the objective to supply and apply sophisticated tools to measure the impact of democracy, human rights, and governance work, and to infuse evidence-based programmatic decision-making throughout the DRG portfolio. In another example, USAID’s Global Health Bureau has a strategic framework that presents details in Annex 1 on specific evidence-based strategies, targets, and approaches for achieving goals within each technical area under the health priorities.
- Several USAID Bureaus also synthesize all the evaluations relevant to a specific sector to summarize key findings and identify gaps in knowledge that then inform sector learning agendas. For example, in March, the Bureau for Food Security (BFS) published a synthesis report summarizing findings from 196 evaluations of Feed the Future projects that focused on the six themes outlined in the BFS Learning Agenda. Across the themes, the synthesis illuminated trends and patterns that can be shared with relevant staff and stakeholders engaged in designing new projects or updating sector strategies and policies. The synthesis also identified gaps where more evaluation research is needed, helping to inform the design of future evaluations that can contribute to the body of knowledge on food security and improve the design and management of interventions in the agriculture and nutrition sectors by specifically addressing Learning Agenda questions.
Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY16?
- USAID established the U.S. Global Development Lab (the Lab) in 2014 to increase the application of technology, innovation, and partnerships to extend the Agency’s development impact in helping to end extreme poverty. The Lab does this by working closely with colleagues across the Agency and by bringing together a diverse set of partners to discover, test, and scale breakthrough innovations to solve development challenges faster, cheaper, and more sustainably. The Lab is the home of the Monitoring, Evaluation, Research and Learning Innovations program (MERLIN), which sources, co-designs, implements, and tests solutions that innovate on traditional approaches to monitoring, evaluation, research, and learning.
- USAID has also launched six grand challenges to engage the public in the search for solutions to development problems.
- The Development Innovation Ventures (DIV) awards grant financing to winners in three distinct stages of financing. Funding ranges from under $100,000 to $15 million, and is based on where a project is in its development and to what extent it has previously gathered evidence of success. The DIV model is designed to find breakthrough solutions, minimize risk and maximize impact through stage financing, rigorously test impacts and cost effectiveness, and scale proven solutions through the public or private sectors.
Use of Evidence in 5 Largest Competitive Grant Programs
Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY16?
- USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the Agency’s funding. USAID has rebuilt its planning, monitoring, and evaluation framework to produce and use evidence through the introduction of a new Program Cycle, which systematizes the use of evidence across all decision-making regarding grants and all of USAID’s work. The Program Cycle is USAID’s particular framing and terminology to describe a common set of processes intended to achieve more effective development interventions and maximize impacts. The Program Cycle acknowledges that development is not static and is rarely linear, and therefore stresses the need to assess and reassess through regular monitoring, evaluation, and learning. Thus the different components of the Program Cycle mutually reinforce each other by having learning and adapting integrated throughout. The Program Cycle encourages planning and project management innovations to increase the cost-effectiveness and lasting impact of development cooperation.
- In 2013, USAID reformed its policy for awarding new contracts to elevate past performance to comprise 20 to 30 percent of the non-cost evaluation criteria. For assistance, USAID does a “risk assessment” to review an organization’s ability to meet the goals and objectives outlined by the Agency. This can be found in ADS 303, section 303.3.9. Contractor performance is guided by USAID operational policy ADS 302, section 302.3.8.7. As required in FAR Subpart 42.15, USAID must evaluate contractor performance using the Contractor Performance Assessment Reporting System (CPARS). Information in CPARS, while not available to the public, is available for Contracting Officers across the Government to use in making determinations of future awards.
- USAID has also instituted a policy called the Acquisition and Assistance Review and Approval Document (AARAD) process, under which all contracts, grants, and cooperative agreements over $75 million are reviewed by the Administrator prior to being awarded, and all awards over $25 million are reviewed by the relevant Assistant Administrator. Included in the AARAD review are several key factors: Policy Relevance, Commitment to Sustainable Results, Feasibility, and Value for Money. This policy ensures that results, evidence, and long-term strategies are incorporated into all of USAID’s major programs. In addition, it ensures senior-level accountability on USAID’s biggest programs. This policy is outlined in ADS 300. USAID guidance for competitive grants is also available online.
- The Development Innovation Ventures program ($22.4 million in FY16) provides funding for proof of concept through rigorous evaluation of innovative solutions, and scale-up funding when a solution is proven to work. DIV’s approach is unique in three ways:
- DIV recognizes that good ideas can come from anywhere, so it welcomes a wide range of potential partners to propose their concepts for high-impact development.
- Borrowing from the experience of venture capital, DIV takes advantage of a staged financing model. It pilots promising new ideas with small amounts of money and scales only those solutions that rigorously demonstrate their impact.
- DIV emphasizes a high standard of evidence, including the use of impact evaluations and randomized control trials whenever possible.
Use of Evidence in 5 Largest Non-Competitive Grant Programs
Did the agency use evidence of effectiveness when allocating funds from its 5 largest non-competitive grant programs in FY16? (Note: Meeting this criteria requires both Agency and Congressional action.)
- USAID does not administer non-competitive grant programs.
- USAID does contribute funding to multilateral institutions known as Public International Organizations (PIOs), which are listed here, and include the World Bank, the UN, and multi-donor funds such as the Global Fund. A Public International Organization (PIO) is an international organization composed principally of countries. In these specific cases, USAID funds are part of overall US Government funding for these partner institutions. These funds become subject to the monitoring and evaluation requirements of the organization that receives them. For example, the Global Fund has a performance-based funding system, which bases funding decisions on a transparent assessment of results against time-bound targets. USAID’s ADS chapter 308 provides more information on how PIOs are defined and includes guidance related to the due diligence required prior to awarding grants to PIOs.
Repurpose for Results
In FY16, did the agency shift funds away from any practice, policy, or program which consistently failed to achieve desired outcomes? (Note: Meeting this criteria requires both Agency and Congressional action.)
- USAID uses rigorous evaluations to maximize its investments. A recent independent study found that 71 percent of USAID evaluations have been used to modify and/or design USAID projects. Below are a few examples where USAID has shifted funds and/or programming decisions based on performance:
- Mozambique: Many donors working in the education sector in Mozambique were using traditional reading programs to improve early grade reading. USAID recently designed an impact evaluation to test whether reading interventions alone, or reading interventions paired with school management support, led to improved reading outcomes. Findings from a mid-term impact evaluation showed that pairing reading instruction interventions with school management support improved reading outcomes more than reading instruction alone, and was more cost effective. Based on these findings, USAID/Mozambique changed the way it worked in the country, and the findings prompted the Government of Mozambique to request that this approach be scaled from 120 schools to 1,060 new schools. More information can be found in the recently published report on USAID evaluation practice.
- Armenia: A 2013 mid-term evaluation of USAID/Armenia’s flagship health program revealed a number of significant design and implementation flaws, which prompted the Mission to terminate the program early, saving USG resources. Since then, USAID/Armenia has redesigned its health portfolio to focus on areas where it can make a difference and leave a positive legacy as it phases out of the sector.
- Latin America and Caribbean Bureau: USAID’s Latin America and Caribbean (LAC) Bureau no longer funds expensive out-of-country scholarship programs such as the Scholarship for Education and Economic Development (SEED) and other precursors. A 2013 evaluation of the Latin America and Caribbean region’s Higher Education Scholarships Program looked at the cost-effectiveness of providing students with scholarships to study at US institutions and determined that USAID could provide improved training opportunities for many more poor youth by focusing resources on improving the quality of LAC regional or in-country training institutions. This finding informed a redesign of the program and the issuing of a new Request for Applications (RFA).
- Indonesia: In 2013, USAID/Indonesia changed the geographic targeting of a forestry program based on a USAID-commissioned evaluation that found that the program was spread among too many geographic locations and could be more effective by focusing on fewer. This example can be found in the recently published independent study on evaluation use at USAID.
- In 2011, a USAID-commissioned evaluation of a USAID/Afghanistan road rehabilitation program found that cooperative agreements and grants are not as effective implementing mechanisms as contracts in terms of the levels of implementing partner accountability to USAID, particularly with regard to infrastructure activities. In part as a result of this evaluation’s findings, in 2013 USAID released a new operating policy, entitled “USAID Implementation of Construction Activities,” that mandates the use of contracts rather than grant or cooperative agreement mechanisms for projects that involve construction.