2016 Federal Index


Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions, and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY16?

Score
9
Administration for Children and Families (HHS)
  • ACF has established a common evidence framework adapted for the human services context from the framework for education research developed by the U.S. Department of Education and the National Science Foundation. The ACF framework, which includes the six types of studies delineated in the ED/NSF framework, aims to (1) inform ACF’s investments in research and evaluation, and (2) clarify for potential grantees and others the expectations for different types of studies.
  • ACF maintains an online clearinghouse of evidence reviews of human services interventions. These reviews rate the quality of evaluation studies using objective standards vetted by technical experts and applied by trained, independent reviewers, similar to those used by other agencies such as the U.S. Department of Education’s What Works Clearinghouse and the U.S. Department of Labor’s CLEAR. The clearinghouse includes results of the reviews in a searchable format as well as comprehensive details about the review standards and process. Reviews to date have covered teen pregnancy prevention; home visiting; relationship education and responsible fatherhood; and employment and training, and include both ACF-sponsored and other studies.
Score
8
Corporation for National and Community Service
  • CNCS’s Office of Research and Evaluation (R&E) is actively involved with 3 other federal agencies in the interagency Common Evidence Framework working group in order to ensure consistency in definitions and use of evidence standards in grant-making. CNCS uses the Cross-agency Federal Evidence Framework for evaluation planning and dissemination.
  • CNCS also adapted the evidence framework used by its Social Innovation Fund and the Investing in Innovation Fund at ED and included it as part of the AmeriCorps State and National program’s FY16 grant competition. The evidence framework used in the FY16 AmeriCorps competition was revised from FY15 to make it more consistent with those used by other federal agencies.
  • In March 2015, CNCS released Phase 1 of the CNCS Evidence Exchange, a virtual repository of reports intended to help CNCS grantees and other interested stakeholders find information about evidence- and research-based national service and social innovation programs. Phase 1 includes a database of single study reports with some additional descriptive information about each study, as well as a systematic review of the national service evidence base. Phase 2 in FY16 added studies as grantees completed their independent evaluations and submitted reports to CNCS.
Score
8
Millennium Challenge Corporation
  • MCC uses common, rigorous, evidence-based selection criteria to ensure objectivity in country selection for grant awards. To be eligible for selection, countries must first pass the MCC scorecard – a collection of 20 independent, third-party developed indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in its people, and ruling justly. The criteria for passing the scorecard are applied universally to all candidate countries. MCC’s Board of Directors then considers 3 key factors for selecting countries: 1) a country’s performance on the scorecard; 2) the opportunity to reduce poverty and generate economic growth; and 3) availability of funds. An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report.
  • MCC’s model is based on a set of core principles essential for development to take place and for development assistance to be effective – good governance, country ownership, focus on results, and transparency. In pursuing these, MCC has created a Principles into Practice series which describes how to make these principles operational. Finally, all of MCC’s evaluations are published on the MCC Evaluation Catalog. Associated data, upon which evaluations are based, are published when confidentiality concerns are adequately addressed.
  • MCC is also developing an enhanced consolidated results framework that will help it tell the full story of the impact of its programs and enrich programmatic learning. Currently in draft form, the framework will help MCC consolidate impacts across projects, compacts, and sectors to assess overall impact at an organizational level.
Score
8
U.S. Agency for International Development
  • USAID has a scientific research policy that sets out quality standards for its research. USAID’s Program Cycle guidance includes specific evidence standards for strategic planning, project design, monitoring, and evaluation. For example, USAID guidance requires evidence and data to assess the development context, challenges, and opportunities in all of USAID’s country strategies. Similarly, all USAID projects must include a detailed analytical phase in the Project Appraisal Document.
  • USAID does most of its Agency-wide engagement around evidence and frameworks for “what works” through its board membership and funding (along with other donors) of the International Initiative for Impact Evaluation (3ie), which funds impact evaluations and systematic reviews that generate evidence on what works in development programs and policies. Rather than creating a separate “what works” clearinghouse, USAID has chosen to work with 3ie and other development partners to support 3ie’s database of impact evaluations relevant to development topics (over 2,500 entries to date), knowledge gap maps, and systematic reviews that pull the most rigorous evidence and data from across donors. 3ie also houses a collection of policy briefs that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence.
  • USAID technical bureaus provide guidance based on evidence of “what works” by sector that applies to all relevant Agency programs. USAID’s Bureau for Democracy, Conflict and Humanitarian Assistance (DCHA), for example, includes the Center of Excellence on Democracy, Human Rights, and Governance, which publishes evidence-based standards for what works in this field. The DRG Center convenes leading scholars from a range of fields to work with USAID to study, analyze, and assess the effectiveness of its initiatives and programs in DRG, using this data to shape programming. In addition, USAID established the Evaluating Democracy and Governance Effectiveness (EDGE) Initiative, with the objective of supplying and applying sophisticated tools to measure the impact of democracy, human rights, and governance work, and infusing evidence-based programmatic decision-making throughout the DRG portfolio. In another example, USAID’s Global Health Bureau has a strategic framework that presents details in Annex 1 on specific evidence-based strategies, targets, and approaches for achieving goals within each technical area under the health priorities.
  • Several USAID Bureaus also synthesize all the evaluations relevant to a specific sector to summarize key findings and identify gaps in knowledge that then inform sector learning agendas. For example, in March the Bureau for Food Security (BFS) published a synthesis report summarizing findings from 196 evaluations of Feed the Future projects that focused on the six themes outlined in the BFS Learning Agenda. Across the themes, the synthesis illuminated trends and patterns that can be shared with relevant staff and stakeholders engaged in designing new projects or updating sector strategies and policies. The synthesis also identified gaps where more evaluation research is needed, informing the design of future evaluations that address Learning Agenda questions and contribute to the body of knowledge on food security, ultimately improving the design and management of interventions in the agriculture and nutrition sectors.
Score
9
U.S. Department of Education
  • ED’s evidence standards for its grant programs, as outlined in the Education Department General Administrative Regulations (EDGAR), build on ED’s What Works Clearinghouse™ (WWC) evidence standards. ED often includes these evidence standards in its discretionary grant competitions to direct funds to applicants proposing to implement projects that have evidence of effectiveness and/or to build new evidence through evaluation (see Question #8 below for more detail). Additionally, in 2013 IES and the National Science Foundation issued a joint report that describes six types of research studies that can generate evidence about how to increase student learning. These principles are based, in part, on the research goal structure and expectations of IES’s National Center for Education Research (NCER) and National Center for Special Education Research (NCSER). NCER and NCSER communicate these expectations through their Requests for Applications and webinars that are archived on the IES website and available to all applicants.
  • ED’s What Works Clearinghouse™ (WWC) identifies studies that provide credible and reliable evidence of the effectiveness of a given practice, program, or policy (referred to as “interventions”), and disseminates summary information and reports on the WWC website. The WWC has reviewed more than 11,325 studies that are available in a searchable database.
Score
7
U.S. Dept. of Housing & Urban Development
  • HUD’s Office of Policy Development and Research (PD&R) provides evidence of “what works” primarily through HUD USER, a portal and web store for program evaluations, case studies, and policy analysis and research; the Regulatory Barriers Clearinghouse; and initiatives such as Innovation of the Day, Sustainable Construction Methods in Indian Country, and the Consumer’s Guide to Energy-Efficient and Healthy Homes. This content is designed to provide current policy information, elevate effective practices, and synthesize data and other evidence in accessible formats. Through these resources, researchers and practitioners can see the full breadth of work on a given topic (e.g., rigorous established evidence, case studies of what’s worked in the field, and new innovations currently being explored) to inform their work.
Score
9
U.S. Department of Labor
  • DOL uses the Cross-agency Federal Evidence Framework for evaluation planning and dissemination.
  • DOL’s Clearinghouse for Labor Evaluation and Research (CLEAR) is an internet-based evidence clearinghouse of evaluation reports that reviews designs, methodologies, and findings according to specific standards developed by technical work groups. Each study is scored and given a “causal evidence rating” according to the scoring rubric in the standards. CLEAR is a user-friendly, searchable website that includes academic quality reviews for each study in the system, appropriate for peer academic researchers, potential evaluation contractors submitting technical proposals, program practitioners seeking information on “what works,” policymakers, and the general public.
  • DOL uses the CLEAR evidence guidelines and standards when awarding discretionary program grants using evidence-informed or evidence-based criteria. The published guidelines and standards are thus used in grants for evidence-based program demonstrations and in the structured evidence reviews conducted in CLEAR. Requests for proposals also indicate that the CLEAR standards apply to all CEO evaluations. Also, DOL has a “Department Evaluation Policy Statement” that formalizes the principles that govern all program evaluations in the Department, including methodological rigor, independence, transparency, ethics, and relevance. In addition, CEO publicly communicates the standards and methods expected in all DOL evaluations, and the standards are incorporated into formal procurement statements of work, with scoring for awards based on the standards.
  • Additionally, DOL collaborates with other agencies (HHS, ED-IES, NSF, CNCS) on refining cross-agency evidence guidelines and developing technological procedures to link and share reviews across clearinghouses. The Interagency Evidence Framework conveys the categories of evaluations, the quality review of evaluation methodologies and results, and the use of evaluation findings. The framework is accepted Department-wide.

Visit Results4America.org