Identifying and Implementing Educational Practices Supported By Rigorous Evidence: A User Friendly Guide

Purpose and Executive Summary

This Guide seeks to provide educational practitioners with user-friendly tools to distinguish practices supported by rigorous evidence from those that are not.

The field of K-12 education contains a vast array of educational interventions - such as reading and math curricula, schoolwide reform programs, after-school programs, and new educational technologies - that claim to be able to improve educational outcomes and, in many cases, to be supported by evidence. This evidence often consists of poorly-designed and/or advocacy-driven studies. State and local education officials and educators must sort through a myriad of such claims to decide which interventions merit consideration for their schools and classrooms. Many of these practitioners have seen interventions, introduced with great fanfare as being able to produce dramatic gains, come and go over the years, yielding little in the way of positive and lasting change - a perception confirmed by the flat achievement results over the past 30 years in the National Assessment of Educational Progress long-term trend.

The federal No Child Left Behind Act of 2001, and many federal K-12 grant programs, call on educational practitioners to use "scientifically-based research" to guide their decisions about which interventions to implement. As discussed below, we believe this approach can produce major advances in the effectiveness of American education. Yet many practitioners have not been given the tools to distinguish interventions supported by scientifically-rigorous evidence from those that are not. This Guide is intended to serve as a user-friendly resource that educational practitioners can use to identify and implement evidence-based interventions, so as to improve educational and life outcomes for the children they serve.

If practitioners have the tools to identify evidence-based interventions, they may be able to spark major improvements in their schools and, collectively, in American education.

As illustrative examples of the potential impact of evidence-based interventions on educational outcomes, the following have been found to be effective in randomized controlled trials - research's "gold standard" for establishing what works:

  • One-on-one tutoring by qualified tutors for at-risk readers in grades 1-3 (the average tutored student reads more proficiently than approximately 75% of the untutored students in the control group).1
  • Life-Skills Training for junior high students (low-cost, replicable program reduces smoking by 20% and serious levels of substance abuse by about 30% by the end of high school, compared to the control group).2
  • Reducing class size in grades K-3 (the average student in small classes scores higher on the Stanford Achievement Test in reading/math than about 60% of students in regular-sized classes).3
  • Instruction for early readers in phonemic awareness and phonics (the average student in these interventions reads more proficiently than approximately 70% of students in the control group).4

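The percentile comparisons above (e.g., the average tutored student reading more proficiently than approximately 75% of the untutored control group) can be related to a standardized effect size under a simplifying assumption of normally distributed outcomes with equal variance in both groups. This assumption is ours, not the Guide's; the sketch below only illustrates the arithmetic behind such "better than X% of controls" statements:

```python
from statistics import NormalDist

def percentile_outperformed(effect_size: float) -> float:
    """Percentage of the control group scoring below the average
    treated student, assuming normal outcomes with equal variance
    (a simplifying assumption, not stated in the Guide)."""
    return NormalDist().cdf(effect_size) * 100

def effect_size_from_percentile(pct: float) -> float:
    """Inverse mapping: the standardized effect size implied by a
    'better than pct% of controls' claim, under the same assumption."""
    return NormalDist().inv_cdf(pct / 100)

# A "better than ~75% of controls" result implies a standardized
# effect size of roughly two-thirds of a standard deviation.
print(round(effect_size_from_percentile(75), 2))
```

Under this reading, the tutoring result corresponds to a substantially larger standardized effect than the class-size result (~60% of controls outperformed).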
    In addition, preliminary evidence from randomized controlled trials suggests the effectiveness of:

  • High-quality, educational child care and preschool for low-income children (by age 15, reduces special education placements and grade retentions by nearly 50% compared to controls; by age 21, more than doubles the proportion attending four-year college and reduces the percentage of teenage parents by 44%).5

    Further research is needed to translate this finding into broadly-replicable programs shown effective in typical classroom or community settings.

The fields of medicine and welfare policy show that practice guided by rigorous evidence can produce remarkable advances.

Life and health in America have been profoundly improved over the past 50 years by the use of medical practices demonstrated effective in randomized controlled trials. These research-proven practices include: (i) vaccines for polio, measles, and hepatitis B; (ii) interventions for hypertension and high cholesterol, which have helped reduce coronary heart disease and stroke by more than 50 percent over the past half-century; and (iii) cancer treatments that have dramatically improved survival rates from leukemia, Hodgkin's disease, and many other types of cancer.

Similarly, welfare policy, which since the mid-1990s has been remarkably successful in moving people from welfare into the workforce, has been guided to a large extent by scientifically-valid knowledge about "what works" generated in randomized controlled trials.6

Our hope is that this Guide, by enabling educational practitioners to draw effectively on rigorous evidence, can help spark similar evidence-driven progress in the field of education.

The diagram on the next page summarizes the process we recommend for evaluating whether an educational intervention is supported by rigorous evidence.

In addition, appendix B contains a checklist to use in this process.

 


How to evaluate whether an educational intervention is supported by rigorous evidence: An overview

 

Step 1. Is the intervention backed by "strong" evidence of effectiveness?

Quality of studies needed to establish "strong" evidence:
  • Randomized controlled trials (defined on page 1) that are well-designed and implemented (see pages 5-9).

Quantity of evidence needed:
  • Trials showing effectiveness in two or more typical school settings,
  • including a setting similar to that of your schools/classrooms (see page 10).

Studies meeting both the quality and quantity standards together constitute "strong" evidence.
Step 2. If the intervention is not backed by "strong" evidence, is it backed by "possible" evidence of effectiveness?

Types of studies that can comprise "possible" evidence:
  • Randomized controlled trials whose quality/quantity are good but fall short of "strong" evidence (see page 11); and/or
  • Comparison-group studies (defined on page 3) in which the intervention and comparison groups are very closely matched in academic achievement, demographics, and other characteristics (see pages 11-12).
Types of studies that do not comprise "possible" evidence:
  • Pre-post studies (defined on page 2).
  • Comparison-group studies in which the intervention and comparison groups are not closely matched
    (see pages 12-13).
  • "Meta-analyses" that include the results of such lower-quality studies (see page 13).

Step 3. If the answers to both questions above are "no," one may conclude that the intervention is not supported by meaningful evidence.
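The three steps above amount to a simple decision procedure. A minimal sketch follows; the function and parameter names are illustrative, not from the Guide:

```python
def evidence_rating(well_designed_rcts_in_typical_settings: int,
                    includes_similar_setting: bool,
                    rcts_short_of_strong_bar: bool,
                    closely_matched_comparison_study: bool) -> str:
    """Classify an intervention's evidence base following the Guide's
    three-step overview. Labels and names are illustrative."""
    # Step 1: well-designed RCTs in two or more typical school settings,
    # including a setting similar to yours, establish "strong" evidence.
    if well_designed_rcts_in_typical_settings >= 2 and includes_similar_setting:
        return "strong"
    # Step 2: RCTs of good quality/quantity that fall short of the
    # "strong" bar, and/or closely matched comparison-group studies,
    # can comprise "possible" evidence.
    if rcts_short_of_strong_bar or closely_matched_comparison_study:
        return "possible"
    # Step 3: otherwise, the intervention is not supported by
    # meaningful evidence.
    return "not supported by meaningful evidence"
```

Note that pre-post studies, poorly matched comparison-group studies, and meta-analyses built on such lower-quality studies do not enter this decision at all, per the Step 2 exclusions above.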

 


Last Modified: 11/23/2004