The Featured Evaluations
This guide intentionally features a variety of online programs and resources, including virtual schools, programs that provide courses online, and Web sites with broad educational resources. Some serve an entire state, while others serve a particular district. This guide also includes distinct kinds of evaluations, from internally led formative evaluations (see Glossary of Common Evaluation Terms, p. 65) to scientific research studies by external experts. In some cases, program insiders initiated the evaluations; in others, the impetus came from outside the program. The featured evaluations include a wide range of data collection and analysis activities, from formative evaluations that rely primarily on survey, interview, and observation data, to scientific experiments that compare outcomes between online and traditional settings. In each instance, evaluators chose their methods carefully, based on the purpose of the evaluation and the specific set of research questions they sought to answer.
The goal in choosing a range of evaluations for this guide was to offer examples that could be instructive to program leaders and evaluators in diverse circumstances, including programs at varying stages of maturity and with differing levels of internal capacity and available funding. The featured evaluations are not without flaws, but they all illustrate reasonable strategies for tackling common challenges of evaluating online learning.
To select the featured evaluations, researchers for this guide compiled an initial list of candidates by searching for K-12 online learning evaluations on the Web and in published documents, then expanded the list through referrals from a six-member advisory group (see list of members in the Acknowledgments section, p. vii) and other knowledgeable experts in the field. Forty organizations were on the final list for consideration.
A matrix of selection criteria was drafted and revised based on feedback from the advisory group. The three quality criteria were:
The evaluation considered multiple outcome measures, including student achievement.
The evaluation findings were widely communicated to key stakeholders of the program or resource being studied.
Program leaders acted on evaluation results.
Researchers awarded sites up to three points on each of these three criteria, drawing on publicly available information, reviews of evaluation reports, and gap-filling interviews with program leaders. All the included sites scored at least six of the possible nine points across these three criteria.
Because a goal of this publication was to showcase a variety of types of evaluations, the potential sites were also coded by additional characteristics: internal vs. external evaluator; type of evaluation design; type of online learning program or resource; whether the program serves a district- or state-level audience; and stage of maturity. In selecting the featured evaluations, the researchers drew from as wide a range of characteristics as possible while keeping the quality criteria high. A full description of the methodology used to study the evaluation(s) of the selected sites can be found in appendix B: Research Methodology.
The final selection included evaluations of the following online programs and resources: Alabama Connecting Classrooms, Educators, & Students Statewide Distance Learning, operated by the Alabama Department of Education; Algebra I Online, operated by the Louisiana Department of Education; Appleton eSchool, operated by Wisconsin's Appleton Area School District; Arizona Virtual Academy, a public charter school; Chicago Public Schools' Virtual High School; Digital Learning Commons in Washington state; and Thinkport, a Web site operated by Maryland Public Television and Johns Hopkins Center for Technology in Education. Additional information about each program and its evaluation(s) is included in table 1.