Innovations in Education: Alternative Routes to Teacher Certification

Appendix A: Research Methodology

The project methodology is an adaptation of the four-phase benchmarking process used by the American Productivity and Quality Center (APQC),* including case descriptions of individual alternative route teacher preparation programs and a cross-site analysis of key findings. While classic benchmarking looks for best or promising practices, using quantitative measures and comparisons among organizations, alternative route programs are too new to fully support this methodology. A brief description of this project's adapted methodology follows.


First, a conceptual framework was developed from an analysis of research on teacher preparation, including alternative route programs. Experts in teacher preparation and alternative route programs were recruited to serve on an external advisory panel, which provided feedback to refine the framework and prioritize issues to investigate. The resulting study scope guided all aspects of the study (see figure 2 on page 5).

Site selection was a multistep process to ensure that the guide would feature an array of practices covering the elements of the framework and would represent a variety of geographic locations and contexts with which district administrators could identify. A list of possible sites was compiled through primary and secondary research conducted by Edvance, the education nonprofit created by APQC, and by WestEd and the expert advisory panel. All had some promising practices in place, required that candidates enter the program with at least a bachelor's degree, and had candidates work as the teacher of record as part of the program.

To narrow the selection, a screening template was developed to systematically analyze the weighted criteria for site selection identified by the advisers. The factors considered were whether the program had an operational track record beyond three years, was designed to meet local needs, gave credit to applicants with previous experience and skills, was field-based, appointed mentors to support candidates, tracked program retention and completion, and monitored student and teacher demographics. Multiple points were possible on each of these factors.

The template was completed for the sixteen programs for which data were available, drawing on public documents such as program marketing materials, reports, and program Web sites, supplemented with targeted phone interviews with program staff. The six programs selected had relatively high ratings on the template. In addition, the selection balanced different types of programs (e.g., district-based, regional, and university partnerships) and geographic locations.
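To illustrate how a weighted screening template of this kind could be tallied, the sketch below scores a program on the factors listed above. The weights and point values here are purely illustrative assumptions; the study's actual weighting scheme is not published in this appendix.

```python
# Illustrative sketch of a weighted screening template like the one
# described above. Criterion weights and scores are hypothetical,
# not the study's actual values.

CRITERIA_WEIGHTS = {
    "track_record_beyond_three_years": 3,
    "designed_for_local_needs": 2,
    "credit_for_prior_experience": 2,
    "field_based": 3,
    "mentors_appointed": 3,
    "tracks_retention_and_completion": 2,
    "monitors_demographics": 1,
}

def screen(program_scores):
    """Sum weighted scores for one program.

    program_scores maps each criterion to a raw score (here, 0-2,
    reflecting that multiple points were possible on each factor).
    """
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in program_scores.items())

# Hypothetical program: full marks everywhere except demographic monitoring.
example = {c: 2 for c in CRITERIA_WEIGHTS}
example["monitors_demographics"] = 0
print(screen(example))  # → 30
```

Programs would then be ranked by total score, with the highest-rated sites advancing to selection, as described above.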

Collect Data

Collecting detailed descriptive information from program staff, partners, and participants was key to understanding each program's practices, the outcomes or impact achieved, and lessons learned from which others could benefit. The major steps in this phase were finalizing the site visit interview guide based on the study scope, and arranging and conducting visits to the programs.

Each of the six sites hosted a two-day site visit that included interviews with administrators, program participants, and partners as well as observation of events if scheduling permitted. During the site visits, these key personnel were asked questions from the site visit discussion guide tailored to their role group. In addition, artifacts from the sites, such as applications, planning tools, and interview protocols, were collected to provide concrete examples of program practices. The study team collated the information collected during the site visits and developed a case study for each site.

Analyze and Report

The project team analyzed all collected data to understand the promising practices uncovered throughout the benchmarking project, both within and across programs. Four key findings discussed in the final report emerged from the cross-site analysis.

Two products resulted from this research: a report of the findings and this practitioner's guide. The report provides an analysis of key findings across sites, a detailed description of each site, a collection of artifacts, and key project documents. The practitioner's guide is a summary of the report intended for broad distribution.


Ultimately, readers of this guide will need to select, adapt, and implement practices that meet their individual needs and contexts. The guide will be broadly distributed nationwide through presentations at national and regional conferences, as well as through national associations and networks. The guide and report are also accessible online at

*American Productivity and Quality Center. (2001). Benchmarking in education: Pure and simple. Houston, Tex.: Author.

Last Modified: 04/18/2008