Engage in Continuous Reflection
All six of these programs are deeply attuned to outcomes. They take responsibility for preparing candidates to succeed in the classroom and to meet state certification or licensing requirements. They work with candidates, through training and support, to ensure that each candidate masters required skills and can demonstrate those skills on the job and in formal assessments. Moreover, the programs continually seek to improve outcomes, with a focus on how well they meet the needs of candidates and partner districts.
Assessing Candidate Performance
Alternative route programs focus squarely on candidates' on-the-job performance. "Traditional programs emphasize knowledge," says the coordinator of Hillsborough's program. "Our program is skill-based. During the whole year of the internship, we are seeing if the knowledge from course work is translating into a skill." This difference is evident across all six sites. Because candidates are classroom teachers fully in charge of groups of students, performance can be monitored over time, instruction is responsive to candidates' needs, and candidates have the opportunity to re-try strategies and re-teach material. As noted earlier, this kind of supportive assessment keeps candidates improving even as it keeps them afloat.
Programs vary in how they organize candidate assessment. Texas and Wichita incorporate performance tasks and work samples. New York's assessment mechanisms vary according to the university program in which candidates are placed. Virtually every program uses classroom observation to evaluate candidate performance. And three sites—Georgia, Hillsborough, and Chico—make extensive use of portfolios.
Ongoing formal observation in each program is accompanied by conferences with candidates and, often, written feedback as well. Some programs, such as Texas's Region XIII, deliberately emphasize formative observation, that is, classroom visits that are not used for evaluation. Most programs, however, include formal observation as part of the summative assessment required for teacher certification.
In Wichita, for example, mentor and administrator observations are required for certification. Mentors use an observation form adapted from the Professional Practice Scale published by the Association for Supervision and Curriculum Development.
Hillsborough's three-cycle observation and coaching system, described earlier, includes 10 observations, three of which are formal (see figure 7, page 19). For each cycle, the candidate and school-based mentor teacher develop a candidate action plan to address areas of nonmastery, and observations during that cycle focus on those targeted areas. For example, in specified weeks of the first cycle, the school-based mentor must conduct at least two observations that address competencies the candidate has not yet successfully demonstrated, while also noting whether the candidate continues to improve in areas of proficiency.
Another key assessment strategy is the portfolio, used both for formative assessment, as noted earlier, and for summative assessment. For Georgia's portfolio, candidates amass evidence that demonstrates proficiency in 24 competencies (see figure 6, page 17). To show capability in planning and preparation, for example, they include lesson plans and graphic organizers. Showing skill in creating an appropriate classroom environment calls for video clips and classroom floor plans. Candidates gather three to four samples for each competency.
Given the level of time and effort that goes into creating the portfolios, the Georgia programs take great care in evaluating them. The program employs a part-time supervisor for just that job. Using a rubric to rate each competency, the evaluator provides candidates with feedback and submits documentation to the program coordinator. When all members of the candidate support team agree that a candidate is proficient in all 24 competencies, they each sign a competency completion form and submit it along with a recommendation for clear, renewable certification.
In Hillsborough, site principals oversee portfolios. Staff from the district's Office of Training and Staff Development orient each principal to the portfolio creation process, including a checklist of required items. Annual portfolio auditing is handled by educators hired as consultants and trained by project staff.
Evaluating Program Effectiveness
Assessment of candidate performance is only one anchor point in continuous program improvement. Programs also must routinely monitor whether they are meeting critical needs—those of the candidates themselves as well as those of partner districts and multiple stakeholders.
To evaluate overall effectiveness, programs systematically gather and analyze data using a variety of tools, including questionnaires for candidate needs assessment; surveys and interviews of principals; candidates' ratings of course effectiveness; support providers' ratings of candidates; and post-graduation follow-up surveys of former candidates and their employers.
Responding to Candidate Needs
To identify candidate needs, programs often gather survey information as early as the beginning of the preservice experience. In New York, for instance, candidates complete a "temperature gauge," an online survey asking them to evaluate their first three weeks of preservice training, including course content and advisory time. The results allow staff to follow up with candidates as needed and to make adjustments that might improve their experiences for the remainder of preservice. A follow-up survey gauges how successful the adjustments have been.
Chico candidates fill out a pre-entry questionnaire so that staff can take their prior experience and characteristics into account. Instructors then conduct a candidate needs analysis at the beginning of each course to help them tailor instruction. At the end of each course, candidates tell instructors how well the course met their needs in building proficiency.
Region XIII in Texas, like several other programs, surveys its candidates at the end of the program on a wide range of issues. Questions cover the program's overall performance, the quality of the training, the caliber of support from mentors and supervisors, and candidates' expectations for the future. Texas and Chico survey candidates and their employers after graduation.
Data collected on the needs of candidates and local districts are used to continually improve every aspect of the programs. When candidates in Wichita, for example, reported strongly valuing the feedback on their teaching provided by their support providers and said they wanted more, the program increased the number of support-provider visits to classrooms. Most candidates now receive at least 10 visits in the school year and get written feedback from each. The program also addressed candidates' logistical problems by purchasing new technology that allows candidates at remote sites to participate in classes via streaming video over the Internet rather than driving hundreds of miles.
One measure of success is the rate of program completion. Chico, for one, has seen its candidate retention rate, the share of the cumulative candidate pool completing the program, rise from 86 percent in 1999-2000 to 91 percent in 2003-04. Program leaders credit their focus on gathering data and responding to them. It's important to note that the data are not just quantitative, says Chico's evaluator. "We try to collect candidates' voices. The survey at the end of each class is not just their rating but their words and their emotions connected to this course experience. Honesty is important. We break down the objectives of the courses and ask what students are not feeling satisfied with." Instructors see the exact words of the students at multiple points in the curriculum and use that feedback to tailor instruction. Coordinators, too, look at all the feedback and routinely revisit the question of curriculum sequence.
Responding to Regional Needs
Meanwhile, to stay on top of the changing needs of partner school districts and other local stakeholders, each program conducts yet another level of needs assessment. Chico, for example, regularly draws on information from a wide range of informants (see figure 9 for Chico's map of its multiple evaluation strands). One group is its advisory board, whose members, including local school officials, parents, and representatives from local special education support agencies, keep a finger on the region's pulse. Further information comes from supervisors; because they are constantly in contact with school and county office administrators, their meetings frequently surface issues that prompt program change. Moreover, a number of part-time university faculty are also teachers in the public schools, affording yet another level of feedback. And because program leaders are almost constantly writing grants, formal surveys and interviews of local participants, including all 385 principals, provide further, up-to-date data.
Chico's regional needs assessment has led over time to shifts in the program's emphasis. For example, more attention has been paid to autism in recent years as that disability has become more prevalent. The program has shifted from an early focus on elementary, multiple-subject teaching to middle and high school teaching as the need for special education teachers at those levels has expanded. And the search for more candidates interested in serving students with moderate to severe disabilities remains a priority, in response to greater need.
Figure 9. Chico Continuous Improvement Cycle
Program leaders in Georgia see responsiveness to district needs as a way to model for candidates how good teachers assess and respond to student needs. They believe that one reason their program has enjoyed so much success is that the people involved, from the top down, truly value an open exchange of ideas. Program leaders know local school needs because they ask, and then they listen and act. For example, this process has led to adding strands in early childhood and special education.
Program Improvement Over Time
It's clear that continuous program improvement depends on committed, collaborative leadership and inclusive decision-making. In Texas's Region XIII program, staff analyze all collected data at an annual two-day retreat, examining what is working well and what they want to improve. They pride themselves on being able to "turn on a dime" to make changes.
In New York, an advisory board consisting of program participants from each partner university works closely with the program directors and the chancellor. For the first couple of years, the focus was on the quality of what the universities offered candidates. Today the emphasis has broadened to the overall teaching experience in New York classrooms and to ongoing ways of supporting quality teaching.
Chico, at this point, is reaping the rewards of its years of careful development. It has enjoyed sustained leadership, with its current director and other key leaders in place for more than a dozen years. During that time, the program has developed a deep base of expertise that constitutes its support network. Many of today's supervisors were once candidates themselves. Many returned to enroll in the university's master's program, toward which 15 of their candidate credits applied. Often long-time residents, support providers understand the rural context and the needs of local schools.
A point of pride for all involved is that the Chico program has begun to have an effect beyond special education. "I see other teachers coming by when I come to a school," says one supervisor. "Staff in three or four other classes begin taking on the traits of the special education teacher who is doing a wonderful job, because of the supportive model." Seeing that the program's candidates bring cutting-edge skills to their sites, a number of administrators tap them to do consultations and modeling with other teachers, for example, or to present at board meetings.
The first part of this guide has presented some crosscutting design elements of a strong alternative teacher preparation program. The next part more fully describes each program, giving readers six variations of how these elements mesh to support the development of successful teachers.