Extending Learning Time for Disadvantaged Students - Volume 1 Summary of Promising Practices - 1995
Archived Information
Thoughtful Evaluation Of Program Success
Although most educators agree it is important to evaluate the effectiveness of programs and practices, these efforts often fall by the wayside because staff lack time, knowledge, or financial resources. Furthermore, program staff may resent the need to devote time to evaluation instead of the activities they have designed to enhance and increase learning. In spite of these challenges, program continuity, funding, and decision making depend on information collected through thoughtful student and program evaluations. Programs funded in part or wholly by competitive grants and supported by donations of equipment, materials, or volunteer time may face extra pressure to achieve their stated goals and to demonstrate this achievement through an evaluation. Title I-funded programs are required by law to conduct reviews of program effectiveness.
Collecting or Gaining Access to Data
Ideally, evaluators analyze data at several points in time, using appropriate comparisons. Extended-time programs that operate within the public education system--whether school-based, districtwide, or statewide--have access to abundant student data, including test scores, attendance, grades, discipline referrals, and portfolio assessments. The weaker the connection between extended-time programs and the local school, the more difficult it may be to track down, compile, and analyze these data.
In theory, assessment of extended-time programs that represent partnerships between schools and community agencies, organizations, or privately administered programs can include analysis of data collected by schools--depending on the formality of the partnership and the willingness of school staff to provide or analyze the data. For example, Northwestern University plans to conduct a longitudinal evaluation of ASPIRA's program in Chicago, using student grades, high school dropout rates, postsecondary education rates, and other indicators of academic progress. However, other programs sponsored by community agencies or organizations--such as the Kids Crew program in Brooklyn and the after-school study centers in Omaha--collect their own assessment data. The evaluation of the after-school study centers includes program attendance, informal follow-up of high school students who graduate and go on to postsecondary institutions, and anecdotal evidence. The evaluation of the Kids Crew program includes longitudinal case studies of students, focusing on problem-solving skills and improved community and cultural awareness. Raising Hispanic Academic Achievement, although sponsored privately, also conducts an evaluation that relies partially on observations reported by classroom teachers.
Some extended-time programs run by private, national organizations require local affiliates to collect data that are analyzed at the national level; both ASPIRA and the Teen Outreach Program do this. The outcome data often include several measures of school success, such as grades, course selection, and high school graduation rates. It is often up to affiliates to use these data to assess local success.
Guidelines for Program Evaluation
Whether it involves analyzing data collected by schools or school systems, gathering new information about program participants, or both, evaluation of extended-time programs is time-consuming and requires careful planning. As part of a multi-year research project conducted by the Center for Research on Evaluation, Standards, and Student Testing, Tracking Your School's Success: A Guide to Sensible Evaluation (1992) presents useful techniques for monitoring program development and implementing change. Combining ideas from various evaluation models, this guide outlines six steps:
- Focus the evaluation. Determine the purpose(s) of the evaluation, decisions to be made, possible audiences or people affected by these decisions, and questions to be asked.
- Identify tracking strategies. Determine what information is needed to answer questions or better understand the consequences of decisions. Make initial decisions about the kinds of instruments needed. How will information be collected? Will there be interviews? Observations? Focus groups? A review of data? Student work samples? Standardized test results?
- Manage instrument development and data collection. Determine who has the information and when it is needed. Plan instrument development, if necessary. How long will it take? Who will collect information? What will it cost in time and resources to purchase or develop and administer instruments?
- Score and summarize data. Think about the kind of scores needed to answer evaluation questions. Choose appropriate scores or scoring strategies for projects, performance tests, or portfolios.
- Analyze and interpret information. Organize score summaries to answer evaluation questions. Work with stakeholders to make sense of findings and conclusions in light of shared experiences and possibly conflicting interests. Look for trends over time to identify strengths and areas for improvement, as well as relationships among program processes, student and staff characteristics, and outcomes. Negotiate a common understanding of findings; make meaning of the trends, profiles, and summaries of questionnaire, test, interview, observation, and performance information. Identify promising courses of action related to these findings.
- Act on findings and continue program monitoring. Communicate findings in a timely and appropriate manner--through school action or improvement plans, informal meetings, panel discussions, formal presentations, and written reports.
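The "score and summarize" and "look for trends over time" steps above can be sketched in code. The following Python fragment is a minimal illustration only; the grouping-by-year approach and every record, name, and number in it are invented for this example rather than drawn from any program described here.

```python
# Hypothetical sketch: summarize yearly mean scores and check the trend.
# All data and function names are invented for illustration.

def mean(values):
    """Average a list of scores."""
    return sum(values) / len(values)

def summarize_by_year(records):
    """Group (year, score) records and return {year: mean score}."""
    by_year = {}
    for year, score in records:
        by_year.setdefault(year, []).append(score)
    return {year: mean(scores) for year, scores in by_year.items()}

def trend(summary):
    """Difference between the latest and earliest yearly means."""
    years = sorted(summary)
    return summary[years[-1]] - summary[years[0]]

# Invented reading-score records for three program years.
records = [(1992, 48), (1992, 52), (1993, 55),
           (1993, 57), (1994, 60), (1994, 64)]
summary = summarize_by_year(records)
print(summary)         # {1992: 50.0, 1993: 56.0, 1994: 62.0}
print(trend(summary))  # 12.0
```

A real evaluation would of course use comparison groups and more careful statistics, but the same organize-summarize-compare pattern underlies the guide's analysis step.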