Extending Learning Time for Disadvantaged Students - Volume 1 Summary of Promising Practices - 1995

Archived Information

Thoughtful Evaluation Of Program Success

Although most educators agree it is important to evaluate the effectiveness of programs and practices, these efforts often fall by the wayside because staff lack time, knowledge, or financial resources. Furthermore, program staff may resent the need to devote time to evaluation instead of the activities they have designed to enhance and increase learning. In spite of these challenges, program continuity, funding, and decision making depend on information collected through thoughtful student and program evaluations. Programs funded wholly or in part by competitive grants, or supported by donations of equipment, materials, or volunteer time, may face extra pressure to achieve their stated goals and to demonstrate this achievement through an evaluation. Title I-funded programs are required by law to conduct reviews of program effectiveness.

Collecting or Gaining Access to Data

Ideally, evaluators analyze data at several points in time, using appropriate comparisons. Extended-time programs that operate within the public education system--whether school-based, districtwide, or statewide--have access to abundant student data including test scores, attendance, grades, discipline referrals, and portfolio assessment. The weaker the connection between extended-time programs and the local school, the more difficult it may be to track down, compile, and analyze these data.

In theory, assessment of extended-time programs that represent partnerships between schools and community agencies, organizations, or privately administered programs can include analysis of data collected by schools--depending on the formality of the partnership and the willingness of school staff to provide or analyze the data. For example, Northwestern University plans to conduct a longitudinal evaluation of ASPIRA's program in Chicago, using student grades, high school dropout rates, postsecondary education rates, and other indicators of academic progress. However, other programs sponsored by community agencies or organizations--such as the Kids Crew program in Brooklyn and the after-school study centers in Omaha--collect their own assessment data. The evaluation of the after-school study centers includes program attendance, informal follow-up of high school students who graduate and go on to postsecondary institutions, and anecdotal evidence. The evaluation of the Kids Crew program includes longitudinal case studies of students, focusing on problem-solving skills and improved community and cultural awareness. Raising Hispanic Academic Achievement, although sponsored privately, also conducts an evaluation that relies partially on observations reported by classroom teachers.

Some extended-time programs run by private, national organizations require local affiliates to collect data that are analyzed at the national level; both ASPIRA and the Teen Outreach Program do this. The outcome data often include several measures of school success, such as grades, course selection, and high school graduation rates. It is often up to affiliates to use these data to assess local success.

Guidelines for Program Evaluation

Whether it involves analyzing data collected by schools or school systems, collecting new information about program participants, or both, extended-time program evaluation is time consuming and requires careful planning. Developed as part of a multi-year research project conducted by the Center for Research on Evaluation, Standards, and Student Testing, Tracking Your School's Success: A Guide to Sensible Evaluation (1992) presents useful techniques for monitoring program development and implementing change. Combining ideas from various evaluation models, this guide outlines six steps:

