Alternative assessment. "Alternative" or "authentic" assessments have become catchwords of the current reform movement. Standardized and other short-answer tests are criticized for being biased in favor of white, middle-class children, for encouraging the teacher to "teach to the test" rather than to the child, and for emphasizing low-level skills and piecemeal knowledge rather than student understanding and performance. Forms of evaluation that test how much students know at a single point in time are being challenged by those arguing for assessment procedures that demonstrate how well students think and how well they articulate their ideas in a variety of media.
Alternative forms of assessment may offer new opportunities for success to students at risk for several reasons. Some students may be unable to demonstrate what they have actually learned on an objective test when it assumes other skills in which they are weak or when the test situation does not encourage them to try hard. For example, a mathematics test with many word problems will be unfair to limited-English-proficient students and other young people who are poor readers but who have actually mastered the math problem-solving skills being tested. Also, many older students with a personal history of low scores on standardized tests may no longer strive to do well on current tests or may be deterred by high test anxiety.
Conventional tests may also short-change students who have a deeper command of the subject that is never called upon in the multiple-choice or short-answer forms typically used. Even when understanding is better measured by essay exams or term papers, students with poor writing skills will have difficulty showing what they have learned in a course. Such students may register their academic successes only through alternative forms of assessment. Examples of potentially better assessment methods include oral interviews, science experiments, portfolios of student work compiled over an extended period, public exhibitions where students answer questions on their senior projects, and performances of skills in simulated situations (Perrone, 1991; Wolf et al., 1991; U.S. Congress, Office of Technology Assessment, 1992).
Although federal and state agencies now show strong interest in creating new assessment methods, and several well-financed development projects are currently under way (see Gentile, 1992), it is still unclear how the interests of students at risk will fare in this area. The prospect of new, uniform, high achievement standards and assessment methods is to be welcomed as long as all students are given opportunities to demonstrate advanced skills. But the question remains whether the resources will be provided to deliver these opportunities to all students, including those in many urban districts that are, at present, seriously underfunded (Natriello, McDill, and Pallas, 1990).
Recognition for progress. In addition to restricting the ways in which students demonstrate what they have learned, traditional assessment methods can be insensitive to the actual achievement or progress of individual students, particularly students at risk. As Mac Iver (1991) asserts, "traditional evaluation systems often do not adequately recognize the progress that educationally disadvantaged students make, because even dramatic progress may still leave them near the bottom of the class in comparative terms or far from the 'percent correct' standard needed for a good grade" (p. 4). Individualized incentive and reward structures that value students' incremental improvements can motivate students to try harder, foster an intrinsic interest in the subject matter, and improve performance.
The Incentives for Improvement program is implementing such an evaluation and incentive system in four Baltimore public schools. Through the program, teachers help students develop "specific, individualized, short-range goals that are challenging but doable" based on the students' past performance (Mac Iver, 1991, p. 5). Students receive certificates and other awards for improvement as well as for high levels of achievement. An evaluation of the program's effects on student performance and motivation to learn, using a nonrandomized, matched control group, pre-test/post-test design, found that students participating in the program on average received higher grades and had a 10 percent higher probability of passing than did control students. The program also had a modest positive impact on students' perceptions of the intrinsic value of the subject matter and on overall student effort, although it showed no effect on students' self-concept of their own ability.