FIPSE has had an interest in assessment from the time of its founding. Of the 89 projects the Fund supported in its first round of grants in 1972, 15 concerned assessment of student learning outcomes. In 1990, 29 Comprehensive Program projects involved assessment, though the focus has since shifted to program effectiveness.
The ten successful programs described here range from large international conferences to particular projects in individual institutions. The prevalence of consortial projects and other kinds of large scale activities in this area provides a notable contrast with other program areas, where individual institutional activity is the rule. This phenomenon can be attributed to a variety of causes, including pressures from state legislatures and regional accrediting agencies, the manifest advantages of pooling experience in an area where strategy and technique are just beginning to develop, and the need to have a larger number of students with whom to validate outcome measurements than a single institution can provide.
The last two of these challenges become particularly acute in the case of particular disciplines. Here even consortial arrangements, while solving the problem of assembling a sufficiently large pool of students, run up against differences of approach and emphasis from one participating department to another.
These programs illustrate a rich variety of assessment strategies and instruments. Departing from early efforts that relied on input factors, nationally normed tests and quantifiable outcomes like graduation rates and alumni satisfaction, the new assessment programs look at specific performance outcomes. Tests are likely to be faculty-made and tailored to measure success in achieving precisely defined learning goals. Comparative assessments are well controlled, use statistics carefully and employ a variety of assessment strategies.
For all the technical care that has gone into developing these assessment projects, conclusive demonstration of the full extent of student learning resulting from a specific activity remains elusive. At present, it is not possible to measure all aspects of student learning, since only a few instruments can demonstrate differences that are both conclusive and substantial. The problem is compounded by the cumbersome logistics of following students long enough to assess the full effects of their educational experiences, and of ensuring their participation in assessment activities in which they have no personal stake.
The proliferation of assessment programs further suggests a need, not directly addressed by these projects, to develop ways to assess assessment. As outcome assessments become a common feature of the higher education landscape, FIPSE will expect to see more effort in this direction.