Answers in the Tool Box: Academic Intensity, Attendance Patterns, and Bachelor's Degree Attainment
June 1999
Beneath the cross-sectional portraits of U.S. higher education that appear annually in The Digest of Education Statistics and analogous volumes lie seething movements of students. To break through the pasteboard masks of these portraits we would do well to mark changing attendance patterns over the period 1972-1995. Three NCES longitudinal studies are the sources for this story, though the HS&B/So is its core. The story advances on other accounts of attendance patterns (McCormick, 1997; Berkner, Cuccaro-Alamin, and McCormick, 1996; Hearn, 1992; Carroll, 1989; Peng, 1977) by distinguishing between "classic" modes of transfer and multi-institutional portfolios, and by combining these variations with temporal factors, including a different way of thinking about part-time status. The increasing complexity of attendance patterns is one of the most significant developments in higher education of our time, one that poses grave challenges to system-wide planning, quality assurance, and student advisement.
The core hypothesis: the long tradition of institutional effects research in higher education is outmoded. The engines of its demise lie in student behavior. The changes in attendance patterns, which can be tracked only in national longitudinal studies, are part of the larger currents of a wealthy open market that produces dozens of specialty niches in every sub-sector. As a society, we have become more consumerist and less attached to organizations and institutions with which we "do business." By consistently selling itself in terms of how much more money students will earn in their lifetimes as a consequence of attendance, higher education has come to reflect other types of markets and marketplaces. As colleges continue to create new specialty majors, dividing academic space times-on-times, they have inevitably drawn a panoply of rival providers (Marchese, 1998). Convenience, of which location is a reflection, has become the governing filter of choice(35), and convenience applies not only to place, but also to time, subject, and price. It is thus not surprising to find students filling their undergraduate portfolios with courses and credentials from a variety of sources, much as we fill our shopping bags at the local mall.
One need not recite the mass of student history studies, starting with Feldman and Newcomb (1969) and Astin (1977, 1993), the integrity of which is predicated on students entering and attending only one institution. The major theoretical models of retention/attrition (e.g. Bean, 1980, 1982, 1983; Tinto, 1975, 1982, 1993) are based on this premise. After all, questions about academic and social growth, let alone those about retention and degree completion, are tainted by second and third institutions. When students disappear from an institution's radar screen, they are assumed to be drop-outs---unless they return. Until recently, we never knew where the stopouts who returned to their first institution of attendance had been during the stopout period. The drop-outs are entered as half of a dichotomous outcome variable in the standard multivariate analyses, where institutional effects variables are all drawn from the same school or schools with similar characteristics (e.g. Carnegie class, size, urbanicity) and where demographic effects can be isolated (Astin, Tsui, and Avalos, 1996).
Even transfer from 2-year to 4-year colleges is usually excluded from these statistical models, an unfortunate phenomenon in light of the bachelor's degree attainment of transfer students (Lee, Mackie-Lewis, and Marks, 1993). There are exceptions to this exclusionary practice (Nora, 1987; Lee and Frank, 1990), but institutional variables seem to play less of a role in stories of this "classic" mode of transfer.
Analyses based on discrete events in students' lives begin to take account of the problems with traditional institutional effects research. Stop-out and transfer become part of the portrait (Guerin, 1997; Carroll, 1989), and reentry is acknowledged as a critical chapter in students' postsecondary histories (Smart and Pascarella, 1987; Spanard, 1990). Using the Beginning Postsecondary Students (BPS) longitudinal study of 1989-1994, McCormick (1997) takes Carroll's "persistence track" array one step further with an explication of dozens of transfer and multi-institutional sequences by adding three levels of institution (4-year, 2-year, and less-than-two-year) and level and control (public, private not-for-profit, and private for-profit) of the first institution of attendance. In this presentation, there are nine sets of "transfer origin and destination," and these can be subset by seven combinations of level and control.
All of these recent analyses reflect empirical realities. Something is going on. To what extent does that "something" require that we change the way we do research on institutional effects? To what extent does that "something" alter the process of state planning for the provision of higher education? What happens to the standard multivariate model of persistence/completion when different attendance behaviors and different constructions of stock attendance behaviors (e.g. full-time/part-time) are entered? Let us first introduce the variables in the analyses, with descriptive tables that provide some hints of how they might play out in regression equations where bachelor's degree completion is the dependent variable.
Table 18 documents the growth of multi-institutional attendance over the past quarter century. The definition of "attendance" is important to table 18. Without completely reconstructing the NLS-72 data base, its definitions were shaped as closely as possible to those used for the HS&B/So. The High School & Beyond variable for number of undergraduate transcripts requested was based on a hand-and-eye examination of the student's consolidated record. Graduate school transcripts were flagged and placed outside the basic calculation. At the same time, we added any institution the student did not mention in his/her interview but which was referenced on a transcript from another institution. Transcripts covering only summer school attendance were counted, but only when more than 6 credits in more than two courses were earned. Entries documenting study abroad(36) and cases of transcripts requested from educational institutions in other countries (none were ever received) were also counted as second, third, or fourth schools (only 2.8 percent of all HS&B/So college students were affected). For the HS&B/So, then, we had a fairly strict accounting of attendance. Even if some of that second or third school attendance was incidental--for example, 7 to 10 credits--it was not fragmentary.
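The counting rules just described can be summarized in a short sketch. This is an illustration only, with invented field names; the actual HS&B/So counts came from hand-and-eye examination of consolidated records, not from code.

```python
# Hypothetical illustration of the transcript-counting rules described above.
# Field names are invented; the real determination was made by hand.

def count_undergraduate_schools(transcripts):
    """Count schools per the HS&B/So-style rules sketched in the text:
    graduate-only transcripts are set aside, and summer-only transcripts
    count only with more than 6 credits in more than two courses."""
    n = 0
    for t in transcripts:
        if t.get("graduate_only"):
            continue  # flagged and placed outside the basic calculation
        if t.get("summer_only"):
            if not (t["summer_credits"] > 6 and t["summer_courses"] > 2):
                continue  # incidental summer attendance is excluded
        n += 1
    return n
```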
I did not anticipate the emergence of a multi-institutional attendance theme when editing the NLS-72 postsecondary student records a decade ago. So the current accounting for the NLS-72 was wholly algorithmic, and involved but a minor (and obvious) adjustment: for students who had earned a B.A., transcripts that showed nothing but post-B.A. course work were not counted, nor were out-of-scope responses from schools where transcripts were requested but not received. At the same time, by default algorithm, transcripts that included nothing but summer school work were included. The rate of multi-institution attendance is thus slightly overstated for the NLS-72.
|Table 18. Number of Undergraduate Schools|
|Effect size for Basic Participants = .36|
|Effect size for Bachelor's Recipients = .24|
NOTES: (1) All within-cohort differences are significant at p<.05. (2) The effect size is used for measuring differences in means across cohorts. One takes an unweighted sample N, mean, and standard deviation for each cohort group. Then one creates a "pooled standard deviation" by the formula (N1 x SD1 + N2 x SD2) / (N1 + N2). Lastly, one determines the difference in the means and divides that difference by the pooled standard deviation. The resulting effect size functions like a Standard Deviation Unit or z-score; that is, it tells us whether the change in the mean is substantively significant. In this case, the effect size for basic participants is moderate and that for bachelor's degree recipients small. For guidance on interpreting SDU-type changes, see Bowen, H.R., Investment in Learning (San Francisco: Jossey-Bass, 1977), p. 103. SOURCES: National Center for Education Statistics: National Longitudinal Study of the High School Class of 1972, and High School & Beyond/Sophomore cohort.
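For readers who want to reproduce the arithmetic, the note's effect-size computation can be sketched as follows. Only the formula comes from the note; the cohort figures in the example are invented for illustration.

```python
# Illustrative sketch of the effect-size computation described in the note
# to table 18. The example cohort figures below are hypothetical.

def effect_size(n1, mean1, sd1, n2, mean2, sd2):
    """Cross-cohort effect size in standard-deviation-unit (SDU) terms."""
    # "Pooled standard deviation": the N-weighted average of the two SDs
    pooled_sd = (n1 * sd1 + n2 * sd2) / (n1 + n2)
    # Difference in means, expressed in pooled-SD units
    return (mean2 - mean1) / pooled_sd

# Hypothetical cohorts: mean schools attended 1.4 vs. 1.7,
# SDs 0.8 and 0.9, unweighted Ns 8,000 and 9,000
print(round(effect_size(8000, 1.4, 0.8, 9000, 1.7, 0.9), 2))  # 0.35
```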
There are some shaggy ends in this process, but the samples are robust enough so that the trends evident in table 18 command a reasonable degree of confidence. Table 18 presents data for two groups: all non-incidental students and bachelor's degree recipients. Bachelor's degree recipients include community college transfers and will therefore evidence a higher degree of multi-institutional attendance than the larger universe of basic participants in higher education. The rate of multi-institutional attendance increased for both groups between the 1970s and 1980s (the effect size clearly shows the increases to be significant, though more for the basic participants than for the bachelor's degree recipients), but it is notable that the major shift was not from one to two schools but from one to three or more.
These trends continue. The more recent Beginning Postsecondary Students longitudinal study of 1989-1994 provides evidence of momentum toward even higher rates of multi-institutional attendance. This is obviously a shorter-term study than either of the age-cohort longitudinal studies used in table 18, and includes a more diverse population in terms of age. Since older beginning students are less likely to transfer (see McCormick, 1997, p. 46), table 19 excludes them so as to render the population more analogous to those of the age-cohort studies. The table also drops the "basic participation" criterion applied to the transcript-based longitudinal studies so as to render them more comparable with the BPS. In other words, anyone who ever walked through the door of a postsecondary institution and generated a record--even if they dropped out permanently within a week--is included in table 19. With these criteria, by the fifth year following initial entry, the proportion of BPS90 students who attended more than one institution was already four percentage points higher than the comparable figure from the 11-year history of the High School & Beyond/Sophomores.
Table 19 begins to mix in institutional type as a factor in the analysis. We want to know whether there are any significant differences or trends in the relationship between the first institution of attendance and the total number of institutions attended. In this respect, there are some bumps in the road across table 19. First, some institutions changed status over the 22-year period covered by the three cohorts. Second, the HS&B/So allows one to distinguish between the ostensible first institution of attendance and something we might call the "referent" first institution. The "referent" institution is a retrospective judgment with decision rules. It was determined by hand-and-eye examination of students' records by two readers who could draw on other information from the data set to answer the question: what was the first institution at which this student really "made a 'go' of it?" The determination thus excludes college course taking while the student was still enrolled in high school, enrollment during the summer term following high school graduation at an institution other than the school the student entered in the fall term, and false starts. A "false start" would be indicated, for example, by an initial one-term enrollment at a state college with the student attempting 15 credits and withdrawing from 12 of them, and nothing else on the record until two years later, when the student enrolled in a hospital school for radiologic technology and completed a certificate program. The referent first institution in this case is the hospital school. The "referent" first institution is missing for three percent of the students, principally because their records were incomplete, and these students are excluded from correlations and regression models involving attendance pattern variables.
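The decision rules for the "referent" first institution might be sketched as follows. This is a hypothetical rendering: the actual determination was made by two human readers, and the boolean flags here are invented stand-ins for their judgments.

```python
# Hypothetical rendering of the "referent first institution" decision rules.
# The boolean flags are invented stand-ins for the readers' judgments.

def referent_first_institution(records):
    """records: chronological list of enrollment dicts with a 'school' key
    and optional flags mirroring the three exclusions in the text."""
    for r in records:
        if r.get("while_in_high_school"):
            continue  # college course work taken while still in high school
        if r.get("summer_only_before_fall_entry"):
            continue  # summer term after high school at a non-fall-entry school
        if r.get("false_start"):
            continue  # e.g., one term with nearly all attempted credits withdrawn
        return r["school"]
    return None  # indeterminable (about 3 percent of HS&B/So students)

# The false-start example from the text: the hospital school is referent.
history = [{"school": "State College", "false_start": True},
           {"school": "Hospital School of Radiologic Technology"}]
print(referent_first_institution(history))
```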
|Table 19. Sector Share of First Institutions|
NOTES: (1) The universe for the BPS/90 distribution consists of "typical" college-age (17-23) students only, so as to render it more comparable to the age-cohort longitudinal studies. (2) The universes for both the NLS-72 and HS&B/So consist of all students with postsecondary records. (3) Definition of first institution of attendance for the HS&B/So differs from that of the other studies. See text. (4) All within-cohort differences are statistically significant at p<.05 except those column pairs indicated by asterisks. SOURCES: National Center for Education Statistics: (1) National Longitudinal Study of the High School Class of 1972; (2) High School & Beyond/Sophomore Cohort, NCES CD #98-135; (3) Beginning Postsecondary Students, 1989-1994, Data Analysis System.
Approximately 10 percent of the HS&B/So students who entered postsecondary education are affected by the "referent" first institution distinction, that is, there is a difference between their ostensible date/place of entry and their "true" date/place. But these effects are not evenly distributed, and there is a clear tone to the distribution: overrepresented are students who entered selective and doctoral degree-granting institutions, those who came from the highest SES quintile, and those who eventually earned associate's and/or bachelor's degrees. These students were more likely than others to earn college credits while they were still in high school and/or during the summer immediately following high school graduation. If we had not created the "referent" variable, estimates for these students would have been distorted, and, because the size of the group is not insignificant, these estimates would affect multivariate analysis. For example, the majority of these students would show either a community college or a non-selective 4-year college for the ostensible first institution of attendance. In a multivariate analysis, controlling for other attendance pattern variables, that would result in underestimating the role of the selectivity and type of the first institution.
More than one college means not only schools of different types, but also in different states (and countries), in alternating and simultaneous patterns. Table 20 presents a portrait of those patterns derived from the HS&B/So transcript files. It is immediately obvious that traditional factors involved in the analyses of academic and social integration (frequency of contacts with faculty, staff concern for student development, peer relations, and so forth) take on a very different coloration among students who attend more than one school, particularly if they did not return to the first institution of attendance--something we've always known about community college transfers who spent any appreciable time at the community college. The universe of variables involving key institutional effects contracts as the number of institutions attended rises, even when students return to the first institution of attendance (as did 61 percent of those who attended two schools, and 48 percent of those who attended three or more).
Drawing on the Beginning Postsecondary Students study of 1989-1994, McCormick (1997) employs a different scheme but with a similar tone and results. In that five-year span (as opposed to the eleven years of the HS&B/So), the 45 percent of students who had enrolled in more than one institution as undergraduates displayed most of the inter-sectoral and intra-sectoral patterns shown in the "institutional combinations" section of table 20. The considerable mobility evidenced by the HS&B/So student population, then, was not a temporary blip on the radar screen of higher education.
Table 20 uses nine categories of institutional combinations, six of which describe inter-sectoral movement, and caps the final date for calculation at that of the bachelor's degree (if earned) or (if the bachelor's was not earned) at the last term date of undergraduate attendance. Temporal considerations enter in the construction of three of the inter-sectoral categories in order to clarify both enduring and emerging policy concerns. The first involves the notion of "reverse transfer" under which the referent first institution of attendance is a 4-year college and the last institution is a 2-year college, with no bachelor's degree earned in between. Students in the HS&B/So who attended a community college after earning the bachelor's degree sport a separate flag on their records. Only 3.3 percent of the bachelor's degree recipients in this cohort (weighted N=26,751) carry this flag(37).
|Table 20. Number of Undergraduate Institutions|
| ||One||Two||Three+||% of All||% Earning Bachelor's|
|All who earned >10 credits||47||35||18||100%||45|
|Type of Referent First Institution|
|Selectivity of First Institution|
|Institutional Combinations|
|4-year to 2-year||--||71||29||4||--|
|2-year to 4-year||--||63||37||10||66|
|4-year and other||--||78||22*||3||17*|
|2-year and other||--||77||23||3||--|
|2-year, 4-year & other||--||--||100||2||14*|
|Timing of Entry|
|7-18 Month Delay||52||29||19||9||20|
|>18 Month Delay||70||22||9*||10||10|
Notes: (1) The universe consists of all students who earned more than 10 credits. Weighted N=1.87M. (2) "Other" institutions consist principally of non-degree granting vocational schools, and these are "not rated" in terms of selectivity. (3) Institutional type based on the Carnegie classification system in effect in 1987. (4)* Low-N cells. Estimates are not significant. SOURCE: National Center for Education Statistics: High School & Beyond/Sophomore cohort, NCES CD#98-135.
The second of these inter-sectoral values is that of traditional 2-year to 4-year transfer. This value was assigned only on the basis of sequence, without consideration of the number of credits earned at each institution or what proportion of those credits were accepted in transfer (something often not indicated on the record of the receiving institution). The "transfer" variable used in the multivariate analyses in Part IV of this study is not derived from this simple configuration, but rather from a construction involving credit thresholds developed for analyses of the NLS-72 (Adelman, 1994). This construction requires the student to start at a community college, earn more than 10 credits from the community college, and subsequently attend a 4-year college and earn more than 10 credits from that institution.
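A minimal sketch of this credit-threshold construction, under assumed data structures (chronological term records as sector/credit pairs; the NCES files are not organized this way), might look like:

```python
# Sketch of the credit-threshold transfer definition under assumed inputs.

def is_classic_transfer(terms):
    """True if the student started at a 2-year college, earned more than 10
    credits there, and subsequently earned more than 10 at a 4-year college.
    terms: chronological list of (sector, credits_earned) pairs."""
    if not terms or terms[0][0] != "2-year":
        return False  # must start at a community college
    cc_credits = 0
    four_year_after = 0
    threshold_met = False
    for sector, credits in terms:
        if sector == "2-year" and not threshold_met:
            cc_credits += credits
            threshold_met = cc_credits > 10
        elif sector == "4-year" and threshold_met:
            # count only 4-year credits earned after the 2-year threshold
            four_year_after += credits
    return four_year_after > 10
```

By this construction, a student who leaves the community college with only 6 credits and then earns 30 at a 4-year school is an "early transfer," not a classic transfer.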
Some students assigned the Alternating/Simultaneous enrollment value turn out to be "classic transfer" students. The "alternating" portion of this category involves 2-year and 4-year institutions only, isolating the fact that not all transfers from 2-year to 4-year institutions involve one event. The other portion of the category includes any case of simultaneous enrollment (determined by overlapping term dates on transcripts from two institutions). Students in the Alternating/Simultaneous category exhibit a very high proportion of attendance at at least three institutions. Alternating attendance is also possible among institutions of the same type, and if we put these cases together with those of the inter-sectoral Alternating/Simultaneous category, we describe the oscillations of 16 percent of the HS&B/So postsecondary universe, and nearly 18 percent of those who earned bachelor's degrees. These are not insignificant portions.
One of the key questions involving any case of multi-institutional attendance is whether the student returned to the referent first institution. As we'll see in the multivariate analyses, the variable developed with this notion in mind is stronger than a mere count of undergraduate institutions. If one attends two or more schools and does not return to the first, one has engaged in a very different kind of movement than a stroll around the neighborhood.
The variable that captures this phenomenon was constructed by defining the date of the last term of undergraduate enrollment, and then determining whether the institution at that last date was the same as the referent first institution of attendance. There are some minor problems with any algorithm such as this, but the results reinforce common sense. For example, the more schools attended, the lower the "return rate"; and those who do not start in 4-year institutions are less likely to return to their first institution (this group includes the classic transfer students who, by definition, do not return to their first institution).
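The underlying comparison can be illustrated as follows, assuming ISO-style term-end dates (e.g. "1986-05") so that simple ordering identifies the last undergraduate term. This is a sketch, not the actual NCES algorithm.

```python
# Illustrative comparison behind the "return" variable; assumed inputs.

def returned_to_first(terms, referent_first):
    """terms: list of (term_end_date, school) pairs for undergraduate
    enrollment; returns None when no terms are on the record."""
    if not terms:
        return None
    last_school = max(terms)[1]  # school at the latest term-end date
    return last_school == referent_first
```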
It might be helpful to sort the common-sense from the counterintuitive with a correlation matrix confined to the major attendance pattern variables for students who attended more than one institution. Table 21 offers this matrix. Five dichotomous measures of place and two of time are included. The dependent variable is ANY 4-YR, that is, whether the student ever attended a 4-year college. The other variables in the matrix are:
|NUMSCHL||The dichotomy splits students who attended only 2 schools from those who attended 3 or more.|
|FIRST4||The student's referent first institution was a 4-year college.|
|INSYS||The student's multi-institutional attendance was confined to the system of 2-year and 4-year colleges. The student never stepped outside the system.|
|NODELAY||The student's referent date of entry to postsecondary education occurred within 10 months of high school graduation.|
|NOSTOP||Continuous enrollment. The student never "stopped out" for more than two semesters or three quarters.|
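For concreteness, the dichotomies above could be coded from a student record along these lines. The field names are hypothetical, not the HS&B/So variable layout.

```python
# Hypothetical construction of the table 21 dichotomies from one record.

def attendance_dummies(s):
    return {
        "NUMSCHL": int(s["n_schools"] >= 3),           # 3+ schools vs. exactly 2
        "FIRST4":  int(s["first_sector"] == "4-year"), # referent first was 4-year
        "ANY4YR":  int(s["ever_4year"]),               # ever attended a 4-year college
        "INSYS":   int(s["only_2yr_4yr"]),             # never left the 2-/4-year system
        "NODELAY": int(s["months_hs_to_entry"] <= 10), # entry within 10 months of HS
        "NOSTOP":  int(s["continuous"]),               # continuously enrolled
    }
```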
|Table 21. Correlations Among Attendance Pattern Variables|
|No Return||Numschl||First 4-Yr||Any 4-Yr||In System||No Delay||No Stop|
NOTES: (1) Universe consists of all students who earned more than 10 undergraduate credits and attended more than one postsecondary institution. (2) Weighted N=1.069M. (3) Design effect=1.52. SOURCE: National Center for Education Statistics: High School & Beyond/Sophomore cohort, NCES CD# 98-135.
What does table 21 begin to tell us? The correlations are not very high, but most are statistically significant. Some of the messages indicate that cross-currents are at work. The correlation coefficients suggest that:
Most of the variables in table 21 are those of place--at least in a generalized sense: number of schools, sector, type. One prominent aspect of place that is not covered by this configuration is geographical, and because states and regional bodies such as accrediting associations and interstate compacts (Western Interstate Commission on Higher Education, the Southern Regional Education Board, the New England Board of Higher Education) play key roles in planning or advising for the provision of higher education, a portrait of multi-state attendance may be helpful. After all, a major problem in discussions and arguments over graduation rates is that even the best tracking systems are confined within state borders. Given the extent of multi-institutional attendance, system graduation rates are very different from institutional graduation rates. Even statewide rates--for students who began their higher education careers in a given state--do not capture the movement of students across state lines. Table 22 sets forth the inter-state dimensions of mobility in the careers of the HS&B/So.
One category of analysis in table 22, "aggregate pattern categories," requires some explication. The "inter-sectoral" pattern describes any non-recursive (one-way) change of institution that crosses a border between the 4-year and 2-year college sectors, or between either of those and the "other" (principally non-degree-granting vocational) sector. Students evidencing a classic community college to 4-year college transfer are included here. The "intra-sectoral" pattern describes non-recursive multi-institutional attendance within a given sector. An "alternating" attendance pattern is one in which the student oscillates between two or more institutions, no matter what type of institutions they may be. The intra-sectoral pattern is slightly more likely to involve at least two states than the other two patterns. It will not surprise anyone that the vast majority (82 percent) of students in the intra-sectoral pattern who cross state lines are moving from one 4-year college to another.
|Table 22. Number of States|
| ||One||Two||Three or More|
|Total Number of Schools|
|Three or More||48.7||35.4||15.8|
|Aggregate Pattern Categories|
|Selected Institutional Combinations|
|4 to 2-yr transfer||66.7#||25.1*||8.2|
|2 to 4-yr transfer||74.2*||22.2||3.6|
NOTES: (1) Differences in all possible row-pairs are significant at p<.05. (2) All column pair comparisons are significant at p<.05 except those indicated by * or #. (3) The universe consists of all HS&B/So students who earned more than 10 credits and attended more than one postsecondary institution as undergraduates. Weighted N=1.069M. SOURCE: National Center for Education Statistics: High School & Beyond/Sophomore cohort, NCES CD#98-135.
Table 22 confirms other basic observations: the more schools one attends and the more a 4-year college is involved, the more likely one is to cross state lines in the process. The swirl of multi-institutional attendance sweeps all borders away. Furthermore, for students who attended more than one college (including at least one 4-year college), the bachelor's degree attainment rate for those who crossed state lines was higher (62 percent) than for those who stayed within state borders (55.4 percent). These issues should be important to state planners who are currently drawing on a variety of demographic scenarios to predict likely future enrollments in higher education. The most noted of these scenarios, a joint study of the Western Interstate Commission on Higher Education (WICHE) and the College Board (1998), focuses on the supply of traditional-age high school graduates. However well these exercises predict a state's high school graduating population, and even the proportion of those graduates who will continue on to college, none of the current models includes post-matriculation behavior such as multi-institutional attendance patterns and inter-state enrollment commerce. The High School & Beyond/So histories, including these features, on the other hand, have been shown to be helpful with national projections (Adelman, 1999). However useful these are as background tapestries, the HS&B/So was not designed for--and cannot be applied to--state-based analyses.
The literature on persistence and degree completion places considerable emphasis on variables describing timing of college entry and the type of institution first attended. A number of conventional wisdoms have grown into folklore from these research lines: starting in a 4-year college produces a higher bachelor's degree completion rate than starting in a 2-year college; the more selective the first institution of attendance, the higher the completion rate; the greater the delay between high school graduation and college entry, the lower the completion rate. Let us see how this folklore plays out in an analysis that refines both of these issues with the notion of "referent" first institution and--hence--"referent" first time.
First Institution: Sector and Selectivity
For purposes of basic understanding, the characteristics of first institution can be parsed by sector (4-year, 2-year, other), type (doctoral, comprehensive, liberal arts, community college, specialized, and other), selectivity (highly selective, selective, non-selective, open-door, and not rated), and level/control. I have never found level/control to be as revealing as the other basic characteristics. For example, in the standard taxonomy attached to all the national longitudinal studies data sets, proprietary schools that offer bachelor's degrees (including music conservatories, art schools, and large technical school chains) are not included with 4-year institutions. If, as the literature suggests, the 4-year school is the critical first institution of attendance for bachelor's degree completion and we scramble what we mean by a 4-year school, then we distort subsequent analyses and interpretations. A for-profit music conservatory can be highly selective. What is more important in considering the impact of attending such an institution: its for-profit status or the fact that it offers a bachelor's degree and is selective?
|Table 23. Bachelor's Degree Completion Rates by Sector and Type of First Institution|
| ||Access Group*||Participation Group*|
|Sector of 1st Institution|
|4-Year||63.2 (1.14)||65.9 (1.11)|
|2-Year:|
|All Students||14.8 (0.85)||19.3 (1.09)|
|Attended 4-Year at Any Time||59.0 (2.35)||60.0 (2.37)|
|Transfer Students||71.1 (2.33)||71.1 (2.33)|
|Other||4.4 (0.93)||5.7 (1.22)|
|4-Year College Sector|
|Doctoral||72.0 (1.57)||73.5 (1.11)|
|Comprehensive||54.7 (1.62)||58.7 (1.66)|
|Liberal Arts||71.4 (2.59)||73.0 (2.55)|
|Specialized||42.5 (5.60)||43.7 (5.76)|
NOTES: (1) *The "access group" consists of anyone for whom we received a transcript, even if there were no credits on the transcript (Weighted N=2.27M); the "participation group" consists of all students who earned more than 10 undergraduate credits by age 30 (Weighted N=1.94M). (2) Standard errors are in parentheses. (3) Transfer students, by the definition used in this study, earned more than 10 credits from a 2-year college and, subsequently, more than 10 from a 4-year college. SOURCE: National Center for Education Statistics: High School & Beyond/Sophomore Cohort restricted file, NCES CD#98-135.
Table 23 offers an account of bachelor's degree completion rates by sector and type of the first institution of attendance. It also homes in on students who started in 2-year colleges in order to highlight some rather dramatic differences within this population. For all students who began their postsecondary careers in a community college and who earned more than 10 credits, about one out of five eventually earned a bachelor's degree. The denominator of this ratio, however, includes a mass of students who never attended a 4-year college and had no intention of doing so. On the other hand, it is rather obvious that students who began in a community college, earned more than 10 credits from the community college, and subsequently attended a 4-year institution, whether in a classic transfer pattern or in an alternating/simultaneous enrollment pattern, eventually earned bachelor's degrees at a rate (71 percent) higher than that for those who started in a 4-year college. "Early transfers," those who jumped ship from the community college to the 4-year institution with 10 or fewer credits, completed bachelor's degrees at a much lower rate--38.4 percent (this group is not shown in table 23). Table 23 also suggests a bi-modal pattern within the 4-year college sector, with clearly higher degree completion rates for those who start in doctoral and liberal arts institutions than for those whose referent first institution is a comprehensive or specialized college. Because the proportion of 4-year college students whose first institution of attendance was a liberal arts college is low (12 percent), when we set up a dichotomous variable for institutional type in a multivariate equation, the construction is "doctoral" versus all others.
Table 24 moves on to the parsing of first institution of attendance by generalized selectivity levels for students in the "participation group." The selectivity levels were determined from the Cooperative Institutional Research Program's selectivity cells for first-time freshmen in 1982 (Astin, Hemond, and Richardson, 1982), the year most of the HS&B/So cohort entered higher education. For institutions not in the 1982 CIRP, selectivity was determined from the 1982 edition of Barron's Profiles of American Colleges. Given these different sources, a trichotomy seemed to be the most appropriate way to establish statistically significant borders. The effects of selectivity are enormous, though they may be confounded by the level of academic resources students bring forward from secondary school (a hypothesis to be explored in a multivariate context). These selectivity effects are reflected not only in bachelor's degree completion rates, but also in the proportion of graduates who continue to graduate or professional school, and in undergraduate GPA.(38) With respect to bachelor's degree completion rates, there is no real difference between the "highly selective" and "selective" institutions, thus justifying a dichotomous variable in multivariate analyses (any selectivity versus none).
|Selectivity of First Institution||Highly Selective||Selective||Nonselective|
|At First Entry||4.0%||13.7%||82.3%|
|BA Completion Rate for First Entrants||93.1*||87.9*||64.7|
NOTES: (1) The universe for "first entry" is confined to students whose referent first institution was a 4-year college and who earned more than 10 undergraduate credits. Weighted N=1.03M. The universe for graduation is confined to those for whom a bachelor's degree is documented. Weighted N=935K. (2) All row pair comparisons are significant at p<.05 except those indicated by asterisks. SOURCE: National Center for Education Statistics: High School & Beyond/Sophomore cohort, NCES CD#98-135.
Continuity of Enrollment
Continuity of enrollment is the first of two temporal variables that deserve further description. The definition of continuity of enrollment depends on the length of the period of measurement. Carroll (1989), Berkner, Cuccaro-Alamin, and McCormick (1996), and others, working with longitudinal studies of five- or six-year time frames, define non-continuous enrollment ("stop-out") as a gap of four months or more (excluding summer periods) in spells of attendance. The postsecondary time frame for the High School & Beyond/Sophomores, however, is 11 years. While the vast majority of the cohort, including those who did not earn degrees, drift away from postsecondary education after 8 years (age 26/27), we do not take the benchmark measure of completion until the history is censored at age 29/30. Under those circumstances, and based on a hand-and-eye reading of the records by two judges, a student's enrollment was judged to be non-continuous if it evidenced a break of (a) two or more expected consecutive semesters, or (b) three or more expected consecutive quarters, or (c) two or more breaks of one expected consecutive semester or two expected consecutive quarters. Put another way, the units of analysis behind the judgment of non-continuity are either one full year with no enrollment or two or more part-year gaps in enrollment.
The continuous enrollment variable in the HS&B/So database is not dichotomous. Students who were enrolled for less than one year (6.4 percent of all who entered) are not subject to judgment, and were assigned a separate value. And there were many cases (11.4 percent of all students) where continuity could not be determined, usually because records or term dates were missing. In presenting a descriptive account of continuity of enrollment, table 25 serves a dual role. It first maps continuity of enrollment against the aggregated multi-institutional attendance variable for students who earned more than 10 credits; it then provides guidance for constructing a dichotomous version (for convenience, called NOSTOP) for use in multivariate analysis of bachelor's degree completion.
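The judgment rules above can be expressed as a simple classification over a student's sequence of expected terms. The following is a minimal sketch, not the study's actual coding procedure (which relied on hand-and-eye reading by two judges): it assumes an enrollment history coded as one boolean per expected consecutive term, summers excluded, and the function and parameter names are illustrative.

```python
def classify_continuity(enrolled, system="semester"):
    """Classify an enrollment history as continuous or non-continuous.

    `enrolled`: list of booleans, one per expected consecutive term
    (summers excluded), True if the student was enrolled that term.
    Non-continuous if the history shows (a) a gap of 2+ consecutive
    semesters, (b) a gap of 3+ consecutive quarters, or (c) two or
    more separate breaks of one semester (or of up to two quarters).
    """
    # Collect the length of each run of missed terms.
    gaps, run = [], 0
    for e in enrolled:
        if not e:
            run += 1
        elif run:
            gaps.append(run)
            run = 0
    if run:
        gaps.append(run)

    if system == "semester":
        long_gap = any(g >= 2 for g in gaps)          # rule (a)
        short_breaks = sum(1 for g in gaps if g == 1)  # rule (c)
    else:  # quarter system
        long_gap = any(g >= 3 for g in gaps)           # rule (b)
        short_breaks = sum(1 for g in gaps if g <= 2)  # rule (c)

    return "non-continuous" if long_gap or short_breaks >= 2 else "continuous"
```

Note that, as in the HS&B/So variable itself, this sketch does not cover the two residual values: students enrolled for less than one year, and students whose continuity could not be determined, would have to be screened out before classification.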
Part A of table 25 provides further confirmation of the positive relationship we have observed between 4-year college attendance and continuous enrollment, as well as the negative relationship between inter-sectoral (including alternating/simultaneous) patterns of multi-institutional attendance and continuous enrollment. Part B of table 25 provides a preview of the potential strength of NOSTOP in multivariate analyses: the completion rate for continuously enrolled students is twice that for non-continuously enrolled students.
|Enrollment Continuity Status||Continuous||Non-Continuous||Enrolled Less Than 1 Year|
|Part A. Attendance Pattern|
|1 Institution Only||77.8||16.5||5.7|
|Students Who Attended a 4-Year College At Any Time|
|Part B. Completion|
|Completion Rate for Students Who Attended a 4-Year College At Any Time|
NOTES: (1) The Part A universe consists of all students who earned more than 10 credits and for whom a value of continuity could be determined. (2) Column comparisons for Part A are significant at p<.05 except for the asterisked pairs. (3) Rows for Part A add to 100.0%. SOURCE: National Center for Education Statistics: High School & Beyond/Sophomore Cohort, NCES CD#98-135.
Full-Time, Part-Time, or DWI?
Whether a student is full-time or part-time is a stock variable in virtually all analyses of postsecondary persistence and attainment. To repeat an observation from Part II: student reports of part-time enrollment fall well below aggregate institutional reports; the FT/PT measure is usually based on a snapshot of the first term of attendance; and, most importantly, students change enrollment status during their college careers.
Transcript analyses render the notion of "part-time" even more fragile. Students may start a term with a 15-credit load (full time), but drop or withdraw from six or more credits before the term is over. One never knows precisely when the drop or withdrawal took place, but a student who behaves in this manner moves into part-time status in the course of the term. It would be enormously difficult to estimate the true part-time rate in longitudinal studies, but we can draw some parameters from the transcripts. Table 26 demonstrates what I call the "DWI Index" for the HS&B/So cohort by highest degree attained by age 30: the proportion of the courses for which students registered over their undergraduate careers that they either dropped, withdrew from (with no penalty), or left as unresolved "incompletes." The proportions are set in three ranges. At a 20 percent DWI rate, there is a strong likelihood that the student became part-time at some point. Some 10.6 percent of the students, in fact, evidence a DWI Index in excess of 40 percent, and 95 percent of this group earned no degree.
While Carroll (1989) did not use these data, they support his contention of the "hazards" of part-time enrollment. Whether they are as strong as his other "hazards" remains to be seen. But it will not surprise anyone that of the 20 course categories (out of over 1,000) with the highest rates of DWI, eight are remedial courses such as developmental math and remedial reading (Adelman, 1995, p. 269). Students with high DWI indices are carrying other hazardous baggage. Any study of persistence and completion that limits its subjects to full-time students and assumes this variable to be incorruptible (e.g. Chaney and Farris, 1991) risks shaky conclusions. Even McCormick's (1999) transcript-based analysis defines part-time students in terms of credits attempted in the first semester, thus overlooking students with a de facto part-time status by virtue of DWI.
|Proportion of All Attempted Courses that Became DWIs*:||<10%||10-20%||>20%|
|For ALL Students:||62.0||14.7||23.3|
|Highest Degree Earned:|
NOTES: (1) All row and column differences significant at p<.05 except the column pair indicated by asterisks. (2) Universe consists of all HS&B/So students with complete postsecondary records. Weighted N=1.65M. (3) Rows add to 100.0%. *DWI=Drops, Withdrawals & Incompletes. SOURCE: National Center for Education Statistics: High School & Beyond/Sophomore cohort, NCES CD#98-135.
The volume of DWI (Drops-Withdrawals-Incompletes) phenomena reflected in table 26 above was striking enough(39) to demand a measure, particularly in light of a 12:1 ratio of Withdrawals to Drops (many of the latter being by-products of early-term scheduling adjustments). No doubt some of the students with significant DWI activity were part-time to begin with. But others "went part-time." Only 40 percent of the total adjusted sample went through their college careers without a D, W, I, or NCR (No-Credit Repeat, included because a repeated course involves the equivalent of one withdrawal) grade. Some 23.2 percent had a DWI Index of .2 or higher; these students reduced their course-taking (and, probably, their enrollment-status-qualifying credits) by 20 percent or more. For multivariate analysis, the DWI Index was turned into a dichotomous variable with a threshold of .2. In another stunning case of common sense, its correlation with degree completion is strong, and negative.
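The index and its dichotomous version are simple to state computationally. The sketch below assumes each student's attempted courses are coded as outcome strings; the codes "DROP", "W", "I", and "NCR" and the function names are illustrative, not the study's actual transcript coding.

```python
def dwi_index(outcomes):
    """Proportion of attempted courses that ended as a Drop,
    Withdrawal, Incomplete, or No-Credit Repeat (an NCR counts as
    the equivalent of one withdrawal, as in the text).

    `outcomes`: one code per attempted course, e.g. a letter grade
    or one of the hypothetical codes "DROP", "W", "I", "NCR".
    """
    dwi_codes = {"DROP", "W", "I", "NCR"}
    attempted = len(outcomes)
    if attempted == 0:
        return 0.0
    return sum(o in dwi_codes for o in outcomes) / attempted

def high_dwi(outcomes, threshold=0.2):
    """Dichotomous version used in the multivariate analysis:
    True when the DWI Index reaches the .2 threshold."""
    return dwi_index(outcomes) >= threshold
```

A student who registered for five courses and resolved only two of them (`["A", "W", "DROP", "B", "I"]`) would carry a DWI Index of .6 and be flagged by the dichotomous variable.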
We have now walked through the major variables of attendance patterns: those that deal with place and those that deal with time. Based on student transcript records, the construction has been inductive and empirical. I have illustrated relations among these variables, though not exhaustively; suggested just how complex these swirling patterns of student behavior have become; and have provided some clues as to how they might play out in explaining bachelor's degree attainment. Given the extent of multi-institutional attendance, I have declined to attach to each institution such characteristics as size, ethnic composition, special mission, curriculum offerings, proportion of students in residence, and other features that appear in the literature on college persistence. If attendance patterns involving more than one institution turn out to play a significant role in explaining degree attainment, then and only then would we be justified in turning to the task of accounting for combinations of institutional characteristics. But the student is the subject, and as we turn to the multivariate analysis, the student's critical life events and judgments move to center stage.