
Answers in the Tool Box: Academic Intensity, Attendance Patterns, and Bachelor's Degree Attainment — June 1999

I. Cultivating ACRES, the Academic Resources Index

So let us turn to the details. This section of our exploration is essential reading, for in it three indicators of students' high school curriculum and academic performance are constructed and then merged into a master variable, "academic resources." The acronym for this variable, ACRES, is intended to evoke an agricultural metaphor: it is during the pre-college years that one's academic history is planted and subject to early cultivation.

The Test

The most compliant of the three indicators was a senior year test given to nearly all (92.7 percent) of the HS&B/So students. The test can be described as a "mini, enhanced SAT." With a planned testing time of 68 minutes, it has core reading, vocabulary, writing, and mathematics sections with items drawn from old SATs, plus science and civics sub-tests (Rock, Hilton, Pollack, Ekstrom and Goertz, 1985). The composite score, however, does not include the science and civics sections. Exactly the same test was administered to the HS&B/So group as 10th graders in 1980. While the results are strongly correlated with SAT or ACT scores for the 57 percent of the sample that took either of those tests, the two are not psychometrically equatable. All scores for the senior test were set on a percentile scale. Where the senior test score was missing but the student's record included an SAT or ACT score, one could impute a percentile for the senior test, and this procedure was followed for 376 of the 13,477 students whose records exhibit any de facto national test scores(11). Whatever lumpy effects occasionally result are smoothed when the scale is divided into quintiles. The resulting variable is called TESTQ, or test quintile.
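A minimal sketch of the TESTQ construction just described (the function and field names are illustrative; the monograph does not publish its actual data-processing code):

```python
def to_quintile(percentile):
    """Map a 0-100 percentile to a quintile, coded 5 = highest fifth."""
    return min(int(percentile // 20), 4) + 1

def senior_test_quintile(senior_pct, sat_act_pct):
    """TESTQ: use the senior-test percentile when available; otherwise
    impute a percentile derived from the student's SAT/ACT record (the
    study did this for 376 of 13,477 students with any national test
    score)."""
    pct = senior_pct if senior_pct is not None else sat_act_pct
    return None if pct is None else to_quintile(pct)

# Small imputation errors wash out at quintile granularity:
# percentiles 71 and 75 both land in the same quintile, for instance.
```

This illustrates why modest "lumpiness" in the imputed percentiles matters little once the scale is banded into fifths.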

Class Rank and GPA

The construction of the second indicator began with students' high school class rank, by percentile, then quintile. Not all high schools rank students, and one could determine a class rank percentile "score" for only 9,082 of the 13,020 students for whom high school transcripts were available. An alternative approach was necessary for the residual group. An academic grade point average(12) was constructed for everyone and tested against class rank in those cases where both were available. The Pearson correlation was .841. While that is a high number, one is wary of substituting a specific percentile of academic grade point average for a missing class rank percentile because of variations in local grading practices. A larger unit of measurement was necessary to reduce the statistical noise, and quintiles were selected for the task. One could thus substitute an academic GPA quintile for a missing class rank quintile with more than a modicum of confidence. Quintile positions could be determined for nearly all students in the residual group. RANKQ is the result of combining both indicators.(13) It was this variable, more than any other, that determined the use of quintile presentations for otherwise continuous variables in the analysis.
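A minimal sketch of the RANKQ fallback rule (the quintiles are assumed to have been computed already from their respective distributions, coded 5 = highest; the names are illustrative, not the study's):

```python
def rankq(class_rank_quintile, academic_gpa_quintile):
    """RANKQ: prefer the class-rank quintile; where the high school
    did not rank its students, substitute the academic-GPA quintile
    (defensible because the two underlying scales correlate at .841)."""
    if class_rank_quintile is not None:
        return class_rank_quintile
    return academic_gpa_quintile

# A student from a ranking school keeps the rank-based quintile;
# a student from an unranked school falls back on academic GPA.
assert rankq(5, 3) == 5
assert rankq(None, 3) == 3
```

The substitution happens at the quintile level, not the percentile level, precisely to absorb the noise of local grading practices.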

Academic Curriculum: Intensity

The most complex and important of the variables describes the academic intensity and quality of one's high school curriculum. This construct challenges the five composite variables included in pre-1998 releases of the HS&B/So data that were based on the high school curriculum standards set forth as recommendations in A Nation at Risk (1983), the so-called "new basics." These five variables(14) do not reflect a hierarchy, and are impossible to validate. Unfortunately, a new literature has sprung from the use of these variables, so that in place of the old style dichotomy of academic/non-academic we now find "rigorous academic" v. "academic" (Akerhielm, Berger, Hooker, and Wise, 1998). "Rigorous," for all its high-sounding educational machismo, is very misleading in this context. "Rigor" applies to standards of performance, which vary widely from course to course and school to school, and no data set can claim to measure it. Academic intensity, on the other hand, is accessible and measurable.

So intensity comes first, and is then modified for quality. For this study, variables covering 15 broad high school subject areas were constructed by standardizing or censoring the credit ranges that appeared in the high school transcript files, with reference to state standards for high school graduation (Medrich, Brown and Henke, 1992). For science, two variables were formed: one covering core laboratory science only (biology, chemistry, and physics) and one for all science credits. This distinction was particularly necessary because the transcripts from some high schools did not specify the field of science or offered a "unified" science curriculum. For mathematics, four variables were created: all high school mathematics credits, remedial mathematics units, net mathematics units (all minus remedial), and HIGHMATH, a variable indicating the highest level of mathematics reached by the student in high school. HIGHMATH proved to be an extremely powerful construct, and its position in the subsequent forging of the Academic Resources model was suggested both by Pallas and Alexander (1983), who found some of the elements of the variable on the high school transcripts of the Academic Growth Study(15), and by Kanarek (1989), who found mathematics-related variables (the SAT-Q, an algebra test that was part of the New Jersey Basic Skills battery, total number of years of high school math, the student's self-reported most recent grade in high school math, and the student's self-reported rating of mathematics ability) to contribute significantly to a five-year graduation rate.

The second step in this process was to examine the credit distributions in six core curriculum areas: English, mathematics, laboratory science and total science, history, social studies, and foreign languages, and to cluster them so that five distinct levels of intensity could be discerned. This inductive approach is very different from that reflected in the old "new basics" variables. Table 4 highlights the differences between the highest value of our "academic intensity" indicator and that of the most "rigorous" version of the new basics curriculum configuration. By excluding remedial courses in basic skills and by using a core laboratory science benchmark, we begin to include quality criteria in addition to intensity (Alexander and Pallas [1984] also excluded remedial courses in constructing an empirical counterpart to the "new basics"). The "new basics" variables do not, and the only other major attempt in the literature to set up a detailed curriculum index to be used in multivariate analyses of student-reported "years of postsecondary education" (Altonji, 1994) drew on the NLS-72 high school records, which are not transcripts and cannot provide any details concerning the types or levels of mathematics or science(16).

Table 4.–High school units at the highest level of the "academic intensity" variable versus those of the "most rigorous" New Basics variable developed for pre-1998 releases of the HS&B/So data base.

"Most Rigorous"
New Basics
Units of English 3.75+ 4.0+
No Remedial English Yes(17) No
Units of Mathematics 3.75+ 3.0+
No Remedial Math Yes No
All Science Units (2.5)* 3.0+
Core Lab Science Units 2.0+ ---
Social Sci/History 2.0+ 3.0+
Foreign Languages 2.0+ 2.0+
Computer Science --- 0.5
TOTAL: 13.5+ or (14.0+)* 15.5+

Why does the highest value of academic intensity use 3.75 units of English and mathematics as a threshold--instead of the 4 unit criterion of the new basics version? The "new basics" variables were constructed from an externally-dictated blueprint. The selection of >3.74 Carnegie unit equivalents, on the other hand, was based on empirical clusters of credits on transcript records from different kinds of high schools with different calendar and credit systems, and in accordance with state requirements. The total of 13.5 or 14.0 academic, non-remedial credits for the highest value of academic intensity and 15.5 credits for the version of "new basics" used here should be compared to a national average of 14.2 academic credits (the remedial and core science strictures not included) for the high school class of 1982 (Tuma, Gifford, Horn, and Hoachlander, 1989). While these bottom lines fall in a fairly narrow range, the constructs are different.

The curriculum intensity variable was modified for quality, adding gradations to each of its five levels for the number of advanced placement courses (0, 1, and >1), highest level of mathematics reached in high school (+1 for trigonometry or higher, 0 for Algebra 2, and -1 for less than Algebra 2), and subtracting for any case where mathematics course work was largely remedial. The enhanced curriculum indicator then has 40 gradations, set out on a scale of equal intervals from 100 to 2.5.(18) At the highest interval, a mark of 100 on the scale, students display, at a minimum, the following contents of their high school portfolios:

At each of the 40 marks on the interval scale of the enhanced curriculum indicator, one will find a similar richness of curricular description (see Appendix C). The reader should know that this is what lies behind the quintile version of student positions on this scale. It is extremely important for the analysis, conclusions, and recommendations of this study to recognize that the enhanced curriculum indicator is a criterion variable: unlike test scores or class rank scales, it sets absolute goals of content, not relative measures of performance. Theoretically, everybody can reach the highest interval of curriculum intensity and quality.
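The quality adjustment described above has three ingredients. Since the text does not spell out the exact combination rule that yields the 40 gradations, the sketch below only illustrates those ingredients under assumed codings (the constant names, the 1-5 ladder coding, and the size of the remedial deduction are mine, not the study's):

```python
# Assumed coding of the 5-rung HIGHMATH ladder (1 = lowest)
LT_ALG2, ALGEBRA2, TRIG, PRECALC, CALCULUS = 1, 2, 3, 4, 5

def quality_modifiers(ap_courses, highest_math, mostly_remedial_math):
    """Return the three quality gradations applied to the base
    curriculum-intensity level: AP courses (0, 1, or >1), the HIGHMATH
    modifier (+1 trig or higher, 0 Algebra 2, -1 below Algebra 2), and
    a deduction (magnitude assumed) when math work was largely remedial."""
    ap_grade = min(ap_courses, 2)        # 0, 1, or >1 collapsed to 2
    if highest_math >= TRIG:
        math_grade = 1
    elif highest_math == ALGEBRA2:
        math_grade = 0
    else:
        math_grade = -1
    remedial_deduction = -1 if mostly_remedial_math else 0
    return ap_grade, math_grade, remedial_deduction

# The enhanced indicator itself is an equal-interval scale:
# 40 marks from 2.5 up to 100 in steps of 2.5.
scale = [2.5 * i for i in range(1, 41)]
assert len(scale) == 40 and scale[0] == 2.5 and scale[-1] == 100.0
```

The equal-interval property matters for the criterion-variable argument: every mark on the scale corresponds to an absolute curricular profile, not a relative standing.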

All three component variables were set out in quintiles. To obtain an initial rough estimate of their relative strength in relation to bachelor's degree completion, as well as to see the size of Ns and weighted Ns available for correlations and multivariate analyses, table 5 takes four groups of students and presents a simple cross-tab for each. To be included in these calculations, the student's record had to include all of the following: high school transcripts with in-scope credit totals, class rank/academic GPA, senior year test score, and 1993 degree status from the postsecondary transcript file.

Table 5.–Percent of HS&B/So students completing bachelor's degrees, by quintile of performance on three component variables of "academic resources."

  High 2nd 3rd 4th Low
1) Base Group
N=10470; Weighted N=2.83M
   H.S. Curriculum 70 44 19 5* 3*
   12th Grade Test 64 37 16 8 3
   Rank/GPA 59 36 19 7* 4*
2) On-Time HS Grads Only
N=9635; Weighted N=2.61M
   H.S. Curriculum 70 45 19 5* 3*
   12th Grade Test 64 38 17 8 3
   Rank/GPA 59 37 20 8* 4*
3) On-Time HS Grads with
SES Data; N=8819;
Weighted N=2.31M
   H.S. Curriculum 69 45 19 6* 3*
   12th Grade Test 64 38 17 9 3
   Rank/GPA 59 37 21 9 5
4) On-Time HS Grads with
SES Data Who Attended
College at Any Time;
N=6868; Weighted N=1.82M
   H.S. Curriculum 72 49 25 10* 6*
   12th Grade Test 67 43 23 14 7
   Rank/GPA 64 42 28 14 9

What we see in table 5 is that, no matter how one cuts the population, roughly the same proportions earn bachelor's degrees by quintile of the three component resource measures. Furthermore, academic intensity of high school curriculum emerges as the strongest of the three, followed by the 12th grade test score and class rank/academic GPA. Nor is it surprising that degree completion rates are higher, across all quintiles, when the population is restricted to those who actually entered postsecondary education.

HIGHMATH: Getting Beyond Algebra 2

Of all the components of curriculum intensity and quality, none has such an obvious and powerful relationship to ultimate completion of degrees as the highest level of mathematics one studies in high school. This is a critical equity issue because not all high schools can offer their students the opportunity to learn the higher levels of mathematics that propel people toward degrees--no matter what their eventual major field of study. If we are serious about preparing students not merely to enter higher education (access) but to complete degrees, then the lesson of what I call the "math ladder" should be heeded, and we will return to this issue in the conclusion of this monograph.

Table 6 uses a logistic regression model to illustrate the strong impact of opportunity to learn. The five-rung ladder consists of calculus, pre-calculus, trigonometry, Algebra 2, and less-than-Algebra 2. This HIGHMATH variable was controlled for socioeconomic status, and its analytic unit is an odds ratio, i.e. "the number by which we would multiply the odds [of completing a bachelor's degree] . . . for each one unit increase in the independent variable" (Menard, 1995, p. 49). In this case, the ladder says that, in the High School & Beyond/Sophomore cohort, for each rung of HIGHMATH climbed, the odds of completing a bachelor's degree increased by a factor of 2.59 to 1. Each rung up the SES quintile ladder (to match the 5-step math ladder), in contrast, increased the odds by a mere 1.68 to 1. To be sure, there are only two variables in this model, but even so, math solidly trounced SES! HIGHMATH correlates with SES at .3095 and with bachelor's degree completion at .5103 (see table 7), so the story told by the logistic model is supported by a related measure.

And the precise point at which opportunity to learn makes the greatest difference in long-term degree completion occurs at the first step beyond Algebra 2, whether trigonometry or pre-calculus. To be sure, some Algebra 2 courses in high school include trigonometry, but the preponderance of evidence for the period in which the HS&B/So students went to high school suggests that most trigonometry classes were discrete and distinctly labeled. Notice that in the 7-step account of the math ladder, the odds ratio "versus everybody" dips below 1 at the line between Algebra 2 and geometry, with the Beta value going negative at that point. For a moment, it appears that Algebra 2 is the significant cut-point. But when we move to consider sequential odds ratios ("odds ratio versus those below the referent rung"), the line between Algebra 2 and geometry becomes muddled and the difference is not statistically significant, while that between Algebra 2 and trigonometry is a clear break.

If we asked simply what percentage of students at each rung on the math ladder earned a bachelor's degree, the largest leap also takes place between Algebra 2 and trigonometry: a nearly 23 percentage-point increase among all high school graduates, and a 21 percentage-point increase among those who continued on to postsecondary education. The empirical account of degree completion, then, reinforces the "speculative" account of odds ratio relationships.

Table 6.–The math ladder: odds ratios for earning a bachelor's degree at each rung, controlling for socioeconomic status (SES), High School & Beyond/Sophomore cohort, 1982-1993.

Model with 5 rungs: Odds Ratio for HighMath: 2.59 t=14.1 p<.0001
  Odds Ratio for SES: 1.68 t= 9.2 p<.0001
Model with 7 rungs:

Highest Math       Odds Ratio vs.    Odds Ratio                 Pct. of H.S.   Pct. of Postsec   Pct. of All
Studied in H.S.    Those Below the   ("vs. Everybody")   Beta   Grads          Entrants          Students at
                   Referent Rung                                Earning BA     Earning BA        This Level

Calculus                9.52              9.52           2.25      79.8            81.6              6.4
Pre-Calc                7.20              6.15           1.82      74.3            75.7              5.9
Trig                    5.42              3.83           1.34      62.2            65.1             11.3
Algebra 2               4.15              1.54           0.43      39.5            44.4             28.3
Geometry                4.27              0.69          -0.38      23.1            28.5             17.0
Algebra 1               2.52              0.17          -1.77       7.8            11.9             20.0
Pre-Algebra             N.A.              0.07          -2.61       2.3             5.1             11.1
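The internal consistency of table 6 can be checked: in a logistic regression, the odds ratio "versus everybody" is simply e raised to the Beta, and the 5-rung model's 2.59-per-rung ratio compounds multiplicatively across rungs. A quick check in Python (values copied from table 6):

```python
import math

# (Beta, reported odds ratio "versus everybody") pairs from table 6
rows = {
    "Calculus":    (2.25, 9.52),
    "Pre-Calc":    (1.82, 6.15),
    "Trig":        (1.34, 3.83),
    "Algebra 2":   (0.43, 1.54),
    "Geometry":    (-0.38, 0.69),
    "Algebra 1":   (-1.77, 0.17),
    "Pre-Algebra": (-2.61, 0.07),
}

for name, (beta, reported) in rows.items():
    # odds ratio = e^Beta; agreement is within rounding of the
    # published two-decimal figures
    assert abs(math.exp(beta) - reported) < 0.05, name

# In the 5-rung model, climbing all four steps from the bottom rung
# to calculus multiplies the odds of finishing a bachelor's degree by
print(round(2.59 ** 4, 1))  # 45.0
```

The compounding is the point of the ladder metaphor: each rung's gain is modest, but the cumulative advantage from bottom to top is roughly 45 to 1.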

A decade ago, a similar attempt was made with the same HS&B/So cohort, without the college transcripts but with student-reported college access and February 1986 degree completion/senior status as the dependent variable (Pelavin and Kane, 1990). With an incomplete history (3.5 years after high school graduation) and a dependent variable that is far from the desired end of the story for most students, this analysis announced that completing one or more units of geometry was the critical mathematics filter for those who would be college bound. This conclusion was unfortunate. It is not the number of credits in a course that counts; rather, as Madigan (1997) demonstrated with high school science course-taking in relation to tested science proficiency, it is the level of courses that should be the unit of analysis. In mathematics, Madigan's advice would be gratuitous. It is obvious, in light of a full 11-year history, that high school students who stop their study of mathematics after completing geometry are not very likely to finish college. In support of their contention that geometry is the cornerstone of advancement, Pelavin and Kane (1990) noted that 29 percent of the HS&B/So students who took one or more years of geometry earned a bachelor's degree or had attained senior standing by the spring of 1986. As table 6 indicates, that 29 percent is as high as it got, even seven years later, and the percentage is well below the bachelor's degree completion rate for those who reached higher levels of mathematics in high school.

Mathematics is the only secondary school subject that presents a distinct hierarchy of courses and is required for graduation in all states. One could not replicate this type of analysis with any other subject. But in the formulation of "academic resources," the mathematics ladder becomes part of a larger construct. It helps us refine gradations of intellectual capital accumulation, and adds a quality dimension to curricular intensity. Its value is thus subsumed, and the variable does not stand alone in the multivariate analyses of this study.

Only 23.6 percent of the college-goers in the HS&B/So cohort (and only 18.4 percent of all high school graduates) reached trigonometry or a higher level of mathematics in high school. If moving beyond Algebra 2 is truly a gateway to higher levels of degree completion, then we have a conditional hypothesis: the higher the percentage of high school graduates who reach those levels of mathematics and subsequently attend a 4-year college at any time, the higher the overall system college graduation rate. For the more recent NELS-88 cohort (scheduled high school graduating class of 1992), 37.6 percent reached what the taxonomy for that data set calls "advanced levels" of mathematics (Data Analysis System, NELS-88, NCES CD#98-074). While this NELS-88 indicator is not an exact match with HIGHMATH, it is a credible proxy for the direction of student participation and curricular change during the 1980s and early 1990s. At this time, though, we have no idea whether the empirical or speculative analyses of the HS&B/So cohort will be validated by the NELS-88 since we will not be gathering college transcripts with long-term undergraduate histories for the NELS-88 cohort until 2001.

Correlations, Correlations

Now we can turn to the correlation matrix of table 7, in which the components of "academic resources" are set out. Some 10 variables are used in the matrix, employing the universe of the "base group" of students (see table 5).

Alexander and Pallas (1984) would note that the strong correlations with the 12th grade test score (those of curriculum, highest mathematics, and rank/GPA quintile) reflect momentum that is well established by 12th grade, and would hypothesize that similar strengths would be revealed with a 10th grade test that was also administered to the HS&B/So sample. They would also suggest that the one curricular area in which the coefficients increase when appropriate curricular variables are added in a linear regression with test scores as dependent variables is mathematics. And Pallas and Alexander (1983) found that not only the amount but also the level and type of mathematics and other quantitative courses taken in high school were far more powerful predictors of differences in the SAT-Q scores of men and women(19). This validation of the "differential coursework" hypothesis is one of the principal motivations behind my construction of and emphasis on academic intensity and quality variables--but with degree completion, not test scores, as the outcome.

Some of the correlations in table 7 are weak, suggesting that in multivariate analyses the variables will not add much to the explanatory power of the equations. For example, on-time high school graduation displays no strong relationships with anything else in the matrix. One might follow McCormick (1999) and use on-time high school graduation as a filter for the statistical noise that might result from including late graduates and GED recipients, but our story will suggest that direct entry to higher education, no matter when one graduates, is more important. Other variables exhibit superficially paradoxical relationships. For example, Advanced Placement course-taking (which, like Highest Mathematics, is subsumed in the curriculum quality variable) is more strongly related to degree completion than to mere entry into postsecondary education, even though 85 percent of those who took AP courses continued their education after high school.

Table 7.–Correlations of major pre-college "Academic Resources" variables and high school graduation status, college entry, and bachelor's degree attainment by age 30, High School & Beyond/Sophomore Cohort, 1982-1993

  Curric. Intensity Quintile (ACINQ)   Curric. Quality Quintile (CURRQ)   Senior Test Quintile (TESTQ)   Rank/Acad GPA Quintile (RANKQ)   Highest Math (5 Levels) (HMATH)   Math Was All Remedial (RMATH)   AP Courses (3 Levels) (APCRS)   On-Time Grad (ONTIM)   Entered Postsec Any Time (PSENT)
ACINQ 1.000 0.924 0.529 0.465 0.658 -0.415 0.358 0.175 0.336
CURRQ --- 1.000 0.595 0.518 0.756 -0.483 0.388 0.189 0.410
TESTQ --- --- 1.000 0.508 0.540 -0.397 0.337 0.115 0.401
RANKQ --- --- --- 1.000 0.471 -0.315 0.310 0.179 0.331
HMATH --- --- --- --- 1.000 -0.305 0.408 0.114 0.328
RMATH --- --- --- --- --- 1.000 0.166 -0.129 0.307
APCRS --- --- --- --- --- --- 1.000 0.073* 0.203
ONTIM --- --- --- --- --- --- --- 1.000 0.118
Earned BA by age 30 0.509 0.541 0.484 0.441 0.510 -0.255 0.316 0.108* 0.395

The matrix once again suggests that high school curriculum measures hold a stronger relationship to eventual bachelor's degree completion than the other major secondary school performance measures. Table 8 reiterates what we see in table 7, but with three outcome measures: entering postsecondary education, attainment of either an associate's or bachelor's degree, and attainment of a bachelor's--all by age 30. The table drives home the point that performance has less to do with entering college than with completing a degree program, but that, no matter what the outcome, curriculum intensity and quality holds the strongest relationship with it while class rank/GPA holds a comparatively weak one. Moreover, when we move the threshold of attainment from the associate's degree to the bachelor's, only the curriculum correlations change in a way that is both positive and significant.

While access (entering postsecondary education) is not the topic of this monograph, one notes in table 8 the comparatively high correlation of test scores with that event. The literature is fairly consistent on this finding. Using the NELS-88 longitudinal study, Akerhielm, Berger, Hooker, and Wise (1998), for example, found that low-income/high test score students entered higher education at a 75 percent rate, compared with 64 percent for high-income/low test score students, and this relationship held up in logit regression models. In an age of aggressive recruiting of minority students, who tend to be in the lower income bands (Cabrera and Bernal, 1998), this is a reasonable finding.

Table 8.–Pearson correlations of the major components of "academic resources" with three outcome measures, by age 30: High School & Beyond/Sophomores

                                    Entered          Associate's      Bachelor's
                                    Postsecondary    or Bachelor's

Curriculum Intensity & Quality        .410              .520             .541
Highest Mathematics                   .328              .478             .510
Curriculum Intensity Only             .336              .492             .509
12th Grade Test Composite             .401              .483             .484
Class Rank/Academic GPA               .331              .447             .441

Construction of the Composite Variable, "Academic Resources"

It is possible to carry each of these indicators forward, separately, into multivariate analyses. But the more complex the multivariate model, the higher the measurement error under those circumstances. The potential of a composite variable thus arises, and these are used often in NCES data sets. Any composite variable involves a statistical trade-off: one obtains a lower measurement error at the cost of greater covariance, that is, the intrinsic relationships among the components. I preferred the lower measurement error.

Once the three component indicators were developed, they were tested, individually and together, in both linear and logistic regression equations using bachelor's degree attainment as the dependent variable. The weighting of the three components was based on their comparative odds ratios in the logistic equation; each component was then weighted by its comparative contribution. Table 9 displays the basic model for determining those weights for all students whose records included all three of the component variables, who graduated from high school before 1988, and for whom highest degree earned by 1993 is known with certainty. Even though 94 percent of the high school graduates in the HS&B/So sample had received their diplomas or equivalencies by the end of 1983, the pre-1988 boundary for high school graduation was chosen for this calculation because, given a mean elapsed time to bachelor's degree of 4.74 calendar years in the HS&B/So, students receiving diplomas by the end of 1987 had the chance to enter college and complete a degree by the time the transcripts were gathered in 1993. For anyone graduating from high school after that point, the chances of meeting the 1993 censoring date for earning a bachelor's degree were nil.

Table 9.–Weighting of the three components of Academic Resources based on their comparative odds ratios in a logistic regression with bachelor's degree attainment by age 30 as the dependent variable, High School & Beyond/ Sophomore cohort, 1982-1993

                  Estimate    s.e.      t     p<     Odds Ratio   Weight
Intercept         -3.1794    0.1336   14.5
Curriculum         0.7252    0.0469    9.4   .001       2.15       40.9%
Test Score         0.4687    0.0464    6.2   .001       1.60       30.4
Class Rank/GPA     0.4150    0.0435    5.8   .001       1.51       28.7
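The weights in table 9 are each component's odds ratio taken as a share of the sum of the three odds ratios, which can be reproduced directly:

```python
# Odds ratios from table 9
odds = {"Curriculum": 2.15, "Test Score": 1.60, "Class Rank/GPA": 1.51}
total = sum(odds.values())  # 5.26

# Each component's weight is its share of the summed odds ratios
weights = {k: round(100 * v / total, 1) for k, v in odds.items()}
print(weights)  # {'Curriculum': 40.9, 'Test Score': 30.4, 'Class Rank/GPA': 28.7}
```

This reproduces the 40.9 / 30.4 / 28.7 percent split shown in the table's Weight column.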

Table 10 presents a linear regression with the same components, not because it is the source of the weighting, but to indicate that an alternative statistical method yields the same general relationships. For the time being and the purposes at hand, the model is very simple. The adjusted R2 is solid: it says that, in the absence of any other controls, these three components of the academic resources students bring to higher education account for about 35 percent of the variance in bachelor's degree completion (and where degree completion is unknown, it is assumed to be none). The standard errors are tight, and the indicators of significance are robust (the minimum acceptable t would be about 2, and all of the ts in this equation are much higher).

Table 10.–Basic linear regression model for the components of academic resources, with bachelor's degree attainment as dependent variable, High School & Beyond/ Sophomore Cohort, 1982-1993

  Estimate (Beta) s.e. t p< Contribution to R2
Intercept -0.4108 .0141 18.8    
Curriculum 0.1076 .0050 14.2 .001 .2947
Senior Test 0.0643 .0051 8.3 .01 .0399
Rank/GPA 0.0533 .0049 7.2 .01 .0176
  Adjusted R2=0.3521

At this very raw preliminary stage, and at the suggestion of one of the reviewers of this study, I tried to bring sex, race (a dichotomous variable, with African-Americans/Latinos/American Indians=1), and a socioeconomic status quintile measure into the basic linear regression model. Neither race nor sex met a very generous selection criterion for statistical significance of .20 (the default selection criterion in the software package is .05). SES, on the other hand, not only met the selection criterion but edged out class rank/GPA for third place (out of four) in contribution to the explanatory power of the model. Consider this exercise a precursor, for SES reflects not merely income, but the kind of parental knowledge of what is involved in higher education, in admissions and college choice, in children's occupational and educational plans, and financial planning--all of which contribute to a supportive environment for student persistence (Flint, 1992). These socioeconomic factors, by-products of parental occupation and/or level of education, have been persistently shown to be far more important than race or sex in relation to a child's degree attainment (Burbridge, 1991; Hearn, 1991).

Once a student has a quintile score for a given component, for example, Senior Test, it is multiplied by its relative strength weight. In this method, the scale for Academic Resources is compressed to 1-5, and the sum of the adjusted component weights is the basic "Academic Resources Index." A student in the highest curriculum quintile, the second highest test quintile and the second highest class rank/GPA quintile would have a composite academic resource index of 4.409 (5x.409 + 4x.304 + 4x.287). Index scores such as these were again set out in quintile bands. The final product is the variable called ACRES. It is this variable that we carry forward into multivariate analyses.
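The arithmetic of the worked example above, with quintiles coded 5 = highest and the table 9 weights:

```python
# Component weights from table 9 (shares of the summed odds ratios)
WEIGHTS = {"curriculum": 0.409, "test": 0.304, "rank_gpa": 0.287}

def acres_index(curriculum_q, test_q, rank_gpa_q):
    """Composite Academic Resources index on a 1-5 scale
    (quintiles coded 5 = highest)."""
    return (WEIGHTS["curriculum"] * curriculum_q
            + WEIGHTS["test"] * test_q
            + WEIGHTS["rank_gpa"] * rank_gpa_q)

# Highest curriculum quintile, second-highest test and rank/GPA quintiles:
print(round(acres_index(5, 4, 4), 3))  # 4.409, as in the text
```

Because the three weights sum to 1.0, the index stays on the same 1-5 scale as the component quintiles before being re-banded into the ACRES quintiles.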

This method of creating an academic resources index differs somewhat from its sources of inspiration. Alexander, Pallas, and Holupka (1987), for example, boiled 18 combinations of curriculum, test scores and grades down to four groups. Their formula took a standard deviation above and below the mean for both test score and GPA, and crossed the results with a dichotomous curriculum variable, academic/non-academic. Their extreme groups are thus roughly equivalent in size to the tails of a standard distribution, that is, the top 16 percent and the bottom 16 percent. While this approach carries a strong academic logic, it is not as persuasive in public policy applications as would be an array of standard and fairly transparent intervals such as quintiles, nor does it highlight the kind of curricular details one would assume from a "differential coursework hypothesis." In a similar vein, it would be possible to use a standard distribution of "academic resources index" scores, isolating the tails, and blocking the middle ranges in Standard Deviation Units (see Alexander, Riordan, Fennessey, and Pallas, 1982, and Alexander, Holupka and Pallas, 1987). That strategy, however, again undercuts the very purpose of the curriculum portion of the index, which relies on benchmarks that, theoretically, all students can reach if they have the opportunity. The SDU strategy also results in a tripartite division of academic resources (20) that does not match the accessible quintile formulation of the key SES variable.

Does ACRES Work?

Prior to multivariate analysis, there are two ways to illustrate whether the composite variable tells a consistent story against its principal "rival" in this early stage of investigation, socioeconomic status (SES). ACRES and SES, of course, are hardly "rivals": the Pearson correlation between the two variables is .368, a modestly strong relationship. Even so, table 11 provides some clues as to the extent to which Academic Resources can overcome the effects of SES. While there is a linear relationship between both variables and bachelor's degree completion, the curve for Academic Resources is much steeper. The long-term degree completion rate for those in the highest quintile of ACRES is 72.5 percent, 17 percentage points higher than for those in the highest quintile of SES. Yes, the higher one's initial SES quintile, the stronger one's platform for launching an effort to earn a bachelor's degree, but acquiring academic resources pays off at a higher rate of interest, so to speak. Among those who attended a 4-year college at any time, the ACRES story is more consistent than the SES story.

Table 11.–ACRES versus socioeconomic status in relation to bachelor's degree completion, High School & Beyond/Sophomore cohort, 1982-1993


Percent Completing Bachelor's Degrees Within Each Quintile

  High 2nd 3rd 4th Low
All High School Graduates
Academic Resources 72.5 44.8 17.3 5.4 2.0
Socioec Status 55.4 32.9 23.2 14.5 7.2
All Who Attended
a 4-Year College
Academic Resources 80.3 64.1* 40.1 25.1 16.7
Socioec Status 72.1 59.6* 55.5* 45.5 35.4

Another, and more dramatic, way to illustrate these relationships is to present bachelor's degree completion rates within each quintile of Academic Resources, controlling for SES. Table 12 provides a descriptive account for all high school graduates in the HS&B/So who subsequently attended a 4-year college at any time. Why is this a more dramatic account? Because it shows, for example, that students from the lowest two SES quintiles who are in the highest ACRES quintile earn bachelor's degrees at a higher rate than a majority of the students from the highest SES quintile. It also demonstrates that students in the bottom two quintiles of ACRES earn degrees at a low rate no matter what their SES standing.

This is a temporary judgment in the unfolding of this study. It is temporary because the universe (in table 11, in particular) has been divided in a rather stark manner: all high school graduates v. those who went on to attend a 4-year college at some time. There are students who aspire to 4-year degrees but never attend 4-year colleges, and their background characteristics may be sufficiently different from others to warrant a recalibration of the balances among SES, Academic Resources, and degree attainment. The most productive moment to confront this issue is at the beginning of the multivariate analysis in Part IV.

Table 12.–Bachelor's degree completion rates by quintile of ACRES, controlling for socioeconomic status, High School & Beyond/Sophomore cohort, 1982-1993

  Percent Completing Bachelor's Degree Within Each ACRES Quintile
  High 2nd 3rd 4th Low
Socioeconomic Status Quintile
Highest 85.9 75.4 51.2 28.1* 12.8*
2nd 79.2# 59.5# 42.7# 20.7* 20.1*
3rd 78.6# 58.7# 38.4# 29.5* 17.4*
4th 66.0## 60.6# 24.5* 26.5* 14.6*
Lowest 62.2## 42.1 28.9* 20.4* 20.8*


As a composite variable heavily weighted by intensity and quality of high school curriculum, academic resources (ACRES) is a valid and viable construct to represent the intellectual capital brought by students to the threshold of postsecondary education, and it can be profitably carried forward into analyses of postsecondary careers. For the High School & Beyond/Sophomore cohort, the components of ACRES reflect critical details of secondary school performance, not misleading generalized dichotomies. ACRES is a heuristic in which we can have great confidence, far more than in a label such as "college qualified" that Berkner and Chavez (1997) constructed with reference to the pre-collegiate performance of students who enter 4-year colleges(21). How much it will contribute to our understanding of bachelor's degree completion when other background variables and college experiences are brought into play remains to be seen. It is precisely because our dependent variable is long-term (by age 30) bachelor's degree completion for a conventional age cohort that we now pause to consider what the completion issue entails and which other variables traditionally used in analyses of educational attainment must be either discarded or reconstructed.
