NACE Journal, February 2016
The College Scorecard limits post-graduate information to salary for the school as a whole. NACE's research director looks at how three factors—type of school, demographics, and academic program—affect salary results.
On September 20, 2015, the U.S. Department of Education released the latest version of the government's "College Scorecard." This is the first version to include institutional "performance" measures related to the outcomes of college graduates. In effect, it is the first complete version of the Scorecard as envisioned by the Obama White House. However, it does differ significantly from some of the President's original intentions for the Scorecard.
In 2013, the President announced a plan to reform college accessibility and outcomes by developing a new ranking system; the system was to be based on performance measures that focused on how well a school provided access to its institution and how well that institution's students performed in terms of getting their degrees and becoming integrated into the nation's economy. He charged the U.S. Department of Education (DOE) with the task of identifying metrics for accessibility, affordability, and student outcomes.
The original intent was to have the DOE develop the comprehensive assessment in 2015, then test the metrics over the course of a couple of years, and, eventually, use the performance rankings to allocate federal student aid to schools that best achieved the combination of accessibility, affordability, and graduation/post-graduation outcomes. However, representatives of the higher education community, including NACE, criticized the concept, pointing to the potentially contradictory incentives a comprehensive ranking system would place on colleges.1 The DOE recognized the validity of this criticism and quickly abandoned the comprehensive ranking system, choosing instead to focus on a general rating for colleges on each of the individual measures in the Scorecard. However, this attempt also ran into problems, particularly with the outcomes measures: Is it legitimate to rate the performance of an institution on the aggregated outcomes of its students when a plethora of other factors may affect the results for any individual student?
The DOE ultimately decided that "rating" individual colleges was inappropriate on the factors that were of core interest to the White House. Instead, the DOE chose to develop the "Scorecard" as a consumer information tool. The Scorecard now provides basic metrics related to access, affordability, and outcomes for each college or university in the United States without actually providing rankings or rating the individual school on each measure. For example, one of the metrics used for post-graduation outcomes is the median salary for graduates 10 years after entering the college or university. The Scorecard does not rank or rate individual schools on how well their graduates perform in the economy after graduation; rather, it is providing basic information a potential student and/or the student's parents can use in choosing a college.2
There are still a number of problems with the outcomes information provided. First, the post-graduation information is limited to salary data. There is no information about graduates continuing on to an advanced degree or the nature of the graduates' employment. Second, the data are aggregated figures for the school as a whole. There is no way to assess the salary outcomes for an individual academic program within an institution. Third, the outcomes information is applicable for only a portion of an institution's graduates. The DOE was limited in having access to graduate income data to only those students who took federally subsidized student loans. The DOE did not have access to outcomes measures for graduates who did not take part in federal loan programs, which, for a number of schools, represented the vast majority of the institution's graduates.
These limitations are serious shortcomings in the usefulness of the Scorecard as a consumer information tool—at least in terms of the Scorecard's post-graduation outcomes measures. However, the data that the DOE has made available do allow for some interesting research into the relationships between institutional characteristics and graduate outcomes, at least for the segment of the college graduate population that the information covers. To illustrate that potential, we will examine some of the factors that correlate with these outcomes.
Factors Associated With Institutional Outcomes
In this article, we look at three factors that may potentially affect institutional outcomes, as defined by the median salary of those receiving federal aid 10 years after entering the school:
First, we will examine whether there is any difference based on the type of school—public, private nonprofit, private for-profit. Second, we will analyze a variety of demographic factors that may impact institutional outcomes, including gender and ethnic composition, the income profile of the student body (as measured by the percentage of students with Pell Grants), and admissions criteria. Finally, we will examine one of the major concerns critics have raised about the outcomes data: the effect different majors can have on the results.
School Type and Post-Graduation Outcomes
Figure 1 shows the average and median institutional salaries for public, private nonprofit, and private for-profit school types. The more traditional school types exhibit higher average salaries for their graduates than do for-profit institutions. Although the private nonprofit institutions have higher average salaries than do public colleges and universities, the difference is not statistically significant. The same is not true of the difference between these traditional schools and the for-profit institutions. The difference of approximately $5,000 in average salary 10 years after entering college is statistically significant, with graduates from for-profit institutions performing considerably worse than the graduates of traditional schools.
Figure 1: Salary by institutional control
Before jumping to a conclusion that for-profit institutions are poor performers in terms of improving the prospects of their graduates, it is important to recognize that there are sizable differences in the populations served by the for-profit institutions as compared to their more traditional counterparts. The entrance cohorts of for-profit institutions are significantly more male (for-profit: 49 percent; private, nonprofit: 43 percent; public: 42 percent); significantly more composed of underrepresented minorities (for-profit: 43 percent; private, nonprofit: 28 percent; public: 31 percent); and significantly poorer, as indicated by the percentage of students receiving Pell Grants (for-profit: 66 percent; private, nonprofit: 41 percent; public: 40 percent).
To control for these demographic differences in determining the independent relationship between graduates of for-profit institutions and post-graduation salary levels, we ran a regression analysis that incorporated each of the preceding demographic variables along with a dummy variable that categorized institutions as either for-profit or other. The results are expressed in the following equation, where each of the variables has been transformed into standardized units of measurement:
Equation 1: Salary = -.753 (Pell) + .163 (For-Profit) + .112 (Minority) - .087 (Women)
What the equation tells us is that the relationship between post-graduation salaries and for-profit schools is still statistically significant. However, the direction of the relationship is completely different. Rather than a negative correlation between for-profit institutions and the salaries of graduates, there is a relatively small but positive correlation. What drives the initial negative relationship between for-profit institutions and post-graduation salaries is the large percentage of poor students these institutions serve, as indicated by the percentage of entrants receiving Pell Grants. If we hold the percentage of Pell Grant recipients steady, the "performance" of for-profits appears somewhat superior to that of more traditional higher education institutions.
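As a concrete illustration of this kind of standardized (beta-weight) regression, the sketch below fits ordinary least squares to synthetic institution-level data. All variable names, coefficients, and numbers are invented for illustration; this is not the Scorecard data, and the simulated salary is deliberately constructed so that the Pell share dominates, echoing the article's finding.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic institution-level data (illustrative only, not the Scorecard data)
pell = rng.uniform(0.1, 0.9, n)                    # share of entrants with Pell Grants
for_profit = rng.integers(0, 2, n).astype(float)   # dummy: 1 = for-profit
minority = rng.uniform(0.05, 0.6, n)               # share of underrepresented minorities
women = rng.uniform(0.3, 0.7, n)                   # share of women

# Simulated salary dominated by the Pell share
salary = (60000 - 30000 * pell + 2000 * for_profit
          + 5000 * minority - 3000 * women + rng.normal(0, 2000, n))

def standardize(x):
    """Convert to z-scores so coefficients are comparable across variables."""
    return (x - x.mean()) / x.std()

# With every variable standardized, each fitted coefficient (beta weight) is the
# expected change in salary, in standard deviations, per one-standard-deviation
# change in that predictor -- the form used in the article's equations.
X = np.column_stack([standardize(v) for v in (pell, for_profit, minority, women)])
betas, *_ = np.linalg.lstsq(X, standardize(salary), rcond=None)

for name, b in zip(["Pell", "For-Profit", "Minority", "Women"], betas):
    print(f"{name}: {b:+.3f}")
```

Because the variables are standardized, no intercept is needed, and the relative magnitudes of the betas directly answer "which factor matters most," which is how the article compares Pell share against the other demographics.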
Demographics and Student Outcomes
The regression equation points out the importance of demographic factors in determining the "performance" of colleges and universities when it comes to the post-graduation outcomes of their students, at least as defined by the median salary.
Clearly, the most important factor in determining the median salary of an institution's graduates is the percentage of entrants who come from relatively poor circumstances, as identified by the percentage of Pell Grant recipients in the student body: The higher the percentage of Pell Grant recipients in a college or university class, the lower the median salary of its graduates.
While the percentage of Pell Grant recipients is by far the single most impactful factor in the preceding regression equation, it is not the only demographic factor significantly correlated with the median income of graduates. The percentage of underrepresented minorities (African-Americans, Hispanic Americans, Native Americans, Hawaiians/Pacific Islanders, and multi-racial students) in the student body has a small but positive correlation with median income. This does not mean that underrepresented minorities perform better in terms of post-graduation salaries; rather, the institutional average increases if the institution has a more diverse student body.
In contrast to the positive relationship with ethnic diversity, the presence of a higher percentage of women in the student body has a negative correlation with institutional graduate results. However, the relationship, while statistically significant, is quite small, suggesting that the impact of a high percentage of women on campus is minimal and might be negated by other factors that are not fully accounted for in the preceding regression analysis.
One of the factors not accounted for may be the degree to which the school is selective in admitting its student body. To account for the impact of selectivity, we ran another regression in which the percentage of applicants offered admission to the entrance class is one factor and the average SAT score for the entrance class is another factor. Both of these variables can be viewed as representing the basic ability of the entrance cohort to achieve. The expectation is that institutions that can be more selective in terms of admitting applicants to their student body should reasonably expect a higher average performance in their students after graduation, regardless of other demographic factors.
This regression analysis was restricted to public and private, nonprofit colleges and universities. The for-profit schools were excluded because few of them require the SAT and because there is little variation in the percentage of applicants they admit.
We examine the impact of selectivity factors on median incomes in a two-step analysis. First, we run the regression analysis with the percentage of applicants admitted included as a separate independent variable. Then, we rerun the analysis with average SAT included as a second selectivity variable to determine whether one or both play a significant role in correlating with the median income of an institution's graduates. As before, the succeeding equations express the results with each of the variables transformed into standardized units of measurement.
Equation 2: Salary = -.687 (Pell) + .188 (Minority) - .133 (Admit Rate) - .101 (Women)
In Equation 2, the admissions rate is indeed statistically significant and points in the expected direction. The equation shows that the higher the percentage of applicants admitted, the lower the median post-graduation income. However, the relationship between the admissions rate and median income is relatively small given the presence of other factors, particularly the percentage of Pell Grant recipients. The percentage of underrepresented minorities and the percentage of women continue to have a statistically significant relationship with institutional median incomes. As before, the presence of a more-diverse student body is correlated with higher income levels, while a higher percentage of women is related to lower median incomes. With the equation restricted to public and private, nonprofit schools, and adding the admissions rate as a factor, the linkages of ethnic diversity and the percentage of women with median incomes are still relatively small but grow marginally stronger.
Equation 3: Salary = .454 (SAT) - .372 (Pell) + .258 (Minority) - .110 (Women)
Equation 3 results in some sizable changes in the relationships between the main demographic variables and the median income of an institution's graduates. First, the major demographic impact is no longer the percentage of economically disadvantaged students. Average SAT is the variable with the strongest independent relationship. It goes in the expected direction: the higher an institution's average SAT score, the higher the median income of its graduates. More interesting is how strong that relationship is even in the presence of other demographic variables.
The percentage of Pell recipients still has a strong independent relationship, but the strength of that relationship is mitigated considerably when average SAT is accounted for in the equation. In fact, the presence of the institutional SAT score almost halves the strength of the independent relationship between the percentage of Pell recipients and median graduate incomes. Ethnic diversity's correlation also continues to be positive, and the independent strength of that relationship is actually strengthened by taking into account institutional SAT scores. The strength of the relationship between the percentage of underrepresented minorities and median post-graduate incomes begins to approach, though in the opposite direction, the relationship between Pell and median income. The correlation between the percentage of women and median income remains about the same as it was in previous equations: negative and small. Finally, the significance of the percentage of applicants admitted is completely negated by taking into account the average SAT. Thus, it was eliminated as an independent variable in Equation 3.
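The shrinkage of the Pell coefficient between Equation 2 and Equation 3 is a general property of regression with correlated predictors. The sketch below demonstrates the mechanism on synthetic data (all numbers are invented; only the negative SAT-Pell correlation mirrors the real data): fitting the Pell beta with and without an SAT variable in the model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Synthetic, illustrative data in which SAT and the Pell share are negatively
# correlated, as they are in the Scorecard data (all numbers invented).
sat = rng.normal(1100, 120, n)
pell = np.clip(0.9 - 0.0006 * sat + rng.normal(0, 0.08, n), 0.0, 1.0)
salary = 20000 + 30 * sat - 15000 * pell + rng.normal(0, 3000, n)

def std(x):
    return (x - x.mean()) / x.std()

def beta_weights(y, *xs):
    """Standardized OLS coefficients for y on the given predictors."""
    X = np.column_stack([std(x) for x in xs])
    b, *_ = np.linalg.lstsq(X, std(y), rcond=None)
    return b

b_without = beta_weights(salary, pell)       # Pell alone
b_with = beta_weights(salary, pell, sat)     # Pell plus SAT

# Because SAT absorbs the variation it shares with Pell, the Pell beta shrinks
# markedly once SAT enters the model -- the pattern seen between Equations 2 and 3.
print(f"Pell beta without SAT: {b_without[0]:+.3f}")
print(f"Pell beta with SAT:    {b_with[0]:+.3f}")
```

The same mechanism explains why the admissions rate drops out entirely in Equation 3: once average SAT is present, the admit rate has little independent variation left to explain.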
Academic Program and Institutional Outcomes
Our final objective is to determine if and how much academic program may affect an institution's post-graduate outcomes.
One of the most consistent criticisms leveled at this version of the College Scorecard when it was introduced in fall 2015 was that the outcomes data—specifically the median wage of graduates—was for the school as a whole rather than for individual programs at an institution. The question we ask is: How much difference does this make?
We examine the issue in two ways. First, the four-year institutions in the database were delineated by their undergraduate instructional profile as defined by the Carnegie Classification system. The Carnegie system classifies undergraduate instructional profiles in a variety of categories related to the percentage of a school's bachelor's degrees awarded in majors the system recognizes as arts and sciences as opposed to those majors it classifies as professional, and to the level of graduate instruction present on the campus. For this purpose, we ignored the distinctions related to graduate education and focused on classifying institutions according to the percentage of students graduating in the arts and sciences as opposed to the professions. The result was five categories of undergraduate instructional profiles:
- Arts and Sciences: At least 80 percent of bachelor's degree majors are in the arts and sciences;
- Arts and Sciences plus Professions: From 60 to 79 percent of bachelor's degree majors are in the arts and sciences;
- Balanced Arts and Sciences/Professions: Bachelor's degree majors are relatively balanced between arts and sciences and professional fields (41 to 59 percent in each);
- Professions plus Arts and Sciences: From 60 to 79 percent of bachelor's degree majors are in professional fields (such as business, education, engineering, health, and social work);
- Professions: At least 80 percent of bachelor's degree majors are in professional fields (such as business, education, engineering, health, and social work).3
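The five bands above amount to a simple threshold classification on the arts-and-sciences share of bachelor's degrees. A minimal sketch (the function name is our own, and the real Carnegie methodology also considers graduate coexistence, which the article deliberately sets aside):

```python
def carnegie_undergrad_profile(pct_arts_sciences: float) -> str:
    """Map the percentage of bachelor's degrees awarded in arts-and-sciences
    majors to one of the five profile bands listed above. Simplified sketch;
    not the full Carnegie methodology."""
    if pct_arts_sciences >= 80:
        return "Arts & Sciences"
    if pct_arts_sciences >= 60:
        return "A&S plus Professions"
    if pct_arts_sciences >= 41:
        return "Balanced A&S/Professions"
    if pct_arts_sciences > 20:        # i.e., 60-79 percent professional majors
        return "Professions plus A&S"
    return "Professions"              # 80+ percent professional majors

print(carnegie_undergrad_profile(85))   # Arts & Sciences
print(carnegie_undergrad_profile(30))   # Professions plus A&S
```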
Figure 2: Median salary by undergraduate program, all schools
|Program||Median Salary|
|Arts & Sciences||$45,967|
|A&S plus Professions||$45,966|
|Professions plus A&S||$40,283|
Figure 2 shows the median post-graduate salary for institutions as categorized by their undergraduate instructional profile. The table suggests that academic major may have an impact on institutional performance in terms of post-graduate income outcomes. The graduates of schools where the academic focus is in the arts and sciences appear to do decidedly better than schools where the focus is on professional studies. The difference between schools where graduates are concentrated in the arts and sciences and schools where graduates are clustered in professional majors is statistically significant.
One concern with the validity of this relationship is the potential influence of for-profit institutions in the mix. We have already seen that these schools have a significantly lower average median salary for their graduates. The graduates of these schools are also likely to be concentrated in the professional majors category. To eliminate the possible bias in the relationship from the presence of for-profit schools, we re-ran the table for only public and private, nonprofit colleges and universities.
Figure 3: Median salary by undergraduate program, traditional schools
|Program||Median Salary|
|Arts & Sciences||$46,438|
|A&S plus Professions||$46,052|
|Professions plus A&S||$40,171|
Figure 3 shows that the relationship between types of undergraduate instructional program and the median salary of graduates continues to exist in the absence of for-profit institutions. In fact, the differences by program are somewhat greater for just the "traditional" schools.
That academic majors would have an influence on a school's post-graduate economic outcomes is not particularly surprising. The salary and job outlook reports that NACE has conducted for many years have consistently shown that the economic prospects for some majors are better than others, and NACE has not been alone in seeing these results. However, the majors that have consistently recorded better economic outcomes in terms of both employment and salary are those that Carnegie would classify as professional (engineering and business). Consequently, the finding that schools whose graduates are concentrated in the arts and sciences do significantly better in terms of economic outcomes than schools whose graduates are concentrated in professional majors is indeed surprising.
How can we explain this surprising result?
The first guess would be that schools where students concentrate in the arts and sciences are distinctly different demographically. Our previous analyses pointed to the significant relationships with both entry-level student academic ability and economic disadvantage. As Figures 4 and 5 show, the demographics of schools whose undergraduate instructional profile is dominated by the arts and sciences are weighted in favor of better post-graduate economic outcomes than those of schools where professional majors dominate the profile. The average SAT score of schools where 80 percent or more of the graduates are in arts and sciences majors is around 1250, compared with an average SAT score of approximately 1000 for schools where 80 percent or more of the graduates come from professional majors. In addition, only 27 percent of students at arts-and-sciences-dominated institutions are Pell Grant recipients, compared with nearly 50 percent of the students at schools dominated by majors in the professions. When these demographic factors are taken into account, the statistical significance of the relationship between undergraduate instructional profile and median post-graduate salary is eliminated.
Figure 4: Average SAT by academic program, traditional schools
|Program||Average SAT|
|Arts & Sciences||1247|
|A&S plus Professions||1139|
|Professions plus A&S||1032|
Figure 5: Percent of Pell recipients by academic program, traditional schools
|Program||Percent Pell Recipients|
|Arts & Sciences||26.6%|
|A&S plus Professions||32.5%|
|Balanced A&S/Professions||38.3%|
|Professions plus A&S||40.4%|
|Professions||48.6%|
The second way we chose to examine the effect of academic major on the "performance" outcomes of schools was to examine the relationship between the percentage of engineering graduates and the median post-graduate salary of institutions. Engineering was selected because, traditionally, this academic program has been connected with the highest average starting salary. Does the percentage of graduates coming from this discipline significantly raise the overall post-graduate salary for the institution as a whole, particularly if we control for the demographic factors associated with economic "performance?"
Equation 4: Salary = .398 (SAT) - .348 (Pell) + .285 (Engineering) + .226 (Minority)
Equation 4 shows that the percentage of engineering graduates does indeed have a significant and positive effect on the median post-graduate salary for an institution—even when controlling for the school's average SAT level and the percentage of students who come from disadvantaged economic backgrounds. The effect of engineering is not as great as these dominant demographic conditions, but it is relatively sizable. Entering engineering into the equation also changes the relationship with demographic diversity. The relationship with ethnic diversity remains—the larger the percentage of minority students at the institution, the higher the median post-graduate income—but the statistical significance of the presence of women is eliminated.
The Scorecard: What the Data Really Tell Us
The analyses presented here point to the value of the data assembled by the Department of Education in creating the latest version of the College Scorecard—despite the fact that there are great gaps in the representation of graduates covered in the data. The information points to the importance of demographic characteristics in the "performance" of institutional outcomes. Schools that are academically selective, have a relatively small percentage of economically disadvantaged students, and have a relatively diverse ethnic student body tend to do better as defined by the median wage of their graduates 10 years after entering school.
While this finding is interesting—and could potentially be helpful in identifying institutions and possibly student populations that need assistance to get their graduates to perform at competitive levels—it is not the avowed purpose of the Scorecard. The DOE was clear that the purpose of the Scorecard was to provide information to the consumer (student and/or parents) that would allow the consumer to make a more informed choice. In this iteration, the Scorecard is a relative failure in meeting its avowed purpose. The outcomes data for individual institutions are too nebulous and generic to provide the potential student with much useful information. In reality, the same is true of the cost data: Average net price is a meaningless figure for the individual consumer, who can't really know the actual cost of attendance at an individual institution without going through the process of applying for student aid.
This is not to say that data on student outcomes are not useful. Certainly, they can be valuable to individual institutions in evaluating the results of their own graduates and in benchmarking these results with other schools and focusing efforts to improve graduate performance.
The problem with the College Scorecard approach is that the focus is the institution rather than the individual student. The relationships between demographics and institutional outcomes we have seen here suggest that there are at-risk populations that need special focus. More-detailed data taking into account differences in institutional expenditures, academic programs, and social and academic support could be extremely valuable in suggesting pathways to improve the outcomes of these students. Rather than focusing on identifying the 5 percent of institutions whose graduates do not achieve a median wage above that of a high school graduate, the data collected would be far more valuable if it were focused on providing systems to support the outcomes of the far greater number of individual students who continue to find it difficult to achieve desirable economic results.
1 NACE response to the Department of Education re: A New System of College Ratings, February 2015.
2 The DOE is somewhat disingenuous in claiming that it is not rating schools on these measures. The Scorecard website allows a consumer to look at schools in a number of ways, but offers up results in a rank order. For example, if you want to look at four-year institutions in terms of the median salaries of graduates, the website will present the schools with the school with the highest median salary first and the one with the lowest median salary last.
3 Carnegie's methodology and a link to its full list of academic programs classed as either Arts and Sciences or Professional can be found at http://carnegieclassifications.iu.edu/methodology/ugrad_program.php