I WILL NOT TOLERATE YOUR INSOLENCE!

Kevin disloyally sides with Sam Boyd in objecting to one aspect of the methodology behind our college rankings: namely, that the research component of the score counts the absolute number of PhDs a school awards and the total number of dollars it spends on research, thereby giving an advantage to larger schools.

We get this criticism a lot, and we're not unsympathetic. Indeed, the research component is the one thing we re-litigate internally every year when we're putting together our rankings. In the end, though, we always come down on the side of doing it this way, for two main reasons.

First, in the area of research and graduate education, we think size does matter. There's no reason to suppose that large schools would have a natural advantage over small schools in, say, recruiting and graduating low-income students. But there are reasons to believe that large schools have several legs up when it comes to doing cutting-edge research (decoding genes, exploring subatomic particles) and producing graduate students who are familiar with that research. Sure, such work can be done in small schools. It's also possible to make great films, design innovative software, or publish award-winning glossy magazines in small towns and provincial cities. But it doesn't happen nearly as often as it does in LA, Seattle, or New York, in large part because these large metro areas can support the thick labor markets and webs of interconnected companies that are required to do this kind of collaborative work easily. We assume large universities enjoy similar economies of scale in research and graduate education, especially in highly technical fields.

Might our assumption be wrong? Sure. But then the federal government and private industry, which lavish research dollars disproportionately on larger universities, are making the same mistake.

But let's say we are wrong. That gets to the second reason we've kept the methodology the way it is: the denominator problem. Say you wanted to get rid of the bias toward large schools. To do that, you'd have to divide each institution's total PhD and research output by some other factor--say, total faculty, or the number of faculty members teaching graduate students or doing research. The problem is that schools don't report faculty and researcher numbers in consistent ways. Some count only professors, not adjunct lecturers or researchers in university-based institutes, who often do much of the graduate-level teaching. Others count researchers--say, at affiliated hospitals--who never set foot in the classroom. Judging the research-and-PhD component by the reported number of faculty would give schools with the narrowest definition of faculty an edge. Bottom line: the more we looked at the data, the harder it was to find a fair and solid way to calculate the research and graduate education score that would level the playing field between small and large schools--presuming we thought that was the right thing to do, which, on balance, we don't.
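To make the distortion concrete, here is a minimal sketch in Python with made-up numbers (none of them drawn from our actual data, and this is not our actual scoring formula): two schools with identical PhD output, research spending, and real staffing, differing only in how broadly they define "faculty" when they report.

# A minimal sketch of the denominator problem, using invented numbers.
# This is NOT the rankings' real data or scoring formula, just an
# illustration of how the choice of denominator skews a per-faculty score.

def per_faculty_score(phds_awarded, research_dollars, reported_faculty):
    # Naively normalize output by whatever headcount the school reports;
    # dollars are scaled to millions so the two terms are comparable.
    return (phds_awarded + research_dollars / 1_000_000) / reported_faculty

# School A reports professors, adjuncts, and institute researchers: 800 people.
score_a = per_faculty_score(400, 300_000_000, 800)

# School B reports tenure-line professors only: 500 people.
score_b = per_faculty_score(400, 300_000_000, 500)

print(f"Broad faculty count:  {score_a:.2f}")   # 0.88
print(f"Narrow faculty count: {score_b:.2f}")   # 1.40

Same output, same real staffing, yet the school with the narrow definition looks about 60 percent more productive per faculty member--exactly the edge described above.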
