The Numbers Game

Love them or hate them, college rankings appeal to a culture that worships consumer choice and is seduced by prestige value.

At the John Jay College Theater in midtown Manhattan, this past spring’s attraction was Mnemonic. It is a play built on memory, “one of the last great mysteries,” according to a snippet of dialogue. Even before the stage action unfolds, audience members are challenged to remember where they were two hours ago, two weeks ago, a month ago, ten years ago.

As it happens, those audience members stepped out of the theater to see two huge memory aids: twin blue-and-white banners proclaiming that John Jay College is “Ranked #1 in America by U.S. News & World Report” for its graduate program in criminal-justice policy. That’s a memory that’s meant to endure.

Last fall, Jim Gray and other senior administrators at Duke’s Fuqua School of Business huddled together just before the unveiling of business-school rankings from BusinessWeek. Gray, Fuqua’s associate dean for marketing and communications, recalls meeting in the dean’s conference room as the results came through over an Internet connection. “All the schools were logging into this site. It was funny: The connection wouldn’t work at first because all the schools were flying into this thing precisely at six o’clock. But we finally got in, and they did this big countdown—counting down from number twenty-five to number one. And once they got past number seven or six and we weren’t yet on the list, we knew that something good was going to happen.”

About 200 students were gathered in a classroom nearby. As the countdown persisted—and as a lustrous Fuqua ranking became more and more likely—the excitement kept building. Fuqua ended up at number five. When that word finally came, the students were “yelling, screaming, high-fiving”; several of them ran up to fetch then-dean Rex Adams ’62. Adams climbed onto a bench and made some appropriately enthusiastic remarks. And the next day, Gray says, it was back to the serious-minded business of business education.

Those who want to keep reveling in the world of rankings can consult Stuart Rojstaczer’s College Ranking Service (www.rankyourcollege.com). Rojstaczer, a Duke geology professor, unveiled the mock website in July. The rambling rationalization on the site says, “Through elaborate meta-analysis that took place over several years at a cost equal (in 1994 dollars) to the Manhattan Project, we identified 629 independent factors (cabalists on our staff note that this number is the numerical equivalent of the words ‘Torah’ and ‘life’ combined, and believe we have identified the Holy Grail, so to speak, of higher education) contributing to the quality of a college.” Bragging that “our rankings are not static,” the website advises users to hit the refresh button on their web browser. That causes the “Mighty Max” computer program to recalculate the rankings. In fact, the site uses a program to sort colleges randomly; in the course of less than a minute, Duke, Dartmouth, and Carnegie Mellon changed places at the top of the list.
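
For the record, the site’s “recalculation” is nothing more sophisticated than a shuffle. A few lines of Python, which assume nothing about the actual “Mighty Max” program, are enough to capture the joke:

```python
import random

# A handful of schools to shuffle; any list would do.
schools = ["Duke", "Dartmouth", "Carnegie Mellon", "Caltech", "Reed"]

def rank_your_college(schools):
    """Return the schools in a freshly randomized 'ranking' --
    the whole joke behind the mock 'Mighty Max' recalculation."""
    order = list(schools)    # copy so the original list is untouched
    random.shuffle(order)    # each call (or page refresh) reshuffles
    return order

for place, school in enumerate(rank_your_college(schools), start=1):
    print(f"#{place}  {school}")
```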

Rojstaczer told The Chronicle of Higher Education that “there is no rational basis to numerically ranking American universities and colleges.” Rational or not, college rankings are one facet of what may be the number-one—or at least a prevalent—cultural phenomenon: the urge to rank. Amazon.com ranks book preferences and Consumer Reports ranks refrigerators. An international rankings system sorts out the expertise of Scrabble players. In a paint-by-the-numbers variation, an artist has produced a work with the elements that, so his polling data tell him, rank at the top of the public’s preferences: a landscape with water, mountains, animals, lots of blue, and the figure of George Washington. The Van Cliburn competition produces the top-ranked classical pianist—prompting the complaint from a New York Times columnist that “ranking pianists as if they were Olympic athletes is inherently inartistic.”

Why the preoccupation with college rankings? According to a 1997 study by UCLA’s Graduate School of Education and Information Studies, “Choosing a college is an intangible, expensive purchase perceived to be fraught with risks, and parents and students may be using national rankings as impartial sources of reliable information. The more uncertain the decision, the greater the likelihood that consumers consult ratings information in an attempt to lower their risks.” To the extent that rankings validate a decision steeped in ambiguity, resorting to them can even be emotionally soothing.

The UCLA study found that most students don’t find rankings to be important. At the same time, “Users of rankings (those citing them as somewhat or very important) are more likely to have frequently asked a teacher for advice in high school, more likely to be high-achieving students, and more likely to aspire to doctoral, legal, and medical degrees.” That is, rankings have a special appeal to the brightest and most ambitious students. “The students who are using the rankings are precisely those students who have fine-tuned perceptions of what’s important in choosing a college and who already know, and act on, notions of which institutions are ‘best.’ Newsmagazine rankings are merely reinforcing and legitimizing those students’ status obsessions.”

To Peter Cary, special-projects editor at U.S. News & World Report, rankings transcend status concerns. With so much information, much of it conflicting, college-bound students can find themselves “utterly lost” without “an assessment of comparative educational quality.” Rankings, he says, can narrow college choices, but shouldn’t define the ultimate choice. In forum after forum, he says, he stresses to students and parents that “one would be crazy to apply to only the number-one school in our rankings.”

Though a growing phenomenon, college rankings aren’t, strictly speaking, a new one. They’ve been around for more than a century, says Ted Fiske, former education editor of The New York Times. His Fiske Guide to Colleges, which evaluates schools by several criteria but doesn’t rank them, is considered a standard. (Fiske, now an education consultant, lives in Durham with his wife, Helen Ladd, a Duke public policy professor.) In the 1870s, the U.S. Bureau of Education published lists of colleges by type. And in 1886, Fiske says, it singled out twelve schools as having “achieved more than national distinction.”

Fiske says that colleges themselves brought on the rankings frenzy. By the late 1970s, they were discerning a shift from a seller’s to a buyer’s market. With a demographic downturn, enrollments were dwindling and competition for tuition-paying students was intensifying. So colleges became marketing-minded. “One result of the new professionalism in college advertising is that promotional brochures are beginning to look like cigarette ads,” Fiske wrote in a 1979 Atlantic Monthly story. He went on to argue that “the most obvious problem” with the newly spirited grab for students was “the abuse of simple truth, a virtue with which colleges have often presumed to identify themselves in the past.”

Today he says, “One thing colleges didn’t count on as they became so savvy about marketing is that Americans know how to be consumers.” Quite inadvertently, colleges, in their self-promoting mode, “created a market for people to come in on the side of consumers, to sort out the overload of propaganda.”

Fiske is concerned that college rankings obscure deeper issues of campus “flavor and character.” He says, “It’s inappropriate to say what’s the best college—the issue is what’s the best college for a particular individual. And criteria that can be quantified are not necessarily the important ones in making decisions about colleges. Colleges are like people, and matching a student to a college is like a marriage. You want to find a place that coincides with your needs and desires. There’s no way you can quantify the sort of people who go there or quantify whether you’ll want them as lifelong friends.”

U.S. News sparked the modern rankings trend in 1983 with its “best colleges” list, originally compiled from a survey of college presidents. And it’s received the brunt of the criticism. Reed College, for one, publicly questioned the methodology and usefulness of the magazine’s rankings from the beginning. Reed president Steven Koblik told U.S. News that its project wasn’t credible, and said the college would not be returning any of the magazine’s surveys. “Higher education isn’t a commodity like cars or refrigerators,” he insists. “There aren’t twenty-five colleges in this country that are best for everyone.”

In 1996, Stanford’s then-president, Gerhard Casper, wrote to U.S. News’ editor complaining about the “specious formulas and spurious precision” behind rankings. Casper noted, “Universities change very slowly—in many ways more slowly than even I would like. Yet, the people behind the U.S. News rankings lead readers to believe either that university quality pops up and down like politicians in polls, or that last year’s rankings were wrong but this year’s are right (until, of course, next year’s prove them wrong).” And he disputed the validity of particular indicators, observing, for example, that a college could improve its “predicted” graduation rate by “offering a cream-puff curriculum and automatic A’s.”

The annual college-rankings issue is an automatic winner for U.S. News: It sells twice as many copies as a typical run of the magazine, and is reportedly a bigger seller than the Sports Illustrated swimsuit issue. During the month that the rankings are released, the U.S. News website gets some 40 million page views, says Peter Cary. As special-projects editor, he has a purview that extends to the magazine’s college guide, graduate-school guide, online education guide, and website.

Those rankings are part of a larger social reality—a reality that may not be so quick to separate cars from colleges. Claudia Buchmann, an assistant professor of sociology at Duke, says Americans like to believe “that things can be ordered in a hierarchy, that one thing can be quantitatively better than another. There are really big problems when we do that, especially because people tend to buy into these things wholeheartedly.” The ready-made hierarchy of colleges is presented as a tool to help students make decisions. In fact, she says, it can be an impediment to making the important decisions that should come from a careful reasoning process.

Buchmann finds it odd that college rankings are seen as being no less meaningful, and no less appropriate, than product rankings. But she says they show a familiar American preoccupation with prestige. “It used to be that the level of education meant something; you graduated from high school and that meant something, and the elite went on to higher education. Then the credential of a college degree was diminished because higher education was so plentiful. But the prestige of the degree became the distinguishable factor.” In a culture where prestige does count, she says, “The college a person goes to matters because of the social networks it affords—access to an elite that can help in getting a good job.”

In his elite position as Duke’s director of undergraduate admissions, Christoph Guttentag says he regularly gets unsolicited college mailings. They’re clearly meant to impress him as someone invited to rate peer institutions. U.S. News, notably, surveys admissions directors, provosts, and college presidents for “reputational” assessments. In sounding out its fellow liberal-arts colleges on rankings, Alma College discovered a couple of years ago that 84 percent of the voters were unfamiliar with some of the schools they were asked to rank, and one-quarter were just guessing. (U.S. News asks individuals not prepared to evaluate a school to mark “don’t know.”)

There’s little guessing about the fact that colleges are rankings-sensitive. And from the vantage point of U.S. News, that’s not a bad thing. “A number of schools have made public pronouncements that they want to improve themselves in the rankings,” says U.S. News’ Cary, because they believe the magazine’s ranking system provides legitimate pointers to “improving academic quality.” Ohio State University is guided by a so-called 20-10 Plan: By the year 2010, twenty of its programs should rank in the top twenty, and ten of those should be in the top ten. The benchmark ranking systems are the National Research Council and U.S. News. The lead item in a June “Update from the President’s Office” at the University of Georgia reports that “For the first time, the University of Georgia was ranked among the top twenty public universities by U.S. News & World Report. We now have our sights set on the top fifteen.”

Caltech’s website points out that the school was “ranked the number-one university in the U.S. by U.S. News & World Report in September 1999.” It had leapt from ninth place the year before. Wagner College, in its employment notices, accents its U.S. News standing in the “top tier in the Northeast.”

When it’s not possible to make such boasts, the consequences can be unpleasant. Two years ago, a senior vice president at Hobart and William Smith Colleges resigned under pressure after a self-inflicted rankings wound. She had failed to submit updated information that U.S. News uses to compile its annual survey. As a result, said Hobart’s president at the time, the college suffered a “profoundly disturbing” fall from the second to the third tier in the rankings of liberal-arts colleges.

The quest for higher rankings has had even more extreme implications. In 1995, Wall Street Journal education writer Steve Stecklow reported that some colleges were inflating the data they supplied. In reporting SAT scores, a school in Florida, for example, was lopping off the bottom-scoring 6 percent of students, thereby lifting the average about 40 points. A northeastern school was excluding both international students and remedial students, who together represented about 20 percent of the freshman class. The practice boosted the school’s SAT average by about 50 points. Another school excluded the verbal SAT scores, but not the math scores, of about 350 international students. The reason: Foreign students often have trouble with English and tend to do poorly on verbal SATs, but many score better than U.S. students in math.
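
The arithmetic behind such adjustments is easy to reproduce. The short sketch below uses a small set of invented section scores, not any school’s actual data, to show how simply omitting the lowest scorers lifts a reported average (by roughly twenty points in this toy example):

```python
# Synthetic, invented scores -- not any school's real data -- used only to
# show how omitting the lowest scorers lifts a reported average.
scores = [400, 430, 460, 480, 500, 520, 540, 550, 560, 580,
          590, 600, 620, 640, 660, 680, 700, 720, 740, 760]

def average(values):
    return sum(values) / len(values)

full_average = average(scores)

# "Lop off" the bottom 10 percent of scorers before reporting.
cutoff = len(scores) // 10
trimmed_average = average(sorted(scores)[cutoff:])

print(f"Average with every student counted: {full_average:.0f}")
print(f"Average after trimming the bottom:  {trimmed_average:.0f}")
print(f"Gain from the omission:             {trimmed_average - full_average:.0f} points")
```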

Reporter Stecklow compared SAT scores, acceptance rates, graduation rates, and other enrollment data that colleges provided to the published guides with data they gave to debt-rating agencies, investors, and the National Collegiate Athletic Association. Misrepresenting facts in the disclosures to debt-rating agencies that are required when schools sell bonds or notes violates federal securities laws. Most of the statistics in magazines and guidebooks, by contrast, are self-reported and unaudited. As Stecklow put it, “There are no legal penalties for misleading guidebook publishers.” He found case after case of what he called “sleight of hand,” but what college officials variously labeled a “transcription error,” a “mystery,” or even a “conflict that we have regularly with the [school’s] business office.” One former college communications director admitted to taking part in “a meeting that could only be described as a strategy session on how to cheat on the survey.”

At U.S. News, “our validation process is a lengthy one,” says Cary. Researchers scrutinize the submitted information to make sure that there isn’t an obvious innocent mistake, such as an out-of-place decimal point. They’ll check a school’s data against what it submitted the previous year. And they’ll compare the data with other information sources, such as the U.S. Department of Education. The schools, then, will be asked to explain “anomalous” results—including results that may be pointed out by their competitors following publication of the rankings. Cary says U.S. News assigned a reporter to look into allegations of cheating by colleges, and the reporter found nothing to substantiate the suspicions.

Duke’s Guttentag says “changing the essential character of an institution is not terribly cost-effective.” So rankings-driven schools are more likely to try to shift how they’re perceived than to seriously revamp themselves. “If our goal were simply to change how we ranked, we could do things like fill 50 percent of the class instead of 30 percent of the class with Early Decision candidates. That increases your matriculant yield, it increases your selectivity, both of which are factors that are taken into account in U.S. News. We haven’t done it and there’s no intention to do it. I’d rather think about how we could make a better Duke than how we could make a higher-ranked Duke.”
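
The arithmetic behind that Early Decision observation is simple enough to sketch. The figures below (class size, applicant pool, yield rates) are invented for illustration, not Duke’s actual numbers; they show how shifting more of the class to binding Early Decision raises the reported yield and lowers the acceptance rate:

```python
# Invented figures, purely to illustrate the Early Decision arithmetic
# Guttentag describes; these are assumptions, not Duke's actual numbers.
CLASS_SIZE = 1600      # target entering class
APPLICANTS = 15000     # total applications received
RD_YIELD = 0.40        # assumed share of regular-decision admits who enroll
ED_YIELD = 1.00        # Early Decision is binding, so essentially all enroll

def admissions_stats(ed_share):
    """Return (overall yield, acceptance rate) for a given ED share of the class."""
    ed_matriculants = CLASS_SIZE * ed_share
    rd_matriculants = CLASS_SIZE - ed_matriculants
    total_admits = ed_matriculants / ED_YIELD + rd_matriculants / RD_YIELD
    return CLASS_SIZE / total_admits, total_admits / APPLICANTS

for share in (0.30, 0.50):
    overall_yield, acceptance_rate = admissions_stats(share)
    print(f"ED share {share:.0%}: yield {overall_yield:.0%}, acceptance rate {acceptance_rate:.0%}")
```

With these assumed figures, moving from a 30 percent to a 50 percent Early Decision share lifts yield from roughly 49 to 57 percent and drops the acceptance rate from about 22 to 19 percent, without changing anything about the education offered.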

“The real problem that I see is that U.S. News decides what questions to ask, they decide the scale that’s used, they decide the weighting,” says Guttentag. “So what happens is that a process that students should be able to do for themselves, U.S. News does for everyone. And then it’s assumed that there is objective truth in the outcome.” He suggests that a better practice would be for the magazine to sell a CD-ROM packed with all of the information gathered about schools. Students, then, could produce their own rankings in terms of the factors they value—distance from home, cost, volumes in the library, alumni giving, social life, prominence of a particular course of study, and so on. That way, he says, a set of data could become a personally tailored, educational tool.

Don’t count on it, is the word from U.S. News. “Anybody can take the data we have, put it on a spreadsheet with their own weightings, and re-rank the schools,” says Cary. “But we’re comfortable with what we do. We feel our weighting system is the one that has the U.S. News stamp on it. It’s established U.S. News as an authority in this area, and we’re not inclined to shake up the system.” 

A school’s U.S. News ranking, within its group of peer institutions, hinges on various academic indicators. Each indicator is assigned a weight—“based on our judgments,” as the magazine puts it, “about which measures of quality matter most.” Seventy-five percent of a school’s ranking is determined by “objective measures”: retention and graduation rates; class sizes and student-faculty ratios; measures of student selectivity, such as high-school class ranks, test scores, and admission acceptance rates; per-student spending; alumni giving; and so forth. The remaining 25 percent is based on the “reputational survey.” That prestige measure may be subjective but, as U.S. News sees it, it’s important: “A diploma from a distinguished college helps graduates get good jobs or gain admission to a top-notch graduate program.”

Critics are quick to point to U.S. News’ refining of its indicators as evidence not just of imprecision, but of a publicity-driven effort to produce surprising results. Cary, though, seems stability-minded. “I don’t believe we should be producing rankings that jump around a lot. We’re sensitive to making changes that would produce large moves in the rankings. If we did that too often, the whole system would become utterly suspect and would be rejected by the colleges and the public.”
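
Strip away the particulars, and the scoring amounts to a weighted sum: each indicator is normalized, multiplied by its weight, and the products are added. The sketch below uses invented schools, indicator values, and weights (only the 75-25 split echoes the magazine’s description); swap in weights of your own and you get something like the personalized re-ranking Guttentag proposes:

```python
# Illustrative weighted-sum ranking; the schools, indicator values, and
# weights here are invented, not U.S. News' actual data or formula.
indicators = {
    # school: {indicator: score already normalized to a 0-100 scale}
    "School A": {"graduation": 95, "selectivity": 90, "spending": 85, "reputation": 92},
    "School B": {"graduation": 92, "selectivity": 94, "spending": 70, "reputation": 96},
    "School C": {"graduation": 88, "selectivity": 80, "spending": 95, "reputation": 75},
}

# Weights sum to 1.0; the "objective" measures carry 75 percent and the
# reputational survey 25 percent, echoing the split described above.
weights = {"graduation": 0.30, "selectivity": 0.25, "spending": 0.20, "reputation": 0.25}

def overall_score(scores, weights):
    """Weighted sum of normalized indicator scores."""
    return sum(weights[name] * value for name, value in scores.items())

ranked = sorted(indicators, key=lambda s: overall_score(indicators[s], weights), reverse=True)
for place, school in enumerate(ranked, start=1):
    print(f"#{place}  {school}  ({overall_score(indicators[school], weights):.1f})")
```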

The great leaps made by Caltech and Johns Hopkins in a U.S. News ranking were personally jarring to him, he says. As a result, U.S. News, with the advice of experts, spent a year focusing on a particular indicator—research expenditures—and adjusted its weighting to reflect the less-than-direct influence of research dollars on undergraduate education.

But such refining doesn’t invalidate the methodology, Cary says. A couple of years ago, the magazine assembled a group of top-level college administrators and asked them, in effect, to rank its indicators of educational quality. All the indicators scored high. 

Every year, fifty to a hundred college delegations visit the magazine. U.S. News itself regularly meets with a board of college admissions officers and, from time to time, with guidance counselors, financial-aid officials, and institutional researchers. Cary attends five to ten education conferences every year. From all those conversations, “we accumulate a lot of information and advice, and a fair amount of criticism,” he says. “It all goes into informing the process. We listen very carefully to what presidents and deans and admissions officials tell us, and we do make changes based on what we’re hearing.” 

One thing Cary is hearing is an interest in rankings that comes not just from consumers. It’s not unusual for associations of professional schools—graduate foreign-policy schools and schools of art and design, for example—“to come to us and say they want to be ranked.” And about a year and a half ago, a U.S. News team traveled to Spain, at the invitation of the nation’s council of public and independent universities. The universities had decided they wanted to be able to compare themselves to each other, Cary says, and they were impressed with the indicators developed by the magazine. So the rankings phenomenon is crossing borders—educational and international.

Duke has found itself with some rankings that have been rewarding, and with others that have rankled. For quite a while it’s been fairly steady in the U.S. News roster, where it most recently found itself eighth among “national universities.” Last spring Duke Hospital ranked number six on the U.S. News roster of “America’s Best Hospitals”; the hospital had twelve specialty-care areas that were rated among the nation’s top ten. The magazine’s 2001 survey of “America’s Best Graduate Schools” pegged Duke’s medical school at number three, the Fuqua School of Business at number eight, and the law school at number ten. 

Three years ago, Duke rose to the top among Mother Jones’ top-ten “activist schools.” That elevated position came from the work of a single student organization, Students Against Sweatshops, on issues of overseas labor and the university’s licensing arrangements. On the other hand, The Princeton Review—which claims that its rankings “are based directly on what students on each campus tell us about their college”—put Duke at the top of a list of schools where “town-gown relations are strained.” Given the university’s multi-pronged investment in the community and the range of student volunteer efforts, that characterization mystified university officials. The same survey pegged Duke as seventeenth among places where “students pack the stadiums,” a ranking that might seem disingenuously generous for football and absurdly stingy for basketball. 

Just as it was being rated as an activist paradise, Duke was declining to participate in Yahoo! Internet Life magazine’s annual survey of campus technology. The survey provides a ranking of “America’s 100 most wired colleges.” Betty Leydon, Duke vice provost for information technology when the 2000 survey was being run, said, “It’s really not clear what they’re trying to measure or how they decide to rank the schools once they get the information.” Brown, Cornell, Harvard, Princeton, Stanford, and Yale were among the other schools that boycotted the survey. The magazine’s senior editor suggested, in a Chronicle of Higher Education interview, that sour-grapes defensiveness was at work: “When a school does well, they applaud the methodology.”

A rankings result from last spring brought an even more vociferous response than usual—some of it from Duke’s direction. Despite having revealed rankings abuses in its earlier article, The Wall Street Journal published its own survey of business schools. The newspaper called it “the first study to focus exclusively on the opinions of recruiters—the buyers of M.B.A. talent—and as a result, our rankings look quite different from those in other business-school guides.” Recruiters judged a school’s faculty strength, a particular academic specialty, and a strong international perspective not especially important compared with factors like student communication skills, teamwork approach, and problem-solving abilities. They also tended to consider their company’s track record with recruits from the school.

The Journal’s rankings looked different indeed. They led with the Tuck School at Dartmouth, followed by Carnegie Mellon. The University of Chicago and Harvard were outranked by Purdue. The University of Pennsylvania’s Wharton School was down at number eighteen, Columbia at thirty-four, and M.I.T.’s Sloan School at thirty-eight. Duke’s Fuqua School of Business, certified number five by BusinessWeek, was number forty-four in the Journal’s judgment—a notch below the State University of New York at Buffalo and just above Stanford University.

The Fuqua School’s John Lynch, a marketing professor, says he normally doesn’t pay much attention to published rankings. As it happened, one of his students called his attention to the new survey just before he was to give his final exam last spring. So he turned the Journal’s survey methodology into an exam question. Fuqua officials heard about the exercise and asked Lynch for his analysis. They shared it in a private message to Journal editors, who—in a surprise move—responded to Lynch’s arguments on the newspaper’s website. The marketing professor then agreed to have his original analysis posted. In the meantime, one of his students had managed to post the analysis on a BusinessWeek website, where it became the subject of avid discussion.

Lynch labels the survey “unscientific” and says it shows elementary sampling mistakes. One fundamental problem was that its respondents were “biased and open to manipulation,” he says in his posted response. Those who ran the survey didn’t sample randomly from all recruiters who visited a school. Rather, they allowed schools to choose which recruiters from a company would be invited to participate. When initial response rates were very low, schools were encouraged to make follow-up calls to recruiters to get them to participate. “One can presume that, next year, all schools will provide contact lists including only alumni,” Lynch says. 

The researchers made another “fatal error” of self-selection: letting the recruiters decide which one to three schools they would rate. Given a choice, says Lynch, recruiters will tend to report on schools that are striking to them in some way—where they have had particularly good or bad experiences, or where they earned their own M.B.A. Most recruiters rated a single school, producing another sample bias: If one recruiter visits a single school and another visits twenty, yet each evaluates exactly one, the effect is to give small recruiters a disproportionate influence.

Another big problem was the small sample size, Lynch says. Of the recruiters initially invited to participate, a large majority declined. Fuqua initially provided eighty names; later, it was asked to give 400 names so that the survey could reach its minimum targets. In the end, thirty-nine of those 400 recruiters rated Fuqua. That low response invalidated the study, according to Lynch: “We cannot assume that those agreeing to participate have attitudes like those who declined.”
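
Lynch’s worry about non-response is easy to simulate. The sketch below invents a pool of 400 recruiter opinions and assumes, purely for illustration, that satisfied recruiters reply far more often than dissatisfied ones; with a similarly low response rate, the average reported by the responders lands well above the true average of the pool:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Invented "true" opinions of 400 recruiters on a 1-to-10 scale; not real data.
population = [random.gauss(6.0, 1.5) for _ in range(400)]

def responds(opinion):
    # Purely illustrative assumption: satisfied recruiters reply far more often.
    reply_rate = 0.30 if opinion >= 6 else 0.03
    return random.random() < reply_rate

respondents = [op for op in population if responds(op)]

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)

print(f"Recruiters who replied:   {len(respondents)} of {len(population)}")
print(f"True average opinion:     {true_mean:.2f}")
print(f"Average among responders: {observed_mean:.2f}")
```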

Responding to Lynch’s concerns, Harris Interactive, which ran the survey for the Wall Street Journal, says, “The universe we are trying to reproduce (the business-school-recruiter universe) is not suited to typical sampling procedures.… We believe that within the constraints imposed by the nature of the universe and the variability across the various business schools, we produced a database that is representative of recruiters in general.” 

Love them or hate them, the rankings have an impact—on staff time, for one thing. Fuqua associate dean Jim Gray says he has one person on his staff who spends about a third of her time producing “huge lists of data” for the various guidebooks and ranking projects. “The frustrating thing is that it’s never the same data as the last survey, so we have to really start from scratch for each one that we do.” Business schools, he says, are especially attuned to rankings; they’re seen as indicators of what might be called “brand equity” in the business world. “We teach how important reputation is. Particularly in the marketing curriculum, we teach a lot about brand—how to build a company’s brand, how a company’s brand translates into success.” 

And he says business students are brand-conscious; typically, they’re giving up a salary in entering an M.B.A. program, expecting that experience to be a career enhancer. “We do know that many students, the best students, the ones we want at Fuqua, will apply only to the top five schools. So it’s really important to us to be in the top ten, and preferably in the top five.” Gray sees a feedback loop at work: High rankings help draw good students, whose qualifications ensure high rankings. 

“Our five-year strategic plan says that our objective is to be, on a sustained basis, among the top five or six business schools in the world. Now the plan doesn’t say specifically that we want to be perceived in that position. It says we want to be in that position.” But it’s a short distance between perception and reality, he adds. “We could be top five or six, but if nobody says we are, if people don’t perceive that through various rankings, then we probably wouldn’t have reached our strategic goal. So, although it’s not specifically a reputational declaration, all of this has to do with reputation.”

It’s a fine line that Gray describes—being concerned about rankings but not driven by rankings. “We don’t do anything consciously to get higher rankings. We like it when we’re ranked high, and it helps us, but we don’t do things differently because of that. For example, the number-one priority this year and for the coming years is to increase the size of our faculty. And that has virtually nothing to do with the rankings.” 

The fact that something as basic as faculty infrastructure doesn’t register in the rankings points to the limits—if not the absurdity—of constructing a hierarchy of schools. Still, rankings of all varieties are proliferating. So will students, eager to latch onto a prestigious label, wary of the marketing campaigns waged by the schools themselves, and hungry for information with the imprint of impartiality, come to rely on them more and more?

Duke’s Guttentag, from his admissions perspective, sees a counter-trend in the ease of electronic communication. “You get down to the websites of campus organizations, academic programs, individual students, faculty. And you can get some idea of what it is like to be on the inside rather than the outside. So, increasingly, there are ways to move beyond the message that the school is trying to put out and to look at how members of the community talk with one another.” Translated into tech-talk, Guttentag is referring to disintermediation, or a direct path from consumer to product.

Published rankings may appeal to the need to compartmentalize and categorize. But the mass-market approach to college rankings may be overwhelmed by an even more powerful cultural force—the joy of making decisions for oneself.
