Following the release of the long-awaited National Research Council report, "A Data-Based Assessment of Research-Doctorate Programs in the United States," during a public briefing in Washington, DC, yesterday, several media outlets reported on a maelstrom of confusion surrounding the group's stats. Though the public was given ample notice of the NRC's use of a novel ranking methodology to evaluate data from the 2005-6 academic year, many were surprised by just how different this report is from those that preceded it. In particular, the exhaustive assessment of 5,100 doctoral programs across 62 disciplines at 212 universities fails to delineate which schools have "the best program[s]," according to Science Insider. "Instead of assigning a single score to each program in a particular field, the assessment ranks programs on five different scales," Science's Jeffrey Mervis says.
While the NRC's "rankings have been criticized in the past for suggesting false levels of precision," Inside Higher Ed's Scott Jaschik adds, "that isn't a criticism you'll hear about this process." NRC ranking committee chair Jeremiah Ostriker told Inside Higher Ed that "we can't say this is the 10th best program. We can say it's probably between 5th and 20th."
Stanford University's Patricia Gumport told Science that "It's difficult to draw meaningful conclusions about the relative quality of programs from these ranges of rankings," though the Council of Graduate Schools' Debra Stewart "calls the report's two ranking systems and the ranges of outcomes 'perplexing in a very healthy way,'" according to Nature News.
Perhaps more surprising still, according to Jaschik at Inside Higher Ed, is that "while the NRC committee that produced the rankings defended its efforts and the resulting mass of data on doctoral programs now available, no one on the committee endorsed the actual rankings, and committee members went out of their way to say that there might well be better ways to rank — better than either of the two methods unveiled."
In its report, the NRC details its two ranking approaches: the survey-based S measures and the regression-based R measures. The committee states that "indicators of research activity are of the greatest importance to faculty in determining program quality by means of the S measures, which are based on the program characteristics that faculty say explicitly are important" and that "in many cases, program size is very important when quality is measured by the regression-based, or R measures." The committee also noted that faculty view student diversity as an important factor, though not a "direct predictor of overall program quality." Completion rates and time to degree weighed less heavily than academic placement and support during the first year.
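For readers curious how two weightings of the same indicators can reorder the same set of programs, here is a minimal, purely hypothetical sketch. The indicator names, weights, and scores below are invented for illustration only and are not drawn from the NRC's data or its actual R and S calculations, which also incorporate statistical uncertainty to produce the ranking ranges described above.

```python
# Purely hypothetical illustration: the same programs, scored on the same
# indicators, can rank differently under two different weightings, loosely
# analogous to the NRC's faculty-survey-based S measures and regression-based
# R measures. All names and numbers here are invented.

# Invented indicator values (scaled 0-1) for three fictional programs, in the
# order: publications per faculty, citations, program size, first-year support
programs = {
    "Program A": [0.9, 0.8, 0.4, 0.7],
    "Program B": [0.7, 0.7, 0.9, 0.6],
    "Program C": [0.8, 0.9, 0.5, 0.9],
}

# Weights faculty say they value (stand-in for an S-style weighting)
s_weights = [0.4, 0.3, 0.1, 0.2]

# Weights inferred from a regression on reputational ratings (stand-in for an
# R-style weighting), in which program size happens to matter more
r_weights = [0.3, 0.2, 0.4, 0.1]

def order(weights):
    # Weighted sum of indicators for each program, sorted best-first
    scores = {name: sum(w * x for w, x in zip(weights, values))
              for name, values in programs.items()}
    return sorted(scores, key=scores.get, reverse=True)

print("S-style order:", order(s_weights))  # ['Program C', 'Program A', 'Program B']
print("R-style order:", order(r_weights))  # ['Program B', 'Program C', 'Program A']
```

The toy contrast only shows why the two methods can crown different "top" programs; the real assessment goes further, reporting a range of plausible ranks for each program rather than a single position.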
Inside Higher Ed went so far as to compose a spreadsheet detailing "departments that could make the top three in the two methodologies used," which is available for download here. In the field of Genetics and Genomics, for example, MIT; the University of California, Berkeley; and Columbia, Yale, Johns Hopkins, and Stanford Universities were rated highly under the R methodology. According to the S methodology, MIT, Stanford, UC Berkeley, Columbia, Baylor College of Medicine, and the University of Michigan, Ann Arbor, could have the "top" programs in the field.
The complete NRC report and data table are available for free download here.