In a recent blog post, I took a tongue-in-cheek approach to the contentious topic of ranking business schools. The genesis of the post was a very different question: how to rank hospitals’ success rates for a specific operation when some hospitals accept only less risky cases while others take on more challenging ones. Accepting only less risky cases should imply a higher success rate for reasons having little to do with the quality of care. Business schools endowed with brighter, more capable students should likewise see higher success among their students, independent of the quality of the education those students receive.
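The selection effect can be illustrated with a toy simulation (all numbers hypothetical): two hospitals deliver identical care for any given case, yet the one that accepts only low-risk cases posts a higher raw success rate.

```python
import random

random.seed(0)

# Toy model: both hospitals have the same success probability for any given
# case (i.e., identical quality of care). Hospital A accepts only low-risk
# cases; Hospital B accepts an even mix of low- and high-risk cases.
P_SUCCESS = {"low": 0.95, "high": 0.70}  # hypothetical risk-specific rates

def success_rate(share_low, n=100_000):
    """Raw success rate for a hospital whose case mix is `share_low` low-risk."""
    successes = 0
    for _ in range(n):
        risk = "low" if random.random() < share_low else "high"
        successes += random.random() < P_SUCCESS[risk]
    return successes / n

rate_a = success_rate(share_low=1.0)   # accepts only easy cases
rate_b = success_rate(share_low=0.5)   # takes on harder cases too

print(f"Hospital A: {rate_a:.2%}  Hospital B: {rate_b:.2%}")
# A looks "better" even though both deliver identical care per case.
```

The gap between the two raw rates reflects case mix alone, which is exactly why an unadjusted success-rate ranking rewards selective admission rather than quality.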
To demonstrate this point, I provided a quick and dirty analysis, completed between the hours of 1 and 3 am, restricted to data on hand, and without the careful statistical standards that would constitute "research." The point was to show that changes to the assumptions underlying rankings can significantly change the results.
The resulting hoopla over the post, which begot university press releases and took my blog’s traffic from a handful of loyal-reader friends into the thousands, is both enlightening and frightening. Below I offer a few clarifications.
"What silly methodology! You’ve got to be kidding": I received many emails and comments of this form. The answer, of course, is "yes, I am." My friends, the frequent consumers of my blog, have enough personal context to inject sarcasm into my written word. I do very much believe there is insufficient discussion of what rankings try to measure and whether they measure it well; that point of my post was quite serious. But the stated aim of the post was to note that rankings (i) fail to account for self-selection, and (ii) are too easy to generate and cause mass hysteria among schools and students. The media coverage, some of which reported on these rankings without any hint of humor, seems to prove this point. A case in point: any ranking that offers "extra credit for sending me money (‘investment index’), or publishing my papers (‘scholarship discovery index’)" was likely not intended to supplant Business Week.
The value of salary: I do not buy the premise that salary equates to school quality. Assuming that some students care about social, career, and life issues beyond maximizing the net present value of their future salary stream, starting salaries may say more about students’ priorities than about the quality of the school.
I did not "name" any school as a top school: A few universities issued press releases announcing that "an Economist names SCHOOL X Top Y." While I do consider myself an economist, that hardly makes me an authority on business schools. My rankings aimed only to demonstrate that if you accept starting salary as the sole valid measure of a school, then you should contemplate the increase in salary the school provides, not its absolute value. These press releases, issued without any communication with me, became newspaper articles (absent the tongue-in-cheek nature of the numerical rankings) heralding my "research."
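The distinction matters in practice. A small sketch with invented figures (school names and salaries are hypothetical, in thousands of dollars) shows the same data producing opposite orderings depending on whether schools are ranked by absolute starting salary or by the salary increase they provide:

```python
# Hypothetical schools: (name, avg pre-MBA salary, avg post-MBA salary),
# figures in thousands of dollars, invented purely for illustration.
schools = [
    ("School A", 90, 120),   # bright intake, modest lift
    ("School B", 55, 100),   # weaker intake, large lift
    ("School C", 70, 105),
]

# Ranking by absolute post-MBA salary rewards selective admissions...
by_absolute = [name for name, pre, post in
               sorted(schools, key=lambda s: s[2], reverse=True)]

# ...while ranking by the increase credits the school's contribution.
by_increase = [name for name, pre, post in
               sorted(schools, key=lambda s: s[2] - s[1], reverse=True)]

print("By absolute salary: ", by_absolute)   # School A first
print("By salary increase:", by_increase)    # School B first
```

Here the school with the brightest intake tops the absolute-salary list, while the school that adds the most salary tops the value-added list.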
The self-selection bias is ever-present: An editor at Business Week blogged about my rankings. Despite ever so slightly impugning my motives (and misspelling my name), he seems like an all-round good guy and found “a lot to recommend” in my methodology, though perhaps in part because he viewed my post as more critical of US News than of Business Week’s ranking methodology. While Business Week does not explicitly include GPA or GMAT scores in its rankings, recruiter evaluations of students still necessarily conflate the quality of the student with the quality of the school. Recruiters are not asked "how much do you think the school contributed to this individual’s market value, above what she would likely have had if she had gone to another school?" Instead, a recruiter simply saying "I like these students" may well reflect that the kinds of students who go to this school would be well liked by recruiters even if they went elsewhere or never pursued an MBA.
This is not "research": Not all mutterings by those of us in ivory towers constitute research. Some are just mutterings. While I would love for a journal editor to attest to my PTRC that my blog has contributed to general knowledge, the standards for research are quite different from those for personal blogs. While I was unaware of them at the time (my research priorities have nothing to do with ranking schools), several articles that have survived (or are currently undergoing) peer review offer similar methodologies or conceptual discussions for ranking business schools:
- Tracy and Waldfogel, Journal of Business, 70, 1.
- Dichev, Journal of Business, 72, 201.
- Bednowitz, CHERI Working Paper #6, 2000.
- Arcidiacono, Cooley, Hussey, International Economic Review, forthcoming.
- Devinney, Dowling, Perm-Ajchariyawong, Australian School of Business Working paper, 2007.
So why did you do this?
I should have learned my lesson last year. The only other post of mine ever to receive attention was my ranking of local restaurants by wine prices. It, too, resulted in well-placed restaurants citing my “study” among their “awards,” and it drew its share of detractors from those lower down the list.
Methodology versus assumptions: If I were to rank dog breeds by average size, a careful methodology would account for variance, measurement error, etc. If I then labeled this list “The top dogs for kids” one should pause, wondering what breed size has to do with loyal pets. The methodology I employed had significant disclaimers, but was mostly correct. A follow-up by resident statistician extraordinaire Bruce Cooil found an empirical model that improves on mine. This is not surprising since Bruce ranks first on the noted Global Statisticians by Efficacy annual ranking, though his tweaking, while resulting in a statistically superior model, leads to few qualitative changes.
In any event, methodological questions like “did you account for standard error?” or “did you account for …” (fill in any of the other million issues) miss the point entirely. Clearly, I didn’t (though such questions are better directed at the publishers of rankings that people actually use to make life-altering decisions). Instead, ask what the rankings ought to measure, and whether the methodology achieves that aim.
The best “breed” of dog, of course, is a mutt.