The new BusinessWeek rankings of business schools are out. Thousands of future MBAs will pore over the statistically insignificant differences between similarly ranked schools to decide which will receive their quarter million dollars.
Among the many issues with ranking schools, one of the most glaring is that the rankings incorporate input from the very people affected by the result. Students reporting on their MBA programs, or university presidents rating rival schools, puts people influenced by the results in a position to influence them. This creates quite the incentive problem.
Recent evidence comes from the rankings of schools (pdf) provided by University of Florida President Bernie Machen. The surveyed rankings are an integral part of the U.S. News ranking formula, and were obtained by the Gainesville Sun in a public records request. Other Florida university presidents were shrewd enough to “lose” theirs.
Ranking journals is a popular pastime among academics. Each of us has a favorite ranking, largely chosen because its results flatter our favorite publication outlets. There are more debates over the methodology of journal rankings than over the methodology of business-school rankings. There may be no universal agreement on the right method, but there certainly is a wrong one.
In a recent blog post, I took a tongue-in-cheek approach to the contentious topic of ranking business schools. The genesis of the post was a very different question: how to rank hospitals’ success rates with a specific operation when some hospitals only accept less risky cases while others take on more challenging ones. Accepting only less risky cases should imply a higher success rate for obvious reasons having little to do with the quality of care. Business schools endowed with brighter, more capable students likewise should see higher success among their students independent of the quality of education the students receive.
To demonstrate this point, I provided a quick and dirty analysis, completed between the hours of 1 and 3 am, restricted to data on hand, and without the careful statistical standards that would constitute "research." The point was to show that changes to the assumptions underlying rankings can significantly change the results.
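The hospital version of the point can be made with a back-of-the-envelope simulation. The numbers below are entirely hypothetical: both hospitals deliver identical care by construction, and the only difference is which cases they accept.

```python
import random

random.seed(0)

# Hypothetical illustration: two hospitals with IDENTICAL quality of care.
# Hospital A accepts only low-risk cases; Hospital B takes all comers.
# The success probability depends only on case risk, never on the hospital.

def outcome(risk):
    # Same success function for both hospitals: riskier cases fail more often.
    return random.random() < (0.95 - 0.5 * risk)

a_cases = [random.uniform(0.0, 0.3) for _ in range(10_000)]  # low-risk only
b_cases = [random.uniform(0.0, 1.0) for _ in range(10_000)]  # full risk range

rate_a = sum(outcome(r) for r in a_cases) / len(a_cases)
rate_b = sum(outcome(r) for r in b_cases) / len(b_cases)

print(f"Hospital A success rate: {rate_a:.1%}")  # higher, despite equal quality
print(f"Hospital B success rate: {rate_b:.1%}")
```

A raw success-rate ranking puts Hospital A comfortably on top even though, by assumption, it is no better at the operation. That is the whole selection problem in ten lines.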
The resulting hoopla over the post, which begot university press releases and took my blog’s traffic from a handful of loyal-reader friends into the thousands, is both enlightening and frightening. Below I offer a few clarifications.
Rankings of business schools generally fail to evaluate the inherent quality of an institution, instead ranking the people who choose to attend it.
An MBA student from UC-Davis will graduate, on average, with a starting salary that is $30,000 lower than a graduate from nearby UC-Berkeley. Can we conclude that two similarly credentialed students at the two schools would have such a large difference in their market values? That reasoning ignores selection bias: a student accepted by both schools is quite likely to choose the one often ranked in the top 10.
As long as top candidates choose to go to top programs, a higher ranking confounds the quality of students and the quality of a school. The proper interpretation of BusinessWeek’s rankings, for example, is not that Harvard is a better school than Blah College, but that the type of students who go to Harvard do better after graduating from Harvard than the type of students who go to Blah College do after graduating from Blah. Thus, a high starting salary for Harvard graduates might imply that Harvard’s professors can polish rough stone into beautiful rubies, but it is also possible that Harvard has the benefit of students who could very well have succeeded anywhere. (Note: I pick on Harvard because it actually does quite well in my rankings, supporting the rubies theory.)
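The confound can be sketched with made-up numbers. In the simulation below, both schools add exactly the same value to any student, and the entire observed salary gap comes from stronger applicants sorting into the higher-ranked school. All figures are assumptions for illustration, not estimates.

```python
import random

random.seed(1)

# Hypothetical sketch: both schools add the SAME value to any student's
# market worth; stronger applicants simply sort into School X.
VALUE_ADDED = 20_000  # identical for both schools by construction

# Baseline market worth of 20,000 applicants (hypothetical distribution).
students = [random.gauss(100_000, 25_000) for _ in range(20_000)]
students.sort(reverse=True)

school_x = students[:10_000]   # top half chooses the higher-ranked school
school_y = students[10_000:]   # bottom half goes to the other school

avg_x = sum(school_x) / len(school_x) + VALUE_ADDED
avg_y = sum(school_y) / len(school_y) + VALUE_ADDED

print(f"School X average salary: ${avg_x:,.0f}")
print(f"School Y average salary: ${avg_y:,.0f}")
print(f"Gap: ${avg_x - avg_y:,.0f}  (value added is identical)")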
Which schools do best with the students they have? Traditional rankings fail to tell a given student with a given skill set which schools are most likely to increase his market value. That is the goal of these rankings, which also highlight how a change in methodology significantly alters the results. Full rankings below the jump. Methodological disclaimers (and there are many) are at the very bottom.