There’s a lot of information out there. In fact, there’s so much data that while our computers might be able to wade through it all, our brains can’t make much sense out of it until we reduce it to something more manageable. For instance, if I asked you to pick the best restaurant in Manhattan, it would be a difficult task. It would get easier as I began to place filters on the question. For example: Which is the best pizzeria in Manhattan? Better yet: Which pizzeria in the Little Italy neighborhood of Manhattan has the best crust? The more specific my question is, the easier it is for you to think about answering.
The trouble is, if I go back to my original question (What is the best restaurant in Manhattan?), you’ll either pick one you like (a totally nonscientific approach) or, if you’re the analytical type, you’ll create your own filters or categories of things you consider important and evaluate the various restaurants along those dimensions (a more quasi-scientific approach). You’d probably select the restaurant with the highest overall score, the highest-ranking restaurant, as the best. In some ways, it is, but think about how much data you’ve synthesized in that process. It’s like taking a painting, mixing its individual colors together in the ratios in which they appear, doing this for several paintings, and then selecting the one with the most pleasant shade of brown as the “best” work of art. But we do this because we have to.
If we reduce several dimensions to a single score, we can make head-to-head comparisons and conclude that our methodology for evaluation is empirically rigorous. It makes sense, but there are often drawbacks. For example, there has been much talk of the U.S. being #37 in the WHO rankings. That makes us look pretty bad vis-à-vis nations 1 through 36. And, in many ways, we are. Of course, in certain areas (e.g., emergency care) we do quite well. And, as has recently come to light, there was a lot of imputation (a fancy statistical term for educated guessing) going on in the WHO data. As a result, the #37 ranking, while convenient, may be only a moderate indicator of U.S. health status. It hides both our strengths and our weaknesses in a composite score.
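To see how a composite score hides strengths and weaknesses, here’s a minimal sketch. The dimensions, weights, and scores below are entirely hypothetical, chosen only to illustrate the point: two very different performance profiles can collapse to the exact same overall number.

```python
def composite(scores, weights):
    """Weighted average of per-dimension scores (0-100 scale)."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Two hypothetical health systems, scored on three made-up dimensions:
# [emergency care, primary care, cost efficiency]
weights = [1, 1, 1]          # equal weighting, also an assumption
system_a = [95, 60, 55]      # excellent emergency care, weak elsewhere
system_b = [70, 70, 70]      # uniformly average across the board

print(composite(system_a, weights))  # 70.0
print(composite(system_b, weights))  # 70.0
# Identical composites, yet the underlying profiles differ completely:
# the ranking alone cannot tell you which system suits your needs.
```

The choice of weights is itself a judgment call, which is a second way rankings smuggle in subjectivity: change the weights and the ordering can flip.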
Another interesting finding about rankings is that they don’t always measure what they claim to measure. For example, U.S. News &amp; World Report is famous for its rankings of everything from colleges and graduate schools to hospitals and more. Now there’s evidence that the hospital rankings may be biased: they include a component based on a hospital’s reputation, and a recent study finds that the reputation score is in no way associated with the actual quality of care the hospital provides. That means a place like Johns Hopkins may provide a quality of care on a par with (or perhaps lower than) the quality of care provided at Podunk Community Hospital, but Hopkins will rank much higher because of its prestigious reputation.
Data are useful, but they can also be misused. The above are just a couple of recent examples. So, now that you’ve been warned, here’s a link to some nifty data from the Centers for Medicare &amp; Medicaid Services. Go ahead and rank things to your heart’s content.