“Experts are always getting it wrong” is now a familiar trope. As a historian of science, I disagree: I think history shows that scientific experts mostly get things right. But examples where experts have gone wrong offer the opportunity to better understand the limits of expertise. A case in point is the Global Health Security Index (GHSI), the result of a project led by the Nuclear Threat Initiative and the Johns Hopkins Center for Health Security. It was published in October 2019, just weeks before the novel coronavirus made its appearance.
GHSI researchers evaluated global pandemic preparedness in 195 countries, and the U.S. was judged to be the most prepared country in the world. The U.K. was rated second overall. New Zealand clocked in at 35th. Vietnam was 50th. Well, those experts certainly got that wrong. Vietnam and New Zealand had among the best responses to the COVID-19 pandemic; the U.K. and the U.S. were among the worst.
In fairness, the study did not conclude that overall global preparedness was good or even adequate. It warned that global health security was “fundamentally weak” and that no country was fully prepared for either an epidemic or a pandemic. The COVID-19 pandemic was equivalent to a giant fire before which almost no one had done a fire drill. But while these experts got the coarse-grained analysis right, they were grossly wrong in their nation-by-nation assessments. As we now know, both the U.S. and the U.K. have suffered death rates much higher than many countries that the GHSI rated as far less prepared. The study results were so wrong in this regard that one post-hoc analysis concluded that the index was “not predictive”; another dryly observed that it was predictive but in “the opposite direction.” So what happened?
The GHSI framework was based heavily on “expert elicitation”—the systematic querying of experts for their judgments. (This method contrasts with consensus reports such as those produced by the U.S. National Academy of Sciences or the Intergovernmental Panel on Climate Change, which are primarily based on a review of existing, peer-reviewed publications.) Expert elicitation is often used to predict risks or otherwise evaluate things that are hard to measure. Many consider it to be a valid scientific methodology, particularly for establishing the range of uncertainty around a complex issue or, where published science is insufficient, for answering a time-sensitive question. But it relies on a key presumption: that we’ve got the right experts.
The GHSI panel was understandably heavy with directors of national and international health programs, health departments and health commissions. But it included no professional political scientist, psychologist, geographer or historian; there was little expertise on the political and cultural dimensions of the problem. In hindsight, it is clear that in many countries, political and cultural factors proved determinative.
Consider the U.S., a country with some of the most advanced scientific infrastructure in the world and a prodigious manufacturing and telecommunications capacity. The U.S. failed to mobilize this capacity for reasons that were largely political. Initially the president did not take the pandemic seriously enough to organize a forceful federal response, and then, by his own admission, he played it down. More than a few politicians and celebrities flouted public health advice, appearing in public without masks well after the evidence of their benefits had been communicated. Our layered and decentralized system of government led to varied policies, in some cases putting state governments in conflict with their own cities. And many refused to practice social distancing, interpreting it as an unacceptable infringement on their freedom.
To evaluate American preparedness accurately, the GHSI group needed input from anthropologists, psychologists and historians who understood American politics and culture. In fact, it would have had to grant social scientific expertise primacy because social factors, such as racial inequality, most strongly shaped the American outcome. Around the globe, whether countries were able to mount an effective pandemic response depended crucially on governance and the response of their citizens to that governance. The GHSI team got it wrong because the wrong experts were chosen.