“We can win a complete victory over this virus,” Zhang Hanhui, the Chinese ambassador to Russia, recently declared, adding that in Hubei “the disease will be liquidated next month”. This is propaganda masquerading as prediction, as propaganda so often does. People fall for it because predictions like this are comforting, even when they’re wrong. But it is fear, not knowledge, that makes anyone believe him. The hard truth is that much in life is, and always will be, unpredictable – and we’re better off accepting that than falling for propaganda, however optimistic it might be.
Epidemics are scary because they don’t follow predictable patterns. They may look alike, but there is no predictive profile of an epidemic. Only one thing is certain: that they will break out. After that, uncertainty rules: we don’t know which disease, in which country, or when. New strains of disease appear constantly; since the 1970s, new pathogens have been emerging at an unprecedented rate of more than one a year, and in a world of global travel, they move fast. Different geographies need different kinds of response. What works in one place, with one disease, might not work somewhere else. So these crises take us by surprise, even when we know something is coming.
Technology can help, of course – not in predicting epidemics but in identifying them fast when they start. Artificial intelligence built by the Canadian firm BlueDot analyzed masses of data on animal and plant disease outbreaks, together with global news reports, and was able to identify the outbreak of what has become known as COVID-19 a full week before the US Centers for Disease Control and Prevention announced the appearance of a new flu-like disease.
Applying AI to global flight data also generated generally accurate predictions about where and how the disease would spread. Because treating epidemics is always a race against time, this represents a meaningful improvement in speed once an infectious disease has emerged. Like all applications of AI, it is only as good as the data on which it is based, which in epidemics can be tricky because each one is unique. Scientists and health care workers are still needed to analyze findings and diagnose patients. And AI doesn’t change our inability to predict epidemics before the pathogen shows up.
Spotting the outbreak is just the beginning. The holy grail in every epidemic is a vaccine to stop a disease from spreading. The development of vaccine candidates is time- and labour-intensive, and also unpredictable; many fail. The Coalition for Epidemic Preparedness Innovations began work on new vaccines for beta coronaviruses in 2016, so it isn’t now starting from scratch. The speed of genome mapping has hugely accelerated vaccine development, so that researchers already know the genes and the molecular structure of the protein that gets the virus into human cells. Sharing that information with research teams around the world accelerates discovery.
But it’s still slower than anyone would like, because each new vaccine has to be tested – injected into animals to see whether they develop an immune response. Some won’t work; some may cause side effects worse than the disease. Paradoxically, epidemics sometimes die out before the vaccine is ready for testing; that means the vaccine has to wait until the next epidemic – if it happens to be exactly the same disease – before it can be tested and evaluated in humans. That’s one reason it took so many years to produce a successful Ebola vaccine.
Technology has now accustomed human beings to expect accurate prediction of just about everything: when the Uber arrives, how long the journey takes, whether you’ll like a book, movie or meal. So the idea that some things in life can’t be predicted feels counter-intuitive. For decades, tech innovation has relied on the belief that, with massive amounts of data and the tools to interrogate it, everything ultimately proves predictable. Real life proves otherwise.
Terrorism isn’t always defined by profiles or patterns; terrorists know that randomness protects them. Surveillance produces more data than ever before, but it’s hard to find what you don’t know to look for until it’s too late. As head of MI5, Eliza Manningham-Buller was aghast when, under President Clinton, the Americans abandoned human intelligence, thinking that technology would give them all the information they needed. After 9/11, that strategy changed: an acknowledgment that human intelligence was still crucial for finding new patterns in new places never previously considered meaningful.
Similarly in pure science, it’s been routine for politicians to call for greater efficiency to produce strategically desirable breakthroughs. But attempts to use data mining to identify the hotspots from which such triumphs emerge identified little more than what scientists themselves already knew; it’s their job and passion to stay alert to developments in their field. Trying to build the perfect profile of the superstar scientist failed too. Scientific discoveries exist across a spectrum, from the predictable (decoding the human genome, once it became technically feasible) to flukes (like microwave radiation). Focus efficiently only on what you can predict and you risk managing the flukes out of the system.
But ultimate predictability is the faith implicit in much technology, such as Amazon’s anticipatory shipping patent. The company that continues to recommend to me the books I’ve already written is so confident of its data analysis that it wants to send me what it predicts I will be aching to read. Closer examination reveals seeds of doubt: I won’t have to return the first few it gets wrong. But the only really predictable aspect of the offering is that I’ll forget, or be too lazy, to return the ones I never wanted. This isn’t prediction; it is forcing customers to do what the business plan requires.
Uncertainty is endemic to life. Will the COVID-19 epidemic be over by March? The best models of disease will only ever provide probabilities, not absolutes. Soothsayers always have their own agendas, so we would do better to prepare for the unpredictable than to believe them.