CAMBRIDGE – Those of us who are fortunate enough to live in the developed world fret about myriad minor – or sometimes improbable – hazards: carcinogens in food, air crashes, and so forth. But we are less secure than we think. We are in denial about scenarios that could cause such devastation that even one occurrence would be too many.
Much has been written about possible ecological shocks triggered by the impact of a growing human population’s demands on the biosphere, and about the social and political tensions stemming from resource scarcity or climate change. Even more worrying are the downside risks of powerful new cyber, bio, and nanotechnologies: A few individuals, via error or terror, could ignite a societal breakdown so quickly that government responses would be overwhelmed.

The “Anthropocene” era, in which the main global threats come from humans rather than from nature, became especially risky with the mass deployment of thermonuclear weapons. Throughout the Cold War, false alarms and miscalculations by both superpowers were a constant occurrence, and several posed a serious risk of triggering nuclear Armageddon.

Those who anxiously lived through the Cuban missile crisis would have been gripped by panic had they realized just how close the world came to catastrophe. Only later did we learn that President John F. Kennedy at one stage assessed the odds of nuclear war as “somewhere between one in three and even.” And only when he was long retired did Kennedy’s defense secretary, Robert McNamara, state frankly that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.”

It is now conventional wisdom that nuclear deterrence worked; and, in a sense, it did. But that does not mean it was a wise policy. If you play Russian roulette with one or two bullets in the barrel, you are more likely to survive than not (with one bullet in a six-chamber revolver, the odds of survival are five in six; with two, still two in three), but the stakes would need to be astonishingly high – or the value you place on your life inordinately low – for this to seem a wise gamble.

Yet we were dragooned into just such a gamble throughout the Cold War. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most citizens would have accepted, had they been asked to give their informed consent.

For my part, I would not have chosen to accept a one in three – or even a one in six – chance of a disaster that would have killed hundreds of millions and shattered our cities, even if the alternative were a Soviet invasion of Western Europe. And, of course, the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat.

Fortunately, the threat of global annihilation involving tens of thousands of H-bombs is in abeyance, though there is now more reason to worry that smaller nuclear arsenals might be used in a regional context, or by terrorists. But when we recall the geopolitical convulsions of the last century – two world wars, the rise and fall of the Soviet Union, and so forth – we cannot rule out a global realignment that leads to a future standoff between new superpowers. A new generation could face its own version of the Cuban missile crisis, one that might be handled with less skill – or less luck – than the original.

Nuclear weapons are the darkest side of twentieth-century science. But there are novel concerns stemming from the impact of fast-developing twenty-first-century technologies.
Our interconnected world depends on elaborate networks: power grids, air traffic control, international finance, just-in-time delivery, and so forth. Unless these networks are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns.

Moreover, a contagion of social and economic breakdown would spread worldwide via computer networks and “digital wildfire” literally at the speed of light. Concern about cyber-attacks, by criminals or hostile countries, is rising sharply.

Likewise, while it is hard to make a clandestine H-bomb, millions will one day have the capability and resources to misuse biotechnology, just as they can misuse cybertechnology today. The physicist Freeman Dyson foresees a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. Were this to happen, our ecology (and even our species) surely would not survive unscathed. And, as for cybertechnology, should we worry about another science fiction scenario – that a network of computers could develop a mind of its own and threaten us all?

Some would dismiss such concerns as a jeremiad; after all, human societies have survived for millennia, despite storms, earthquakes, and pestilence. But these human-induced threats are different: They are new, so we have had limited exposure to them and cannot be so sanguine that, if disaster struck, we would survive for long – or that governments could cope. It follows that we have no grounds for confidence that we could survive the worst that even more powerful future technologies could inflict on us.

In a media landscape saturated with sensational science stories, end-times Hollywood productions, and Mayan warnings of apocalypse, it may be hard to persuade the public that potential catastrophes could arise as unexpectedly as the 2008 financial crisis did – and with a far greater impact. Existential risks receive disproportionately little serious attention. Some suggested scenarios can be dismissed, but we should surely try to assess which ones cannot – and study how to mitigate them.

Martin Rees is a member of the United Kingdom’s House of Lords and Astronomer Royal.