The great downside dilemma for risky emerging technologies

Decisions about whether to develop technologies that promise great benefits to humanity but come with a risk of human civilization being destroyed.

Seth D. Baum, 2014. The great downside dilemma for risky emerging technologies. Physica Scripta, vol. 89, no. 12 (December), article 128004, doi:10.1088/0031-8949/89/12/128004.

Full article: Available free online at Physica Scripta (pdf).

Pre-print: A full pre-print of the article (pdf) is also available and includes expanded citation information.

* This paper was written for a talk of the same title presented at the event Emerging technologies and the future of humanity, hosted by the Kungliga Vetenskapsakademien (Royal Swedish Academy of Sciences) on 17 March 2014 in Stockholm. The paper is part of a series of papers based on presentations at the event.

Some emerging technologies promise to significantly improve the human condition, but come with a risk of failure so catastrophic that human civilization may not survive. This article discusses the great downside dilemma posed by the decision of whether or not to use these technologies. The dilemma is: use the technology, and risk the downside of catastrophic failure, or do not use the technology, and suffer through life without it. Historical precedents include the first nuclear weapon test and messaging to extraterrestrial intelligence. Contemporary examples include stratospheric geoengineering, a technology under development in response to global warming, and artificial general intelligence, a technology that could even take over the world. How the dilemma should be resolved depends on the details of each technology’s downside risk and on what the human condition would otherwise be. Meanwhile, other technologies do not pose this dilemma, including sustainable design technologies, nuclear fusion power, and space colonization. Decisions on all of these technologies should be made with the long-term interests of human civilization in mind. This paper is part of a series of papers based on presentations at the event Emerging Technologies and the Future of Humanity held at the Royal Swedish Academy of Sciences, 17 March 2014.

Non-Technical Summary (pdf version available)

Background: The Great Downside Dilemma
A downside dilemma is any decision in which one option promises benefits but comes with a risk of significant harm. An example is the game of Russian roulette. The decision is whether to play. Choosing to play promises benefits but comes with the risk of death. This paper introduces the great downside dilemma as any decision in which one option promises great benefits to humanity but comes with a risk of human civilization being destroyed. This dilemma is great because the stakes are so high—indeed, they are astronomically high. The great downside dilemma is especially common with emerging technologies.
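The structure of a downside dilemma can be sketched as a simple expected-value comparison between using and not using the technology. The sketch below is purely illustrative: the probabilities and payoff numbers are hypothetical assumptions chosen to show the shape of the decision, not figures from the paper.

```python
# Illustrative expected-value sketch of a downside dilemma.
# All numbers are hypothetical; they only show the decision structure.

def expected_value(p_catastrophe, benefit, catastrophe_cost):
    """Expected value of using a technology that delivers `benefit`
    with probability (1 - p_catastrophe), and incurs `catastrophe_cost`
    (a large negative number) with probability p_catastrophe."""
    return (1 - p_catastrophe) * benefit + p_catastrophe * catastrophe_cost

# Baseline: not using the technology leaves the status quo (value 0 here).
baseline = 0.0

# Russian roulette-style example: one-in-six chance of catastrophic loss.
ev_use = expected_value(p_catastrophe=1/6, benefit=10.0,
                        catastrophe_cost=-1000.0)

# Under these (made-up) numbers, the downside dominates and the
# expected value of using the technology falls below the baseline.
print(ev_use, ev_use > baseline)
```

Real versions of the dilemma are harder than this sketch suggests: as the paper notes, both the downside probability and the value of the status quo are uncertain and differ across technologies, which is why each case must be assessed on its own details.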

Historical Precedents: Nuclear Weapons and Messaging to Extraterrestrial Intelligence
The great downside dilemma for emerging technologies has been faced at least twice before. The first precedent is nuclear weapons. It came in the desperate circumstances of World War II: the decision of whether to test-detonate the first nuclear weapon. Some physicists suspected that the detonation could ignite the atmosphere, killing everyone on Earth. Fortunately, they understood the physics well enough to correctly determine that the ignition would not happen. The second precedent is messaging to extraterrestrial intelligence (METI). The decision was whether to send messages. While some messages have been sent, METI is of note because the dilemma still has not been resolved. Humanity still does not know if METI is safe. Thus METI decisions today face the same basic dilemma as the initial decisions in decades past.

Dilemmas in the Making: Stratospheric Geoengineering and Artificial General Intelligence
Several new instances of the great downside dilemma lurk on the horizon. The stakes for these new dilemmas are even higher, because they come with much higher probabilities of catastrophe. This paper discusses two. The first is stratospheric geoengineering, which promises to avoid the most catastrophic effects of global warming. However, stratospheric geoengineering could fail, bringing an even more severe catastrophe. The second is artificial general intelligence, which could either solve a great many of humanity’s problems or kill everyone, depending on how it is designed. Neither of these two technologies currently exists, but both are subjects of active research and development. Understanding these technologies and the dilemmas they pose is already important, and it will only get more important as the technologies progress.

Technologies That Don’t Pose The Great Downside Dilemma
Not all technologies present a great downside dilemma. These technologies may have downsides, but they do not threaten significant catastrophic harm to human civilization. Some of these technologies even hold great potential to improve the human condition, including by reducing other catastrophic risks. These latter technologies are especially attractive and in general should be pursued. The paper discusses three such technologies: sustainable design technology, nuclear fusion power, and space colonization technology. Some sustainable design is quite affordable, including the humble bicycle, while nuclear fusion and space colonization are quite expensive. However, all of these technologies can play a helpful role in improving the human condition and avoiding catastrophe.

Created 21 Nov 2014 * Updated 27 Nov 2014