Established methodologies from risk analysis and risk management can help avoid catastrophe from future advanced AI.
Anthony M. Barrett and Seth D. Baum, 2017. Risk analysis and risk management for the artificial superintelligence research and development process. In Victor Callaghan, James Miller, Roman Yampolskiy, and Stuart Armstrong (editors). The Technological Singularity: Managing the Journey. Berlin: Springer, pages 127-140.
Pre-print: Click here to view a full pre-print of the article (pdf).
Artificial superintelligence (ASI) is increasingly recognized as a significant future risk. In the absence of adequate safety mechanisms, an ASI may even be likely to cause human extinction. Thus ASI risk scenarios merit attention even if their probabilities are low. ASI risk can be addressed in at least two ways: by building safety mechanisms into the ASI itself, as in ASI safety research, and by managing the human process of developing ASI, in order to promote safety practices in ASI research and development (R&D). While ASI researchers and developers typically do not intend to cause harm through their work, harm may nonetheless occur due to accidents and unintended consequences. Thus opportunities may exist to reduce ASI risk through engagement with the R&D process. This paper surveys established methodologies for risk analysis and risk management, emphasizing fault trees and event trees, and describes how these techniques can be applied to risk from ASI R&D. A variety of risk methodologies have been developed for other risks, including other emerging technology risks, but their application to ASI has thus far been limited. Insights from the risk analysis literature could improve on the analyses of ASI risk conducted to date. Likewise, a more thorough and rigorous analysis of ASI R&D processes can inform decision making to reduce the risk. The decision makers include governments and non-governmental organizations active in ASI oversight, as well as anyone conducting ASI R&D. All of these individuals and groups have roles to play in addressing ASI risk.
Non-Technical Summary: pdf version
Background: Artificial Superintelligence
Already computers can outsmart humans in specific domains, like multiplication. But humans remain firmly in control… for now. Artificial superintelligence (ASI) is AI with intelligence that vastly exceeds humanity's across a broad range of domains. Experts increasingly believe that ASI could be built sometime in the future, could take control of the planet away from humans, and could cause a global catastrophe. Alternatively, if ASI is built safely, it may be able to solve major human problems. This paper describes how risk analysis and risk management techniques can be used to understand the possibility of ASI catastrophe and make it less likely to happen.
Artificial Superintelligence Risk Analysis
Risk analysis aims to understand bad events that could happen. It studies the event's probability, severity, timing, and other relevant factors. For ASI risk analysis, one technique is to model the sequences of steps that could result in ASI catastrophe. Each step can then be studied to get an overall understanding of the total risk. These models are called fault trees or event trees. Creating the models and studying each step is difficult because ASI is unprecedented technology. What's happened in the past is of limited relevance. One way to study unprecedented events is to ask experts for their judgments on them. This is called expert elicitation. Experts don't always get their judgments right, so it's important to ask them carefully, using established procedures from risk analysis.
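To illustrate the fault-tree idea, the sketch below combines probabilities of individual steps into an overall pathway probability. The pathway, step names, and numbers are hypothetical placeholders, not estimates from the paper; the standard AND/OR gate formulas assume the sub-events are independent.

```python
# Minimal fault-tree-style aggregation (illustrative only; the pathway
# and probabilities below are hypothetical, not from the paper).

def and_gate(probs):
    """All sub-events must occur (assumed independent)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """At least one sub-event occurs (assumed independent)."""
    p_none = 1.0
    for x in probs:
        p_none *= (1.0 - x)
    return 1.0 - p_none

# Hypothetical pathway: ASI is built AND its safety mechanisms fail
# AND containment fails -> catastrophe via this pathway.
p_asi_built = 0.10
p_safety_fails = 0.50
p_containment_fails = 0.80

p_catastrophe = and_gate([p_asi_built, p_safety_fails, p_containment_fails])
print(f"P(catastrophe via this pathway) = {p_catastrophe:.3f}")  # 0.040
```

In practice, the step probabilities feeding such a tree would come from expert elicitation rather than historical data, which is why the paper emphasizes careful elicitation procedures.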
Artificial Superintelligence Risk Management
There are two general ways to manage the risk of ASI catastrophe. One is to make ASI technology safer. This is a technical project that depends on the details of the specific ASI. The other is to manage the human process of ASI research and development, in order to steer it towards safer ASI and away from dangerous ASI. Risk management steps can be taken by governments, corporations, philanthropic foundations, and individual ASI researchers, among others. They can create regulations to restrict risky ASI research, covertly target risky ASI projects, and fund the development of ASI safety measures, among other things. Risk analysis and the related field of decision analysis can help people make better ASI risk management decisions. In particular, the analysis can help identify which options would be the most cost-effective, meaning that they would achieve the largest reduction in ASI risk for the amount of money spent on them.
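The cost-effectiveness comparison described above can be sketched as ranking options by risk reduction per dollar. The option names, risk-reduction estimates, and costs below are hypothetical placeholders for illustration, not figures from the paper.

```python
# Illustrative cost-effectiveness ranking of risk management options
# (all names and numbers are hypothetical, not from the paper).

options = [
    # (name, estimated reduction in catastrophe probability, cost in dollars)
    ("Fund safety research", 0.010, 5_000_000),
    ("Draft model regulations", 0.004, 1_000_000),
    ("Run researcher workshops", 0.002, 200_000),
]

# Cost-effectiveness = risk reduction per dollar; higher is better.
ranked = sorted(options, key=lambda o: o[1] / o[2], reverse=True)
for name, reduction, cost in ranked:
    print(f"{name}: {reduction / cost:.1e} risk reduction per dollar")
```

Note that the most cost-effective option per dollar is not necessarily the one with the largest absolute risk reduction; a full decision analysis would also weigh budget limits and how the options interact.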
Created 14 Sep 2015 * Updated 24 May 2017