Social Choice Ethics in Artificial Intelligence

Proposals for AI to follow society's aggregate ethical views face difficult and important questions about how to define society.

Seth D. Baum. Social choice ethics in artificial intelligence. AI & Society, forthcoming. doi:10.1007/s00146-017-0760-1.

Pre-print: A full pre-print of the article is available (pdf).

A major approach to the ethics of artificial intelligence (AI) is social choice, in which the AI is designed to act according to the aggregate views of society. This approach appears in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethical views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined into the single view that will guide AI behavior. These decisions must be made up front in the initial AI design; designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, and some decision options yield pathological or even catastrophic results. Furthermore, non-social-choice ethics face similar issues, such as whether to count future generations or the AI itself, and these issues can be more important than the question of whether to use social choice ethics at all. Attention should focus on these issues, not on social choice.
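The aggregation decision alone can change the outcome. As a minimal sketch (the preference profile and policy labels are hypothetical, not from the paper), here is a case where two standard social choice rules, plurality voting and the Borda count, select different policies from the same individual rankings:

```python
from collections import Counter

# Hypothetical profile: each ballot ranks three candidate
# ethical policies A, B, C from most to least preferred.
ballots = (
    [["A", "B", "C"]] * 3 +  # 3 individuals: A > B > C
    [["B", "C", "A"]] * 2 +  # 2 individuals: B > C > A
    [["C", "B", "A"]] * 2    # 2 individuals: C > B > A
)

def plurality(ballots):
    """Count only each ballot's top choice."""
    tally = Counter(b[0] for b in ballots)
    return tally.most_common(1)[0][0]

def borda(ballots):
    """Award n-1, n-2, ..., 0 points down each ranking."""
    scores = Counter()
    for b in ballots:
        for points, option in enumerate(reversed(b)):
            scores[option] += points
    return scores.most_common(1)[0][0]

print(plurality(ballots))  # A (3 first-place votes to 2 and 2)
print(borda(ballots))      # B (9 points, vs. 6 for A and 6 for C)
```

Under plurality, policy A wins with the most first-place votes; under Borda, policy B wins because it is ranked at least second by every individual. An AI guided by "society's view" would thus behave differently depending solely on which aggregation rule its designers picked.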

Created 2 Oct 2017 * Updated 2 Oct 2017