Views

I am, among other things, a philosopher and ethicist. As such, I have thought a lot about my views on certain topics, especially topics related to what we should do. This very brief discussion moves from the most abstract to the most concrete aspects of what we should do. The intent here is to give a simple summary of my views for anyone who might be interested, and to suggest resources for further inquiry.

Meta-Ethics: Moral Skepticism & Moral Fictionalism

Meta-ethics concerns the nature of ethics: is anything actually right or wrong, and can we possibly know? My views can be described as moral skepticism and moral fictionalism. By moral skepticism, I mean that I am skeptical about the existence of morality. While I cannot rule out the possibility that right and wrong actually exist as properties of our universe, I would be surprised if this were the case. By moral fictionalism, I mean that while morality probably does not actually exist, it is a "nice" fiction for us to pretend that it does. In other words, even though I doubt that anything actually is right or wrong, I still want us to live as if this were not the case - as if morality did actually exist. This means that I try hard to live a "better" life, and to make the world/universe(s) a "better" place, even though I feel compelled to put scare quotes around words like "better" and "nice".

Richard Joyce's book The Myth of Morality appears to capture my meta-ethics views fairly well - see this review. There are also useful reviews of meta-ethics at Wikipedia and the Stanford Encyclopedia of Philosophy.

Normative Ethics: Total Experientialist Utilitarianism

Normative ethics concerns the principles of right and wrong, or the principles that we consider to be right and wrong. My ethics views can reasonably be described as total experientialist utilitarianism. Utilitarianism is a type of consequentialism in that it considers the consequences of our actions to be what is ethically important. Our actions are right when they have good consequences, and wrong when they have bad consequences; nothing else matters. For utilitarianism, good (bad) consequences are those that increase (decrease) utility.

There are several views on how we should define utility. I favor utility defined in terms of cognitive experiences. These experiences are those that we consider to correspond with an enjoyable life. This will often not mean pure bliss at every moment. For example, I might want to have some difficult times so that I can more richly enjoy the good times. Also, utility here is not a strictly human phenomenon. Many other animals have the cognitive capacity to enjoy life. Perhaps some artificial intelligences could too.

Finally, given a view on how to define utility, we need a view on how to aggregate utility across individuals to determine how good or bad different consequences are. My view on this is that we should simply add up the total amount of utility. This means that I value your utility just as much as my utility or anyone else's utility, including individuals who live in distant places and times.
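
To put the aggregation in simple symbols (a minimal formalization of my own, not drawn from any particular text): if each individual i who ever exists has utility u_i, the total view ranks consequences by the sum, in contrast to, for example, an average view:

U_total = u_1 + u_2 + ... + u_n        versus        U_average = U_total / n

On the total view, everyone's utility counts the same and simply adds, which is why individuals in distant places and times carry full weight in the sum.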

Torbjorn Tannsjo's book Hedonistic Utilitarianism captures my normative ethics views fairly well, although I favor the term "experientialist" over "hedonistic" because hedonism can refer to more narrow forms of pleasure, whereas I want to refer to a much richer, more complex experience. There is also a useful review of normative ethics at Wikipedia; the Stanford Encyclopedia of Philosophy has excellent articles on many specific views about normative ethics. Finally, the website felicifia.org, which I helped establish, hosts open discussion of utilitarianism and related topics.

Applied Ethics: Global Catastrophic Risk Reduction

Applied ethics concerns the application of our ethical views to specific circumstances. In other words, given what we hold to be right and wrong, and given our capabilities, what specifically should we do? Answering this question requires philosophical thought about ethics, empirical study of the nature of our world/universe(s), and our best estimates of what we might be capable of. All this may seem very complicated, and it can be. However, the analysis becomes quite simple once we recognize two simple things.

First, the future is really, really big. We can remain on Earth for a long time, and we can remain in the universe for much, much longer. If we care even a little bit about the future, then it is so big that we should do what we can to make sure that it is something we would consider to be good. That means ensuring that whatever it is that we care about - whether it's utility, or ecosystems, or life itself (or some other things) - is sustained into the future. Eventually this will require space colonization, once Earth is no longer habitable, but that won't be necessary for a long time.

Second, the best way to ensure that whatever we care about exists into the future is to help avoid major, civilization-ending global catastrophes today. Risk of these catastrophes is called either "global catastrophic risk" or "existential risk", the latter because the catastrophes threaten our existence. These catastrophes could include nuclear warfare, pandemics, environmental catastrophes, disruptive technologies, and large asteroid impacts, among other things. In my view, helping avoid these major catastrophes - helping reduce global catastrophic risk - as best as we can is what we should be doing here and now.

I take global catastrophic risk reduction very seriously, not just as an intellectual idea but as an actual goal for us in our lives. Thus my main project is an organization called the Global Catastrophic Risk Institute.

A more detailed discussion of my argument for reducing global catastrophic risk can be found in my article "Is humanity doomed? Insights from astrobiology", in particular section 4.

Created 8 Mar 2010 * Updated 20 Jun 2016