A cult-famous moral philosophy paper argues that we’re probably living through an ongoing moral catastrophe. More likely, many of them.
“Moral catastrophe” here denotes a serious, widespread pattern of wrongdoing that affects the moral equivalent of at least a million people per year and involves the complicity of most members of society.
Since such catastrophes have been ongoing for literally all of human history, usually unbeknownst to the people involved in them, odds are we’re committing many moral catastrophes right now and don’t even realize it. Besides the examples that should be obvious to readers of this blog — not doing enough to stop fish from having sex, causing fish to have more sex, etc. — a few possibilities are:
That abortion is as wrong as murdering adult humans.
That preventing women from receiving abortions is as wrong as torture.
That it’s wrong to have your own biological children if you could instead use the genetic material of someone whose children would have a better life.
That raising children to believe in hell is a form of abuse.
And: “[T]hat the function of the corpus callosum in a human brain is not to unite the two hemispheres in the production of a single consciousness, but rather to allow a dominant consciousness situated in one hemisphere to issue orders to, and receive information from, a subordinate consciousness situated in the other hemisphere.”
You don’t have to think that any of these possibilities is particularly likely to conclude that we’re probably living through a moral catastrophe. Moral perfection is literally impossible, especially if you believe in positive duties. And there are countless ways we could be doing wrong, most of which we probably haven’t even thought of yet.
If even a modest number of the moral catastrophes we could be committing each has a non-negligible chance of being real, then many of them are likely ongoing right now.
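To make the arithmetic concrete, here’s a toy calculation. The numbers are invented for illustration, not drawn from Williams’s paper:

```python
# Back-of-the-envelope numbers (made up for illustration): suppose there are
# 100 candidate moral catastrophes, each with an independent 5% chance of
# actually being real.
n, p = 100, 0.05

expected_real = n * p                 # expected number of genuine catastrophes
p_at_least_one = 1 - (1 - p) ** n     # probability that at least one is real

print(f"Expected number of ongoing catastrophes: {expected_real:.0f}")  # 5
print(f"P(at least one is real): {p_at_least_one:.4f}")                 # ~0.9941
```

Even on much more conservative numbers, the expected count stays comfortably above zero, which is all the argument needs.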
Since moral catastrophes are, in fact, morally catastrophic,1 it follows that we ought to invest a lot of resources into figuring out what catastrophes are ongoing and what we can do about them.
Evan Williams, author of the moral catastrophe paper, suggests that we should pour resources into academic moral philosophy and related fields so we can determine which moral catastrophes are real and which aren’t. (Very convenient for an academic moral philosopher!)
Yet, while philosophy can help us figure out right from wrong, it alone can’t tell us how to make sure the right values and ideas are disseminated throughout society. Even though we’ve basically discovered the correct position on factory farming — that it’s really, really bad — the consumption of factory-farmed animal products continues to rise every year, because most people don’t care much about moral issues or think critically about them.
A friend of the blog, Bentham’s Bulldog, raised this point just yesterday. He writes, for example, that:
Non-philosophers often object to thought experiments on the grounds that they’re unrealistic. […] I recently objected to the notion that we should only care about the interests of biological humans by noting that this would imply that it would be okay to harm intelligent aliens or elves or orcs. In response, people noted that elves and orcs and aliens are not real — a fact which came as a real surprise to me.
But this is totally irrelevant. It’s a point of basic logic that if only humans matter, non-humans don’t — and this implies that were there to be elves, harming them for slight benefit would be permissible. The fact that they aren’t real is totally irrelevant to this matter of basic logic.
It’s easy to throw your hands up in despair and say that most people are never going to understand basic logic, but there’s a rationality behind people’s irrationality. Most people are capable of grasping unconventional moral arguments; they simply don’t have a reason to think critically about them unless they can be made to abstract from their own circumstances and consider moral principles as such.
This is why, while most people consume factory-farmed animal products and insist there’s nothing wrong with it, they also readily acknowledge that future generations are likely to look back upon factory farming as a moral catastrophe. (The latter is such a common view that it’s been voiced by New York Times opinion columnist Nick Kristof on multiple occasions.)
This all suggests that one solution to the problem of disseminating moral values may be to build institutions that induce people to think about moral issues in the context of the values and interests of future generations rather than their own concrete circumstances.
As AI researcher Paul Christiano has suggested, establishing future-oriented prediction markets may be key here:
Run periodic surveys with retrospective evaluations of policy. For example, each year I can pick some policy decisions from {10, 20, 30} years ago and ask “Was this policy a mistake?”, “Did we do too much, or too little?”, and so on.
Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045’s answers to “Did we do too much or too little about climate change in 2015-2025?”
We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like “The market expects that in 20 years we will consider this policy to have been a mistake.”
This seems particularly politically feasible; a philanthropist can unilaterally set this up for a few million dollars of surveys and prediction market subsidies. You could start by running this kind of poll a few times; then opening a prediction market on next year's poll about policy decisions from a few decades ago; then lengthening the time horizon.
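To make the mechanics concrete, here’s a minimal sketch of what one of these contracts could look like as data. Everything in it (the class names, the fields, the 72% price) is my own illustration, not part of Christiano’s proposal:

```python
from dataclasses import dataclass

@dataclass
class RetrospectiveQuestion:
    """One survey item asking respondents in survey_year to judge a past policy."""
    policy: str          # e.g. "climate policy"
    decision_years: str  # e.g. "2015-2025"
    survey_year: int     # when the retrospective survey will be run
    question: str        # e.g. "Did we do too much or too little?"

@dataclass
class MarketContract:
    """A binary contract that pays out if the survey answer crosses a threshold."""
    item: RetrospectiveQuestion
    threshold: float     # e.g. 0.51: a majority answers "too little"
    yes_price: float     # current market price, readable as a probability

contract = MarketContract(
    item=RetrospectiveQuestion(
        policy="climate policy",
        decision_years="2015-2025",
        survey_year=2045,
        question="Did we do too much or too little about climate change?",
    ),
    threshold=0.51,
    yes_price=0.72,  # a made-up price
)

# The price doubles as a citable forecast about the future survey:
print(f"Market-implied chance that {contract.item.survey_year} judges the "
      f"policy insufficient: {contract.yes_price:.0%}")
```

The key design point is that the market price is directly legible as a forecast about the future survey, which is what lets opponents of a policy cite it the way Christiano describes.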
Christiano’s proposal might work surprisingly well as a guide to present-day policy. Below I consider three objections to using the plan to overcome people’s aversion to moral critical thinking; none of them seems insurmountable.
1. There’s no reason to think that people who aren’t troubled by moral catastrophes now will later regret not having done more to stop them.
First of all, plenty of people change their minds about morally important issues. How many Americans passively supported Jim Crow in the 1950s yet would have answered differently in a poll 20 or 30 years later?
Second, the surveys wouldn’t have to ask people if they regret their own actions, just if they think society should have done something different. People are more likely to engage in moral critical thinking when they can imagine a situation not identical to their own circumstances — i.e., when they become unburdened by what has been, not when they exist in the context of all in which they live and what came before them — and can blame someone other than themselves for perpetrating a moral catastrophe. Even if the scapegoat is a collective “we” of polities past, most people would still be prompted to engage in some level of abstraction when answering the surveys. This should make it easier for people to reason clearly.
2. To capture radical shifts in moral thinking, we’d need to extend the time horizon to at least several centuries rather than several decades.
Since the prediction markets would be so inexpensive to establish, they wouldn’t need to capture radical shifts in thinking to do net good, just improve policy outcomes by some non-negligible margin. As long as we can expect people in the future to have a sounder view of past people’s actions than people today have of their own actions (and also that prediction markets have at least some influence on policy), this should be the case.
Moreover, it’s not impossible in principle to create a prediction market in which bets wouldn’t close until long after the initial bettors die, since traders can buy and sell shares at any moment.
For example, nothing would stop a market from listing the question “Will 2124’s survey show that at least 51% of adults think policymakers in the 21st century didn’t do enough about the Hemispheric Hierarchy Catastrophe?” and letting people trade “YES” and “NO” starting tomorrow. There probably wouldn’t be much volume, and the price would carry very little confidence. But if the price were obviously too high (99%) or too low (0.01%), or traders thought new research was about to lend the hemispheric hierarchy hypothesis more or less credence, they would have an incentive to put money on the appropriate side of the ledger. And as long as the bets were invested in some base asset, like an index fund, holding shares for a few years and then selling them to another trader wouldn’t even be a suboptimal financial decision.
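Here’s the back-of-the-envelope arithmetic behind that last claim, with invented numbers, and ignoring fees and counterparty risk:

```python
# Illustrative numbers only. Assume each share's collateral sits in an index
# fund, so a share's dollar value is (contract price) x (fund growth).
buy_price = 0.04          # pay 4 cents per YES share today
sell_price = 0.06         # sell at 6 cents a decade later, long before 2124
fund_growth = 1.07 ** 10  # collateral compounds at ~7%/yr for 10 years

dollars_out_per_dollar_in = (sell_price * fund_growth) / buy_price
annualized = dollars_out_per_dollar_in ** (1 / 10) - 1
print(f"Annualized return: {annualized:.1%}")  # ~11.4%/yr on these assumptions
```

Under these assumptions, the trade beats simply holding the index fund whenever the contract’s price drifts upward, and merely matches it if the price stays flat.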
3. It’s unlikely that moral prediction markets would have any impact on government decision-making.
Under present political institutions, this is likely true. The same government that’s nearly driven PredictIt out of business might care only a little about what the markets would have to say, though possibly still enough to generate a return on scant investment.
However, this just seems like a reason to establish new political institutions that would amplify the influence of prediction markets and the interests of future generations. These could include an agency, commission, ombudsman, or international treaty tasked with representing future people; a longtermist civil society group that draws on prediction markets to decide where to focus its advocacy; or a full-on futarchy in which prediction markets are used to determine laws and policies directly.
As Robin Hanson suggests, we might vote on values and bet on beliefs.
But why not bet on values, too?
1. With due respect to moral anti-realists, it’s hard for me to imagine someone who would disagree with this except, like, a humanities grad student with a blog named “Critical Masses” or something.

2. Very disappointed to learn that a futarchy is not rule by Futanari anime girls.