Utilitarianism as “moral Esperanto”?

The Atlantic’s Robert Wright has a thought-provoking review of Joshua Greene’s Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Greene used scans of people’s brains to examine their responses to the famous (famous by the standards of professional philosophy, anyway) “trolley problem” thought-experiment. In the experiment, people are asked whether they would divert a runaway trolley about to hit five people onto a track where it would hit just one person. Most people think this would be the right thing to do. But when the conditions of the experiment are changed, people tend to respond differently. For instance, many people say they wouldn’t be willing to push someone onto the track to prevent the trolley from hitting the other five, even though the utilitarian moral calculus (one life for five) is the same.

Greene found that the MRI scans showed that people who said it would be OK to push the one man onto the track were using the portions of their brains associated with logical thought, while those who said it wouldn’t be were responding more emotionally. He concludes that emotional bias, inherited from our evolutionary past, clouds our judgment. Because our ancestors lived in small hunter-gatherer groups, we’re good at group solidarity but bad at inter-group harmony. Pushing someone to their death is the kind of thing you could be blamed and swiftly punished for in a small group, so the idea of doing it lights up some deep-seated moral aversions. Greene therefore argues that humanity needs a global moral philosophy, one that filters out these atavistic responses and can “resolve disagreements among competing moral tribes.” And the best candidate for this, he says, is a form of utilitarianism.

Here’s Wright summarizing Greene:

One question you confront if you’re arguing for a single planetary moral philosophy: Which moral philosophy should we use? Greene humbly nominates his own. Actually, that’s a cheap shot. It’s true that Greene is a utilitarian—believing (to oversimplify a bit) that what’s moral is what maximizes overall human happiness. And it’s true that utilitarianism is his candidate for the global metamorality. But he didn’t make the choice impulsively, and there’s a pretty good case for it.

For starters, there are those trolley-problem brain scans. Recall that the people who opted for the utilitarian solution were less under the sway of the emotional parts of their brain than the people who resisted it. And isn’t emotion something we generally try to avoid when conflicting groups are hammering out an understanding they can live with?

The reason isn’t just that emotions can flare out of control. If groups are going to talk out their differences, they have to be able to, well, talk about them. And if the foundation of a moral intuition is just a feeling, there’s not much to talk about. This point was driven home by the psychologist Jonathan Haidt in an influential 2001 paper called “The Emotional Dog and Its Rational Tail” (which approvingly cited Greene’s then-new trolley-problem research). In arguing that our moral beliefs are grounded in feeling more than reason, Haidt documented “moral dumbfounding”—the difficulty people may have in explaining why exactly they believe that, say, homosexuality is wrong.

If everyone were a utilitarian, dumbfoundedness wouldn’t be a problem. No one would say things like “I don’t know, two guys having sex just seems … icky!” Rather, the different tribes would argue about which moral arrangements would create the most happiness. Sure, the arguments would get complicated, but at least they would rest ultimately on a single value everyone agrees is valuable: happiness.

Whenever I see someone arguing that “science” can tell us which moral framework to adopt, it sets my Spidey-sense tingling. Simply saying we should all be utilitarians dodges a bunch of important and contested philosophical questions, like

–What is “happiness” (or “utility”)? Is it just the net balance of pleasure over pain (as the founder of utilitarianism, Jeremy Bentham, thought)? Or does it include “higher,” more complex elements (as Bentham’s protégé and critic John Stuart Mill thought)?

–Assuming we can define happiness, can we quantify it in a way that lets us determine which course of action in a given case will yield the most of it? (The toy sketch after this list shows how many assumptions even a bare-bones version of such a calculus requires.)

–Even if we can define and quantify happiness/utility, might there not be other things that are good and whose promotion should enter into our moral calculus? What about beauty? Truth? Should those always be subordinated to happiness when they conflict?

–Utilitarianism is a form of consequentialism. But can we know what the likely consequences of our actions are ahead of time? Can we even specify what counts as a consequence of a particular action with any precision?
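
To see how much philosophical freight even the simplest version of such a calculus carries, here is a minimal sketch in Python of a naive Benthamite “sum the utilities, pick the maximum” procedure applied to the two trolley cases. Nothing in it comes from Greene’s book; every numeric score is an invented assumption, and that is the point: the contested philosophical work happens before any arithmetic runs.

```python
# A toy Benthamite calculus for the two trolley cases. All scores are
# invented for illustration: -100 utility per death, 0 for an unharmed
# person. Choosing those numbers is itself a moral judgment.

def total_utility(outcome):
    """Naive aggregation: sum everyone's utility; nothing else counts."""
    return sum(outcome.values())

# Each case maps an available action to the utilities of those affected.
switch_case = {
    "divert":     {"one_on_side_track": -100, "five_on_main_track": 0},
    "do_nothing": {"one_on_side_track": 0,    "five_on_main_track": -500},
}
footbridge_case = {
    "push":       {"pushed_man": -100, "five_on_track": 0},
    "do_nothing": {"pushed_man": 0,    "five_on_track": -500},
}

for name, case in [("switch", switch_case), ("footbridge", footbridge_case)]:
    best = max(case, key=lambda action: total_utility(case[action]))
    print(f"{name}: {best}")  # picks "divert" and "push" respectively
```

Note that the bare sums treat the switch case and the footbridge case identically, which is exactly the utilitarian verdict Greene defends. To recover the more common intuition that pushing is worse, you would have to add a penalty term for directly inflicted harm, and whether such a term belongs in the function at all is a philosophical question, not one a brain scan can settle.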

Wright says that Greene studied philosophy, so presumably he knows this. And it’s not that utilitarians don’t have responses to these questions. But they don’t all agree among themselves on what the answers are. And these are properly philosophical questions, not questions that the natural sciences (including neuroscience) can answer in any straightforward way.

To Wright’s credit, he is skeptical of Greene’s advocacy of utilitarianism as a kind of “moral Esperanto.” And he notes that some of the most intractable conflicts in our world aren’t necessarily conflicts over ultimate values, but over facts. For instance, most Americans are, at best, dimly aware of our history of meddling in the internal politics of Iran, so they attribute Iranian mistrust of the U.S. to irrational animus or religious fanaticism. The problem is that we are all afflicted with a self-bias that inclines us to filter out facts that are inconvenient to our cause and that makes it difficult for us to view a situation from the perspective of our opponent. Christians would call this a manifestation of Original Sin.

UPDATE: At Siris, Brandon offers some thoughts on the Atlantic article and utilitarianism in general.

15 thoughts on “Utilitarianism as “moral Esperanto”?”

  1. Worse: as demonstrated in your quotes here, preference for this sort of reductive utilitarianism often comes along with other vices, like the denigration of emotional values as irrational and beneath consideration. But the situations are not in fact identical! Shifting the moment of moral action makes a very big difference!

    If the question is between redirecting a vehicle that is mostly out of control so that it does more or less harm, and inflicting direct and voluntary harm in order to possibly avert a larger harm that is outside of your control, most ethicists will tell you that there’s more to the problem than “1 < 5".

    Five dead in a tragedy that I could not morally have prevented is bad, no question. One dead at my hands is worse, though. I have in that case done wrong. I have arrogated to myself responsibility for an event that was not within my control, and killed (or at least severely maimed) a human being. Not one of the five will thank me for that.

    In the other situation, where either set of outcomes is a result of my choice to act (because choosing not to divert the car is an act), I get to indulge in "1 < 5" logic because I get to choose to reduce or not reduce the resulting harm from a situation within my control. It is arguably immoral to choose to let the car I control harm more people than necessary.

    • I should clarify that I haven’t read Greene’s work first hand, so maybe he deals with these complexities. But I agree with you that the moral features of the two situations have important differences that don’t boil down to emotional response. (And as you rightly note, there’s a problem with trying to excise emotion from moral decision-making in the first place.)

  2. Interesting; I have a post in the works about utilitarianism and “self-bias,” too, based on a powerful article I read recently.

    I’m always surprised, though, when I read “feelings” and “reason” made opposites; surely “feelings” are part of how we reason? Are people just recognizing that – or do they believe there’s such a thing as “pure rationality”? I don’t see how that can be, myself. And isn’t it sort of weird to be arguing about “happiness” by attempting to divest the discussion of “feelings”?

  3. Some of the most important choices we make are based on empathy and an inherent (even animals seem to have this … http://www.cnn.com/2013/01/19/health/chimpanzee-fairness-morality/) sense of fairness. I don’t think important issues can just be evaluated simply by the numbers … the most good for the most people … sometimes the counter-intuitive minority report has value.

    I had a past post about the trolley experiment and Philippa Foot … http://povcrystal.blogspot.com/2008/12/can-ends-justify-means.html …. but what cracked me up was when some guys on Stargate Atlantis tried to explain the experiment to aliens …

    Rodney: Let me ask you a question. Say there’s a runaway train. It’s hurtling out of control towards ten people standing in the middle of the tracks. The only way to save those people is to flip a switch — send the train down another set of tracks. The only problem is there is a baby in the middle of those tracks.
    Teyla: Why would anyone leave a baby in harm’s way like that?
    Rodney: I don’t know. That’s not the point. Look, it’s an ethical dilemma. Look, Katie Brown brought it up over dinner the other night. The question is: is it appropriate to divert the train and kill the one baby to save the ten people?
    Ronon: Wouldn’t the people just see the train coming and move?
    Rodney: No. No, they wouldn’t see it.
    Ronon: Why not?
    Rodney: Well … (he sighs) … Look, I don’t know — say they’re blind.
    Teyla: All of them?
    Rodney: Yes, all of them.
    Ronon: Then why don’t you just call out and tell them to move out of the way?
    Rodney: Well, because they can’t hear you.
    John: What, they’re deaf too? How fast is the train going?
    Rodney: Look, the speed doesn’t matter!
    John: Well, sure it does. If it’s going slow enough, you could outrun it and shove everyone to the side.
    Ronon: Or better yet, go get the baby.
    Rodney: For God’s sake!

  4. The emotional underpinning of decision-making was researched by Antonio Damasio in his book Descartes’ Error: Emotion, Reason, and the Human Brain. Damasio said that the rational ideal of cold, emotionless decision-making was something he got to see in his brain-lesioned patients. The lack of emotion did not make them more effective people. Rather, it left them unable to frame decisions in the first place. They did not even know where to begin in making their decisions. A question as to when to make the next appointment led one patient into a cost-benefit analysis taking into account possible meteorological conditions, until the doctor cut it off with a suggestion like, “How about Tuesday?”

    While certain strong emotions may cloud judgment, to leap from that fact to the idea that we should rid decisions of all emotion is ludicrous. Plus, it ignores the fact that the utilitarian’s project of going out to persuade others to adopt utilitarianism for the good of mankind is itself emotionally driven. Those who have lost the ability to use such emotion don’t have much drive to do anything.

  5. 1) As noted above, it is perverse to try to “eliminate emotions” from moral decision-making.
    2) Consequentialism has a circularity problem: if actions are to be judged by their consequences, we are left asking how we judge the consequences. Either we have just kicked the moral can down the road, or we have entered an infinite regress (if we say we judge the consequences by what follows from them). Hedonism gives consequentialism bite by saying how we should evaluate the consequences, but at the cost of yielding preposterous results like the utility monster.

  6. Pingback: Links 14 – 8/11/13 | Alastair's Adversaria

  7. Noah Millman had an interesting piece on this describing what’s wrong with the trolley experiment, which in some ways echoes the Stargate excerpt above: http://www.theamericanconservative.com/millman/a-fat-man-is-hard-to-push/ I think he’s right that, without the experimenter there planting the suggestion that pushing the fat man in front of the trolley is the only way to stop the train, hardly anyone would even think of doing it. How would you know it would even work? Who would come back and reproach you for not trying it? I wonder if the ‘logic’ part of our brains, beyond just being the bit that’s theoretically without emotions, is really the part that plays mental games. It’s like a puzzle: here are the rules, how do you solve it under those rules? It’s useful, but it’s not necessarily apprehending reality more accurately than our feelings are.
