Wednesday, December 17, 2014

Morality and responsibility

There is a tension that I feel in my own thinking about morality, and since that tension will inevitably be reflected on this blog, I want to bring it out into the open. 

DISCLAIMER: This post isn't really about "Star Trek"
On the one hand, I think that morality is extremely important, and I want to encourage people to think about it and talk about it much more, especially in the context of important public policy issues which affect many people's lives. Let me use a conveniently recent example: the CRomnibus appropriations bill. Ezra Klein helpfully provides this highly informative summary of what we need to know to "sound smart" about it. But notice what is missing from that summary. There's nothing in there at all about how the various provisions of this bill will impact the lives of real people for better or worse. I don't mean to single out Klein. I'm just using his piece as an example of the moral blind spot that exists in much of our political discourse. I think that blind spot skews public policy against the public interest, especially in economic policy.

On the other hand, I think the whole concept of "moral responsibility" is untenable. This is a direct consequence of my views concerning free will. Based on everything we know about the process of making decisions at the level of the brain, I think our decisions are determined not (or not only) by our moral character, but by a complex combination of circumstances governing the interactions of our brains with our environments at any given time. Psychology experiments have shown that people often have no idea why they do the things they do, and invent post hoc rationalizations on the assumption that they must have had a good reason. One experiment I recall reading about concluded that whether a person is holding a hot beverage or a cold beverage will influence how they interact with others. How can people be morally responsible for their choices when those choices are ultimately determined by complex physical interactions at the atomic level which we don't understand and cannot control?

I want to convince people to think more (and more clearly) about morality as a means of making better decisions. I want to use the persuasive force of moral language to convince people (politicians, for example) to do what they should do, yet I can't really blame anyone for doing what they shouldn't do. There's an obvious tension there, but is it actually a contradiction? Maybe. I don't think so, but maybe. 

Unfortunately, I've always found these to be rather difficult issues to talk about, mainly because so much of the language we have to use is really fuzzy. If Miles punches Julian, resulting in a broken nose, there is a very straightforward sense in which Miles is responsible for Julian's broken nose. In that sense, it is perfectly appropriate to "blame" Miles for breaking Julian's nose. Who broke Julian's nose? Miles did. Simple. But blaming Miles rests on the tacit assumption that he could have done otherwise. If we have reason to think that the punch was somehow unavoidable, and that Miles could not have prevented it, then it doesn't seem fair to blame him after all. When we look carefully at what really caused Miles to punch Julian, we'll discover that it isn't the sort of thing he could have prevented.  

Suppose Miles was drunk. That seems like an easy one. Alcohol impairs a person's judgment, rendering them less capable of making good decisions. Assuming Miles got drunk knowingly and voluntarily, we can consider him responsible for what he does when drunk (and that's basically how the law would treat it). But when we look carefully at what caused Miles to get drunk, we'll discover the same problem all over again: his decision to drink was itself the result of complex physical interactions taking place in his brain.

Blaming people for the things that they do is sometimes very useful, so it's a bit awkward if it seems unjustified. If Miles blames himself and feels guilty about it, this might prevent him from losing his temper next time. And if Miles is arrested and prosecuted for assault, this could cause other people to think twice before punching someone themselves. Fortunately, these purely practical considerations don't require us to blame Miles in the deep sense. It doesn't matter whether Miles is morally responsible for punching Julian. What matters is that he and others are more likely to make better decisions next time. That consequentialist analysis is all the justification we need for holding Miles legally responsible. If so, do we even need to hold him morally responsible? I don't think we do.

And what difference does it make anyway? Holding someone morally responsible can be a form of social exclusion. It sets that person apart from everyone else by putting them in a special category. It makes that person appear to be different from the rest of us, which provokes in us that comforting sense of superiority. It makes that person a bad person, and we need not (some would say should not) have compassion for bad people. Miles punched Julian because he's a bad guy, and I never would have done that because I'm not! But when we let go of moral responsibility and focus instead on the morally neutral issue of causal responsibility, it allows us to feel compassion for Miles without minimizing our concern for Julian, and without condoning or defending what Miles did in any way.

What's the downside of holding people morally responsible for their actions? I don't know where to start. We tolerate abominable conditions in our prisons because criminals deserve to suffer. We scrimp on social services because poor people don't deserve our help. We can't have affirmative action because that means favoring people of color over the white people who really deserve those jobs. We can't tackle income inequality because the 1% deserves its wealth. When we realize that no one deserves anything, and that everything ultimately comes down to luck in one way or another, we see that these views are indefensible. There are also personal, psychological, and social costs to negative emotions like guilt and anger, both of which depend upon some notion of moral responsibility.

I wanted to draw attention to this tension firstly because I think it's very interesting, and secondly because I think it's useful to try to identify the potential weaknesses in my own positions. By focusing on those weaknesses, I hope to either shore up my position or uncover a mistake. But mainly because I just like this stuff.

2 comments:

  1. Putting aside our disagreements over your views on free will (you're still wrong), I think you can still accept an idea of free will that includes the ability to make complex, macro-level decisions even if you believe that the micro-level decisions you identify above are basically determined automatically. Combine that with the aspirational nature of moral responsibility. I think you'll find that your views are not totally incompatible.

    - your old pal Ovid

  2. Not totally incompatible? I'll take it!
