Sunday, January 05, 2014

Local and global utilitarianism

Utilitarianism is often held to be the doctrine that when faced with any decision, an individual should choose the alternative that ceteris paribus maximizes immediate social utility. There are many problems with this formulation, but I want to look at one specific problem.

This would be easier with a diagram, but I'm on my pad without a good drawing program, so I'll just have to describe it. Consider a very simple, 2-dimensional model of utility, with states of affairs on the x-axis, and total utility on the y-axis. The utility function is continuous and has many peaks and valleys. We will assume that an individual knows the utility function, and can choose to "move left," "move right," or "stay put," i.e. to choose among nearby states of affairs with different levels of total utility.

The current state of affairs is near a local maximum (peak), but there is another maximum, known to the individual, that is substantially higher than the local maximum, with an intervening trough of lower utility. The question is: is an individual justified in moving toward that higher peak even though the immediate effect is to lower total immediate utility?

I will simply define two forms of utilitarianism.

Local Utilitarianism says absolutely, unconditionally not. If random, unchosen social perturbations happen to move us across the trough so that maximizing local utility moves us to the higher maximum, well and good, but it is immoral to intentionally lower total social utility even though we know we are moving towards a higher maximum.
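Local Utilitarianism, so defined, is just greedy hill-climbing on the utility landscape. Here is a minimal sketch of the two-peak model described above; the particular utility function, starting point, and step size are my own illustrative assumptions, not anything from the post:

```python
import math

def utility(x):
    """Toy utility landscape: a local peak near x = 1 and a higher
    peak near x = 4, separated by a trough of lower utility."""
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def local_step(x, step=0.1):
    """One act of Local Utilitarianism: among 'move left', 'stay put',
    and 'move right', pick whichever maximizes immediate utility."""
    return max([x - step, x, x + step], key=utility)

# Start at the local peak and repeatedly maximize immediate utility.
x = 1.0
for _ in range(100):
    x = local_step(x)

# The agent never crosses the trough: any first step toward the higher
# peak near x = 4 would lower immediate utility, so it stays put.
print(x, utility(x))
```

However many steps we allow, the agent remains trapped at the lower peak, which is exactly the behavior Local Utilitarianism mandates.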

In contrast, Global Utilitarianism says that it is possible to justify moving toward the higher peak even though we are reducing total immediate social utility. It is not necessarily justifiable, however, because there are other constraints. For example, the intervening trough might be so deep, with such negative utility, as to make movement toward the new peak unjustifiable. One way to capture this constraint is that Global Utilitarianism seeks to maximize not an immediate utility function, but an integral of that function over time: not immediate total utility, but total utility over time. Hence we must look at utility in three dimensions, with states of affairs on the x-axis, time on the y-axis, and immediate utility on the z-axis; we are maximizing the integral of utility along a trajectory through states of affairs over time. There are other constraints, but this view of Global Utilitarianism directly answers some notable objections to utilitarianism.
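The integral view can be sketched by scoring whole trajectories of states over time rather than single next states. Again, the utility function and the two candidate trajectories are my own toy assumptions, chosen only to make the trade-off concrete:

```python
import math

def utility(x):
    """Same toy landscape: local peak near x = 1, higher peak near x = 4."""
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def total_utility(trajectory):
    """Discrete stand-in for the integral of utility over time: sum the
    immediate utility at each time step along a path of states."""
    return sum(utility(x) for x in trajectory)

# Trajectory A: stay at the local peak for ten time steps.
stay = [1.0] * 10

# Trajectory B: walk through the trough, then sit on the higher peak.
cross = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.0, 4.0, 4.0]

# The early steps of `cross` lower immediate utility (the trough), but
# the integral over the whole trajectory comes out higher.
print(total_utility(stay), total_utility(cross))
```

Note that the same machinery also captures the depth-of-trough constraint: make the trough deep enough, or the horizon short enough, and `stay` beats `cross`.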

For example, I'm about 20% of the way through Robert Nozick's Anarchy, State, and Utopia. He brings up as an argument against utilitarianism the case where one could choose to punish an innocent individual to satisfy the rage of a rampaging mob. Clearly, punishing one innocent individual will make happier both the rampaging mob and their numerous, equally innocent potential victims. Local Utilitarianism would definitively mandate punishing the innocent individual. However, under Global Utilitarianism, the mandate is not so clear. Global Utilitarianism demands that we evaluate, as best we can, the effect on total utility over time. I'm not considering risk and uncertainty here, so I won't add a probability function; I'll just assume that the innocence of the scapegoat will eventually be discovered. (If there were no possibility that the violation would be discovered, then in what sense could one say that the "scapegoat" really was innocent?) Thus I have to consider the effect of my action over time, given that innocent people would no longer have confidence that their innocence would protect them against punishment. I might well decide that over time, diminishing this social confidence would have an overall negative effect on the utility integral, and choose to protect the innocent person.

Nozick's example "works" precisely because we have strong intuitions, validated by millennia of experience, that protecting the innocent from state retribution has enormous long-term utility. If we weaken the rights violation, we get far more intuitively ambiguous situations. For example, during the fires after the 1906 San Francisco earthquake, firefighters intentionally dynamited many buildings, purposefully violating the property rights of the owners, to prevent the spread of the fire. This action is precisely equivalent to the strong Trolley Problem (where one chooses between pushing one person onto the tracks to stop a train that would, if not stopped, kill five people). The only difference is that our intuitions tell us that violations of property rights have a much smaller long-term impact on the utility integral.

This construction of Global Utilitarianism is, of course, far less rigorous than Local Utilitarianism. We add two extra abstractions: the prediction of utility over time and some conceptual integral of utility. But who says an ethical system has to be perfectly rigorous? Real-world ethics certainly doesn't seem to be; I think our ethical system should capture these real-world difficulties.

1 comment:

  1. I enjoyed this post. But I would say that time-integration is just the beginning of developing a global utilitarianism. The further we look into the future, the more uncertain things become, and a particular kind of uncertainty becomes important: the decisions of other people.

    We can consider a model of a social movement. A social movement may cause social conflict, leading away from a local maximum. But if it convinces enough people to join it, it may reach a different maximum.

