Layard seems to think that utilitarianism can be justified by neuroscience. He claims that it shows that there is such a thing as objective utility, which can be measured by studying electrical activity in the brain. This seems to provide a scientific basis for utilitarianism.
Will replies that this ain’t necessarily so. He cites this pdf as evidence that “a virtue-theoretic approach best captures what's going on in the brain. Moral judgment and motivation is not in all (most?) cases driven by judgments of utility.”
This, in turn, says Will, has a fascinating implication – there are “possible conflicts between social policy that is designed to maximize expected social utility and the affective/motivational systems that actually drive behavior.”
I’m sympathetic to Will’s conclusion. But I think he’s overlooking some steps in the debate, namely:
1. Does neuroscience matter? To base any morality – utilitarian or not – upon neuroscience requires us to make the Humean leap from statements about “is” to statements about “ought”. Can we do this? If so, how? We can’t just ignore the question.
What’s more, the object of utilitarianism is not necessarily happiness at an instant of time. It would be absurdly difficult – logically impossible? – to maximize this. A more feasible objective for social policy is happiness over a lifetime. Here’s John Stuart Mill in Utilitarianism:
The ultimate end…is an existence (my emphasis) exempt as far as possible from pain, and as rich as possible in enjoyments.
2. Why do our intuitions matter? Will is obviously right that utilitarianism can conflict with our intuitions – a point recognized by utilitarians ever since Mill. But so what? To utilitarians such as R. M. Hare, this merely shows the inadequacy of intuitions. He wrote (pdf):
The intuitive level of moral thinking certainly exists and is (humanly speaking) an essential part of the whole structure; but however well equipped we are with these relatively simple, prima facie, intuitive principles or dispositions, we are bound to find ourselves in situations in which they conflict and in which, therefore, some other, non-intuitive, kind of thinking is called for.
You don’t have to be a utilitarian to sympathize here. In many activities – science, music, sport – we make progress only by repressing or retraining our instincts. Why should ethical thinking be different?
3. If the proper ethical basis of social policy – whatever it should be – conflicts with commonsense morality, what’s to be done? Can we avoid using what Henry Sidgwick called “esoteric morality”? Should we?
Obviously, there are more questions than answers here. For me, the bottom line is that there is an enormous gulf between the rigour of moral thinking required to justify public morality and social policy on the one hand, and the actual thinking that occurs on the other.