The real roots of morality

Part I

The universal need to achieve social stability guarantees that some system of moral rules will be devised. – Jesse Prinz

In ethics, a large part of what counts as “best” is also “most useful,” and there is a decided preference here for a view of morality that yields insights for practical ethics: How should we try to get along in the emerging world society? That topic will be addressed more directly in Part II.

For now, what do we know about the roots of human morality? In evolutionary terms, what moral viewpoint, what ethical strategy, is likely to have the most adaptational value?

The starting point of this discussion is that there are no revealed commandments, and there are no objective moral truths. There is, rather, a complex of culturally-derived moral codes that developed as adaptational strategies from universal cognitive capacities.

Given these core assumptions, all the rest is open to debate. We have one human nature; that nature has been shaped through natural selection; all of our cultures, including our moral cultures, are expressions of our cognitive structures and their adaptations. Precisely how this works is the question at hand.

– . –

Any rational and social species will evolve some sort of pro-cohesion morality, though not the same rules, nor even the same categories or criteria, as those that might have evolved under different initial conditions or different random accidents. If this is true, then while moralities may and do differ, having some moral strategy for achieving and maintaining community is universal.

One key insight here is that it’s not only specific moral codes that may differ; even the kinds of emotional cognition triggered in the same circumstances may vary among cultures. Studies consistently show that, using Jonathan Haidt’s terms, Western cultures stress “harm” and “fairness” in moral situations, while more traditional cultures stress “community” and “purity.” (For reasons of space, throughout this article I will either refer to the research just this briefly, or omit it entirely. There are many links in the articles in this series, and there are more in this article.)

This insight doesn’t mean that cultural relativism is right in the extreme sense that “culture is everything.” Culture is everywhere, but as Roy F. Baumeister puts it, “culture is humankind’s evolutionary strategy.” Steven Pinker puts this core idea like this:

All this brings us to a theory of how the moral sense can be universal and variable at the same time. The five moral spheres are universal, a legacy of evolution. But how they are ranked in importance, and which is brought in to moralize which area of social life – sex, government, commerce, religion, diet and so on – depends on the culture.

And Jonathan Haidt puts it this way: “Virtues are socially constructed and socially learned, but these processes are highly prepared and constrained by the evolved mind.”

The last of our fundamental assumptions is that morality is entirely a cognitive process; but cognition involves both affective and rational activity. Moreover, while rational thought is, by definition, conscious, the vast majority of our affective life is unconscious. This has been expressed in a number of ways by different writers, but the central claim – that morality is based on evolved, instinctive emotions – is the assumed centrepiece of the moral positions I take in this article.

Jonathan Haidt:

We have affectively valenced intuitive reactions to almost everything, particularly to morally relevant stimuli…. I think the crucial contrast is between two kinds of cognition: intuitions (which are fast and usually affectively laden) and reasoning (which is slow, cool, and less motivating).

– . –

It has become quite clear that in moral situations, when reason operates at all, it does so in response to innate emotional reactions to stimuli. In other words, morality is primarily an affective response. When reason enters in, it does so as an explainer, a justifier. The primacy of emotion in moral activity is an extremely important insight. Hume was generally right, although he didn’t have the means to show how right he was:

Morality is nothing in the abstract nature of things, but is entirely relative to the sentiment or mental taste of each particular being, in the same manner as the distinctions of sweet and bitter, hot and cold arise from the particular feeling of each sense or organ. Moral perceptions, therefore, ought not to be classed with the operations of the understanding, but with the tastes or sentiments.

Despite the different specific focuses of contemporary moral psychologists, there is broad consensus that emotion, affect, intuition – whatever you care to call it – underlies our moral lives. Study after study, from traditional psychological testing to neurological investigations of both typical and atypical brains, shows that we respond to situations first and foremost emotionally. We respond most strongly to those situations that we later label “moral.” In most cases, our emotional reaction is so rapid, so automatic, so unconscious, that we have no time to think before we respond. Emotions like fear, anger, and – most important, it seems – disgust have been termed the “moral emotions,” and when they are present, we respond more strongly, with greater motivation to act, than we do in other circumstances.

If we see someone buying a coffee, we have little if any affective reaction. If we see that person spitting on the sidewalk, we may have a somewhat stronger reaction (depending on our culture). If we see someone purposely strike a defenseless old lady, we react very strongly. In the first case, we are not motivated to act. In the second, we may be motivated to act, more or less, depending on the cultural context. In the last case, our motivation to act, to intervene, will likely be very strong.

We have these feelings, these affective reactions, immediately. If we reflect on them at all, we do so afterward – not as a motivation to act, but as an explanation of our initial reaction. Often, we are not conscious of our reaction.

This lack of awareness may help explain why we often feel that morality is “out there” somewhere, a property of the world itself. If we aren’t aware that we’ve already reacted, then when we begin to reason about our moral response we may deny or underplay the central role of our unconscious affects.

This lack of awareness that our own brains are creating our moral sense — “sense” in the same way as our five physical senses — may lead us to look elsewhere for the source of morality: to reason, to culture, to God.

As Nicholas Humphrey puts it in a somewhat different context in Soul Dust:

[There is] a difference between perception (being able to see an object) and sensation (knowing you can see, having the sensation of seeing) … you can have perception without sensation.

The idea that affective intuition is the evolutionary basis of morality is strongly held by contemporary cognitive scientists. One of them, Joshua D. Greene, has written a paper where “he uses neuroscientific evidence to reinterpret Kantian deontological philosophy as a sophisticated posthoc justification of our gut feelings about rights and respect for other individuals.”

We’ll see a lot more of Greene in the next part.

– . –

Part II

Morality is not just any old topic in psychology but close to our conception of the meaning of life. … So dissecting moral intuitions is no small matter. – Steven Pinker

Part I promised an emphasis on practical ethics, that is, the application of our metaethical understanding to the very human issues of thriving and survival. Part II looks at morality from that perspective: Can we alter our inborn “moral emotions”? Can we “rein them in” when doing so would be more adaptive?

If there is no one moral code, if there may not even be one way of applying morality across cultures, how are we to cope with the emerging modern world? After all, every day our societies become less and less like the small bands and clan groups that were the norm when our current moral adaptations were shaped. Kin selection and mutual reciprocity still operate, of course, but they are not by themselves adequate to deal with our ever larger societies. John Teehan addressed this point at length in his book In the Name of God, reviewed here separately.

Worse, thanks to technologies of various kinds, our circle of societal contact increasingly consists of mixtures of very different and often hostile cultures. How is a world society – not at all the same thing as a world state – to become adaptational, to provide selective advantage, under these conditions?

As society globalizes, some aspects of divergent moral codes become not just ineffective but maladaptive — ever-expanding group size strains the moral unity necessary to community.

Full-blown relativism is inadequate to this task, as Blackburn, among others, has ably demonstrated. Various moral pessimisms, from dialectical materialism to existential withdrawal, don’t work, either: the first allows no individual freedom, while the second has no social component. Both are inadequate to any society that wishes to balance “I” and “we” to mutual benefit.

Some dislike any mention of innate cognitive structures as an attempt to “sneak in” objective morality, but that’s not what I’m after here. I’m not advocating a single, universal moral code, either as an expression of political goals or as a function of Darwinian reductionism.

What I’m after is an expanded awareness that our very survival as a species – our continued selection as adaptationally adequate – depends on understanding where our moral motivations come from, and what they’re for. They’re not for the glory of God, or for the glorification of the Homeland, or for the vindication of my rules over your rules. Our moral motivations are an evolved, natural part of our biology. If they’re “for” anything, it’s community. We are social animals. We have to start with that.

Are we stuck with our instinctive moral reactions? To some extent, yes. But remember the insight offered in Part I, that morality and reason are two different processes of one set of cognitive structures. We can apply reason to emotion, and with enough understanding — of ourselves and of others — we can use reason to moderate the impulses of our moral emotions.

It seems that instinctive, unconscious moral emotions affect us differently in different circumstances and with different ways of conceptualizing the moral issues involved. This suggests that we may have the capability, to some degree at least, to alter or refine — to reinterpret — our immediate, automatic reactions.

This is a very encouraging possibility. If our affective reactions can be partly moderated situationally by context, which we apprehend rationally, then our moral sense is not locked entirely in stone.

While Jonathan Haidt uses Hume’s analogy of our taste sensors to describe our innate moral categories (harm, fairness, etc.), Joshua Greene instead suggests that morality is like a modern digital camera, with both automatic and manual controls.

Our affective reactions to moral situations are the automatic settings. Just as we can set our camera for built-in configurations like “landscape,” “portrait,” “action,” etc., our moral emotions kick in when faced with the equivalent contexts. We don’t think about moral F-stops, or exposure time, or focus. Our emotions do all that for us. In most situations, the result is transparent — and satisfactory.

But what if, when using our fancy camera, we want to bring a building in the deep background into focus, or to over-expose a sunset? We have to switch to manual mode and manipulate the settings ourselves. We may not get every setting just right, but we will control the way the camera “sees” our subject. Greene suggests that something similar occurs in moral situations. Most of the time, our automatic, innate emotional reaction is appropriate, and we succeed (or avoid failure, which amounts to the same thing).

In some cases, we can be convinced by rational reflection to alter or reverse one of the elements of our moral code. We don’t ignore our moral affect — we can’t do that. Instead, we redirect the moral emotion we’re feeling to a different “target.” For example, if an initial, primitive sense of “purity” violation over gay marriage can be turned into a sense of “fairness” violation over the denial of marriage rights to gays, our moral position will change.

Greene goes so far as to say, as alluded to in Part I, that the Kantian idea of moral “rights” is basically a way of articulating the post-hoc reasoning we apply to our automatic emotional reactions to moral stimuli:

I think what we do as lay philosophers, making our cases out in the wider world … is we use our manual mode, we use our reasoning, to rationalize and justify our automatic settings. And I think that, actually, this is the fundamental purpose of the concept when we talk about rights … “a fetus has a right to life,” “a woman has a right to choose,” Iran says they’re not going to give up any of their “nuclear rights,” Israel says, “We have a right to defend ourselves” … I think that rights are actually just a cognitive, manual mode front for our automatic settings. This is obviously a controversial claim. …I think the Kantian tradition [which gives primacy to rights], actually is manual mode.

Bad things can happen when we’re faced with situations for which our “automatic” settings were not designed, circumstances that adaptive selection could not have anticipated, because they weren’t around when our brains were developing.

Greene argues that we face just that kind of problem now:

As a result of … technology and cultural interchange, we’ve got problems that our automatic settings, I think, are very unlikely to be able to handle. And I think this is why we need manual mode.

Greene recalls Peter Singer’s “Armani suit” study, in which participants were asked to judge moral culpability in two situations. In the first, a man wearing an expensive Armani suit refused to wade into a shallow pond to save a drowning child. He was judged a “moral monster.” In the second, a man bought an expensive Armani suit rather than donate the money to a charity that would help starving children on the other side of the world. He was judged rather morally insensitive, but that evaluation carried much less intensity than the first. What was at work, of course, was the distinction between action and inaction, and the diminishing moral focus that comes with increasing distance. The more remote the moral imperative, and the more passive our personal stake in the situation, the less we feel it, and the less harshly we judge others who don’t feel it strongly.

Greene puts it this way:

Our heartstrings don’t reach all the way to Africa. And so, it just may be a shortcoming of our cognitive design that we feel the pull of people who are in danger right in front of us, at least more than we otherwise would, but not people on the other side of the world.

Our moral emotions were hard-wired when there was no “other side of the world,” when we lived in small groups of family and direct acquaintances. One interesting corollary may be that mass media create a kind of “virtual clan,” such that we respond strongly to earthquakes in Japan or genocide in Darfur not because they are “bad” but because they have been brought close to us, making their victims – at least for a short time – part of our local “in group.” This may also help explain why our attention span for such disasters is so short. Once the next crisis has taken over the media, the former one is no longer local and is quickly forgotten.

It seems that our core problem is that our evolved moral hardware was not intended to handle the challenges of the new world society — the automatic focus controls don’t work with a landscape as broad as the one in which we now live.

Given the importance of the “in-group”/”out-group” distinction, one way of lessening the “culture clash” of the new world society would be to work to enlarge our perception of what constitutes our “in-group.” We can use our rational understanding of the universality of human morality — its forms and motivations, not its code contents — in conjunction with modern media technologies to bring the “outsiders” to the inside. This happens already, of course, in a variety of ways. Cultures spread and values move closer together. This is one reason that we face the challenges we do, but it’s also the best means available for addressing those challenges.

Steven Pinker incorporates several of these concepts in his ideas about how we should approach our modern moral dilemma. Pinker says that, first, by interacting with others we learn to play “non-zero-sum games” (which are practiced in some circumstances by some other primates, as well). Second, a “Theory of Mind” consciousness of others as agents similar to ourselves (what Pinker calls “the interchangeability of perspectives”) leads to a universal version of the Golden Rule, which appears in one form or another in moral codes throughout history and around the world.

An application of the Golden Rule can power what Singer called “the expanding circle,” in which we progress morally by including in our list of those others who are “like us” an ever-wider variety of people (and sometimes, other higher animals). From self, to family, to clan — and wider and wider, until our moral sense encompasses every one of us, everywhere.

– . –

So where does that leave us, in practical terms?

In my view, if we are to survive in the emerging world society, if we are to make the best, adaptationally effective choices in our new moral environment, we must:

(1) understand the “automatic” settings of our moral emotions;

(2) accept that our common humanity lies not in bludgeoning each other until one specific, culture-bound expression of our status as social animals wins out over the others, but in understanding that we are defined as a species by our unique cognition, which encompasses both affect and reason;

(3) use our “manual mode,” our rational brains, to moderate, modify, and where possible overcome our increasingly maladaptive innate moral structures.

John Teehan puts the approach this way:

An evolutionary account of morality leads to a view of morality that is always open to investigation and revision, not because morality is arbitrary but because the social environment in which those values function is dynamic. Morality, to fulfill its function of promoting social cohesion and individual striving, must be responsive to the particularities of its social environment. …

An evolutionary account of morality, while it does deny us the comfort of moral certitude, actually allows us an insight into morality that may open the door to true moral progress.

We’re all in this together, for good or ill. There’s no going back to the “good old days” of small, isolated clan cultures — unless we use our technology either to blow ourselves up or to lay waste to our planet, hardly solutions most would welcome.

Will we “make it,” as a species? Will we survive long enough to achieve a new adaptation, in which rational recognition of sameness constrains emotional instincts of difference?

I have no idea. But I do know that it’s where we have to go, if we’re going to go anywhere.

