“Who am I?”
This is the most profound existential question most of us ever ask. But it may be the wrong question, or even no question at all. After all, if “I am the one asking the question” is the most reasonable answer, and it seems to me that it is, have I given an answer at all? It may be that the only appropriate and answerable question in a purely material world is “What is ‘I’?”
Reports of the death of duality in the wider world may be exaggerated, but in scientific and other rational circles, that route to an answer to our question is quite sincerely dead. So I have no interest here in blazing yet another trail through that particular patch of metaphysical underbrush. Let’s just assume that the traditional conception of a supernatural “soul” has no relevance to a rational enquiry into the nature of the self.
But there are non-believers, too, who resist the idea that the “self” is composed of only those material components and processes that can be empirically described, either now or at some more enlightened time in the future.
These faithless dualists worry that “free will” or the human moral sense will somehow cease to exist if we understand the mechanics of identity — that the “magic” will be gone. So the faithless dualist prefers to keep Toto away from the curtain.
Are the key elements of human identity really that fragile? Or are both kinds of dualists — believers in the “soul” and those unbelievers who argue for the enduring mystery of the “self” — like the architects in Swift’s Grand Academy of Lagado, who laboured to build houses from the roof down?
This worry about the loss of stature, for want of a better word (although human “dignity” is a popular alternative), is one key difference that I have with the atheist dualists.
Various computational, representational, and mechanical models adequately — often comprehensively — explain the mechanisms that produce human thought, emotions, and actions.
And yet none of the discoveries of the new research really denigrates human achievement or personal worth, for one simple reason. Even were all of our ideas, feelings, and behaviours explained by purely physical processes, those explanations would change nothing about our experience of personal identity.
A clock that tells time thanks to gears and wheels does, in fact, tell time — it remains in all meaningful ways a clock, even though we understand how it works. A moth that flies in circles around a flame because it has an instinctive navigational system based on the light of the moon is no less a moth because we understand why it behaves as it does. Knowing how and why is no more damaging to identity than is knowing what.
A person who smiles with delighted recognition on hearing the opening notes of Tchaikovsky’s Piano Concerto No. 1 feels exactly the same pleasure whether or not we can understand and explain — quite a different thing than explaining away — all of the physical mechanisms of mind and body that lead to the recognition, the smile, and the pleasure.
You need a brain to love Tchaikovsky. But you don’t need an immaterial “soul,” or a material-but-with-extras-we-can’t-explain self.
In this essay, I want to draw together the key ideas of a number of entirely physical, entirely non-transcendental explanations of the source and nature of our sense of self.
I don’t claim that any one of them or any group of them is definitive. Some of these theories are frankly speculative, and in time some of them certainly will be proved incomplete or even completely wrong.
What they share that justifies their inclusion here is a thoroughgoing absence of the spiritual, the supernatural, or the disembodied.
– * –
There is a growing consensus among brain researchers, echoed by investigators in many related fields, that specific, physical mechanisms in our brains shape our ideas, beliefs, and emotions, which in turn motivate our behaviour.
The books on the subject of the conscious self that I’ve recently read and the many research articles I’ve found — too many to list here — have different, specific interests. But they all present the central idea that there is a universal, and measurable, mechanism of consciousness.
The central theme of these publications is that my “self” is a co-ordinated perception of body states, sensory inputs from the environment, feelings and emotions.
We begin with a working brain, a processor of such complexity and subtlety that to call it merely a “processor” produces a pathetically inadequate image, unworthy of an amalgamation of circuits that have more connections than we can count. And these connections aren’t welded in place; they’re constantly altering themselves, shutting down some pathways, opening others, constructing and demolishing clumps and groups of neurons in a dynamic process so intricate that it beggars our ability to describe it.
– * –
I believe that Antonio Damasio’s Self Comes to Mind: Constructing the Conscious Brain is the best of the recent efforts to explain cognition in terms that respect the science yet are largely accessible to the non-expert, a group that certainly includes this writer.
Since the other books I’ve listed are, to one extent or another, alternative versions of Damasio’s ideas, I want to focus on Damasio, then present a few of the other writers’ ideas in areas where they go beyond the scope of Damasio’s presentation.
But what is mind made of? Does mind come from the air or from the body? Smart people say it comes from the brain, that it is in the brain, but that is not a satisfactory reply. How does the brain do mind?
Indeed, the last of Damasio’s questions above contains the cornerstone both of Damasio’s argument and of the way around the supernatural vs. reductionist pseudo-dilemma.
Mind isn’t the brain, but the brain “does” the mind. And, similarly, the self isn’t the mind; but the mind “does” self.
Damasio writes that “of the ideas advanced in the book, none is more central than the notion that the body is a foundation of the conscious mind.”
Damasio asserts that “body and brain bond,” that the body uses the brain to achieve the homeostasis necessary to sustained life. And it is that process of homeostatic regulation that has promoted and guided the evolution of the human brain.
All the astonishing feats of brains that we so revere, from the marvels of creativity to the noble heights of spirituality, appear to have come by way of that determined dedication to managing life within the bodies they inhabit.
One key mechanism for regulating body states is the “primordial feelings,” which are present at the deepest levels of the brain, in the brain stem, and are not dependent on consciousness.
As organisms evolved, the programs underlying homeostasis became more complex, in terms of the conditions that prompted their engagement and the range of results. Those more complex programs gradually became what we now know as drives, motivations, and emotions.
Important as the primordial feelings — and the rewards and punishments they produce as motivators of optimal life states — are, Damasio believes that the most important consciousness-related function of the brain occurs at the next “higher” level, with our ability to make cognitive “maps” of the body states we experience.
The mapped patterns constitute what we, conscious creatures, have come to know as sights, sounds, touches, smells, tastes, pains, pleasures, and the like—in brief, images. …This mapping goes on constantly, whether we are aware of it or not. Consciousness is not required for these automatic processes. And while the cerebral cortex is necessary for full mapping, even when there is cortical damage — or, in the case of hydranencephaly, no cortex at all — the brain stem reacts to elements of external environments by provoking the primordial feelings which underlie all minds.
It is these primordial feelings which are the bases of emotions, which Damasio defines as behavioural states that serve as motivations to interact with the external world or to alter the internal states of the body in order to achieve or maintain homeostasis: “Emotions are complex, largely automated programs of actions concocted by evolution.”
The “universal emotions” (fear, anger, surprise) exist in all cultures, however differently expressed. Background emotions (satisfaction, dissatisfaction) and social emotions (shame, guilt, admiration, compassion) are more culturally influenced and are more malleable, but they operate in much the same way, as reactions to body states and environmental information.
But we need more than just feelings, feedback, and emotions. We need to be able both to recall past events and to predict likely future outcomes. For this, we need memory — that is, the ability to recall identical or similar circumstances and to hypothesize from them about what is going to happen next.
Damasio suggests that the sensory mapping he described earlier in the book is the basis of memory. His understanding of the relevant research, some of it the work of his colleagues and himself, leads him to hypothesize that memory is not a point-for-point reconstruction of the object to be remembered; rather, the brain stores the map structure of the original sensory perception. When this structure is recalled, an imperfect but useful replica of the experience is reconstructed, along with the feelings associated with it.
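This reconstruction claim has a simple computational analogue. The toy sketch below is my own illustration, not Damasio’s model: it stores only a coarse “map” of a signal rather than the signal itself, and on recall rebuilds a replica that is recognizably similar but has lost detail.

```python
# Toy illustration (not Damasio's model): store a coarse "map" of a
# signal instead of the signal itself, then reconstruct an imperfect
# but useful replica on recall.

def encode(signal, stride=4):
    """Keep only the average of each block -- the 'map', not the raw data."""
    return [sum(signal[i:i + stride]) / len(signal[i:i + stride])
            for i in range(0, len(signal), stride)]

def recall(sketch, stride=4):
    """Rebuild a full-length signal from the stored map."""
    return [value for value in sketch for _ in range(stride)]

original = list(range(12))      # [0, 1, 2, ..., 11]
sketch = encode(original)       # three block averages: far smaller than the original
replica = recall(sketch)        # same length and overall shape, fine detail gone

print(sketch)
print(replica)
```

The replica tracks the broad shape of the original while the fine structure is lost — the sense in which a recalled experience can be “imperfect but useful.”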
How does consciousness emerge from this brainy matrix?
Mind is primarily a result of the body processes necessary to homeostasis, the maintenance of viable physical life. For Damasio, human consciousness, the “self,” is a further step in the evolution of the brain and the body systems it regulates.
Consciousness, like mind, is an outcome of selective evolution, and like mind, “self… is not a thing; it is a dynamic process.”
Damasio distinguishes among three mind states, which he calls the “protoself,” the “core self,” and the “autobiographical self.” The protoself is the largely unconscious operation of innate body regulators (like hormone levels and temperature maintenance). These regulators are accompanied by “primordial feelings,” a general state of pleasure or pain, satisfaction or dissatisfaction.
The core self involves a more conscious awareness of external and internal environments, but it does not yet contain a fully present, reflecting consciousness. The core self operates, for example, when we walk to the car from the office while thinking about the conversation we’ve just had with a colleague. Our attention is directed to our physical circumstances, but there’s still a substantial autopilot operating. We’re in this state when we describe ourselves as being “absent-minded.” We are fully minded, but we’re “absent-self.” The autobiographical self is what we mean most of the time when we talk about ourselves, the “I.” It is the fully aware and reflective level of attention in which we interpret and plan, reflect on the past and present, project and predict into the future.
Damasio argues that these levels of consciousness evolved one after another, in response to selection pressures. The protoself is a product of basic homeostasis. Core consciousness eventually was added to the protoself, for the same evolutionary reason: it was slowly selected for because it improved survival. Full, autobiographical consciousness developed more recently, but it changed everything: “Consciousness is just a latecomer to life management, but it moves the whole game up a notch.”
Thus, “distinct levels of processing—mind, conscious mind, and conscious mind capable of producing culture—emerged in sequence.”
Once the reflective individual emerged, the social environment in which early humans lived made cooperation and group cohesion highly desirable. Homeostasis developed a social equivalent, as separate individuals manufactured culture in order to achieve more optimal states of existence. Damasio calls this overall process “sociocultural homeostasis.”
In one form or another, the cultural developments manifest the same goal as the form of automated homeostasis to which I have alluded throughout this book. They respond to a detection of imbalance in the life process, and they seek to correct it within the constraints of human biology and of the physical and social environment. The elaboration of moral rules and laws and the development of justice systems responded to the detection of imbalances caused by social behaviors that endangered individuals and the group. The cultural devices created in response to the imbalance aimed at restoring the equilibrium of individuals and of the group.
– * –
One popular, less thorough work that places its emphasis on a single feature of the mental processes that Damasio describes is David Eagleman’s Incognito: The Secret Lives of the Brain.
As its subtitle suggests, Eagleman’s book stresses that the cognitive functions that construct the mind and the self are mostly unconscious. If we have trouble accepting their central role in the construction of the self, that’s because we are largely unaware of their operation.
Although we are dependent on the functioning of the brain for our inner lives, it runs its own show. Most of its operations are above the security clearance of the conscious mind.
This fact is not alarming, Eagleman asserts, for that’s the way our brains are supposed to work:
Brains are in the business of gathering information and steering behavior appropriately. It doesn’t matter whether consciousness is involved in the decision making. And most of the time, it’s not. … Your brain has been molded by evolutionary pressures just as your spleen and eyes have been. And so has your consciousness. Consciousness developed because it was advantageous, but advantageous only in limited amounts. … Almost the entirety of what happens in your mental life is not under your conscious control, and the truth is that it’s better this way.
Eagleman understands that this new view of the brain as a series of primarily unconscious processes faintly perceived by a mostly uninvolved consciousness is just the latest in a series of scientific advances that have “dethroned” human beings from our long-held belief in our central importance to life, the universe, and everything.
The last four hundred years have not been good for our self-image. Cosmology removed us from the centre of the universe. Geology expanded time so dramatically that all of human history shrank to insignificance. Darwin did in special creation, and DNA threatened to reduce us to machine-like automatons at the mercy of our mindless genes. And, Eagleman writes:
Over the past century, neuroscience has shown that the conscious mind is not the one driving the boat. A mere four hundred years after our fall from the center of the universe, we have experienced the fall from the center of ourselves.
In this new construct, consciousness sits beside (“atop” no longer fits) the working brain, waiting for reports to come down the wire. These reports help us cope with the world around us, but that doesn’t mean that they are “true.” Vision, hearing, our perceptions of time, movement, and causality — all of them are mental tricks, the ways the brain shows us the world in terms we can understand and (most of the time) use.
One of Eagleman’s analogies is that consciousness is like someone reading the headlines of a newspaper. The research, the writing, and the editing of the articles, the selection of the photos and graphics — all of these decisions have already been made before the headlines are put above the text. And those headlines are all that we can see of the journalistic process that is normal brain function. As long as it’s not taken too far, it’s a useful analogy.
So what makes the human brain different from other animal brains? What functions other than newspaper subscriber does consciousness serve? Eagleman’s primary answer is that consciousness affords us the ability to be flexible learners.
Flexibility of learning accounts for a large part of what we consider human intelligence. While many animals are properly called intelligent, humans distinguish themselves in that they are so flexibly intelligent, fashioning their neural circuits to match the tasks at hand. It is for this reason that we can colonize every region on the planet, learn the local language we’re born into, and master skills as diverse as playing the violin, high-jumping and operating space shuttle cockpits.
“The human brain” is a misnomer. Eagleman believes that our brains are an amalgamation of unique, redundant, cooperative, competitive subsections, what he calls “a team of rivals.” He begins his chapter on this conception of the brain with an apt quotation from Whitman’s Song of Myself:
Do I contradict myself? Very well then I contradict myself, (I am large, I contain multitudes.)
In the final chapter of Incognito, Eagleman addresses the charge of reductionism that so often tags along with neuropsychology.
Just because a system is made of pieces and parts, and just because those pieces and parts are critical to the working of the system, that does not mean that the pieces and parts are the correct level of description. … But reductionism is not the right viewpoint for everything, and it certainly won’t explain the relationship between the brain and the mind. This is because of a feature known as emergence. When you put together large numbers of pieces and parts, the whole can become something greater than the sum. … Watching The Simpsons depends entirely on the integrity of the transistors, but the parts are not themselves funny. Similarly, while minds depend on the integrity of neurons, neurons are not themselves thinking.
Eagleman argues that the reductionist approach that has worked well in other hard sciences won’t work with neuropsychology:
This break-it-down-to-the-smallest-bits approach is the same successful method that science has employed in physics, chemistry, and the reverse-engineering of electronic devices. … But we don’t have any real guarantee that this approach will work in neuroscience. The brain, with its private, subjective experience, is unlike any of the problems we have tackled so far. Any neuroscientist who tells you we have the problem cornered with a reductionist approach doesn’t understand the complexity of the problem.
– * –
Eagleman’s caution about the limitations of mere reductionism is worthwhile, and at this point in the essay it merits a short pause for reflection.
In the simplest terms, does describing the physical processes of the brain undermine the personal experiences of the mind? I don’t think so, for the simple reason that the mind is undoubtedly a product of the brain, and the simple (but not simple to perform) act of mapping the brain’s processes doesn’t — can’t — change the nature of the mind.
A table is a table whether or not we know the type of wood of which it’s composed or the methods used in its construction. Its “tableness” is not dependent on those descriptions, but on its purpose and utilization. Similarly with the mind, I think. It was a mind before we knew anything about how it works; it will be a mind after we’ve mapped out all of its parts.
If you think that this is an obvious and unnecessary observation, not worthy of providing the foundation for a theory of consciousness or will or self — if you insist on looking for something more complicated, more subtle, more profound on which to base our concept of mind — maybe you’re looking for something that isn’t there, or looking for it in a way that isn’t appropriate in our post-metaphysical world?
– * –
Like Eagleman, Terence W. Deacon has no love for reductionism.
He also has little fear of any of the core questions about the emergence of consciousness. How did our universal, inherited brain structure develop? If it’s entirely material, what “caused” it?
In Incomplete Nature: How Mind Emerged from Matter, Deacon wants to make the case that consciousness is (1) entirely the result of material processes, (2) not itself material, and (3) amenable to empirical discovery on some level.
That’s an ambitious claim, but it contains the attractive notion that what we experience as consciousness is an emergent property of physical events, requiring neither supernatural genesis nor metaphysical dualism.
In fact, Deacon has nothing but scorn for the traditional and neo-homunculi which, he believes, infest most attempts to explain consciousness — even most explanations which believe themselves to be entirely material and reductionist.
Deacon’s “gotcha” insight is the idea that “brain researchers and philosophers of mind have focused on brain processes, neural computations and their correspondences with the physical world. But what if we should be focusing on what is not there instead?”
This question draws us directly into the heart of the topic for Deacon. Thoughts and feelings, imaginings and longings — all of the sensations or events or whatever they are that are not “things” but are nonetheless the “real” contents of consciousness — they aren’t “there,” are they? Deacon writes that “the function of an engine, meaning of a word, or content of a thought are also not actually present in the machine, the text, or the firing patterns of neurons.” He asks, “Does this render these missing attributes outside the realm of empirical science?”
His answer is, in a word, “No.” By studying the ways that constraints can construct complex order, how interactions can create emergent structure, Deacon believes that we can use “what isn’t there” to examine what is:
My aim is to provide a thoroughly naturalistic account of how true purposiveness can emerge from purely mechanistic physical processes when they become organised in a way that preserves specific absences, that is, constraints.
Elsewhere, Deacon writes:
To illustrate, consider how a quickly flowing stream forms stable eddies as it curls around a boulder, or how a snow crystal spontaneously grows its precise, hexagonally symmetric, yet idiosyncratic branches. In both cases, the resulting order is a consequence of possibilities that become increasingly improbable by the compounding of constraints.
Deacon believes that consciousness is the inevitable order that arises from the constraints of a dynamic and flexible system of brain and body systems, of feelings and homeostatic states and neural networks.
We get a kind of teleology without a teleologer. By the “reverse logic” of natural selection, Deacon argues, we understand that the ends of a spontaneous system generate the system itself without prior guidance. That is, only those systems which include a way to replicate themselves survive. That requirement operates without regard to the mechanism that effects the replication, creating an end-determined system without end-direction.
In taking this position, Deacon, like Eagleman, rejects the idea that there can be any simply reductionist explanation for consciousness. He believes that there is no one “thing” that is consciousness.
Rather, he thinks, consciousness is that state of mind that arises moment-to-moment all over the brain-body-environment system. That state of mind doesn’t reside any particular where, or even in any one causal branch of events. The elements of mind create consciousness the way that the elements of falling snow create flakes — each one different, each one inevitable, each one the same, each one random.
The metabolic signals we map with fMRI and PET-scan imagery may be serendipitously providing evidence that conscious arousal is not located in any one place, but constantly shifts from region to region with changes in demand.
– * –
The notion that consciousness rests on ground that is constantly shifting is consistent with one of the more controversial theories of the nature of the human sense of self — the “self as narrative” theory championed by Daniel Dennett, among others.
In Consciousness Explained, Dennett describes consciousness mechanically, as the result of the mental processes by which the brain constantly organizes the perceptual inputs, which are its source of information. He calls this group of processes the “Multiple Drafts” model of consciousness.
Dennett proposes a concurrent series of processes, brain actions that blend and interact dynamically, hence the idea that rather than a single “narrative” there is a group of drafts whose threads the brain can combine in a wide variety of ways.
This stream of contents is only rather like a narrative because of its multiplicity; at any point in time there are multiple “drafts” of narrative fragments at various stages of editing in various places in the brain.
Why is there no single narrative, no “stream of consciousness”? Because, Dennett argues, there is no whole identity, no central observer, which weaves our multiple drafts into a single, coherent story.
There are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of “narrative” play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain.
According to Dennett, this set of processes is not “hard-wired.” Rather, it is an adaptation of mental processes that began with other functions.
They were not developed to perform peculiarly human actions, such as reading and writing, but ducking, predator-avoiding, face recognizing, grasping, throwing, berry-picking, and other essential tasks. They are often opportunistically enlisted in new roles, for which their native talents more or less suit them.
Innate mental processes, shared to a greater or lesser extent with other animals, work in conjunction with uniquely human factors.
… it is augmented, and sometimes even overwhelmed in importance, by microhabits of thought that are developed in the individual, partly idiosyncratic results of self-exploration and partly the predesigned gifts of culture. Thousands of memes, mostly borne by language, but also by wordless “images” and other data structures, take up residence in an individual brain.
In this conception of consciousness, there is no Cartesian self, no non-material soul, no single “I” to take its place in “I think, therefore I am.” Our identities are closer to “my brain does stuff, and we are they.” This ruthlessly mechanistic view of consciousness, of identity, of self, removes the soul — Ryle’s “ghost in the machine” — and replaces it with an organic computer, a biological von Neumann machine.
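Dennett’s model is almost algorithmic, so a toy simulation may make it concrete. Everything in the sketch below — the specialist names, the fragments, the salience scores — is my invention, purely illustrative, not Dennett’s: parallel specialist circuits emit competing draft fragments, and a “probe” at any instant reports whichever fragments happen to dominate just then, with no central editor anywhere.

```python
# A deliberately crude sketch of the "Multiple Drafts" idea (mine, not
# Dennett's): specialist processes emit competing narrative fragments;
# what gets "reported" at a given moment is simply whichever drafts
# are momentarily strongest. There is no central observer doing the choosing.

import random

SPECIALISTS = {
    "vision":  ["a red shape", "a face?", "movement to the left"],
    "hearing": ["a low hum", "my name?", "footsteps"],
    "memory":  ["like yesterday", "that song again"],
}

def emit_drafts(rng):
    """Each specialist offers one fragment with a transient salience score."""
    return [(rng.random(), name, rng.choice(fragments))
            for name, fragments in SPECIALISTS.items()]

def report(drafts, k=2):
    """A 'probe' at this instant: the k momentarily strongest fragments."""
    return [fragment for _, _, fragment in sorted(drafts, reverse=True)[:k]]

rng = random.Random(42)  # seeded so the run is repeatable
for moment in range(3):
    print(moment, report(emit_drafts(rng)))
```

Probe at a different moment and you get a different “headline”; the narrative is whatever won the competition when you asked, which is the point of the metaphor.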
Related to this idea, but somewhat differing from it, is the notion that “we are our narratives.” Combining physiological data and psychological insights, the narrative explanation of consciousness is another way of denying the reality of a separate, single self. In this view, the left cerebral cortex works to organize our sensory input and experiences into a fluid autobiography, one with all of the key features we recognize in the creative fictions of storytellers.
John Bickle and Sean Keating put it this way:
“We are our narratives” has become a popular slogan. “We” refers to our selves, in the full-blooded person-constituting sense. “Narratives” refers to the stories we tell about our selves and our exploits in settings as trivial as cocktail parties and as serious as intimate discussions with loved ones. We express some in speech. Others we tell silently to ourselves, in that constant little inner voice. The full collection of one’s internal and external narratives generates the self we are intimately acquainted with. Our narrative selves continually unfold.
Bickle and Keating cite Gazzaniga’s research with “split-brain” patients, subjects whose brain hemispheres have been separated surgically (typically to treat a severe form of epilepsy). This empirical neuroscience shows that the left hemisphere of the brain is hard-wired for language and hypothesis, and responds to the right hemisphere’s sensory inputs by creating narrative interpretations of our perceptions:
… Gazzaniga argues that the human brain’s left hemisphere … possesses the unique capacity to interpret — that is, narrate — behaviours and emotional states initiated by either hemisphere. Not surprisingly, the left hemisphere is also the language hemisphere, with specialised cortical regions for producing, interpreting and understanding speech. It is also the hemisphere that produces narratives.
These left-hemisphere functions — what Gazzaniga terms the “interpreter” — give us our sense of self, our personal identity:
The interpreter sustains a running narrative of our actions, emotions, thoughts, and dreams. The interpreter is the glue that keeps our story unified, and creates our sense of being a coherent, rational agent. To our bag of individual instincts it brings theories about our lives. These narratives of our past behaviour seep into our awareness and give us an autobiography.
How does this work? Bickle’s explanation is that we have what he calls a “little inner voice,” a running stream of narrative produced by the always-active left hemisphere’s language centres:
One compelling study used PET imaging to watch what is going on in the brain during inner speech. As expected, this showed activity in the classic speech production area known as Broca’s area. But also active was Wernicke’s area, the brain region for language comprehension, suggesting that not only do the brain’s speech areas produce silent inner speech, but that our inner voice is understood and interpreted by the comprehension areas. The result of all this activity, I suggested, is the narrative self.
The interior narrative function of the left hemisphere’s language centres creates the same kinds of stories as we encounter externally. In effect, our public literary traditions are built of the same stuff as our personal histories, and for the same reasons, since we seem to be hard-wired to see the world in a certain way, which we then reconstruct when creating stories for others:
If we create our selves through narratives, whether external or internal, they are traditional ones, with protagonists and antagonists and a prescribed relationship between narrators, characters and listeners. They have linear plots with a fixed past, a present built coherently on it, and a horizon of possibilities projected coherently into the future.
How did we develop this story-telling capacity? Gazzaniga suggests that the interpreter provided an evolutionary advantage by reinforcing “a new capacity for relentlessly hypothesising about possible causal patterns, combined with an older, right hemisphere capacity to make probability-based decisions.”
In this second scenario, Descartes fares no better than he does in Dennett’s “multiple drafts” model. Rather than a separate, enduring entity, in Gazzaniga’s “interpreter” the self becomes a never-ending narrative, a story that the brain constantly spins for itself, a tale in which the “I” is not the author, but rather the protagonist.
Where does all this leave me, the self? No one knows just how far neurological research will take our understanding of ourselves, but one thing we can say with confidence: one place that all this new science doesn’t take us is back to the discredited realm of the immaterial soul.
Dennett denies the existence of a single entity, an “observer” in a “Cartesian Theater.” Gazzaniga’s description accepts the observer paradigm, but his “interpreter” is a non-unitary concatenation of physiological brain functions.
– * –
What are we to make of all this? If our world is a representation created by our minds, minds that are transactional moments of ever-shifting brain processes — what happened to reality?
And what happens to those of us who are empirical realists?
Traditional rationalism sees these inner processes as essentially chaotic, as the source of the puzzling inaccuracy of our thinking, especially of the logical lapses that characterize our reasoning. Where do these weaknesses come from, and why do we have them?
“Quantum Minds,” by Mark Buchanan, the cover article in the September 3, 2011, issue of New Scientist, claims that “the fuzziness and weird logic of the way particles behave applies surprisingly well to how humans think.”
No one is arguing that the brain is a quantum organ. But the mathematics used to describe the world of quantum physics seems also to describe the workings of our brains.
As Diederik Aerts of the Free University of Brussels, Belgium, puts it: “People often follow a different way of thinking than the one dictated by classical logic. The mathematics of quantum theory turns out to describe this quite well.”
Many experiments have traced how poorly we do on some kinds of probability tests. Classical logic should steer us clear of these errors, but it doesn't. Quantum mathematics, however, offers an alternative view.
The iconic “double slit” test shows how quantum probabilities differ from their classical cousins. In our everyday experience, probabilities add up. If you flip a coin once, the probability of heads is 50%. If you flip it again, the probability of at least one heads goes up to 75%. (Three of the four possible combinations of heads and tails on two flips include at least one heads.)
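The everyday arithmetic here is easy to verify by brute enumeration. A minimal Python sketch:

```python
from itertools import product

# Classical probabilities simply accumulate: list all outcomes of two
# fair coin flips and count those containing at least one heads.
outcomes = list(product("HT", repeat=2))            # HH, HT, TH, TT
at_least_one_heads = [o for o in outcomes if "H" in o]
p = len(at_least_one_heads) / len(outcomes)
print(p)  # 0.75
```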
Quantum probability doesn’t work that way. The double slit photon test produces a characteristic “stripe” pattern, revealing that there is an “interference” factor at work. In quantum events, probability consists not just of adding up the individual chances of each event, but also of adding in an interference effect, which can be either plus or minus.
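In the quantum formalism, a probability is the squared magnitude of a sum of complex amplitudes, and squaring the sum is what produces the plus-or-minus cross term. A toy Python illustration (the magnitudes and phase below are arbitrary choices for demonstration, not physical values):

```python
import cmath

# Two paths ("slits") with complex amplitudes; toy values only.
a1 = cmath.rect(0.5, 0.0)            # amplitude through slit 1
a2 = cmath.rect(0.5, cmath.pi / 3)   # amplitude through slit 2, phase-shifted

classical = abs(a1) ** 2 + abs(a2) ** 2   # just add the individual chances
quantum = abs(a1 + a2) ** 2               # square the summed amplitudes
interference = quantum - classical        # the extra plus-or-minus term
print(classical, quantum, interference)
```

Depending on the relative phase of the two amplitudes, the interference term can raise or lower the total probability, which is what paints the stripes on the screen.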
What you get is a “fuzzy probability,” a range rather than a single value. Something to do with Hilbert space. Don’t ask me to explain it any better than that; if I could, I would. Just trust them: they’re scientists, so they know everything.
This kind of interference pervades quantum physics, and the mathematics used to describe it can be applied to the “fuzzy” way we think. It seems that our brains, which aren’t quantum environments, nevertheless experience a similar kind of interference effect.
When experimenters offered subjects a game with an equal chance of winning $200 or losing $100, a clear majority of subjects who knew the result of their first game chose to play a second time. Why not, since the straight odds say that playing twice comes out ahead three times out of four? But when subjects weren’t told whether they had won, fewer than 40% chose to play again. Why should this be? The odds hadn’t changed.
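That “three times out of four” figure can be checked by listing the four outcomes of two plays. A quick Python sketch:

```python
from itertools import product

# Each game: equal chance of winning $200 or losing $100.
payoffs = [200, -100]
two_games = [a + b for a, b in product(payoffs, repeat=2)]
net_winners = sum(1 for total in two_games if total > 0)
print(two_games)                       # [400, 100, 100, -200]
print(net_winners / len(two_games))    # 0.75
```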
What appears to be happening is that the presence of two unresolved situations (I won the first game, or I lost it) yields odd results: the cognitive equivalent of the two slits. In another experiment, subjects had no trouble placing X in the Y category when told that “all X are Y.” But some couldn’t place X in the disjunctive category “Y or Z.” Again, the presence of two alternatives produces a perception of “interference.”
And it’s not just quantum probability that gives insight into the ways our brains work. Perhaps the best-known tenet of quantum physics is the “uncertainty principle,” bound up with the idea that some of the characteristics of an object are not determined, do not exist, until they have been contextualized by being measured.
In quantum physics, contextuality is the way that particular kinds of measurement change the properties of the particles they measure. The equivalent in cognition, Aerts argues, is the contextualization of words. As Buchanan puts it, “A tall chihuahua is not a tall dog.” And a “red barn” is not the same “red” as are “red eyes.” Aerts writes, “The structure of human conceptual knowledge is quantum-like because context plays a fundamental role.”
The insight that we conceptualize in a quantum-like way, and that our language reflects this way of thinking, is pointing other scientists toward new ways of organizing computer intelligence. Google researcher Dominic Widdows is building quantum mathematics into new search engines. In a typical web search, for instance, a geologist who searches for “rock” will get millions of irrelevant hits relating to rock music. Add the negating Boolean term “-song,” and you’ll still get tons of rock-music pages that merely don’t use the word “song.” If all of the words associated with “song” are grouped together contextually, then “not” becomes a much more effective limiter.
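The contrast between literal Boolean negation and contextual negation can be sketched in a few lines of Python. The “song cluster” below is a hand-picked stand-in for the word groupings a quantum-inspired engine would derive automatically, not Widdows’s actual method:

```python
# Toy contrast: literal keyword negation vs. negating a contextual cluster.
docs = [
    "rock formations in sedimentary geology",
    "top rock songs of the decade",
    "rock band releases new album",   # never uses the word "song"
]
# Hypothetical hand-built cluster of song-related words.
song_cluster = {"song", "songs", "band", "album", "lyrics"}

literal = [d for d in docs
           if "rock" in d.split() and "songs" not in d.split()]
contextual = [d for d in docs
              if "rock" in d.split() and not song_cluster & set(d.split())]

print(literal)      # the album page slips through
print(contextual)   # only the geology page survives
```

Negating the whole cluster catches the album page that a literal “-songs” filter misses, which is the essay’s point about contextual “not.”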
So haven’t we drifted rather far from where we started? Yes, and no. The application of quantum mathematics to human cognition and its replicator, AI, reinforces the idea that most of our “thinking” is not classically rational. If our brains in fact do work with “fuzzy logic” and quantum-like contextual word-fields, that would explain a lot of the otherwise inexplicable mental processes that underlie our conscious minds.
On the unconscious level, it would seem, our minds do not follow the strict formal rules of classical logic any more than quantum states in physics follow the strict formal descriptions of classical physics. Newton, meet Heisenberg. Conscious mind, meet unconscious brain.
Unlike those who resist physical descriptions of how the mind works, I find explanations like this compelling. Rather than making up fairy stories or relying on speculative dualities, we are, thanks to cognitive science, coming ever closer to “seeing” into the complexities of our own minds.
Living in a physical world, as we do, that’s certainly the most fantastic voyage we can ever take.