ESSAY: Is reality unreal?

Can we ever know everything? Or will there always be mysteries?

The scientific debate concerns how much we can know about the universe: whether we can measure, at least in principle, all that there is out there to be measured.

In the scientific milieu, Einstein thought that we could learn everything, famously asserting that “God does not play dice.”

Today’s most celebrated scientist, Stephen Hawking, co-author of the recent The Grand Design, has concluded not only that Einstein was wrong, but that there’s even more uncertainty than we previously thought.

Hawking has written, “The question is: is the way the universe began chosen by God for reasons we can’t understand, or was it determined by a law of science? I believe the second. If you like, you can call the laws of science ‘God’, but it wouldn’t be a personal God that you could meet, and ask questions.”

If “God” means to you “the structure of the world so far as our science can reveal it” or “the laws of science,” then Einstein’s or Hawking’s metaphor is perfect for you. Otherwise, you’re out of luck. Sorry.

But the absence of a manipulating and meddling God doesn’t in and of itself answer the question of whether the universe is, from our point of view, determined or random.

To decide that, we look to quantum theory, including strangely named things like Planck’s constant, Heisenberg’s uncertainty principle, and Schroedinger’s equation. This is all pretty arcane stuff — in fact, it’s the closest science ever gets to traditional metaphysics, requiring frequent thought experiments and metaphors. That’s very lucky for me, because if the issue were even a tiny bit less insubstantial it would be much too concrete for someone trained in the ephemera of Jesuitical casuistry.

What follows is a minimalist summary of Hawking’s account of the development of physics in the 20th century.

The founding father of the determinists is Laplace, who claimed that if we were ever able to measure the mass, position, and velocity of every object in the universe at a single moment, it would then be a matter simply of calculation (but not a matter of simple calculation!) to determine the movement of every object in the universe forward in time forever — to predict the future mathematically, and to end the debate over free will philosophically in the bargain.
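Laplace’s claim can be put in concrete terms with a toy sketch (my illustration, not Laplace’s; the function name and numbers are invented for the example): hand a Newtonian system its exact initial state, and the future follows by mere calculation, the same every time.

```python
# Toy illustration of Laplacean determinism: one projectile under
# constant gravity, stepped forward in time with simple Euler updates.

def predict(position, velocity, gravity=-9.8, dt=0.01, steps=1000):
    """Deterministically evolve a 1-D state (position, velocity) in time."""
    for _ in range(steps):
        velocity += gravity * dt
        position += velocity * dt
    return position, velocity

# Run it twice from the same initial state: identical futures, every time.
a = predict(100.0, 0.0)
b = predict(100.0, 0.0)
assert a == b  # same input, same future: Laplace's whole point
```

Given perfect knowledge of the starting state, the calculation never surprises you; that is exactly the determinism the quantum story will dismantle.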

Laplace was dealing with a Newtonian universe, one with objects large and material enough to understand as “objects” in the same way that we think of tables and ping pong balls as “objects.” Things soon changed, however, and Laplace’s determined universe was undetermined by one with stranger inhabitants: the quantum universe.

Along came Max Planck, who was fooling around with just the kind of problem we all like to toss about on a lazy weekend afternoon: why isn’t everything in the universe the same temperature? If bodies all radiate heat, shouldn’t there have been sufficient time by now for the exchanges all to equal out, leaving everything with the same amount of energy? Since this wasn’t the case, Planck created a mathematical restriction on energy, such that it could be radiated not in any old bits but only in discrete units, or quanta. Energy could no longer be thought of as a stream of water but rather as the tiny pebbles on the bottom. You could throw as many pebbles as you had strength to throw, but you had to throw pebbles. You couldn’t grind up the pebbles and throw dust.
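The pebble picture corresponds to the standard relation E = h·f (a hedged sketch; the function names and the sample frequency are my choices): at a given frequency, energy can be exchanged only in whole multiples of one quantum.

```python
# Planck's restriction: radiation at frequency f comes only in whole
# quanta of size h*f -- pebbles, never dust.

PLANCK_H = 6.626e-34  # Planck's constant, in joule-seconds

def quantum_of_energy(frequency_hz):
    """Energy of a single quantum (photon) at the given frequency: E = h*f."""
    return PLANCK_H * frequency_hz

def allowed_energies(frequency_hz, n_max=3):
    """Radiated energy comes only in whole numbers of quanta: 0, hf, 2hf, ..."""
    return [n * quantum_of_energy(frequency_hz) for n in range(n_max + 1)]

# Green light, about 5.6e14 Hz: each "pebble" is roughly 3.7e-19 joules.
print(allowed_energies(5.6e14))
```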

Planck thought that this was a neat mathematical trick, but he didn’t know at first that his numbers game was the best bet going for describing the universe at the level of the very small. When his quantum packet idea proved to be good at predicting real events, like the actions of elementary particles, the science of quantum mechanics was born. But Laplace’s determinism wasn’t completely dead yet — that would wait for another, even more counter-intuitive idea.

In 1926, Werner Heisenberg explained the notion that would make him infamous, the Uncertainty Principle. In order to do Laplace’s measurements, you had to observe the position and the speed of the particles you encountered. Heisenberg realized that, according to Planck’s math, the smallest probe you could use to observe a very small object was a single quantum of light. That is, you couldn’t scale the observation down to any old arbitrary and convenient value. But the energy in even a single quantum of light is enough to disturb the particle you’re trying to measure.

Worse, to measure position very accurately, you need very powerful light, like x-rays or gamma rays, which are even more disturbing to little particles than normal light is. So the more precisely you try to measure position, the more you disturb speed. And vice versa, it turns out. Hawking puts it more clearly — or less, depending on your comfort level with the jargon: “the uncertainty in the position of a particle, times the uncertainty in its speed, is always greater than a quantity called Planck’s constant, divided by the mass of the particle.”
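The inequality Hawking quotes can be turned into numbers (a sketch using the standard textbook form Δx·Δv ≥ ħ/(2m); the constants and function name are my choices, not the essay’s):

```python
# The uncertainty floor: once you pin down a particle's position, its
# speed uncertainty cannot drop below hbar / (2 * m * delta_x).

import math

H = 6.626e-34              # Planck's constant (J*s)
HBAR = H / (2 * math.pi)   # reduced Planck constant
ELECTRON_MASS = 9.109e-31  # kg

def min_speed_uncertainty(position_uncertainty_m, mass_kg):
    """Smallest possible speed uncertainty for a given position uncertainty."""
    return HBAR / (2 * mass_kg * position_uncertainty_m)

# Pin an electron down to the width of an atom (~1e-10 m) and its speed
# becomes uncertain by roughly half a million metres per second.
print(min_speed_uncertainty(1e-10, ELECTRON_MASS))
```

The smaller the mass and the tighter the position, the wilder the speed; for a soccer ball the floor is so tiny that we never notice it.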

So we can’t know the position and the speed accurately at the same time, but we can measure their combined probability — the “wave function” of a particle. This means that, thanks to Schroedinger’s equation, we can predict the future wave functions of these particles, which is a kind of low-rent determinism substitute. It doesn’t look like butter, and it tastes a bit like motor oil, but it’ll go on your pancakes if you can’t get your hands on anything better.
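As a concrete instance of that low-rent determinism (my illustration, using the textbook result for a free Gaussian wave packet rather than anything in the essay): any single measurement stays random, but the probability distribution itself evolves on an exact, predictable schedule.

```python
# Free Gaussian wave packet: Schroedinger's equation predicts exactly
# how the position *distribution* spreads, even though each individual
# measurement is probabilistic.

import math

HBAR = 1.055e-34           # reduced Planck constant (J*s)
ELECTRON_MASS = 9.109e-31  # kg

def packet_width(sigma0, t, m=ELECTRON_MASS):
    """Predicted position spread of a free Gaussian wave packet at time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2)."""
    return sigma0 * math.sqrt(1 + (HBAR * t / (2 * m * sigma0**2)) ** 2)

# Start an electron packet at atomic width; a femtosecond later the
# distribution has spread -- by precisely the predicted amount.
assert packet_width(1e-10, 1e-15) > packet_width(1e-10, 0.0)
```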

Einstein didn’t like this newfangled notion at all. Even if the quantum characteristic of light meant that we couldn’t see precisely, the precision was still sure to be in there somewhere. You’re not a blur because my glasses are smudged, even if that’s what I see. So Einstein disputed the ultimate reality of the Uncertainty Principle, saying — well, we all know what he said.

Einstein’s insistence that God doesn’t play dice is a version of Laplace’s notion, but Einstein knew that thanks to the curvature of space-time (one of his own little discoveries) there would be things we couldn’t see. Still, he was convinced — by his sense of the inherent orderliness of the universe if not by his mathematics — that there was an underlying order, even if “only God can know it.” So his was a “hidden-variable” theory.

Unfortunately for Einstein, later experiments (tests of Bell’s theorem) showed that local “hidden-variable” theories of this kind are incorrect. And then along came Hawking, with his work on black holes, and the last little shreds of determinism blew away in the breeze.

It seems that space-time can be distorted so strongly that it develops regions that we cannot observe at all, regions that we call “black holes.” Imagine a round piece of putty. If you stick your finger into it, the putty depresses and you have to look at a sharper angle to see your fingernail. Now push further and harder than that, and the putty deforms so much that your fingertip “disappears” into the centre of the mass. You can’t see it at all, although it’s still there. Distorting space-time is something like that. Well, no, it isn’t, but we can think of it more easily if we pretend that it is.

The more massive and more compact an object is — the more dense it is — the stronger this distortion becomes. The “hole” in the putty ball becomes effectively bottomless, to the point that a fingertip that goes in can no longer come back out. Neither can even light itself, which is why it’s called a “black hole” in the first place. So we can’t “see” anything that’s in a black hole, meaning that there are parts of the universe which remain ever closed to us.
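The “bottomless” threshold in the putty picture has a standard formula, the Schwarzschild radius r = 2GM/c² (the code below is my illustration of that textbook result, not something from the essay): squeeze a mass inside that radius and even light can no longer climb back out.

```python
# Schwarzschild radius: how compact a mass must be before it becomes
# a black hole and light itself can't escape.

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)
C = 2.998e8    # speed of light (m/s)

def schwarzschild_radius(mass_kg):
    """Radius below which a given mass traps light: r = 2*G*M / c**2."""
    return 2 * G * mass_kg / C**2

# Squeeze the whole Earth (about 5.97e24 kg) inside roughly nine
# millimetres and nothing, light included, gets back out.
print(schwarzschild_radius(5.972e24))
```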

It gets worse. Hawking has shown that very small black holes do, in fact, spit out random bits of particles and radiation. The reason has something to do with the Uncertainty Principle, with the fact that you can’t have truly empty space, since it would then have completely determined position (0) and completely determined speed (0). This “0,0” state is impossible, so even “empty” space is occupied by pairs of particles and anti-particles, called “virtual particles,” which do a little dance before annihilating each other in the physics version of a high energy mosh pit. This whole process is known as “vacuum fluctuation,” which doesn’t sound as sexy as “slam-dancing,” which would have been my term of preference.
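That emission even has a temperature. The formula is Hawking’s published result, T = ħc³/(8πGMk_B); the numbers below are my illustration of it. The key feature: the smaller the hole, the hotter and leakier it is, which is why it is specifically the very small black holes that spit things out.

```python
# Hawking temperature of a black hole: smaller mass means hotter
# radiation and faster evaporation.

import math

HBAR = 1.055e-34  # reduced Planck constant (J*s)
C = 2.998e8       # speed of light (m/s)
G = 6.674e-11     # gravitational constant
K_B = 1.381e-23   # Boltzmann constant (J/K)

def hawking_temperature(mass_kg):
    """T = hbar * c**3 / (8 * pi * G * M * k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

sun = hawking_temperature(1.989e30)  # solar-mass hole: colder than space
tiny = hawking_temperature(1e12)     # mountain-mass hole: blazing hot
assert tiny > sun
```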

Anyway, it seems that sometimes only one of a virtual particle pair is drawn into a black hole, or sometimes only one of a virtual particle pair is spewed back out as part of the “empty” space around the black hole. In either case, there’s a virtual particle with no murder-suicide partner, so it has no choice but to become a real particle, one that we can detect. To us, it would look like the detected particle has been emitted by the black hole. It would also not matter what kind of matter went into the black hole — what came out would appear completely random to an observer. Not a single particle of determinability in sight.

What all this means, Hawking says, is that in the case of black holes, since the wave function of any particle that enters the black hole can’t be determined (that information was lost when the particle entered the putty ball and sank), then the position or speed of any particle that “emerges” (the way it looks to our eyes, anyway) is completely random.

Hawking puts it this way:

But there’s no combination of the position and speed of just one particle that we can definitely predict, because the speed and position will depend on the other particle, which we don’t observe. Thus it seems Einstein was doubly wrong when he said, God does not play dice. Not only does God definitely play dice, but He sometimes confuses us by throwing them where they can’t be seen.

– * –

So, is Stephen Hawking right, is there really no real there there?

There are two characteristics of the quantum wave function that are so counter-intuitive, so downright weird, that they sure seem to say so.

I don’t think that anybody else really knows what to do with them, either. Richard Feynman apparently agreed: “I think I can safely say that nobody understands quantum mechanics.” Still, Feynman is the “author” of the first weird characteristic of the wave function. With the inimitable lack of clarity so typical of modern physics, Wikipedia puts it this way:

A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory.

Stephen Hawking puts the underlying principle, known as “sum over histories,” a bit more accessibly in The Grand Design, writing of the classic “double-slit” experiment:

According to Newtonian physics — and to the way the experiment would work if we did it with soccer balls instead of molecules — each particle follows a single well-defined route from its source to the screen. There is no room in this picture for a detour in which the particle visits the neighborhood of each slit along the way. …

So far, so good. Throw soccer balls at a barrier with two holes, and any ball that makes it through to the back screen travels through one hole or the other. No surprise there. That’s the way the world in which we live works.

Well, it seems that the world doesn’t work that way at the extremely small scale of quantum mechanics:

… According to the quantum model, however, the particle is said to have no definite position during the time it is between the starting point and the endpoint. Feynman realized one does not have to interpret that to mean that particles take no path as they travel between source and screen. It could mean instead that particles take every possible path connecting those points.

We just lost contact with the world in which we live. During the time it travels the particle not only doesn’t have a position, it has every position? That can’t be right. Can it?

This, Feynman asserted, is what makes quantum physics different from Newtonian physics. The situation at both slits matters because, rather than following a single definite path, particles take every path, and they take them all simultaneously!

How are we supposed to deal with this kind of idea? The future is composed of the probability function of all possible pasts?

In the double-slit experiment Feynman’s ideas mean the particles take paths that go through only one slit or only the other; paths that thread through the first slit, back out through the second slit, and then through the first again; paths that visit the restaurant that serves that great curried shrimp, and then circle Jupiter a few times before heading home; even paths that go across the universe and back.
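The arithmetic behind “every path at once” is less mystical than it sounds: each path contributes a complex amplitude, and the amplitudes are added before squaring. Here is a deliberately tiny version (my simplification: just two paths instead of every path through the curried-shrimp restaurant and around Jupiter; the function name and numbers are invented):

```python
# Toy sum-over-histories for the double slit: one complex amplitude per
# path, summed, then squared. Agreement reinforces, disagreement cancels.

import cmath
import math

def intensity(path_lengths, wavelength):
    """|sum over paths of exp(2*pi*i*L/wavelength)|**2."""
    total = sum(cmath.exp(2j * math.pi * L / wavelength) for L in path_lengths)
    return abs(total) ** 2

# Equal paths through the two slits: amplitudes in phase, a bright fringe.
bright = intensity([10.0, 10.0], wavelength=1.0)
# Paths differing by half a wavelength: amplitudes cancel, a dark fringe.
dark = intensity([10.0, 10.5], wavelength=1.0)
assert bright > dark
```

Soccer balls don’t show the fringes because their wavelengths are absurdly tiny; molecules and electrons do, which is why the situation at both slits matters.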

Maybe we’ll have more luck with the second strange idea — quantum entanglement. Or maybe we won’t, since it’s also known as “spooky action at a distance.” Doesn’t sound very promising, does it?

“Entanglement” refers to the way that two paired particles (how they are paired is beyond the scope of this article — way beyond!) act in certain ways as a single particle. No matter how far apart they are, if one is + the other is -. If one spins up, the other spins down. The “spooky” part is that neither particle is + or -, or up or down, until we measure the other one. At the instant that the value of one particle is established, the value of the other becomes established, without communication over distance or even the passage of time. Instantaneous, uncommunicated conservation of matter and energy.
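The measure-one-fix-the-other behaviour can be mocked up in code (my illustration, and a deliberately classical one: it reproduces the perfect anti-correlation but not the full quantum statistics, which Bell showed no local model of this kind can match):

```python
# Classical mock-up of an entangled pair: neither particle has a value
# until the first measurement; that instant, the partner's value is
# fixed too, however far away it is.

import random

class EntangledPair:
    def __init__(self):
        self._outcome = None  # undefined until somebody measures

    def measure(self, which):
        if self._outcome is None:
            self._outcome = random.choice(("+", "-"))  # decided only now
        a = self._outcome
        b = "-" if a == "+" else "+"
        return a if which == "A" else b

pair = EntangledPair()
a, b = pair.measure("A"), pair.measure("B")
assert {a, b} == {"+", "-"}  # always opposite, with no signal exchanged
```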

Ironically, entanglement was first proposed in a thought experiment by Einstein, as part of his “God doesn’t play dice” attack on quantum mechanics. It was Einstein who called it “spooky action at a distance.” He thought that he had shown quantum mechanics to be incomplete by demonstrating that, according to quantum theory, such absurdly connected particles would have to exist. Decades later, experiments confirmed that entanglement is real. So much for the “real” world!

Brian Clegg, author of The God Effect, a book on entanglement, puts the implications of “spooky action” this way:

Entanglement doesn’t throw away the concept of cause and effect. But it does underline the fact that quantum particles really do only have a range of probabilities on the values of their properties rather than fixed values. And while it seems to contradict Einstein’s special relativity, which says nothing can travel faster than light, it’s more likely that entanglement challenges our ideas of what distance and time really mean.

It just keeps getting worse. Now time and distance don’t really mean what they seem to mean. What’s left? At least I know that I exist. Good old metaphysics comes to my rescue, just when I’m losing my last grip on reality.

Or does it?

– * –

Michael Shermer, science writer and historian of science, writing for the Big Questions Online website, discusses Stephen Hawking’s view that what we know of the world are only models of an unknowable reality. According to this view, the actual world is beyond our reach, and we can approach it only through our perceptions and our explanations of those perceptions.

At any given moment there are, in fact, hundreds of percepts streaming into the brain from the various senses. All of them must be bound together for higher brain regions to make sense of it all.

In this way, we construct reality out of multitudes of tiny perceptual bits.

The models generated by biochemical processes in our brains constitute “reality.” None of us can ever be completely sure that the world really is as it appears, or if our minds have unconsciously imposed a misleading pattern on the data.

Doesn’t this suggest that our knowledge is not real? Are the theists and the postmodernists right, that human knowledge is always arbitrary, relative, approximate — that “truth” is not “real”? After all: “It is not possible to understand reality without having some model of reality, so we are really talking about models, not reality.”

Does this spell doom for human knowledge? No, Shermer says, it doesn’t.

Is there a way around this apparent epistemological trap?
There is. It’s called science.

Even if we can never apprehend reality entirely or for certain, we can find and adopt models that come ever closer to explaining, to representing, reality.

In the long run, we discard some models and keep others based on their validity, reliability, predictability, and perceived match to reality. … I believe there is a real reality, and that we can come close to knowing it through the lens of science — despite the indelible imperfection of our brains, our models, and our theories.

And this idea of knowing “through the lens of science” is Shermer’s key point, for it’s in the methodology of empirical investigation that science differs from other belief systems, in its requirement of testability and falsifiability.

When the empirical information changes, the model changes. This is not a very common occurrence with literalist religious models, which too often claim to be not representations but revelations, not models but truths. It is its very impermanence that affords science more credibility, and inspires a greater sense of confidence, than does the rigidity of “infallible” theistic models.

The inspired and revealed religions, the ones that rely on faith in the literal word of their God or of his spokes-prophets, are typically unresponsive to advancements in science. As our ability to investigate the world increases — or, to give Hawking the point, as our ability to construct models which explain our perceptions increases — through Galileo’s telescope, or carbon dating, or whatever other technology, the scriptural models resist: they do not change to keep up. The earth remains the spiritual and often the physical centre of a specially-created universe, one which is apparently 6,014 years old.

In other words, there is no discernible investigative methodology in these models. Their authoritarian invariance makes them attractive, in a way, steady and dependable, but that inflexibility is a fatal flaw when the evidence is against them.

As Shermer reports, Hawking believes that when two models of reality equally explain the observations we are able to make, there’s no reason to choose one over the other. Take your pick. Let light be a wave, or a particle, or both, or neither, so long as the observations are not contradicted by any of those descriptions.

This imprecision should not provide any comfort to literal theists, despite their frequent tendency to think that it does. It shouldn’t be necessary to point out that it does not follow from “X is not final and complete” that “therefore, Y is true.” It shouldn’t be necessary to point this out, but for the less-sophisticated critics of science, it often is.

Again, the core issue is this: the reason that empirical science and scriptural religion are not co-equal systems of belief, are not in fact similar at all, is that scriptural religion ignores the evidence of relevant observations. As time goes on, these observations mount up, and the scriptural model increasingly does not explain the world (or articulate our representations of it).

Thanks to the implementation of an empirical methodology, science is no longer merely “natural philosophy” but contests the accuracy of its models in the arenas of the material world. For example, evolution over long eons of time usefully describes the empirical observations of geologic strata and fossil evidence. A recent and complete creation does not. Why are dinosaur relics found only in strata millions of years older than those strata in which human remains are found?

Science attempts to answer the question, presenting ever-more detailed and ever-more complete approximations, while literalist religion typically either ignores the question entirely, idiosyncratically recasts experimental results, or mounts a flanking assault on the scientific method itself, in an attempt to avoid the unacceptable consequences of the evidence.

To some extent, it can all be cast as a clash between induction and deduction. Where science says “if this is true, then …,” literalist religion says “that can’t be true, therefore ….” Through this crucial difference, of both approach and aim, one can see how Shermer is certainly right, that the scientific method affords a unique and invaluable way of investigating the world.

Such is the nature of science, which is what sets it apart from all other knowledge traditions.

– * –

Fair enough, but we’re not really out of the woods yet:

Why is there something rather than nothing? Why do we exist?
Why this particular set of laws and not some other?
Stephen Hawking

The question here is not the usual science question “What do we know?” but the philosophical question “What can we know?” about the world in which we live.

Why turn to a scientist for an answer to a philosophical question? Hawking anticipates the challenge, asserting baldly at the very beginning of The Grand Design that “philosophy is dead”:

How can we understand the world in which we find ourselves? Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.

The problem for philosophy is that it is ill-equipped to deal with the emerging physical characteristics of the universe — that is, of the universes, most of which must remain unknowable to us. Using a suite of related theories known collectively as “M-theory,” science can describe what philosophy can’t:

According to M-theory, ours is not the only universe. Instead, M-theory predicts that a great many universes were created out of nothing. Their creation does not require the intervention of some supernatural being or god. Rather, these multiple universes arise naturally from physical law. They are a prediction of science.

Why are we aware of this universe, among all the possibilities? Theists like to drag out the anthropic principle at this point, arguing that we are in a universe that has been created to accommodate our existence. Right idea, but entirely backwards. We could not exist in any universe which did not support life of our sort, so it is neither surprising nor particularly marvelous that we are here — if not here, where?

Only a very few [universes] would allow creatures like us to exist. Thus our presence selects out from this vast array only those universes that are compatible with our existence. Although we are puny and insignificant on the scale of the cosmos, this makes us in a sense the lords of creation.

Since the only universe we can possibly observe is one that makes us possible, the key human behaviour is not adoration but observation. Yet we know from previous postings that neither the physical nor the mental realm is provably real — indeed, both appear to be demonstrably unreal, full of swarming quantum potentials and swirling brain activity. So how can we perceive our universe accurately at all? Hawking’s answer is that we can’t, not really, but we can create a useful version of our unapproachable realities, employing a technique he calls “model-dependent realism”:

There is no picture- or theory-independent concept of reality. Instead we will adopt a view that we will call model-dependent realism: the idea that a physical theory or world picture is a model (generally of a mathematical nature) and a set of rules that connect the elements of the model to observations. This provides a framework with which to interpret modern science.

Hawking recognizes that the world we create inside is not the world that exists outside. Hence the notion of model-dependent realism, which is not traditional realism, but a representation based on sensory information:

It is based on the idea that our brains interpret the input from our sensory organs by making a model of the world. When such a model is successful at explaining events, we tend to attribute to it, and to the elements and concepts that constitute it, the quality of reality or absolute truth.

Is this reality or truth “absolute” in the usual sense? No, it isn’t. But don’t despair quite yet. Even though traditional realism doesn’t survive the facts of modern physics — if the “holographic principle” is true, the four-dimensional space-time of current physics doesn’t survive, either! — model-dependent realism provides a satisfying form of concreteness. This concreteness is based not on a meta-physical definition of reality but on a real-world application of observation.

In a model-dependent theory of the world, proposed realities must match our actual observations. That means no angels, no pixies, no fairy dust, no bearded patriarch sitting on a cloud. That means also no word game justifications of this, that, or the other thing. No ontological arguments, no Thomistic proofs, no untestable hypotheses with no more than assumed or asserted existence. If it can’t be observed, if it can’t be tested, if it can’t be replicated, if it is incapable of disproof — then it isn’t “real” in the way that a table is real.

Is a table really a table? Who knows. But we do know that our perception “table” is a more concrete kind of model than is our ideation “heaven.” A model is not “real” or “unreal” but rather “accurate” or “inaccurate.” Point at a table. Now point at heaven. Any difference? In this model-dependent approach, we replace a philosophical category with a set of empirical data. If you ask me, it’s an improvement — a more complex and a more mature way of conceptualizing the world. Hawking writes:

According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. … We make models in science, but we also make them in everyday life. Model-dependent realism applies not only to scientific models but also to the conscious and subconscious mental models we all create in order to interpret and understand the everyday world. There is no way to remove the observer—us—from our perception of the world, which is created through our sensory processing and through the way we think and reason.

What can we conclude? Even if there are many universes and no single reality, there is at any time a single, practical reality for us, a model by which we understand our world — a model that is as precise as possible, as complete as possible, as consistent with our observations (our experiences) as possible.

Within the limits of our perception, the model is knowable, testable, and reliable. Airplanes fly. Electric motors work. Quantum computers are coming. None of this would be possible if our theories did not tightly match our observations.

There’s no need for classical metaphysics here, no call for a supernatural realm — but there is “reality,” as far as we can know it, and there is “truth,” as far as we can perceive it. It seems that the realists can relax a little. There’s less threat here — and more promise — than at first we thought.

And that’s the real truth.
