Our Final Invention


James Barrat
2013

While the risks involved with sharing our planet with superintelligent AI strike many in the AI community as the subject of the most important conversation anywhere, it’s been all but left out of the public dialogue. Why?

Are we busily constructing our own destruction? Is the dystopia of “The Matrix” just around the next virtual corner?

I don’t know.

Our Final Invention: Artificial Intelligence and the End of the Human Era is one of those annoying books that makes a strong case about something with which I am not conversant enough to judge, with any confidence, whether or not its author proves his point.

I’m an enthusiastic computer user, and I even dabbled in a little programming back in the dawn of time (the ’80s). But artificial intelligence is such a large and diverse topic that, while I understand what Barrat says, I don’t know if he’s a true prophet or just one of the new Chicken Littles (or is that Chickens Little?).

Here’s the core of his argument:

This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious, nature achieved it in full just once—intelligence.

Barrat warns that we’ve seen this kind of thing before:

We’ve learned what happens when a technologically advanced civilization runs into a less advanced one: Christopher Columbus versus the Taíno, Pizarro versus the Inca, Europeans versus Native Americans. Get ready for the next one. You and me versus artificial superintelligence (ASI).

European invaders of the Americas had two kinds of motives, both of them disastrous for native populations. Whether seeking their own material gain or the spiritual salvation of the “savages,” the newcomers wreaked havoc on existing cultures. And the unintended, unanticipated consequences — principally, the introduction of disease pathogens against which their victims had no defense — were even worse. Barrat sees great danger in our naive assumption that the goals of a “superintelligence” would be our goals:

Not knowing how to build a Friendly AI is not deadly, of itself. . . . It’s the mistaken belief that an AI will be friendly which implies an obvious path to global catastrophe.

The primary goal of an intelligent system would be to survive, not to serve us. Whatever goals we give it, the system can achieve them only if it continues to exist.

A self-aware system would take action to avoid its own demise, not because it intrinsically values its existence, but because it can’t fulfill its goals if it is “dead.”

Even worse, once we activate a self-improving, learning system, it will soon advance beyond our understanding. Such opaque processes already surround us, from the mechanics of consciousness to the reason vaccines occasionally cause the very disease they usually prevent. Barrat writes:

Like genetic algorithms, ANNs (artificial neural networks) are “black box” systems. That is, the inputs—the network weights and neuron activations—are transparent. And the outputs are understandable. But what happens in between? Nobody understands. The output of “black box” artificial intelligence tools can’t ever be predicted, so they can never be truly and verifiably “safe.”

There’s much more, from reasonable concerns about the current military prominence in AI research (more than fifty countries are building or trying to build drones) to rank speculation that unchecked computer intelligence will expand to consume all of the matter (= energy) in the galaxy and rule the universe. Whatever your fear, Barrat has thought of it.

As noted at the beginning of this review, I’m not quite sure what I think about this book. On the one hand, I share Barrat’s alarm that no one seems to be taking the potential risks of self-aware AI seriously enough. On the other, there’s a bit too much Matrix-inspired doom and gloom. Is the universe really in peril because we’re building smarter and smarter computers?

One serendipitous cultural tie-in is to the new Spike Jonze film, Her.

I saw the film a few days ago, after I had finished reading Our Final Invention. Certainly, Her is not a scholarly consideration of the potential of a looming explosion of computer intelligence. It’s a romantic comedy, at its core the story of a woman who grows out of her first love relationship. OK, she’s a computer program, but let’s not be prejudiced against the non-corporeal. Samantha learns and changes. She explores and expands her personality, her “self.” In the end, she occupies a state of being which her human lover can’t share.

So far, so good. Barrat is not alone in positing that a super-computer would exist in a very different world than ours, so different that communication, much less a relationship, would be difficult if not impossible.

Jonze’s film ends non-tragically (to say any more, or anything else, would be too much of a “spoiler” for those who haven’t yet seen the film), but isn’t that the whole idea of romantic comedy? After all, the crucial difference between tragedy and comedy is that no one dies in a comedy. It doesn’t have to be happy; it has only to be not fatal.

I don’t think that we can rely on a Hollywood movie for reassurance about the future of our relationship with computers.

But I also don’t think that we should swallow Barrat’s bitter pill without a lot of further thought.
