The Glass Cage

Nicholas Carr

“How do you measure the expense of an erosion of effort and engagement, or a waning of agency and autonomy, or a subtle deterioration of skill? You can’t. Those are the kinds of shadowy, intangible things that we rarely appreciate until after they’re gone.”

In The Glass Cage: Automation and Us, Nicholas Carr expands the warning in his previous best-seller The Shallows (reviewed here) from the virtual online world to the increasingly real world of automation and artificial intelligence.

Well, not so much intelligence. One of Carr’s key points is that computers are smart, but they’re not intelligent. That is, they can outperform us on any number of tasks, but they do so by calculation, not cognition. And the difference, Carr believes, threatens some of the key ways in which we relate to the world.

Automation can take a toll on our work, our talents, and our lives. It can narrow our perspectives and limit our choices. It can open us to surveillance and manipulation. As computers become our constant companions, our familiar, obliging helpmates, it seems wise to take a closer look at exactly how they’re changing what we do and who we are.

Carr warns that as software takes over more and more of our “routine” tasks at work and at home, this automation reduces our opportunities to test our skills and eliminates the sense of accomplishment and satisfaction that comes from successful effort. Rather dramatically, Carr writes that “all too often, automation frees us from that which makes us feel free.”

Instead of remaining in control, we cede too much authority to automated systems. This leads us to suffer from two “cognitive ailments,” automation complacency and automation bias. The first “takes hold when a computer lulls us into a false sense of security.” The second “creeps in when people give undue weight to the information coming through their monitors. Even when the information is wrong or misleading, they believe it.”

Automation tends to turn us from actors into observers. …That shift may make our lives easier, but it can also inhibit our ability to learn and to develop expertise. Whether automation enhances or degrades our performance in a given task, over the long run it may diminish our existing skills or prevent us from acquiring new ones.

Carr laments that “when automation distances us from our work, when it gets between us and the world, it erases the artistry from our lives.”

How do unthinking machines accomplish the tasks that until now were the exclusive province of the human brain? Not by replicating our thinking, but by replicating the results. A computer program can’t juggle the contexts and subtleties of complex tasks, but if it’s well constructed it can use its data-crunching superiority to reach by other means the kinds of outcomes that we reach by thinking. But, Carr argues, “the replication of the outputs of thinking is not thinking.”

It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge. It’s this supple quality of mind, spanning conscious and unconscious cognition, reason and inspiration, that allows human beings to think conceptually, critically, metaphorically, speculatively, wittily, to take leaps of logic and imagination.

Carr warns that it may not be in our best interest to give so much power to our tools. He writes that “we’re disembodying ourselves, imposing sensory constraints on our existence. With the general-purpose computer, we’ve managed, perversely enough, to devise a tool that steals from us the bodily joy of working with tools.”

And what of the moral dimension? Will a calculating but unthinking robot ever be able fully to parse the complex situations with which we struggle? Will a robot car swerve to avoid a child in the street, if that action puts the passengers in the car at risk? What about robot soldiers — what will be their “attitude” toward civilians? Carr writes that “the idea that we can calculate our way out of moral dilemmas may be simplistic, or even repellent, but that doesn’t change the fact that robots and software agents are going to have to calculate their way out of moral dilemmas.”

The Glass Cage wants to make us think, perhaps while we still can. In that sense, it’s successful. But it’s not without its small flaws.

Although Carr fills the book with well-chosen and richly detailed examples of how software negatively affects not only professionals like airline pilots and general practitioners but also most of us who use smartphones, there is nevertheless an annoying sense of repetition. Because Carr focuses on just a few main points, only a few pages pass before we encounter another version of the “thinking good, automation not so good” sentence that you’ve already encountered several times in this review.

That said, The Glass Cage remains a worthwhile warning about the potential cost of automation.