From Literature to Biterature

Peter Swirski

2013

From Literature to Biterature (FL2B) sounds like a book that explores the possible transition from human to machine authorship, a transfer of “creativity” that bubbles away while we’re paying attention to other internet issues.

It’s when you see the complete title that you realize how little time Swirski has spent on the rather narrow issue of “author” versus “computhor.” Here’s the whole thing:

From Literature to Biterature: Lem, Turing, Darwin, and Explorations in Computer Literature, Philosophy of Mind, and Cultural Evolution

Source code this isn’t, much less is it machine language. Closely structured clarity isn’t the goal here. FL2B is a new-fashioned mashup, a head-spinning mixture of topics and subtopics that only at the beginning (and, later, only from time to time) pays much attention to “computhors” and whether or not we humans will be able to read the “novels” that machines produce. Swirski informs us clearly that the concerns of a “computhor” are likely to be so different from our own that we’ll have little or no basis for interacting with the machine world of a “biterature” novel.

FL2B’s variety creates an often-engrossing series of side trips, at least until the eminently skippable last section, which wanders off into the details of weaponized computers and never comes back. Frankly, I only skimmed the end of the book. You may have different tastes, but I just wasn’t interested in that much detail on how personalized drones could update the old wartime saying to “a missile with your DNA on it.”

That said, much of the rest of FL2B more than held my attention.

One of Swirski’s best-developed topics concerns the core question of how, if at all, we can know when a computer has begun to think. Through a long analysis of the Turing Test and the philosophical responses to the issues it raises, Swirski arrives at a satisfyingly imprecise notion of the nature of consciousness.

He begins with the central dilemma:

How can you determine that I am not a thinking being, or conversely, that I am? Cut me open, eviscerate me inside-out, but you will not see “thinking.” All you will find are mutually catalyzing cocktails of biochemicals, synaptic flare-ups and flare-downs, red and white corpuscles pumping more or less urgently through the squishy oversized walnut we call brain, and so forth. None of them is evidence of my thinking – or not.

Swirski warns that a search through computer programming for the machine parts of consciousness will continue to be fruitless: “The top-down, syntax-based approach to machine thinking displays all the signs of a degenerating research program.” This is due to the simple, restrictive reality that “it is impossible to understand ‘understanding’ in any other terms than its actual use.”

He explains:

A river is, after all, but an agglomeration of droplets swirling and twirling around one another. But as no one droplet carries the quality of riverness, where is the river? Where are streams, oceans, and soups in the world where streamness, oceanness, and soupness are not localized in any of their constitutive elements?

If this is true, then it is also true that

Many complex systems, in short, display emergent properties, which is another way of saying that many complex systems are self-organizing. We can know about them as much as there is to know, and we will still not be able to derive their higher-level autocatalytic properties.

Swirski argues that while traditional computer programming will not create truly “thinking” machines, advances toward “self-organizing” algorithmic entities are already being made, in subfields with such fanciful names as “zoobots,” “biomorphs,” and “neuromorphs,” a category of “robotic critters” that “try to imitate with neural nets what bioneural nets do in living creatures.”

If we ever succeed in creating self-organizing “zoobots,” around what principles — in human terms, what “needs” and “goals” — will the creature organize itself?

Swirski points out that much, probably most, of human thinking is not rational but emotional, and he identifies some “needs” and “goals” about which the “zoobot” will likely become “emotional”:

Unsecured energy supply, disruption (unplugging) or corruption (virus) of processing, and compromised integrity of the system would be tagged as negative. Continuance and more efficient tasking would go on the plus side of the program’s ledger.

What would a computer-written novel read like if it were a story about an algorithm’s emotional struggle during the search for an uninterrupted power supply? Would it “read” at all?

From Literature to Biterature is a bumpy ride along an unfamiliar road, and we don’t have a very good map. But Peter Swirski is never dismayed, as he invites us to follow him into this new landscape and trust that he will not lead us astray.
