Someone has loaned me a novel she just read and now it sits on our hutch, beckoning me to read it. It’s a commercial novel, heavily promoted the way a second-rate film is when the producers are nervous that it won’t gross enough to cover the obscene salary they advanced to the star. But a vague sense of duty eats away at me and I know I’ll read the book in spite of every instinct that tells me not to.
There’s a lot of good writing out there these days and I’m sure this loaner is an example of that good writing. Clean prose. Sentences with clear subjects and predicates, mostly in the active voice. Thoughtful. Clever even. It does everything good writing does these days. It makes for a good product. It’s packageable. It’s commodifiable. It’s utterly fungible. If this one doesn’t move, then we’ll put another on the shelf (or the online recommendations list), then another, and another, an endless stream of interchangeable good writing. After a while, the sameness induces a soporific feeling, our chin settles onto our chest, and the latest in a long succession of similarly good books slips from our fingers and clatters onto the floor where mice chew on the words and sprinkle their turds in the corner while we sleep.
It won’t be long—probably less than a decade—before the first question a discerning novel reader asks is: was this book written by a robot? Already, much of what we read from online news outlets is sourced by algorithms that aggregate data, and many of those same outlets are starting to deploy AI to generate the articles that present that data. Admittedly, the stylistic requirements of journalism are more standardized and, for that reason, probably easier to encode. Even so, the leap to novels can’t be far away.
It wasn’t long ago that Google drew controversy by embarking on a book-scanning project. It sought to digitize everything ever written but found itself staring down angry writers who maintained that Google was ignoring one minor detail: copyright law. There was a proposed Book Rights Registry that would have created a legal way to access the 25 million books Google had scanned, but a judge killed that idea in 2011 and so, while the corpus still exists, its uses tend to the arcane. You can’t go to Google and read the latest release from your favourite author. But if you want to know how many times the word “the” occurs in the Complete Works of William Shakespeare, then Google is your best bud. To put Google’s effort in perspective, Mental Floss estimates that, as of 2016, our entire written output runs to 134 million volumes. In other words, Google has a substantial dataset that can be used to discover interesting trends in the way humans record their thoughts.
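To give a sense of just how arcane those uses are, here is a minimal sketch of the sort of query such a corpus invites: counting word frequencies in a public-domain text. The Project Gutenberg URL is an assumption for illustration only; it stands in for whatever plain-text corpus you happen to have, and has nothing to do with Google’s scanned collection.

```python
# A minimal sketch: count how often "the" occurs in a public-domain text.
# The Gutenberg URL is an assumption; substitute any plain-text corpus you have.
import re
import urllib.request
from collections import Counter

URL = "https://www.gutenberg.org/files/100/100-0.txt"  # assumed: Complete Works of Shakespeare

with urllib.request.urlopen(URL) as response:
    text = response.read().decode("utf-8", errors="ignore").lower()

words = re.findall(r"[a-z']+", text)
counts = Counter(words)

print(counts["the"])           # occurrences of "the"
print(counts.most_common(10))  # the ten most frequent words overall
```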
At the time of the controversy, some critics, like Jaron Lanier, suggested that the Google project was consistent with a philosophy of cybernetic totalism espoused by its founders: the idea that if you accumulate enough raw information then, by a strange autocatalytic alchemy, it will organize itself into self-aware sentience. This is the stuff of science fiction, most notably suggested by an Arthur C. Clarke short story, “Dial F for Frankenstein” (which, according to Tim Berners-Lee, was a source of inspiration for the World Wide Web). Published in Playboy in 1965, the story imagines a complex telephone network that, in 1975, rings every telephone in the world, the first cry of a newborn intelligence.
Ten years after Google buried its project, I’m not sure cybernetic totalism was ever in play. I’m more inclined to think the end game of a digital repository of 25 million books is to create a dataset to train an AI algorithm to write books without human intermediaries. Presumably this would follow the same methodology that’s been applied to the development of facial recognition software. Two years ago, IBM scraped a million Creative Commons-licensed images (including some of mine) from Flickr to create a training dataset for the development of less biased facial recognition software. I can’t imagine that the development of a novel-writing algorithm would stray far from this approach.
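For what it’s worth, the approach this paragraph imagines is no longer exotic. Below is a minimal sketch of fine-tuning a small off-the-shelf language model on a pile of novels, assuming the Hugging Face transformers and datasets libraries; the novels/ directory, the choice of GPT-2, and the output path are placeholders for illustration, not anything Google or IBM has published.

```python
# A sketch of fine-tuning a small language model on a (hypothetical) folder of novels.
# Assumes: pip install transformers datasets, and a directory ./novels/ of .txt files.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load every text file in the placeholder corpus directory.
dataset = load_dataset("text", data_files={"train": "novels/*.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="novelist", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Generate a paragraph of "novel" from a prompt.
prompt = tokenizer("It turned out to be real that", return_tensors="pt")
output = model.generate(**prompt, max_new_tokens=80, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The point of the sketch is only that the pipeline is mechanical: corpus in, plausible prose out. Whether what comes out deserves to be called a novel is the question the rest of this piece worries over.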
When it comes to facial recognition, algorithms have gone a step further by generating convincing images of faces that have no correlate in the real world. Deep fakes. See thispersondoesnotexist.com if you ever need a fake head shot. One might say these images pass a facial Turing test insofar as ordinary people like you and me cannot tell whether or not the people represented in them are real. At some point I expect we’ll apply a novel-writing Turing test, too, and find that we cannot tell whether or not the words we are reading have been generated by a flesh-bound author. We’ll delight in the personality behind the words, and then feel betrayed when we learn that those words were generated by a complex code. It’s Borges’ library with the benefit that it doesn’t take up so much space.
There is something offensive about being fooled in this way, but I can’t quite pin down why I feel offended. When it comes to the simple communication of information, the artificial generation of articles seems eminently practical. But the suggestion that creative expression rooted in my personal experience is indistinguishable from the output of an algorithm makes me want to tear out my eyes or jump off a bridge. Creative expression is intimately tied to personal identity. Take that away and what remains? For some, it’s a matter of personal testimony: a narrative of suffering and a plea for empathy. But if a machine can successfully pass the novel-writing Turing test by mimicking all of this, then we have trivialized a great swath of human experience.
Fortunately, things are not so dire for novelists, nor for creatives generally, for the simple reason that while certain outputs may become easy to mimic, creativity itself is not an output but a process of endless reinvention. If, as I suspect is the case, coders understand the novel in static terms that conform to the prevailing notion of book-as-commercial-commodity, then the code that produces novels will function like a sophisticated mad-lib. What it will fail to capture is the novel as organic process forever poised to become something fresh. (Why else would we call it a novel?) A novel that conforms to established conventions can be mimicked, but there will also and always be writers who push the limits of those conventions and create fresh modes of expression. These are the writers who answer our feelings of anxiety at the prospect of AI novels. Their explorations reassure us that there are some things about the way we use language that are coextensive with the soul and irreducible to code.
I see that I’ve taken a long time to get around to the matter at hand, Nadine Gordimer’s novel, Get A Life. What drew me to it initially was the fact that its premise echoes an experience that has touched me directly. A young man named Paul is diagnosed with thyroid cancer, has a thyroidectomy, and, although married and a father, retreats to the family home to convalesce under the care of his parents. In this scenario, I identify not with the patient but with the parent. Familiarity is no reason to read a book; there has to be something more. Here, Gordimer delivers that more in many ways.
There is the more of irony: the young cancer patient is an activist investigating a proposed nuclear power facility, but after radiation treatment he has to withdraw from people because he himself is “hot”. He poses the very risk he is fighting to eradicate from the planet.
There is the more of intractable conflict at the heart of primary relationships. Paul is married to a successful advertising executive, Berenice/Benni (Get A Life is one of her firm’s tag lines), and she often handles accounts for companies whose interests are diametrically opposed to those that Paul and his crew are called to investigate. Meanwhile, Paul’s father, Adrian, recently retired and exploring archaeological sites in Mexico, sends a letter home to Paul’s mother, Lyndsay, announcing that he has fallen in love with his guide and won’t be coming home. Lyndsay finds it hard to be judgmental given that she herself had carried on a four-year affair many years earlier.
There is the more of a remarkable compassion that permeates the novel. Set in post-Apartheid South Africa, in the era of truth and reconciliation and tempered justice, it feels worlds and lifetimes away. Social media hadn’t yet swept the globe with its insistence that we engage one another in anonymous rage. One wonders if it would have been possible to dismantle Apartheid if Facebook had existed, with fake accounts stoking the flames from afar. In Get A Life, virtually every character is flawed in some way, but they don’t tear one another to shreds. Instead, they rest a little more easily in their lives than we are accustomed to today.
Finally, there is the granular more of the language that gives the novel its life. It is a language that defies algorithmic reduction. Perhaps it is best to offer an example:
It turned out to be real that the inconceivable can become routine. At least so far as contact is decreed. Relationships. Their new nature, frequency, and limits. So if he does not get up to eat breakfast with them in the morning—sometimes too early a reminder that when he does join them his plate is paper and the utensils he uses must be put aside separately by Lyndsay or Adrian before they leave for the day—they make it their habit to open his door for an affectionate word of goodbye he knows is actually to see unobtrusively what state he’s woken in.
This excerpt is not an anomaly. You can read from any page and find a passage like this. The entire novel is a clinic in indirect narrative, which is to say that while it is written in the third person and creates the impression of a detached and objective voice, it surreptitiously catches characters in mid-thought. We watch discrete acts of consciousness unspooling themselves on every page. As with you and me in mid-thought, there is less attention to grammatical propriety. Instead, we have choppy bits. Subjects gone missing. Or implied. More fidelity to emotional states and to memory than to the logic of algorithmic prose.
A feature that characterizes indirect narrative is the divided consciousness. The writer places herself simultaneously inside and outside her characters. At first, that might sound like quite a trick until we consider that all humans do this. Unless we have a brain injury or cognitive disorder, we all place ourselves in a context—grocery shopping, for example—while simultaneously engaging in mental processes that set us apart from that context. We see ripe fruit and it triggers a memory, so we drift backwards through time. We think of other things we need to do later in the day, so we drift forwards through time. We play over and over in our minds a difficult exchange we had earlier that morning, maybe an argument we had with our spouse or partner, and we wonder if maybe we should have tempered our words, so we re-imagine time altogether. And, of course, we engage in tiny acts of meta-consciousness: we observe ourselves having thoughts even as those thoughts are playing out in our heads. I know this is true for me. How else could I have written this paragraph?
In the end, I think the Turing test is inadequate because it fails to register the second-order thinking endemic to all human mental activity. I think we need a second-order Turing test, one that would require the subject to report and reflect upon its own mental processes even as those processes are unfolding, and whose report and reflection would be so convincing that a person listening to it would be unable to say that the subject isn’t a human being. But I think it will be a long time before we need such a test. As long as there is poetry and the ragged-edged prose of people like Nadine Gordimer, writers who stretch the limits of how we engage language to share our experience, the coders will find it difficult to keep pace.