Technology can be frustrating.
You may remember reading a news item from July 2009 about a 17-year-old Kindle user named Justin Gawronski who was reading George Orwell’s 1984 for a school assignment when the book disappeared from his device. The problem was that he had been making notes on his Kindle to help him complete the assignment, and his notes disappeared too. Amazon claimed that whoever had posted that edition of 1984 did not have the right to sell it, and so Amazon deleted it from all Kindle devices and refunded the purchase price, but not the kid’s schoolwork.
Or what about the May 16th article from Chilling Effects examining the volume of DMCA takedown notices issued by the International Federation of the Phonographic Industry (IFPI)? The IFPI polices websites to ensure that they don’t post digital music that is subject to copyright protection. If it has a “good faith belief” that the posted content is unauthorized, the onus rests on the owner of the website to establish that the posting is legitimate. Since the process of challenging a takedown notice is expensive and complex, even legitimate websites typically disappear once they receive such a notice. At the height of its activity in January, the IFPI was averaging about one takedown notice every three minutes. Not bad for an organization with a small staff. The inference, of course, is that the whole process of developing a “good faith belief” is performed by bots.
Or what about an example closer to home? Ever had to struggle with Microsoft Word’s formatting tools? The software tries to anticipate your intentions as you develop your brochure or newsletter or outline. Meanwhile, you wish it would stop guessing wrong all the time.
These are all examples of digital design issues that were locked in years ago by software developers who gave little thought to the way in which their automation tools would interact with real human beings.
This is the main theme of Jaron Lanier’s You Are Not A Gadget, a critical examination of the digital world we have created for ourselves and how that is affecting our understanding of what it means to be human.
The examples I’ve given above end up frustrating us because they arise from a world view which seems to be ubiquitous among technologists. They are manifestations of “cybernetic totalism”—the view that our technology is progressing inexorably to a state of self-awareness and intelligence. Digital designers subscribe to this view as if it were a religion. Like a religion, it comes in different forms which we might call denominations. There’s an apocalyptic denomination that anticipates a rapture. Only, in this rapture, there is no heaven for virtuous humans. Instead, we either become irrelevant or die in one of any number of end-game scenarios.
There are more benign examples of cybernetic totalism. There is the denomination that believes that if we amass enough information and network it, a governing intelligence will emerge. Google belongs to this sect. Its project to scan all the world’s books is driven by this belief. A competing model believes in the idea of a “hive mind.” A greater intelligence arises, not from the machines themselves, but from the way the machines facilitate the networking of humans to perform certain tasks. Wikipedia belongs to this sect. It believes the contributions of many ultimately hone intellectual output to create “products” that are superior to the “products” an individual could create.
In his paean to contrarian thinking, Lanier believes these accounts are nonsense. To explain why he believes “cybernetic totalism” is a crock, he takes us down a rabbit hole of philosophy, religion and culture. In the end, his view is determined by ethics. He makes the simple observation that cybernetic totalists have no way to determine when they’ve achieved their goal of machine intelligence. The best they can do is appeal to the Turing test which Lanier frames thus:
“Turing presented his new offering in the form of a thought experiment, based on a popular Victorian parlor game. A man and a woman hide, and a judge is asked to determine which is which by relying only on the texts of notes passed back and forth. Turing replaced the woman with a computer. Can the judge tell which is the man? If not, is the computer conscious? Intelligent?”
Lanier rejects the test. The problem lies with the judge. How can the judge draw an adequate comparison between the performance of a machine and a human if the judge has no criteria for understanding what it means to be human? There is a fundamental arrogance at the heart of the Turing test and, Lanier argues, at the heart of digital design. We simply have no criteria, and never will, because there is something ineffable about being human. The Turing test works only if we degrade humans. Lanier’s book is an appeal to treat humans as wondrous creatures. Digital design must work with an ethic which respects that in our nature.
Lanier sets out current problems with digital design.
Lock-in. With the development of software, lock-in occurs when too many other programs become dependent upon it and the cost of change becomes prohibitive. The result is that sometimes bad design becomes permanent. Lanier cites the MIDI protocol as an example. In a limited set of situations, MIDI can be an empowering tool for musicians, but because its use has become so widespread, it has ultimately had a reductive influence upon digital music. The phrase he uses is “nuance-challenged” and, speaking of lock-in, he observes that “a thousand years from now, when a descendant of ours is traveling at relativistic speeds to explore a new star system, she will probably be annoyed by some awful beepy MIDI-driven music to alert her that the antimatter filter needs to be recalibrated.” Seriously: whatever device you are using to read this review, it is MIDI-capable.
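To see why Lanier calls MIDI “nuance-challenged,” it helps to look at what a MIDI note actually encodes. A note-on message is three bytes: a status byte, a pitch chosen from 128 fixed semitones, and a velocity on a 128-step scale. A continuous gesture like a singer’s glide between notes or a violinist’s vibrato has no direct representation in the message itself. Here is a minimal sketch in plain Python (no MIDI library, just the byte layout of the protocol):

```python
# A MIDI note-on message: three bytes, and that's the whole note.
# Pitch is one of 128 fixed semitones; loudness is one of 128 steps.
# Everything between those steps -- a slide, a bent note, a breath --
# falls outside what the message can express.

def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """Build a MIDI note-on message (status byte 0x90 | channel)."""
    assert 0 <= channel <= 15
    assert 0 <= pitch <= 127      # 60 = middle C; each step is one semitone
    assert 0 <= velocity <= 127   # roughly, how hard the key was struck
    return bytes([0x90 | channel, pitch, velocity])

# Middle C, played fairly hard, on channel 0:
msg = note_on(0, 60, 100)
print(msg.hex())  # -> "903c64"
```

Lock-in means that this keyboard-shaped abstraction, designed in the early 1980s, is now the vocabulary that nearly all digital music hardware and software must speak.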
Fragmentation. “Information is alienated experience.” A global information network decontextualizes the experience which grounds the information it generates.
How did you arrive at this blog? If you arrived via a search engine, then you may have landed on this page without walking through my “front door.” What were you expecting? An impersonal discussion that pretends (like Wikipedia) to a detached, objective, impersonal point of view? Are you aware that this blog post of mine occurs within a sequence of thoughts that I have been recording online for six years now? I would like to think that this post is a lot like a brick in an edifice that I am building. I think of this edifice as an expression of my world view. It is shaped by more than information. It is also shaped by the particularity of my personal experience which includes concerns like my roots, my personal relationships, cultural and religious influences, education, aspirations, gender, sexuality.
If you were to do a proper job of contextualizing this review, you’d have to read my blog from top to bottom—not an easy task because it’s grown to something like half a million words. Even then, you wouldn’t have much of a context. You’d have to call me on the phone for a chat, get a feel for my personality, arrange to meet for a coffee. Even then, you wouldn’t have much of a context. You’d have to get to know me, foster a friendship.
Contextualizing information takes work. We think machines can do the work for us. But they can’t. How can a machine foster my friendships for me?
Anonymity. This is related to fragmentation. In the world of web 2.0, the ability to post anonymous comments was a deliberate design decision. It has proven to be a disastrous design decision. It has degraded public discourse. It has encouraged good people to say and do horrible things to others. It has allowed us to evade personal responsibility.
It is not surprising then, that in a list of suggestions “of things you can do to be a person instead of a source of fragments to be exploited by others,” Lanier places “Don’t post anonymously” at the very top.
Cultural Erosion. In the conflict that has arisen between Big Media (RIAA, MPAA & the IFPI mentioned above) and Open Culture proponents, Lanier takes a surprising view. Both camps subscribe to versions of cybernetic totalism, and so their dispute is a distinction without a difference. He argues that unless cultural efforts are shared in ways that produce artificial scarcity, the infinite replicability of digital media will, in the long run, render all digital creation worthless. The experience of the music industry demonstrates that music shared through open channels like the Creative Commons does not and (Lanier argues) cannot generate a livable income. The only people who make money online in the guise of “Open Culture” are those who come to it with already flourishing careers. There are one-off exceptions of course, but these deserve as much attention in strategic thought about the promotion of online cultural activities as winning the lottery deserves in developing a personal financial plan.
Lanier takes aim at those who claim that “information wants to be free.” Even today, Cory Doctorow posted a piece in the Guardian addressing this statement. Doctorow’s quibble is with the words “information” and “free.” As he says, it’s people, and not information, who want to be free. Lanier would say Doctorow should be looking at the word “want” instead. It would make more sense for Doctorow to reject the rather juvenile fantasy of cybernetic totalism which, like a bad religion, reifies something that isn’t even real. Information can’t “want” anything. It is nothing until we humans give it a context and a meaning.
If we care about culture, whether we describe it as “open” or otherwise, then we should take a long hard look at design issues. How do human beings interact with digital culture? What is possible? Are there alternatives to DRM? (Lanier makes some proposals.) Can we make artificial scarcities which allow creators to earn a living in the digital environment? Does publishing have to face the same issues that the music industry has faced? Is the Kindle’s design already locked in?
These are important questions and they need to be answered now. Otherwise, in very short order, the internet will reduce to a machine that monetizes the value of our free efforts (like this book review) and delivers that value to a handful of elites. You already know their names: Google, Amazon, eBay, Yahoo, Apple, Facebook, Twitter.