To those hassled by technology, the title of Jaron Lanier’s book – ‘You Are Not a Gadget: A manifesto’ (www.landmarkonthenet.com) – can be very reassuring. The book, written for people, not computers, begins with the question ‘What is a person?’ Drawing inspiration from Publilius Syrus’ statement that speech is the mirror of the soul, the author suggests that a person is not a pat formula, but a quest, a mystery, a leap of faith.
He acknowledges that his words will mostly be read by non-persons, ‘minced into atomised search-engine keywords within industrial cloud computing facilities located in remote, often secret locations around the world,’ and ‘copied millions of times by algorithms designed to send an advertisement to some person somewhere who happens to resonate with some fragment.’ Alas, fragments are not people, he frets.
Something started to go wrong with the digital revolution around the turn of the twenty-first century, narrates Lanier. The World Wide Web was flooded by a torrent of petty designs sometimes called Web 2.0. This ideology promotes radical freedom on the surface of the web, but that freedom, ironically, is more for machines than for people, he adds.
Demeaned interpersonal interaction
While conceding that anonymous blog comments, vapid video pranks, and lightweight mashups may seem trivial and harmless, the author is of the view that as a whole, this widespread practice of fragmentary, impersonal communication has demeaned interpersonal interaction. “Communication is now often experienced as a superhuman phenomenon that towers above individuals. A new generation has come of age with a reduced expectation of what a person can be, and of who each person might become.”
He observes that when developers of digital technologies design a program that requires you to interact with a computer as if it were a person, they ask you to accept in some corner of your brain that you might also be conceived of as a program. “When they design an internet service that is edited by a vast anonymous crowd, they are suggesting that a random crowd of humans is an organism with a legitimate point of view.”
More importantly, it takes only a tiny group of engineers to create technology – be it a webcam or a mobile phone – that can directly manipulate your cognitive experience, and thus shape the entire future of human experience with incredible speed, Lanier notes. Therefore, he reasons that crucial arguments about the human relationship with technology should take place between developers and users before such direct manipulations are designed.
Ghost of UNIX
For instance, if you find at times that the iPhone can be unnerving, the author explains that the thing has what is essentially UNIX in it, and is haunted by a weird set of unpredictable user interface delays. “One’s mind waits for the response to the press of a virtual button, but it doesn’t come for a while. An odd tension builds during that moment, and easy intuition is replaced by nervousness. It is the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years.”
Students of UNIX may remember a core design feature called ‘command line interface’ – whereby you type instructions, hit ‘return’ and the instruction is carried out. A unifying design principle of UNIX is that a program can’t tell if a person hit return or a program did so, informs Lanier. “Since real people are slower than simulated people at operating keyboards, the importance of precise timing is suppressed by this particular idea. As a result, UNIX is based on discrete events that don’t have to happen at a precise moment in time. The human organism, meanwhile, is based on continuous sensory, cognitive, and motor processes that have to be synchronised precisely in time.”
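The design principle Lanier describes can be illustrated with a minimal sketch (not from the book; the function name and sample inputs are invented for illustration): a program that processes lines from an input stream sees only discrete ‘return-terminated’ events, with no way to tell whether a person typed them at a keyboard or another program piped them in, and no timing attached.

```python
from io import StringIO

def run_commands(stream):
    """Treat each input line as one 'command' and echo it in upper case.

    The function receives only discrete line events; nothing in the
    stream reveals whether a human or a program produced them.
    """
    return [line.strip().upper() for line in stream]

# StringIO stands in for piped input from another program; sys.stdin,
# fed by a human typing and hitting return, would look identical here.
simulated = StringIO("hello\nworld\n")
print(run_commands(simulated))  # → ['HELLO', 'WORLD']
```

Whether the lines arrive slowly from human fingers or instantly from a pipe, the program's behaviour is the same – the suppression of precise timing that Lanier contrasts with the human organism's continuous, time-synchronised processes.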
It may anger Facebook/Twitter enthusiasts that the author considers them to be no different from ‘the anarchists and other nutty idealists who populated youth culture’ of yesteryears. The new strain of gadget fetishism is driven more by fear than by love, he rues. “I am always struck by the endless strain they put themselves through. They must manage their online reputations constantly, avoiding the ever-roaming evil eye of the hive mind, which can turn on an individual at any moment. A ‘Facebook generation’ young person who suddenly becomes humiliated online has no way out, for there is only one hive.”
Facebook reminds Lanier of the No Child Left Behind Act of 2002 in the US, which forced teachers to choose between teaching general knowledge and teaching to the test. What computerised analysis of all the country’s school tests has done to education is exactly what Facebook has done to friendships, he laments. “In both cases, life is turned into a database. Both degradations are based on the same philosophical mistake, which is the belief that computers can presently represent human thought or human relationships. These are things computers cannot currently do.”
But scientists and engineers are not giving up. You perhaps know that computers can now decipher a smile. Facial expressions were embedded deep within the imprecise domain of quality, reminisces the author. “No smile was precisely the same as any other, and there was no way to say precisely what all the smiles had in common. Similarity was a subjective perception of interest to poets – and irrelevant to software engineers.” Yet, engineers have finally gained the ability to create software that can represent a smile, and write code that captures at least part of what all smiles have in common, he writes.
A note of caution, though, is that computational neuroscience takes place on an imprecise edge of scientific method. An example cited in the book is again of facial expression tracking software, which might actually add more ambiguity than it takes away, because it draws scientists and engineers into collaborations in which science gradually adopts methods that look a little like poetry and storytelling. “The rules are a little fuzzy, and probably will remain so until there is vastly better data about what neurons are actually doing in a living brain.”
Search for smell pixel
In a chapter titled ‘making the best of bits,’ Lanier explores the world of smell, which is fundamentally different from that of images or sounds. The latter can be broken down into primary components that are relatively straightforward for computers – and the brain – to process, he describes. “The visible colours are merely words for different wavelengths of light. Every sound wave is actually composed of numerous sine waves, each of which can be quickly described mathematically.”
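That claim about sound can be demonstrated in a few lines. The sketch below (the sample rate and the two tone frequencies are arbitrary choices for illustration) mixes two sine waves and then recovers them with a discrete Fourier transform – exactly the kind of clean decomposition into ‘a few numbers on a gradient’ that, as Lanier goes on to argue, smells do not admit.

```python
import numpy as np

sample_rate = 1000                      # samples per second (assumed)
t = np.arange(sample_rate) / sample_rate

# A toy 'sound': a 50 Hz tone mixed with a quieter 120 Hz tone.
wave = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The Fourier transform expresses the wave as a sum of sine components.
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1 / sample_rate)

# The two constituent sines reappear as the two dominant peaks.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]].tolist())
print(peaks)  # → [50.0, 120.0]
```

The mathematics hands back precisely the ingredients that went in – the ‘primary components’ the passage refers to, which the olfactory world described next lacks.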
Not so with odours, you would realise, if you were to take a tour inside the nose. “Deep in the nasal passage, shrouded by a mucous membrane, sits a patch of tissue – the olfactory epithelium – studded with neurons that detect chemicals. Each of these neurons has cup-shaped proteins called olfactory receptors. When a particular molecule happens to fall into a matching receptor, a neural signal is triggered that is transmitted to the brain as an odour… The human nose contains about one thousand different types of olfactory neurons, each type able to detect a particular set of chemicals.”
While odours can be mixed together to form millions of scents, it can be sobering to learn that the world’s smells can’t be broken down into ‘just a few numbers on a gradient,’ and that there is no ‘smell pixel.’ Perhaps someday we will be able to wire up a person’s brain in order to create the illusion of smell, but it would take a lot of wires to address all those entries in the mental smell dictionary, Lanier muses. “Maybe at some level smells do fit into a pattern. Maybe there’s a smell pixel after all…”
For a contemplative study, away from all gadgets.
“To improve the health of our employees, we give them a new allowance based on…”
“How many hours they work?”
“Yes, but without computers, phones or other gizmos!”