“We are on an incredible journey of co-evolution with our machines,” says Grady Booch, a scientist, storyteller, and philosopher who leads IBM’s research and development in embodied cognition. “…every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones…people were worried it would destroy all civil conversation. We saw the written word become pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case that these technologies brought to us things that extended the human experience in some profound ways.”
Marrying fiction to scientific reality, Booch says his life changed when he saw, “Stanley Kubrick's ‘2001: A Space Odyssey’… I loved everything about that movie, especially the HAL 9000. HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity.”
“If you look at movies such as ‘The Matrix’, ‘Metropolis’, ‘The Terminator’, shows such as ‘Westworld’, they all speak of this kind of fear. Indeed, in his book ‘Superintelligence’, the philosopher Nick Bostrom observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity…such systems will eventually have such an insatiable thirst for information,” Booch says, adding something significant: “...that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs.” The hope lies in a distinction: “…but super knowing is very different than super doing. Superintelligence would have to have dominion over all our world…Practically speaking, this is not going to happen,” assures Booch. “We are not building AIs that control…if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us…”
Inculcating values
“But,” says Booch, “AI will eventually embody some of our values…We teach them the cognitive system. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law, but at the same time I am infusing it with the sense of mercy and justice that is part of that law; we are therefore teaching them a sense of our values. To that end, I trust artificial intelligence the same, if not more, as a human who is well-trained.”
Can a machine apply values as we do, differently in different situations? Booch says, “It is possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance… build systems that converse with humans in natural language, build systems that recognize objects, identify emotions, emote themselves, play games and even read lips, build a system that sets goals, that carries out plans against those goals and learns along the way, build systems that have a theory of mind.” We are learning, Booch says, to build systems with an ethical and moral foundation, and that, he believes, will settle the question of whether or not to fear AI.