Can the new Google chatbot be sentient?

Is there a possibility of future Artificial Intelligence technologies becoming ‘conscious’?

June 14, 2022 11:12 pm | Updated June 15, 2022 05:35 pm IST

Inspired by the mathematician Alan Turing’s answer to the question ‘Can a machine think?’, AI tech today aims to satisfy the Turing test to qualify as ‘intelligent’ | Photo Credit: Getty Images

The story so far: Blake Lemoine, a U.S. military veteran, identifies himself as a priest, an ex-convict and an Artificial Intelligence (AI) researcher. He was engaged by Google to test for bias and hate speech in the Language Model for Dialogue Applications (LaMDA), Google’s next-generation conversational agent. He was placed on paid leave after claiming that the updated software is now sentient. He claims that the neural network, with its deep learning capacity, has the consciousness of a seven- or eight-year-old child, and argues that consent must be obtained from the software before experiments are run on it. Google and many tech experts have dismissed the claim. However, the episode, which came on the heels of Google firing AI ethics researcher Timnit Gebru over her warnings about unethical AI, has caused ripples on social media.

Is AI technology here?

AI technology appears futuristic, but it is already around us: Facebook’s facial recognition software, which identifies faces in the photos we post, the voice recognition software that interprets the commands we bark at Alexa, and the Google Translate app are all examples of AI tech in everyday use.

Inspired by the mathematician Alan Turing’s answer to the question ‘Can a machine think?’, AI tech today aims to satisfy the Turing test to qualify as ‘intelligent’. During the Second World War, Turing designed the Bombe, an electromechanical machine used at Bletchley Park to break the ciphers of the German Enigma machine. To test whether a machine ‘thinks’, Turing devised a practical solution: place a computer in one closed room and a human in another. If an interrogator interacting with both cannot discriminate between them, then, Turing argued, the computer should be construed as ‘intelligent’. We use the reverse Turing test, CAPTCHA, to limit access to humans and keep the bots at bay.

Which were the first chatbots to be devised?

As electronics improved and early computers matured, Joseph Weizenbaum of the MIT Artificial Intelligence Laboratory built ELIZA, a computer programme with which users could chat. ALICE (Artificial Linguistic Internet Computer Entity), another early chatbot developed by Richard Wallace, was capable of simulating human interaction. In the 1930s, the linguist George Kingsley Zipf had analysed typical human speech and found that the bulk of utterances drew on a core stock of about 2,000 words. Using this insight, Wallace theorised that commonplace chitchat in everyday interaction was limited in scope: he found that about 40,000 canned responses were enough to cover 95% of what people chatted about. With assistance from about 500 volunteers, Wallace continuously improved ALICE’s repertoire of responses by analysing user chats, making the simulated conversations look real. The software won the Loebner Prize as “the most human computer” at the Turing Test contests in 2000, 2001 and 2004.
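
Zipf’s observation is easy to reproduce: count word frequencies in any sizeable sample of everyday language and a small core vocabulary dominates. A minimal Python sketch of the idea follows; the tiny inline corpus and the 95% threshold are illustrative stand-ins, not Wallace’s actual chat logs.

```python
from collections import Counter

# Stand-in corpus; Wallace analysed real user chat logs (illustrative only).
corpus = """hello how are you today I am fine thank you
how is the weather today the weather is fine thank you
what do you do I work with computers how about you""".lower().split()

counts = Counter(corpus)
total = sum(counts.values())

# Count how many distinct words cover 95% of all word occurrences.
covered, core = 0, 0
for word, freq in counts.most_common():
    covered += freq
    core += 1
    if covered / total >= 0.95:
        break

print(f"{core} of {len(counts)} distinct words cover 95% of usage")
```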

What is a neural network?

A neural network is an AI technique that attempts to mimic the web of neurons in the brain in order to learn and behave like humans. Early efforts in building neural networks targeted image recognition. An artificial neural network (ANN) needs to be trained, much as a dog is, before it can respond to commands. For example, during image recognition training, thousands of cat images are broken down into pixels and fed into the ANN. Using complex algorithms, the ANN’s mathematical system extracts particular characteristics from each cat image, such as a line that curves from right to left at a certain angle, edges, or several lines that merge into a larger shape. From these parameters, the software learns to recognise the key patterns that delineate what a ‘cat’ generally looks like.
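
A toy version of this training loop fits in a few lines. The sketch below, in plain NumPy with made-up four-pixel “images”, trains a single artificial neuron to separate two pixel patterns; real image recognisers stack millions of such units, but the principle of adjusting weights until tell-tale features are captured is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 4-pixel vectors. Pattern A has bright left pixels,
# pattern B bright right pixels (stand-ins for 'cat' vs 'not cat').
X = np.array([[0.9, 0.8, 0.1, 0.2],
              [0.8, 0.9, 0.2, 0.1],
              [0.1, 0.2, 0.9, 0.8],
              [0.2, 0.1, 0.8, 0.9]])
y = np.array([1, 1, 0, 0])           # 1 = 'cat', 0 = 'not cat'

w = rng.normal(size=4)               # the neuron's adjustable weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient-descent training: nudge the weights to reduce prediction error.
for _ in range(2000):
    p = sigmoid(X @ w + b)           # current predictions
    grad_w = X.T @ (p - y) / len(y)  # how each weight should move
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

print(np.round(sigmoid(X @ w + b), 2))   # ≈ [1, 1, 0, 0]
```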

Early machine learning software needed human assistance: the training images had to be labelled as ‘cats’, ‘dogs’ and so on by humans before being fed into the system. In contrast, emerging deep learning software needs only access to big data and powerful processors. It learns by itself, unsupervised by humans, by sorting and sifting through the massive data and finding the hidden patterns.
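
The difference is easy to see in code. Below is a minimal unsupervised sketch using k-means clustering (via scikit-learn, assumed installed): the algorithm groups unlabelled points into two clusters with no human supplying ‘cat’/‘dog’ tags. K-means is a classical stand-in for unsupervised pattern-finding, not how LaMDA itself is trained.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Unlabelled data: two hidden groups, but no labels supplied.
data = np.vstack([rng.normal(0, 0.5, (50, 2)),    # hidden group 1
                  rng.normal(3, 0.5, (50, 2))])   # hidden group 2

# The algorithm discovers the hidden structure on its own.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)   # ≈ the two group centres, found without labels
```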

What is LaMDA?

LaMDA is short for ‘Language Model for Dialogue Applications’, Google’s modern conversational agent enabled with a neural network capable of deep learning. Instead of images of cats and dogs, the algorithm is trained on 1.56 trillion words of public dialogue data and web text on diverse topics. The neural network, built on Google’s open-source Transformer architecture, has more than 137 billion parameters tuned on this massive body of language data. The chatbot is not yet public, though select users are permitted to interact with it. Google claims that LaMDA can make sense of nuanced conversation and engage in fluid, natural dialogue. LaMDA was unveiled at Google’s annual developer conference in May 2021, and LaMDA 2 in May 2022.
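
LaMDA itself is not publicly available, but the same recipe, a Transformer-based language model generating replies token by token, can be tried with open models. Here is a sketch using the Hugging Face transformers library and the much smaller DialoGPT model as a stand-in; the model name and generation settings are illustrative choices, not anything LaMDA-specific.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small, openly available dialogue model (a stand-in for LaMDA).
name = "microsoft/DialoGPT-small"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Encode the user's turn, then let the Transformer generate a reply.
prompt = "Does it feel like rain today?"
inputs = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(inputs, max_length=100,
                           pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, i.e. the bot's reply.
print(tokenizer.decode(reply_ids[0][inputs.shape[-1]:],
                       skip_special_tokens=True))
```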

How is LaMDA different from other chatbots?

Chatbots like ‘Ask Disha’ of the Indian Railway Catering and Tourism Corporation Limited (IRCTC) are routinely used for customer engagement. Their repertoire of topics and chat responses is narrow; the dialogue is predefined and often goal-directed. For instance, try chatting about the weather with Ask Disha or about the Ukraine crisis with the Amazon chat app. LaMDA is Google’s answer to the quest for a non-goal-directed chatbot that can dialogue on various subjects. Such a chatbot would respond the way a family might when chatting over the dinner table, topics meandering from the taste of the food to rising prices to bemoaning the war in Ukraine. Google hopes such advanced conversational agents could revolutionise customer interaction and power AI-enabled internet search.
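
The gap between the two kinds of bots shows up clearly in code. A goal-directed bot can be as simple as a lookup table of intents and canned replies; the sketch below is deliberately crude and entirely hypothetical, not IRCTC’s actual implementation.

```python
# A hypothetical goal-directed bot: a fixed menu of intents and canned replies.
RESPONSES = {
    "book ticket": "Sure, which train and date?",
    "pnr status": "Please share your 10-digit PNR number.",
    "cancel ticket": "I can help with that. What is the booking ID?",
}

def narrow_bot(user_text: str) -> str:
    for intent, reply in RESPONSES.items():
        if intent in user_text.lower():
            return reply
    # Anything off-script -- weather, war, dinner-table talk -- falls through.
    return "Sorry, I can only help with ticket bookings."

print(narrow_bot("What do you think about the weather?"))
# -> "Sorry, I can only help with ticket bookings."
```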

How intelligent are AIs?

The Turing test is a powerful motivator for developing practical AI tools. However, scholars such as the philosopher John Searle use the ‘Chinese Room Argument’ to show that passing the Turing test is not enough to qualify as intelligent.

Once, I used Google Translate to read WhatsApp messages in French from a conference organiser in France and, in turn, replied to her in French. For a while, she was fooled into thinking that I could speak French. I would have passed the ‘Turing test’, but no sane person would claim that I know French. This is Searle’s Chinese Room in action: I manipulated symbols without understanding them. The imitation game goes only so far.

Further, scholars point out that AI tech rests on a false analogy of learning: a baby learns a language from close interaction with caregivers, not by ploughing through a massive amount of language data. Moreover, whether intelligence is the same as sentience is a moot question. What is clear is that these seemingly human-like conversational agents rely on pattern recognition, not empathy, wit, candour or intent.

Is the technology dangerous?

The challenge of AI metamorphosing into a sentient being is far in the future; unethical AI perpetuating historical bias and echoing hate speech are the real dangers to watch for. Imagine an AI software trained with past data to select the most suitable candidates for a supervisory role. Women and marginalised communities would hardly have held such positions in the past, not because they were unqualified, but because they were discriminated against. While we imagine the machine to have no bias, AI software learning from historical data could inadvertently perpetuate that discrimination.
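
A stylised example makes the mechanism concrete. If past hiring decisions skewed against one group, a model trained to imitate those decisions faithfully reproduces the skew. The records below are entirely synthetic and the ‘model’ deliberately naive; the point is the pattern, not the implementation.

```python
# Entirely synthetic 'historical hiring' records: (qualified?, group, outcome).
# Candidates are equally qualified, but group B was rarely hired in the past.
history = [("qualified", "A", "hired")] * 40 + \
          [("qualified", "B", "hired")] * 5 + \
          [("qualified", "B", "rejected")] * 35

# A naive model that simply imitates past decisions, group by group.
rates = {}
for qual, group, outcome in history:
    hired, total = rates.get(group, (0, 0))
    rates[group] = (hired + (outcome == "hired"), total + 1)

for group, (hired, total) in sorted(rates.items()):
    print(f"group {group}: model would hire {hired/total:.0%} of qualified applicants")
# group A: 100%, group B: 12% -- the historical bias, faithfully 'learned'.
```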

T.V. Venkateswaran is Scientist F at Vigyan Prasar, Dept of Science and Technology

THE GIST
LaMDA, Google’s modern conversational agent, is enabled with a neural network capable of deep learning. It is Google’s answer to the quest for a non-goal-directed chatbot that can dialogue on various subjects. Such advanced software could revolutionise customer interaction and power AI-enabled internet search.
With access to big data and powerful processors, deep learning software can learn by itself, unsupervised by humans, by sorting and sifting through massive data and finding hidden patterns. Google claims that LaMDA can make sense of nuanced conversation and engage in natural dialogue. However, these seemingly human-like agents rely on pattern recognition, not empathy, wit, candour or intent.
The challenge of AI metamorphosing into a sentient being is far in the future; unethical AI perpetuating historical bias and echoing hate speech are the real dangers to watch for.