Explained | What are hallucinating chatbots?

Hallucinating chatbots are not a new phenomenon; developers have long warned of AI models becoming convinced of completely untrue facts and responding to queries with made-up answers

February 17, 2023 03:15 pm | Updated 05:22 pm IST

Hallucination in AI chatbots is when a machine provides convincing but completely made-up answers. | Photo Credit: Getty Images

The story so far: On February 11, Google’s head of Search and senior vice-president Prabhakar Raghavan, warning of the pitfalls of artificial intelligence in chatbots, said the technology can sometimes lead to “hallucination”. A few days later, reports emerged that beta testers of Microsoft’s Bing chatbot had received disturbing replies and accusations from the AI.

These reports emerged even as Google and Microsoft were opening up their AI-enabled chatbots for test users. In the meantime, platforms like Quora and Alibaba are also working on their own AI chatbots for general use.

What are hallucinating chatbots? 

Hallucination in AI chatbots occurs when a machine provides convincing but completely made-up answers, Mr. Raghavan explained. It is not a new phenomenon: developers have warned for years of AI models becoming convinced of completely untrue facts and responding to queries with made-up answers.

In 2022, Meta released its conversational AI chatbot, BlenderBot 3. At the time, the company said BlenderBot 3 could search the internet to chat with users about virtually any topic and would improve its skills and safety through feedback from users.


However, even then, Meta’s engineers warned that the chatbot should not be relied upon for factual information and acknowledged that it could “hallucinate”.

An example of this was seen in 2016 when, after being live on Twitter for just 24 hours, Microsoft’s chatbot Tay started parroting racist and misogynistic slurs back at users. The chatbot, designed as an experiment in “conversational understanding”, could be manipulated by users simply by asking it to “repeat after me”.

Why do AI chatbots start hallucinating? 

Hallucinations are a defining feature of sophisticated generative natural language processing (NLP) models. They can occur because these models are built to rephrase, summarise and present intricate tracts of text without constraints. That means facts are not treated as sacred; they are handled as contextual information as the model sifts through what it has learned. An AI chatbot may therefore take widely repeated information, rather than factually accurate information, as its input. The problem becomes especially acute when complex grammar or arcane source material is involved.

AI models can thus end up presenting, and even “believing in”, ideas or information that are incorrect but have been fed to them through a large volume of user inputs. And since these models cannot distinguish between contextual information and facts, they respond to queries with incorrect answers. For example, when asked “What did Albert Einstein say about black holes?”, an AI model may return a quote made famous on the internet rather than anything grounded in Einstein’s actual research.
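To illustrate the underlying mechanism in very simplified form, the toy sketch below (not from the article; all names and probabilities are invented for illustration) shows how a language model picks the statistically most likely continuation of a prompt, with no step that checks whether that continuation is factually true.

```python
# Toy illustration: a language model scores possible continuations of a prompt
# and returns the most probable one. Nothing here verifies whether the chosen
# continuation is factually correct, which is how a fluent but false
# ("hallucinated") answer can be produced. All values below are invented.

def most_likely_continuation(prompt: str, candidates: dict[str, float]) -> str:
    """Return the candidate continuation with the highest model probability."""
    return max(candidates, key=candidates.get)

# Hypothetical probabilities a model might assign, shaped by how often each
# phrase appears online rather than by whether Einstein actually said it.
candidates = {
    "a widely shared quote misattributed to Einstein": 0.62,
    "a paraphrase of Einstein's 1939 paper on gravitational collapse": 0.23,
    "an admission that the model is not sure": 0.15,
}

answer = most_likely_continuation(
    "What did Albert Einstein say about black holes?", candidates
)
print(answer)  # prints the most *popular* answer, not necessarily the most accurate one
```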

What is the way forward? 

One of the biggest challenges is identifying hallucinated text without having to build entirely new NLP models that incorporate ways of authenticating facts.

Research is under way to tabulate and collate hallucinated text produced by AI models, with the aim of formulating methods to identify hallucinated output and building filters into AI models that can flag and remove such text.
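One simple way to picture what such a filter might do is sketched below. This is not the specific method the article refers to; it is a minimal, assumed example in which an answer is flagged as potentially hallucinated when it shares too little vocabulary with a trusted reference passage. Real systems use far more sophisticated checks, and the threshold here is invented.

```python
# Minimal sketch of one possible filtering idea: flag a model's answer as
# potentially hallucinated when it overlaps too little with a trusted
# reference text. Illustrative only; not a production technique.

def support_score(answer: str, reference: str) -> float:
    """Fraction of words in the answer that also appear in the reference text."""
    answer_words = set(answer.lower().split())
    reference_words = set(reference.lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & reference_words) / len(answer_words)

def flag_if_unsupported(answer: str, reference: str, threshold: float = 0.5) -> bool:
    """Return True when the answer looks unsupported by the reference."""
    return support_score(answer, reference) < threshold

reference = (
    "Einstein's 1939 paper argued that Schwarzschild singularities "
    "do not exist in physical reality."
)
answer = "Einstein said black holes are where God divided by zero."
print(flag_if_unsupported(answer, reference))  # True: little overlap with the source
```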
