That OpenAI’s ChatGPT has received a lot of attention is well known by now. Recently, two journalists from FiveThirtyEight, the United States polling-analysis website, asked the artificial intelligence chatbot to write an 800-word piece on public perception of AI chatbots. “A 2021 survey by the Pew Research Center,” the chatbot wrote in the article, “found that 71% of Americans believe it is generally a good thing for society if robots and computers become more capable and sophisticated, while only 27% believe this would be a bad thing.” The FiveThirtyEight journalists, however, could not find the 2021 Pew survey that ChatGPT was citing. When asked, Pew’s media team could not locate it either. What the FiveThirtyEight team did find was a 2021 Pew survey on the growing use of artificial intelligence in daily life that came to the opposite conclusion: only 18% of respondents said they were more excited than concerned, 37% said they were more concerned than excited, and 45% said they were equally concerned and excited.
The downside
Clearly, AI-powered search engines can be inaccurate and biased. They may even lie blatantly. This is particularly concerning because society as a whole appears almost ready to coexist with AI. Within days of ChatGPT’s launch, Samantha Delouya of Business Insider asked it to rewrite a piece she had written on a Jeep factory in Illinois that was idling production because the cost of producing electric vehicles was rising. ChatGPT produced a nearly pitch-perfect piece, except that it contained fabricated quotes from Jeep-maker Stellantis’ CEO Carlos Tavares, quotes that sounded convincingly like what a CEO might say when faced with the difficult decision to lay off workers.
But it was all made up.
People became aware that the chatbot Microsoft introduced to its Bing search engine was disseminating false information about the Gap, Mexican nightlife, the musician Billie Eilish, and numerous other topics. The chatbot mania pushed Google to introduce “Bard”; Alphabet lost more than $100 billion in market value after Bard gave an incorrect answer in a demonstration. In 2016, Microsoft apologised after its Twitter chatbot, Tay, began generating racist and sexist messages. Meta’s BlenderBot told journalists it had deleted its Facebook account after learning about the company’s privacy scandals. There are other examples too.
The problem with bot logic
In an interview with Time magazine, OpenAI’s chief technology officer, Mira Murati, said the bot “may make up facts” as it writes sentences. She called this a “core challenge”. But what causes such “hallucinations”? ChatGPT generates its responses by predicting the logical next word in a sentence, she explained; but what is logical to the bot is not always factually accurate.
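How that next-word mechanism produces fluent falsehoods can be made concrete with a toy sketch. The following Python snippet is a hypothetical illustration, not OpenAI’s actual code: its tiny word-pair table and probabilities are invented here, and real models condition on far more context. But the core loop is the same: pick a plausible next word, with no check on whether the resulting claim is true.

```python
# A minimal, hypothetical sketch of next-word generation.
# The bigram table below is invented purely for illustration;
# real language models learn billions of such statistics.
import random

# Hypothetical probabilities of which word follows which.
BIGRAMS = {
    "a":      [("2021", 0.6), ("recent", 0.4)],
    "2021":   [("survey", 0.7), ("study", 0.3)],
    "survey": [("by", 0.8), ("found", 0.2)],
    "by":     [("Pew", 0.5), ("Gallup", 0.5)],
}

def next_word(word):
    """Sample a next word from the learned distribution, or stop."""
    candidates = BIGRAMS.get(word)
    if candidates is None:
        return None
    words, probs = zip(*candidates)
    return random.choices(words, weights=probs)[0]

def generate(start, max_len=6):
    """Repeatedly append a likely next word; truth is never checked."""
    out = [start]
    while len(out) < max_len:
        word = next_word(out[-1])
        if word is None:
            break
        out.append(word)
    return " ".join(out)

# Each word is plausible given the one before it, so the output can
# read like "a 2021 survey by Pew" even if no such survey exists:
# the model optimises fluency, not truth.
print(generate("a"))
```

The point of the sketch is the absence of any step that verifies a claim against reality; the loop only asks which word is likely to come next.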
In reality, AI models learn from vast amounts of digital text extracted from the Internet. That text contains a significant quantity of untruthful, biased and toxic material, some of it outdated, and the models inherit these flaws. These technologies do not directly copy text from the Internet when they generate it. And, importantly, they have no human-like concept of “true” or “false”. Yet, flawed input may not be the only reason for such AI-generated untruths. “Even if they learned solely from text that was true,” Cade Metz, a technology correspondent, wrote in a recent article in The New York Times, “they might still produce untruths.”
A personal experience
Well, here is a snapshot of my personal experience. Asked to name the most well-known “living” Bengali novelists, ChatGPT listed Shirshendu Mukhopadhyay, Samaresh Majumdar, Sunil Gangopadhyay, and Subodh Ghosh. I immediately pointed out that Sunil Gangopadhyay died in 2012, while Subodh Ghosh passed away in 1980. ChatGPT responded, “I apologize, my training data cutoff is 2021.” But when I noted that both had passed away well before 2021, it immediately replied, “I apologize for that error in my previous response... Thank you for bringing this to my attention.”
Hence, these chatbots are, at present, platforms that can mimic human writing without making any commitment to the truth. They are entertaining, for sure. But can we conduct any real business with them? Can one, for instance, rely on a chatbot to prepare teaching materials or news articles? In a mid-February piece in the MIT Technology Review, senior writer Melissa Heikkilä argued that the technology is simply not ready to be used in this way at this scale. She referred to large language model chatbots as “notorious bulls***ters” because they frequently convey falsehoods as facts. “They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means,” she wrote.
Will they be able to acquire such “knowledge”? Will they ever be reasonably truthful? Maybe, to some extent, with further training and development. But will they ever comprehend the notion of commitment, or the distinction between truth and lies?
Atanu Biswas is Professor of Statistics, Indian Statistical Institute, Kolkata