The real and immediate impact of a distant AI doomerism

Every notable AI intellectual, from former Google Brain head Andrew Ng to Meta AI chief Yann LeCun, is weighing in.

November 23, 2023 11:34 am | Updated 12:24 pm IST

FILE PHOTO: On October 27, OpenAI announced a “preparedness” team to ward off potential “catastrophic risks” from AI systems. | Photo Credit: Reuters

A little over a year ago, an AI apocalypse seemed an improbable idea. But since the launch of ChatGPT, AI doomerism has come to the forefront, and every notable AI intellectual, from former Google Brain head Andrew Ng to Meta AI chief Yann LeCun, is weighing in.

On October 27, OpenAI announced a “preparedness” team to ward off potential “catastrophic risks” from AI systems. Another school of thought in AI research has warned against fanning the flames of doomerism, fearing its real-world implications for regulation.

What started the new wave of AI doomerism?

In March, Tesla CEO Elon Musk signed an open letter calling for a six-month pause on advanced AI development, citing serious risks to humanity. The letter was signed by some of the biggest names in tech, including Apple co-founder Steve Wozniak and AI pioneer Yoshua Bengio.

The alarm spread quickly. Soon after, Time magazine published an article by AI researcher Eliezer Yudkowsky arguing that governments should be willing to destroy rogue data centers by airstrike to halt AI development. Even as some scoffed at the extremism, the companies leading the AI revolution were hardly laughing.

In May, executives from leading AI companies, including OpenAI’s Sam Altman as well as leaders from Google’s AI arm DeepMind and Microsoft, signed a statement declaring that the risk of extinction from AI should be treated as seriously as pandemics or nuclear war. Altman has admitted several times in the past that he was a “little bit scared” of AI.

Musk himself founded his own AI startup, xAI, the same month the open letter came out.

Earlier this month, xAI launched its first AI model, called Grok. While Musk has repeatedly argued that guardrails on AI tools are more important than ever, Grok, which is trained on data from his microblogging platform X, is as politically incorrect a chatbot as they come.

Are there advantages to this fear?

Some AI startups, like Anthropic, were a product of this dread. Its chatbot Claude was built to be much nicer than its rivals: the underlying model follows a list of principles written by the company, and a second model is tasked with policing the first, checking that those principles are being adhered to.
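Anthropic has not published this mechanism as code, but the idea of a second model policing the first against a written list of principles can be sketched roughly as below. The call_model() stand-in, the two-line constitution and the "no issues" stopping test are all hypothetical illustrations of the general loop, not Anthropic's actual API or training method.

```python
# A toy sketch of the critique-and-revise loop described above.
# call_model() is a hypothetical stand-in for a real language-model API.

CONSTITUTION = [
    "Avoid harmful, deceptive, or abusive content.",
    "Be helpful and honest.",
]

def call_model(prompt: str) -> str:
    # Canned output so the sketch runs end to end; a real system would
    # send the prompt to a hosted model here.
    if prompt.startswith("Critique"):
        return "no issues"
    return f"[model reply to: {prompt[:40]}...]"

def constitutional_reply(user_prompt: str, max_rounds: int = 2) -> str:
    draft = call_model(user_prompt)
    for _ in range(max_rounds):
        # Second pass: a model judges the draft against the principles.
        critique = call_model(
            "Critique the reply below against these principles:\n"
            + "\n".join(f"- {p}" for p in CONSTITUTION)
            + f"\n\nReply: {draft}"
        )
        if "no issues" in critique.lower():
            break  # the policing model found nothing to object to
        # Otherwise, revise the draft in light of the critique and recheck.
        draft = call_model(f"Revise to address this critique: {critique}\nReply: {draft}")
    return draft

print(constitutional_reply("Explain AI doomerism in one sentence."))
```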

Anthropic’s co-founder and CEO Dario Amodei and his sister and co-founder Daniela Amodei were both part of OpenAI’s safety and policy leadership before they started Anthropic. Their premise was that good AI could fight bad AI, and that they would be the ones on the right side of history.

Fear of the unknown has been common in our technological past. “In discussions about the risks of artificial intelligence, it’s not uncommon to find concerns arising from a limited understanding of the technology and certain individuals or groups using these same concerns to advance their own agendas. Throughout history, apocalyptic narratives have frequently accompanied major technological shifts, akin to those seen during the Industrial Revolution,” Giada Pistilli, principal ethicist at Hugging Face, said.

Not all worrying is bad, but Pistilli warns about a tendency to go overboard. “Imagining potential dystopian outcomes can be a healthy exercise in defining what kind of future we want to avoid, while envisioning utopias helps us shape the future we aspire to. However, it’s crucial to remain pragmatic and not let fear-based narratives dominate our approach to the evolution and capabilities of AI.”

What is the impact on AI regulation?

“A lot of doomerism is a distraction from regulation now. It’s a well-crafted tactic to create an environment of irrational fears that divert people from very rational fears around AI. It’s like a red herring with the intent to say that ‘We are the only ones that understand this technology. Leave it to us to make the rules’,” Anupam Guha, assistant professor at the Ashank Desai Centre for Policy Studies, IIT-Bombay, said.

The rational fears around AI are the same issues tech has always had, only exacerbated many times over: disinformation, realistic deepfakes, and the amplification of existing structures, such as the racial and gender bias AI has long carried, or the Kenyan workers OpenAI employed at low wages to scrub toxicity from ChatGPT.

“These companies would have us believe that AI is too unique and revolutionary a technology to be governed, and that normal political processes can’t be used in this case to make regulation. But that’s not true: LLMs have been around since 2011; it’s just that they have been scaled. One should be scared of AI, but not for the reasons that have been shown,” Guha noted.

Pistilli expressed a similar worry that the growing focus on these fear-based narratives could overshadow the real and immediate risks of AI already present today. “Drawing on a philosophical perspective, reminiscent of Thomas Hobbes, this situation can be likened to a Hobbesian approach: when fear is instilled in the public regarding something powerful looming over them, it often leads to those people becoming easier to govern and control. In other words, this means that both individuals and institutions might become more inclined to heed those who claim to offer salvation, especially when faced with existential threats, potentially skewing the focus and direction of AI regulation,” she explained.

It becomes much easier for dominant AI companies to present themselves as the sole guardians capable of harnessing and controlling such potent technology, she continued. They can manipulate the discourse to a great extent, and to their advantage.

Is there a risk of AI monopoly?

The ignorance of lawmakers amid the prevalence of lobbying makes them even more vulnerable, Guha pointed out. (In a recent interview, Bruce Reed, the White House’s deputy chief of staff, said that President Biden was especially concerned after watching the latest Mission: Impossible film, a perfect movie for AI panic.)

“To me this is very much the classic case of labour versus capital which has been made to look like it’s mystified. If we see any tech forums that are making laws, companies like Google and Microsoft are inevitably on the panel. How can the companies for whom the laws are being made be a part of the lawmaking body? But this is the reality that we are dealing with,” he stated.

The solution, Guha says, is a more democratic approach to regulation, where people as a whole have some say in how the tech is used.
