The story so far: On March 29, Elon Musk and a group of AI experts signed an open letter calling for a moratorium on the development of artificial intelligence (AI) systems more powerful than OpenAI’s recently launched large language model (LLM), GPT-4. The letter, which at the time of writing had over 1,300 signatories, called on all AI labs to immediately pause, for at least six months, the training of any system more powerful than GPT-4.
What does the letter say?
Citing one of the Asilomar AI Principles, on how advances in AI could profoundly impact people’s lives, the letter issued by the Future of Life Institute (FLI) noted that “AI labs are locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.” The Asilomar AI Principles are one of the earliest sets of AI governance principles, laid out at the Beneficial AI 2017 conference hosted by FLI.
What is the context?
FLI’s open letter comes amid the rapid development and deployment of AI technology across industries. Since OpenAI released ChatGPT, its generative pre-trained transformer (GPT)-based chatbot, in November 2022, allowing people to interact with it freely, there has been a dramatic rise in the adoption of AI by firms.
The Microsoft-backed chatbot wowed people with its instant, and often apt, replies. It could fix software code and explain almost anything found on the Internet. Roughly three months after its launch, the company allowed developers to integrate ChatGPT’s API into their applications for a fee; it has already been built into Snapchat, Unreal Engine and Shopify. (A premium tier of the chatbot, ChatGPT Plus, is also available as a paid subscription.) Such rapid adoption by businesses is heating up competition. Google has taken up the gauntlet: the Alphabet-owned company launched Bard, a chatbot built on its LaMDA language model. In China, Internet giant Baidu has launched Ernie, an AI-powered chatbot that can summarise financial statements. Separately, the Massachusetts Institute of Technology’s Media Lab has developed ELSA, an AI bot that can act as a psychotherapy counsellor. It could potentially be deployed in cognitive behavioural therapy sessions.
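For readers curious what such an API integration looks like in practice, here is a minimal sketch using OpenAI’s Python library as it existed at the time of writing; the API key and prompt are placeholders, and the example is illustrative rather than drawn from any of the integrations named above.

```python
# A minimal sketch of calling the ChatGPT API via OpenAI's Python
# library (the v0.x interface current at the time of writing).
# The API key and prompt below are placeholders, not from the article.
import openai

openai.api_key = "YOUR_API_KEY"  # obtained from OpenAI's developer platform

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model that powers ChatGPT
    messages=[
        {"role": "user", "content": "Summarise this refund policy for a customer."}
    ],
)

# The reply text sits inside the first choice of the response object.
print(response["choices"][0]["message"]["content"])
```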
Why does FLI’s letter focus on GPT-4?
On March 14, OpenAI launched its most advanced LLM yet: GPT-4. The model is reported to have around a trillion parameters, compared with the 175 billion of the model behind ChatGPT, though OpenAI has not officially disclosed the figure. A larger parameter count allows GPT-4 to capture far more complex patterns and nuances in natural language than its predecessor. Unlike ChatGPT, GPT-4 can handle both text and image-based queries, making it more versatile than other AI language models. Such multi-modal advances bring GPT-4 a step closer to artificial general intelligence (AGI), meaning machine intelligence that could be as good as human intelligence. Computer scientists and ethicists are grappling with this eventuality. Even Sam Altman, OpenAI’s co-founder, wrote about the potential downsides of AGI, stating that the technology could come with “serious risk of misuse, drastic accidents, and societal disruption.” He called on developers of AGI to figure out a way to get it right. “A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place,” he wrote.
What is the view of AI experts?
AI experts who have signed FLI’s letter approach the rise of LLM-based bots from different vantage points. Some see these developments as accelerating humanity towards a doomsday scenario in which machines triumph over humans. Others see these systems as a mediocre form of intelligence that is potentially unreliable.
Will it have the desired impact?
It is hard to say whether the letter will have its desired impact, as OpenAI, the Microsoft-backed company, is already training GPT-4’s successor and has not yet responded to the open letter. According to some developers and tech entrepreneurs, GPT-5’s output could be indistinguishable from a human’s, and the language model could achieve AGI by the end of this year. This prediction comes at a time when there are no regulations in place to enforce a pause: governments currently lack the policy tools to halt work on AI development.