OpenAI executive denies claims that GPT-4 has been dumbed down

While users on social media platforms have complained that OpenAI’s flagship AI model, GPT-4, isn’t what it used to be, the company has denied the speculation

Updated - July 14, 2023 05:12 pm IST

Published - July 14, 2023 04:50 pm IST

Current user reactions are a far cry from the initial euphoria around GPT-4. | Photo Credit: AP

For a couple of months now, there have been inklings that OpenAI’s flagship AI model, GPT-4, isn’t what it used to be. Social media platforms like Twitter and Reddit are flooded with user complaints wondering why the model seems to have become “dumber” over time. Mumblings that OpenAI has purposely degraded GPT-4 have grown steadily, forcing the company’s VP of Product, Peter Welinder, to come out and deny the speculation. “No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one,” he tweeted on July 13. Welinder theorised that the model appears to have more issues because it is being used more heavily now.

But the complaints weren’t that simple: users said the model had become strangely evasive. Where it had earlier responded to a prompt right away, it now needed more prompts to complete the same task, which naturally consumes more tokens. “Instead of just implementing a simple function, it sometimes suggests the user to do that, so I have to waste an extra command asking it to just implement the function?” a developer wrote.

Another wrote that it simply refused to answer questions it could handle easily before. “‘I’m sorry but I can’t answer that’ is essentially what all my prompts have boiled down to,” he noted. The general consensus was that the model had lost its long-term memory, or context, dragging out tasks. “I use GPT-4 to augment long-form content analysis and creation, and in the past, it seemed to understand my requests well. Now, it seems to be losing track of information, giving me wrong information (from the set of information I gave it), and it misunderstands what I’m asking far more often,” a user posted on OpenAI’s community forum. Peter Yang, a Product Lead at Roblox, tweeted suggesting that OpenAI may have made the change to cut costs.

In April, a report by ‘The Information’ revealed that the Sam Altman-led company was spending a steep $700,000 per day on servers to keep its viral chatbot, ChatGPT, running on the GPT-3.5 base model. Although no figures have been released, operating GPT-4, an even more powerful AI model, would naturally cost much more.

