For a couple of months now, there have been inklings that OpenAI’s star AI model, GPT-4, isn’t what it used to be. Social media platforms like Twitter and Reddit are flooded with user complaints wondering why the model seems to have become “dumber” over time. Mumblings about OpenAI purposely degrading GPT-4 have grown so consistently that the company’s VP of Product, Peter Welinder, came out to deny the speculation. “No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one,” he tweeted on July 13. Welinder theorised that the model appears to have more issues simply because it is being used more heavily now.
But the complaints weren’t as simple as that: users said the model has become strangely evasive. Where it had earlier responded to a prompt right away, it now needed multiple prompts to complete the same task, which naturally consumes more tokens. “Instead of just implementing a simple function, it sometimes suggests the user to do that, so I have to waste an extra command asking it to just implement the function?” a developer wrote.
Another wrote that it simply refused to answer questions it could handle easily before. “‘I’m sorry but I can’t answer that’ is essentially what all my prompts have boiled down to,” he noted. The general consensus was that the model had lost its long-term memory, or context, dragging out tasks. “I use GPT-4 to augment long-form content analysis and creation, and in the past, it seemed to understand my requests well. Now, it seems to be losing tracking of information, giving me wrong information (from the set of information I gave it), and it misunderstands what I’m asking far more often,” a user posted on OpenAI’s community forum. Peter Yang, a Product Lead at Roblox, tweeted that OpenAI may have done this to cut costs.
In April, a report by ‘The Information’ revealed that the Sam Altman-led company was spending a steep $700,000 per day on servers to keep its viral chatbot, ChatGPT, running on the GPT-3.5 base model. Although no figures have been disclosed, operating GPT-4, an even more powerful AI model, would naturally cost much more.
Current user reactions are a far cry from the initial euphoria around GPT-4, which had been hailed as the “most powerful AI model.”