How the cracks in OpenAI’s foundation reignited mistrust in Sam Altman

A string of researchers working on AI policy and governance at the tech company has quit in quick succession. For a company that started as a non-profit, OpenAI’s lack of transparency has emerged as a more serious issue than its lackadaisical approach to the future of AI safety

Published - May 27, 2024 10:30 am IST

OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21. | Photo Credit: AFP

In November last year, during the two-day snafu in which OpenAI chief Sam Altman was fired and then reinstated, perceptions of him were dramatically different. Mr. Altman, who had led the company in spearheading an artificial intelligence transformation with the release of ChatGPT, could not have seemed more adored. OpenAI employees collectively flooded X with posts saying, “I love OpenAI,” in what was seen as an uprising against the board’s decision. In the week gone by, however, much of that goodwill towards Mr. Altman appears to have evaporated. And the board’s statement announcing his firing, which called Mr. Altman “not consistently candid”, has come back like a boomerang.

Concerns over AI safety

OpenAI’s rough week started with the departure of Ilya Sutskever, the company’s co-founder and former chief scientist. Mr. Sutskever, a key member of the team that built ChatGPT, had surprisingly backed the three board members who voted to fire Mr. Altman. The speculation was that Mr. Altman’s views on AI safety differed sharply from the board’s, a worrying prospect given the momentum of AI development. Since Mr. Altman’s reinstatement, Mr. Sutskever had practically vanished from public view.

AI safety was evidently important to Mr. Sutskever, who formed the ‘superalignment team’ at the company in July last year. Mr. Sutskever co-led the team with Jan Leike, with the goal of ensuring that superintelligent AI stayed on track, its reins firmly in human hands, by 2027. Aside from alignment, the team would also be “improving the safety of current models like ChatGPT, as well as understanding and mitigating other risks from AI such as misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, and others,” the announcement read.

And for an ambition this lofty, the company said it would commit “20% of the compute we’ve secured to date over the next four years” to the initiative.

Last week, Mr. Sutskever waved goodbye to the company he co-founded. Two days later, Mr. Leike, a longtime researcher at OpenAI, announced his resignation as well, saying he had reached a breaking point after continuous disagreements with “OpenAI leadership about the company’s core priorities.” Signalling that the promised share of compute was never granted to the team, Mr. Leike expressed concern that in the recent past “safety culture and processes have taken a backseat to shiny products.” Shortly after, the team, which still had more than 25 people, was disbanded.

A Fortune report noted that there was no specification of when and how the 20% of compute would be distributed: equally over the four-year period, as 20% of each year’s compute, or as an arbitrary amount each year that would total 20%. Whatever the arrangement, the unmet commitment was reason enough for Mr. Sutskever and Mr. Leike to quit.

String of resignations

Even as these rumblings of discord were starting, a few more researchers working on AI policy and governance quit. Cullen O’Keefe left his role as research lead on policy in April. Daniel Kokotajlo, who had been working on the risks around AI models, quit and wrote on a forum that he had “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI.”

Gretchen Krueger, another policy researcher, shared that she had resigned from the company on May 14. “One of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power. I care deeply about preventing this,” she posted on X.

Severe non-disparagement policies

For a company that started as a non-profit, OpenAI’s lack of transparency has emerged as a more serious issue than its lackadaisical approach to future AI safety.

On May 17, Vox reported that former employees had been pressured into signing lengthy exit documents that barred them from ever speaking negatively about the company if they wanted to retain their vested equity. Leaked emails showed that employees who asked for more time to review the documents or to seek legal counsel were given no leeway. “The General Release and Separation Agreement requires your signature within 7 days,” read one reply to an employee who had requested another week. Mr. Altman professed on X that he had been unaware of the clause and apologised for it. The backlash against these heavy-handed tactics forced the company to backtrack and drop the non-disparagement clause.

But this might not be the end of the saga.

Jacob Hilton, a researcher at the Alignment Research Center who quit OpenAI a year ago, posted on X that tech companies are responsible for protecting employees who speak out about their technology in the public interest, given how powerful it is. Mr. Hilton, who had also signed the agreement to avoid losing his equity, said that while he had received a call from OpenAI management about the change in policy, he would feel more secure only if the company made “non-retaliation” against ex-employees legally enforceable, since OpenAI could still retaliate by “preventing them from selling their equity, rendering it effectively worthless for an unknown period of time.”

“I invite OpenAI to reach out directly to former employees to clarify that they will always be provided equal access to liquidity, in a legally enforceable way. Until they do this, the public should not expect candour from former employees,” he tweeted.

ScarJo vs OpenAI

Hollywood actor Scarlett Johansson’s accusations against Mr. Altman deepened public mistrust further. After OpenAI’s latest ChatGPT demo last week, murmurs started that the voice of the AI assistant Sky was eerily close to Ms. Johansson’s voice in the sci-fi film Her. Ms. Johansson released a statement saying Mr. Altman had twice reached out to her requesting to use her voice for Sky, and that she had refused both times. Even more damning was Mr. Altman’s own post on X when the demo was released, saying simply, “her.”

While Mr. Altman later provided evidence that the actress who voiced Sky was not directed to imitate Ms. Johansson, the collective evidence suggests that the cracks in OpenAI’s foundation run deep, and that Mr. Altman and his company are not, in fact, “consistently candid.”
