OpenAI CEO Sam Altman has responded to criticism about the safety culture at the company and the way it handles the equity of departing employees, claiming that OpenAI has never clawed back vested equity if the person did not sign a separation agreement.
A media report by Vox last week pointed to the ChatGPT-maker’s “extremely restrictive off-boarding agreement” and how employees had to agree to certain nondisclosure and non-disparagement provisions, or face losing all their vested equity.
Altman acknowledged that a provision concerning potential equity cancellation previously existed, but said the company had never actually clawed back anyone’s vested equity and that the exit paperwork was being fixed.
“This is on me and one of the few times I’ve been genuinely embarrassed running Openai; I did not know this was happening and I should have,” Altman posted on X on Sunday, asking former employees to reach out to him regarding this.
“Very sorry about this,” he posted.
OpenAI’s former head of alignment, Jan Leike, posted on May 17 that he was leaving the company. While he expressed appreciation for his team and his time there, he criticised OpenAI’s safety culture and said he disagreed with the leadership’s approach.
“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI,” said Leike across several X posts, urging OpenAI employees to focus on safety.
In response, Altman appreciated Leike’s contributions to the company and agreed that OpenAI had a lot more to do. He said the company was committed to safety and that he would share a longer post later.
He later re-shared a joint post he and OpenAI co-founder and president Greg Brockman wrote, defending the company’s launch procedures and pointing to the way the team released GPT-4 “in a safe way”.
“There’s no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward. We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions,” said Altman and Brockman.
However, OpenAI has disbanded its AI risk team, AFP reported over the weekend.
OpenAI co-founder Ilya Sutskever on May 15 also announced he was leaving the company, but posted a more positive message in support of leaders Sam Altman, Greg Brockman, and Mira Murati, noting “I’m confident that OpenAI will build AGI that is both safe and beneficial”.
Sutskever previously sat on the OpenAI board but was later removed; he apologised for his involvement in the temporary firing of Altman in November last year.
AGI, or artificial general intelligence, refers to the concept of an AI system that can match or surpass humans at most tasks.
Published - May 20, 2024 10:50 am IST