Elon Musk's engagement with AI safety dates back to 2015, when he co-founded OpenAI with Sam Altman. As the venture gradually took off, Musk left the founding team early on, following disagreements with Altman that later erupted into a public spat.
Now, the Tesla chief has founded another AI startup, xAI, that aims to loosen OpenAI's grip over AI, and the multi-billionaire wants the firm to do nothing less than "understand the nature of the universe." Asked what exactly that meant, his response was all-encompassing: he wanted to understand, and help others understand, still-unanswered questions about the nature of gravity, dark matter, and how old the universe actually is.
Asked how xAI would differ from other AI startups, he said: "From an AI standpoint, a maximally curious AI, one that is trying to understand the universe, is I think going to be pro-humanity." There would be none of the bias and political correctness that OpenAI's ChatGPT has so often been accused of. xAI, Musk said, would build an AI that was "truth seeking."
Who is in the xAI inner circle?
Musk's madcap reputation might make it easy to overlook that xAI is well-positioned to be a serious competitor to Google AI and OpenAI. For starters, xAI's team is star-studded with top researchers from Google's DeepMind and Microsoft Research. Igor Babushkin, a former Google DeepMind and OpenAI employee, worked on GPT-3.5, the base AI model behind ChatGPT. Mathematician Greg Yang worked at Microsoft Research. Jimmy Ba, an assistant professor at the University of Toronto, has authored research papers for Google DeepMind.
Other names include Manuel Kroiss, formerly of DeepMind and Google AI, and Yuhuai Wu, who has worked at OpenAI, DeepMind and Google in the past. (Wu previously worked on Google AI's PaLM 2 model, which powers Bard, Google's star chatbot.)
That Musk is single-minded about xAI is clear from the people he has handpicked: prolific scientists who have worked on seminal projects at their companies. Linxi Fan, a senior AI scientist at NVIDIA, posted on LinkedIn about the founding members of Musk's team, saying, "I'm really impressed by the talent density - read too many papers from them to count." Another scientist and physicist, Bojan Tunguz, shared a similar sentiment, saying, "Despite the fairly lofty and general aim, it is clear from the list of founding members that this is a very hard-core AI-focused endeavor."
Musk also has Dan Hendrycks of the Center for AI Safety on xAI's advisory board. The AI safety organisation first drew wide attention in June, when it published an open letter signed by AI industry leaders warning against the risks of building powerful AI models, in the aftermath of OpenAI's release of GPT-4. Notably, Musk had signed the letter.
How might Musk leverage Twitter and Tesla?
Besides the storied names already attached to xAI, Musk has other natural advantages. Tesla's Full Self-Driving (FSD) team has years of experience building massive training datasets, perfect for an AI model to learn from. And Dojo, a supercomputer Tesla is developing for computer-vision training, processes video clips and data collected to improve features in Tesla vehicles; both are a goldmine for training an AI model. Musk explicitly stated that he expected xAI and Tesla to "mutually benefit each other," while also hoping that software developed at xAI would advance Tesla's FSD capabilities.
There’s also the small matter of the bank of multimodal data from Twitter that Musk has at his disposal. “I think every AI org has used Twitter’s data for training their chatbots illegally. We were being scraped like crazy. I guess, we will be using public tweets like everyone else has,” he noted. This would make xAI the only AI company with direct and legal access to the microblogging network’s daily churn of tweets.
But for Musk, data is just the starting point. He may eventually want to build a model that can learn on its own, like DeepMind's programmes AlphaGo and AlphaZero. Those programmes gradually evaluated how a game was played and then taught themselves to beat humans at it. Musk believed that only a machine that learns in this manner would lead to AGI.
What are some of the problems the AI industry is facing?
Prominent players like OpenAI and Google have come under fire recently over privacy issues around training data as well as AI-generated content. The Federal Trade Commission has launched an investigation into OpenAI over potential breaches of consumer protection laws, even as the EU's AI Act is in the final stages of negotiation. Last week, Google was hit with a lawsuit alleging it stole data from millions of people without their permission, via scraping, to train its AI models.
A solution to AI's copyright problem is nowhere in sight yet, given that AI models have to be trained on human-created content to emulate human behaviour.
Published - July 19, 2023 02:49 pm IST