Vice President Kamala Harris met on May 4 with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.
The release late last year of popular AI chatbot ChatGPT — even President Joe Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.
But the ease with which it can mimic humans has also prompted governments around the world to consider how it could take away jobs, trick people and spread disinformation.
The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.
In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. Top AI developers have also independently committed to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.
But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.
“I think we’re at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms,” Mr. Conner said.
The Thursday meeting was designed for Ms. Harris and administration officials to discuss the risks they see in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Microsoft-backed OpenAI and Google-backed Anthropic. The government leaders' message to the companies is that they have a role to play in reducing the risks and that they can work together with the government.
Ms. Harris said in a statement after the closed-door meeting that she told the executives that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products.”
Authorities in the United Kingdom also said Thursday they are looking at the risks associated with AI. Britain’s competition watchdog said it's opening a review of the AI market, focusing on the technology underpinning chatbots like ChatGPT, which was developed by OpenAI.
President Joe Biden noted last month that AI can help to address disease and climate change but also could harm national security and disrupt the economy in destabilizing ways. Mr. Biden also stopped by the event Thursday. “The president has been extensively briefed on ChatGPT and knows how it works,” White House press secretary Karine Jean-Pierre told reporters at Thursday’s news briefing.
A flurry of new “generative AI” tools such as chatbots and image generators has added to ethical and societal concerns about automated systems.
Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That's made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it’s stealing from copyrighted works.
Companies worried about being held liable for something in their training data might also have little incentive to rigorously track it, said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.
“I think it might not be possible for OpenAI to actually detail all of its training data at a level of detail that would be really useful in terms of some of the concerns around consent and privacy and licensing,” Mitchell said in an interview Tuesday. “From what I know of tech culture, that just isn’t done.”
Theoretically, some kind of disclosure law could force AI providers to open up their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won’t be easy for companies to provide greater transparency after the fact.
“I think it’s really going to be up to the governments to decide whether this means that you have to trash all the work you’ve done or not,” Mitchell said. “Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it’s already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over.”
While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.
The companies also face potentially tighter rules in the European Union, where negotiators are putting the finishing touches on AI regulations first proposed two years ago. The rules could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.
When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people’s safety or rights, such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.
But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.
Foundation models are a sub-category of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of data.
A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.
Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and the European Data Protection Board set up an AI task force, in a possible initial step to draw up common AI privacy rules.
In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though the one-time event might not be as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.
Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.
“This would be a way for very skilled and creative people to do it in one kind of big burst,” Frase said.
Published - May 05, 2023 05:04 am IST