U.S. President Biden’s executive order on AI | Explained

How does President Biden plan to regulate the use and development of AI? What are the key highlights of the executive order signed by the U.S. President and is it enough?

November 14, 2023 08:51 pm | Updated November 15, 2023 10:14 am IST

U.S. President Joe Biden signs the Artificial Intelligence Safety, Security, and Trust executive order on October 30, 2023. | Photo Credit: AP

The story so far: Artificial intelligence (AI) is making advancements globally even as governments struggle to establish a regulatory framework for the evolving technology. Joining the global efforts to govern AI, United States President Joe Biden last month issued an executive order to promote the “safe, secure, and trustworthy” use and development of AI by addressing broad issues related to privacy, misinformation and discrimination. 

The order, signed by Mr. Biden on October 30, lays down a preliminary set of guidelines for American companies and federal agencies to follow when dealing with the design, acquisition and deployment of advanced AI systems, with security as its core, and before making such technologies available to the public. Mr. Biden has insisted that the order is the “most significant action” any government in the world has ever taken on AI safety, and also called upon the Congress to pass bipartisan legislation to stop Big Tech platforms from collecting the personal data of citizens.

The latest action follows the Blueprint for an AI Bill of Rights issued by the Biden administration in October 2022, and voluntary commitments from technology giants to comply with safety standards for AI. Before this, the Trump administration had also issued an EO in 2019 which laid out basic standards for the use of AI.

What is an executive order?

The President manages the operations of the executive branch of the U.S. federal government through executive orders (EO). These are signed, written and published directives from the President to the executive branch. Only a sitting U.S. President can issue such orders to clarify and further existing laws, or overturn an existing EO by issuing another order to that effect. An EO is not legislation and does not require the approval of the Congress. It can, however, be subject to review by the Congress or the courts, or both. 

Executive orders have primarily dealt with regular administrative affairs and the internal operations of federal agencies. However, in recent times, Presidents have used such orders to implement policies and programmes. Mr. Biden has issued over 120 executive orders since he took over as the U.S. President in 2021.

How does the executive order seek to regulate AI? 

Artificial intelligence, as defined in the order, is “any computer system or application that performs tasks that normally require human intelligence, such as perception, reasoning, learning, decision making, or natural language processing.” The EO signed by Mr. Biden lists eight principles and issues directions that companies must follow while dealing with AI tools and tech. As per the text of the order, the directions issued are to be implemented and fulfilled within 90 to 365 days.

Safety and Security 

Emphasising the safety and security of AI due to its growing capabilities and potential implications, the order invokes the Defense Production Act, which is mainly used in critical moments such as war. The Act was last used during the COVID-19 pandemic.

Companies developing powerful AI models that could pose a risk to national security, economic security, or public health and safety will have to notify the U.S. federal government when training such a system and share the results of safety tests. Separately, the heads of government agencies will have to publish an annual report on potential risks related to the use of AI in critical infrastructure areas, including an assessment of how the technology could be deployed to render infrastructure systems more vulnerable to critical failures, physical attacks and cyber attacks.

To ensure that AI systems are safe, secure and trustworthy, the order sets standards for testing such models and addressing risks to critical infrastructure and cybersecurity. As per the order, the National Institute of Standards and Technology will set the standards for extensive red-team testing (structured testing to identify potential flaws and vulnerabilities in an AI system) to ensure safety before public release, while the Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board.

The Departments of Energy and Homeland Security will, meanwhile, deal with serious chemical, biological, radiological, nuclear, and cybersecurity risks associated with the development of AI tools and technologies, and are also tasked with finding ways to mitigate such threats. Notably, the mandate does not apply to AI systems that have already been developed and are available to the public.

The order further directs action to safeguard the privacy of Americans and protect them from AI-enabled fraud such as deepfakes, which use AI-generated audio and visual content. The Department of Commerce has been tasked with developing standards for labelling AI-generated content, also known as watermarking, to make such content easier to detect.

So far, such labelling has proved ineffective as AI-generated videos and images involving children have flooded the Internet in the absence of definitive regulation. The order also directs federal agencies to use such tools to assure the public that the information it receives from official sources is authentic.

It promotes ethical use of AI by the military and intelligence community. The National Security Council and White House Chief of Staff will develop a National Security Memorandum for actions on AI and security to ensure that the military and intelligence community use AI “safely, ethically, and effectively” in their missions.

Preserving privacy

In the EO, the President addresses privacy concerns while acknowledging a limited ability to pass laws. The order urges the U.S. Congress to pass new data privacy laws to protect citizens, especially children. Directions include enhancing research and technologies that prioritise privacy, and framing guidelines for federal agencies to evaluate the effectiveness of privacy norms in AI systems.

“Artificial intelligence’s capabilities in these areas can increase the risk that personal data could be exploited and exposed. To combat this risk, the federal government will ensure that the collection, use, and retention of data is lawful, and secure, and mitigates privacy and confidentiality risks. Agencies shall use available policy and technical tools, including privacy-enhancing technologies where appropriate, to protect privacy and to combat the broader legal and societal risks — including the chilling of First Amendment rights — that result from the improper collection and use of people’s data,” reads the order.

Fairness 

Besides privacy, AI also puts equity and civil rights at risk. Mr. Biden acknowledged in his speech ahead of the signing that AI can lead to discrimination, bias and other abuses if the right safeguards are not in place. “From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life. Artificial intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms,” he noted.

The order issues directions to prevent the use of algorithms to exacerbate discrimination, including certain guidelines for landlords and federal contractors. The Department of Justice and Federal Civil Rights Offices have been tasked with working on best practices for investigating and prosecuting civil rights violations related to AI.

Pro-people approach

The order lists ways to advance the responsible use of AI in healthcare to maximise its benefits, including the development of affordable and life-saving drugs. Mr. Biden explains: “To protect patients, we’ll use AI to develop cancer drugs that work better and cost less. We’ll also launch a safety programme to make sure AI health systems do no harm.” It also calls for creating resources to support AI-enabled educational tools, such as personalised tutoring in schools.

In the criminal justice system, the order calls for the development of best practices to determine how AI can be used in sentencing, parole, early release, surveillance and forensic analysis, with the aim of promoting fairness.

Additionally, with AI transforming the workplace, Mr. Biden called for the development of principles and best practices to ensure maximum benefits and minimal negative impact of AI for workers.

Promote innovation, competition

The Biden order includes provisions to attract talent to the country to “advance American leadership” in the global race to regulate AI. It promises researchers and students access to key resources and data, and grants for research in critical areas like healthcare and climate change. 

To promote a fair, open and competitive AI ecosystem, it provides for technical assistance and resources for small developers and entrepreneurs in commercialising AI breakthroughs. Notably, the order seeks to modernise and streamline visa processes to attract highly skilled individuals with AI expertise, including immigrants, to study, stay, and work in the U.S.

Global collaborations

To support the safe and secure deployment and use of AI worldwide, the order directs the State and Commerce Departments to lead efforts to establish international frameworks for harnessing the benefits of AI and managing its risks. It further calls for the implementation of vital AI standards with international partners, ensuring that the technology is safe, secure, trustworthy and interoperable.

Responsible and effective AI use by government

Guidelines will be issued for agencies regarding the responsible use of AI, including standards to protect rights, improve AI infrastructure, and strengthen its deployment. Mr. Biden has also issued a direction to convene an AI and Technology Talent Task Force to accelerate and track the hiring of AI and AI-enabling talent across the government.

Will this be enough?

The executive order has evoked mixed reactions from various stakeholders, with some critics calling it toothless and vague, and others expressing optimism and hailing it as a step in the right direction.

The American Civil Liberties Union argues that the order makes important strides, such as requiring agencies to protect civil rights and civil liberties in any use of AI in governmental programmes, but fails to meaningfully address AI use in national security and offers insufficient protection from law enforcement uses of AI.

Albert Fox Cahn of the Surveillance Technology Oversight Project, a tech privacy advocacy nonprofit, contends that the worst forms of invasive technologies like AI deserve bans and not just regulations. “Many of these proposals are simply regulatory theatre, allowing abusive AI to stay on the market… the White House is continuing the mistake of over-relying on AI auditing techniques that can be easily gamed by companies and agencies,” he said, as per a Gizmodo report.

Microsoft President Brad Smith, meanwhile, has called the executive order “another critical step forward” in the governance of AI technology.

Digital rights advocacy group Fight for the Future termed the order a “positive step,” but added that it was “hard to say that the document, on its own, represents much progress.” 

“Biden has given the power to his agencies to now actually do something on AI. In the best-case scenario, agencies take all the potential actions that could stem from the EO, and use all their resources to implement positive change for the benefit of everyday people. But there’s also the possibility that agencies do the bare minimum, a choice that would render this EO toothless and waste another year of our lives while vulnerable people continue to lose housing and job opportunities, experience increased surveillance at school and in public, and be unjustly targeted by law enforcement, all due to biased and discriminatory AI,” the group said in a statement.
