E.U. proposes rules for high-risk artificial intelligence uses

The proposals are the 27-nation bloc's latest move to maintain its role as the world's standard-bearer for technology regulation

April 21, 2021 05:22 pm | Updated April 28, 2021 02:34 pm IST - London

Vice-President Margrethe Vestager speaks at a media conference on the EU approach to Artificial Intelligence following a weekly meeting of the EU Commission in Brussels, Belgium, April 21, 2021.

European Union officials unveiled proposals on Wednesday for reining in high-risk uses of artificial intelligence such as live facial scanning that could threaten people's safety or rights.

The draft regulations from the EU's executive commission include rules on the use of the rapidly expanding technology in activities such as choosing school, job or loan applicants. They also would ban artificial intelligence outright in a few situations, such as “social scoring” and systems used to manipulate human behaviour.

The proposals are the 27-nation bloc's latest move to maintain its role as the world's standard-bearer for technology regulation. EU officials say they are taking a “risk-based approach” as they try to balance the need to protect rights such as data privacy against the need to encourage innovation.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission's executive vice president for the digital age, said in a statement.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

The proposals also include a prohibition in principle on “remote biometric identification,” such as the use of live facial recognition on crowds of people in public places, with exceptions only for narrowly defined law enforcement purposes such as searching for a missing child or a wanted person.

The draft regulations say chatbots and deepfakes should be labelled so people know they are interacting with a machine.

