An overview of the European Union’s Artificial Intelligence Act

The objectives of the EU AI Act, the world’s first comprehensive legislation on AI, are to create a regulatory framework for AI technologies, mitigate risks associated with AI systems, and establish clear guidelines for developers, users, and regulators.

Updated - December 18, 2023 02:04 pm IST


The European Union’s Artificial Intelligence (AI) Act is a significant legislative initiative aimed at regulating artificial intelligence technologies within the EU. With the growing influence of AI across various sectors, the EU seeks to strike a balance between fostering innovation and ensuring ethical and responsible AI development. The objectives of the EU AI Act are to create a regulatory framework for AI technologies, mitigate risks associated with AI systems, and establish clear guidelines for developers, users, and regulators. The act aims to ensure the responsible use of AI by protecting fundamental rights and promoting transparency in AI applications.


The strengths of the Act

One of the notable strengths of the EU AI Act is its risk-based approach. The legislation categorises AI applications into risk levels ranging from unacceptable to minimal, with higher-risk applications subject to more stringent requirements. This tailoring acknowledges the diverse potential impact of AI technologies on society. The Act also explicitly prohibits certain AI practices deemed unacceptable, such as social scoring systems, predictive policing applications, AI systems that manipulate individuals, and emotion recognition systems in workplaces and educational institutions. This prohibition reflects the EU’s commitment to preventing the misuse of AI technologies.
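The tiered structure described above can be sketched in code. This is a minimal illustration of the risk-based approach, not a restatement of the Act’s actual annexes: the example applications and the exact obligation wording are assumptions chosen for clarity.

```python
# Illustrative sketch of the Act's tiered, risk-based approach.
# The example applications listed per tier are assumptions for
# illustration only, not the Act's official classification.

RISK_TIERS = {
    "unacceptable": ["social scoring", "emotion recognition at work"],
    "high": ["medical device AI", "biometric identification"],
    "limited": ["chatbots with transparency duties"],
    "minimal": ["spam filters", "video-game AI"],
}

def obligations(tier: str) -> str:
    """Map a risk tier to the broad regulatory treatment described in the text."""
    return {
        "unacceptable": "prohibited outright",
        "high": "conformity assessment, documentation, oversight",
        "limited": "transparency requirements",
        "minimal": "no additional obligations",
    }[tier]
```

The key design point is that obligations attach to the tier, not to the individual technology: the same underlying model can fall into different tiers depending on how and where it is deployed.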

The EU AI Act emphasises transparency and accountability in AI development and deployment. It requires developers to provide clear information about the capabilities and limitations of AI systems, enabling users to make informed decisions. Additionally, the legislation mandates that developers maintain comprehensive documentation to facilitate regulatory oversight. Moreover, to ensure compliance with the regulations, the EU AI Act introduces the concept of independent conformity assessment. Higher-risk AI applications, such as medical devices, biometric identification systems, and systems governing access to justice and essential services, must undergo assessment processes conducted by third-party entities. This approach enhances objectivity and reduces the risk of conflicts of interest, contributing to the credibility of the regulatory framework.

The limitations

One of the criticisms of the EU AI Act is the challenge in accurately defining and categorising AI applications. The evolving nature of AI technologies may make it difficult to establish clear boundaries between different risk levels, potentially leading to uncertainties in regulatory implementation.

Critics have also argued that the stringent regulations in the EU may hinder the competitiveness of European businesses in the global AI market. While the Act aims to ensure ethical AI practices, some fear that overly restrictive measures could stifle innovation and drive AI development outside the EU. Additionally, compliance with the EU AI Act may impose a significant burden on smaller businesses and start-ups. The resources required for conformity assessments and documentation may disproportionately affect smaller players in the AI industry, potentially limiting their ability to compete with larger, more established counterparts. Striking the right balance between regulation and fostering innovation is crucial, with critics arguing that the EU AI Act may lean too heavily towards stringent controls.

The potential implications

The EU AI Act is likely to have a global impact, influencing the development and deployment of AI technologies beyond the EU’s borders. As a major economic bloc, the EU’s regulatory framework may set a precedent for other regions, shaping the trajectory of AI development on a global scale, just as the MiCA (Markets in Crypto-Assets) regulation did for crypto-assets.

By prioritising ethical considerations and fundamental rights, the EU AI Act contributes to the establishment of global norms for AI development. And the impact on innovation and competitiveness will depend on the balance struck by the EU between regulation and fostering a conducive environment for AI development.

It encourages collaboration and cooperation between regulatory authorities, fostering a unified approach to AI regulation. International collaboration in regulating AI technologies is essential to address global challenges and ensure consistent standards across borders.

The administrative side

Any individual has the right to report instances of non-compliance. The market surveillance authorities of EU member states will be responsible for enforcing the AI Act, and the EU will establish a centralised ‘AI office’ and ‘AI Board.’ Businesses that do not adhere to the EU AI Act face fines ranging from €7.5 million to €35 million, depending on the nature of the violation and the company’s size, with specific caps applicable to small and medium-sized enterprises (SMEs) and start-ups. For instance, fines may amount to up to 1.5% of global annual turnover or €7.5 million for providing incorrect information, up to 3% of global annual turnover or €15 million for general violations, and up to 7% of global annual turnover or €35 million for prohibited AI violations.
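The penalty arithmetic above can be worked through with a short sketch. This assumes the higher of the two caps (the percentage of turnover or the fixed amount) applies, which is how the tiered caps quoted in the text are generally read for large companies; the tier names and helper function are illustrative, not taken from the Act.

```python
# Illustrative sketch of the penalty tiers quoted in the text:
# each tier caps the fine at a percentage of global annual turnover
# or a fixed euro amount. The "higher of the two" rule applied here
# is an assumption about how the caps combine for large companies.

PENALTY_TIERS = {
    "incorrect_information": (0.015, 7_500_000),   # 1.5% or €7.5M
    "general_violation":     (0.03, 15_000_000),   # 3%   or €15M
    "prohibited_ai":         (0.07, 35_000_000),   # 7%   or €35M
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the maximum possible fine in euros for a violation tier."""
    pct, fixed = PENALTY_TIERS[violation]
    return max(pct * global_annual_turnover, fixed)

# A firm with €2 billion in global turnover facing a prohibited-AI violation:
print(max_fine("prohibited_ai", 2_000_000_000))  # 140000000.0, i.e. 7% of turnover
```

For a small firm, the fixed amount dominates: 7% of €100 million is €7 million, so the €35 million cap applies instead, which is why the legislation carves out lower limits for SMEs and start-ups.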

The EU’s AI Act represents a significant step towards regulating AI technologies responsibly and ethically. While it addresses key concerns associated with AI, such as transparency, accountability, and risk mitigation, there are challenges and potential drawbacks that need careful consideration. The global impact of the EU AI Act and its potential to shape international norms make it a landmark initiative in the ongoing discourse on the responsible development and deployment of artificial intelligence.

Sanhita is a Technology Lawyer.

