Understanding the Artificial Intelligence Act: Key Aspects and Impact

Table of Contents

  1. Introduction
  2. What is the Artificial Intelligence Act?
  3. Applicability of the AI Act
  4. Categories of AI Systems
  5. Obligations and Requirements for AI Providers
  6. Sanctions for Non-Compliance
  7. Regulation of AI Systems Before the AI Act
  8. Conclusion

Introduction

Artificial intelligence (AI) is rapidly advancing, and with it comes the need for regulations to ensure that AI systems are transparent, reliable, and safe. The European Commission has proposed the Artificial Intelligence Act, which aims to establish a legal framework for the development, deployment, and use of AI systems in the European Union. In this article, we will discuss the key aspects of the AI Act, including its applicability, categories of AI systems, obligations and requirements for AI providers, and sanctions for non-compliance.

What is the Artificial Intelligence Act?

The Artificial Intelligence Act is a regulation proposed by the European Commission that aims to establish a legal framework for the development, deployment, and use of AI systems in the European Union. Its goal is to ensure that AI systems are transparent, reliable, and safe, and that they respect fundamental rights. The European Commission proposed the AI Act in April 2021, and the Council of the EU adopted its common position (general approach) on December 6, 2022; the regulation will become applicable only after it has been reviewed and adopted by the European Parliament and the Council of the EU.

Applicability of the AI Act

The AI Act will apply to AI developers and providers: companies and organizations placing AI systems on the European market will need to comply with the requirements and obligations set out in the regulation. Users and operators of AI systems must also comply, including companies located outside the EU if the output produced by an AI system is used in a member state. Because AI systems will interact with and affect European citizens, the AI Act aims to protect their rights and interests during those interactions.

Categories of AI Systems

The AI Act identifies three categories of AI systems, each subject to specific obligations and security requirements depending on the level of risk involved. The first category covers AI systems posing an unacceptable level of risk, which will be strictly prohibited. It includes systems that deploy subliminal or purposely manipulative techniques, exploit people's vulnerabilities, or are used for social scoring. The AI Act also restricts emotion recognition systems in law enforcement, the workplace, and educational institutions, as well as the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.

The second category covers high-risk AI systems that may adversely affect people's safety or fundamental rights. It includes AI systems used for biometric identification, the management and operation of critical infrastructure, education, employment, access to essential public services and benefits, law enforcement, migration, asylum and border control management, the administration of justice, and democratic processes. The AI Act imposes a range of mandatory requirements on the providers of such systems related to documentation, risk management, transparency, and safety.

The third category covers AI systems that pose specific risks of manipulation, such as systems that interact with humans (for example, chatbots) or systems that generate deep fakes. These systems are subject to transparency requirements: users must be clearly informed that they are interacting with an AI system or AI-generated content.

Obligations and Requirements for AI Providers

The AI Act imposes several obligations and requirements on AI providers, including the obligation to ensure that their AI systems comply with the requirements set out in the regulation. Providers must also ensure that their AI systems are transparent, reliable, and safe, and that they respect fundamental rights. They must also provide documentation on the AI system's characteristics, capabilities, and limitations, as well as a risk assessment and management plan.

Providers of high-risk AI systems must also ensure that their systems undergo conformity assessments before being placed on the market. They must also provide a technical file and a declaration of conformity, and they must register their systems with the European Commission. Providers of high-risk AI systems must also establish a quality management system and appoint a person responsible for regulatory compliance.

Sanctions for Non-Compliance

The AI Act provides for significant sanctions for non-compliance. National authorities will be able to fine companies up to 30 million euros or up to six percent of worldwide annual turnover, whichever is higher, in two cases: first, if a prohibited AI system is placed on the European market, and second, if an AI system does not comply with the data quality requirements. Supplying incorrect or misleading information to competent authorities in reply to their requests can be fined up to 10 million euros or two percent of worldwide annual turnover. Non-compliance with any other requirement or obligation of the regulation can be penalized by up to 20 million euros or four percent of worldwide annual turnover.
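The tiers above combine a fixed ceiling with a turnover-based ceiling. As a minimal sketch of how the applicable maximum would be computed, assuming the draft regulation's "whichever is higher" rule and using our own illustrative tier names (not terminology from the Act):

```python
# Fine ceilings per infringement tier: (fixed cap in EUR, share of worldwide
# annual turnover). Tier names are illustrative, not from the regulation.
FINE_TIERS = {
    "prohibited_or_data": (30_000_000, 0.06),      # prohibited AI / data quality breaches
    "other_obligations": (20_000_000, 0.04),       # any other requirement or obligation
    "misleading_information": (10_000_000, 0.02),  # incorrect info to authorities
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return the maximum fine in euros: the higher of the two ceilings."""
    fixed_cap, turnover_pct = FINE_TIERS[tier]
    return max(fixed_cap, turnover_pct * worldwide_annual_turnover)

# A company with 1 billion EUR turnover placing a prohibited system on the market:
print(max_fine("prohibited_or_data", 1_000_000_000))  # 60000000.0 (6% exceeds the 30M cap)
```

For smaller companies the fixed cap binds instead: at 100 million euros of turnover, two percent is only 2 million euros, so the ceiling for misleading information remains 10 million euros.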

Regulation of AI Systems Before the AI Act

Before the AI Act, AI systems were regulated mainly through the General Data Protection Regulation (GDPR), the EU's framework for the collection and processing of personal data. The European Data Protection Board (EDPB) has discussed enforcement actions undertaken by data protection authorities (DPAs) in relation to AI systems that process personal data, and has released guidelines on the appropriate and lawful use of AI. Some countries, such as France, have also provided organizations with a self-assessment guide for AI systems.

Conclusion

The Artificial Intelligence Act is a significant step towards regulating AI systems in the European Union. It aims to ensure that AI systems are transparent, reliable, and safe, and that they respect fundamental rights. The AI Act imposes several obligations and requirements on AI providers, and non-compliance can result in significant sanctions. The regulation will become applicable after it has been reviewed and adopted by the European Parliament and the Council of the EU.
