
A New Era for AI Regulation: The First AI Act

On March 13, 2024, the European Parliament approved the AI Act (hereinafter "the Act"), the world's first harmonized legal framework for Artificial Intelligence (AI). The Act establishes principles and obligations for AI providers and users concerning the ethical use of AI.


It applies to:

  • Providers placing AI systems on the EU market, regardless of where they are registered or established;

  • Users of AI located within EU territory;

  • Providers and users of AI systems and their outputs that are developed outside the EU but used within EU member states.

Notably, the Act excludes AI developed specifically for military purposes and does not apply to the governments of non-EU countries or to international organizations.


This article provides a brief overview of the Act's main purposes and its proposed AI risk categories. The Act sets out obligations and standards for AI systems and their products, including risk categorization and transparency requirements.


The Act defines the following AI risk categories:

  • Unacceptable risk;

  • High risk;

  • Minimal risk.

Unacceptable risk:

AI falls into the unacceptable risk category if it can:

  • Manipulate individuals through subliminal techniques beyond their consciousness, or exploit the vulnerabilities of specific groups (e.g., children and/or persons with disabilities) to significantly alter their behavior;

  • Engage in social scoring, classifying individuals based on their behavior, socio-economic status, and personal characteristics;

  • Identify and categorize individuals based on their biometric data;

  • Utilize real-time, remote biometric identification systems (e.g., face recognition in street surveillance cameras).

Although AI posing unacceptable risk is not prohibited outright, its use by states is strictly limited to situations where it is necessary to achieve a substantial public interest, such as searching for potential crime victims, including missing children.


High risk:

AI falls into the high-risk category if it poses significant risks to the health, safety, or fundamental rights of individuals. High-risk AI includes:

1.    AI systems intended as safety components of products subject to third-party ex-ante conformity assessment;

2.    Stand-alone AI systems, which should be registered in the EU database, such as:

  • Systems managing critical infrastructure (e.g., transportation, natural resource supply, etc.);

  • Systems used in education or vocational training (e.g., automated feedback and scoring systems);

  • Systems used in employment, workforce management, and self-employment (e.g., automatic CV reviews, task allocation, employee monitoring, etc.);

  • Systems determining access to essential private and public services and benefits (e.g., granting loans or credit);

  • Systems used by law enforcement authorities (e.g., detecting the emotional state of individuals, polygraphs, etc.);

  • Systems used in migration, asylum, and border control management (e.g., automated application and document review systems);

  • Systems intended for interpreting and applying legislation (e.g., researching and interpreting facts and laws, and applying the law to a specific set of facts).

All AI systems falling into the high-risk category must undergo examination, conformity assessment, and CE marking before being placed on the market. Throughout a system's lifecycle, individuals will also be able to submit complaints to the relevant authorities.


Minimal risk:

AI categorized as minimal risk must adhere to transparency standards (discussed below). The Act recommends, though does not require, that such systems also follow the standards set for high-risk AI and/or develop internal guidelines and regulations.


Regarding transparency, the Act establishes that AI systems that interact with humans, identify emotions or social categories based on biometric data, or generate or manipulate content (e.g., deepfakes) must disclose relevant information to the person interacting with the system. For instance, individuals should be informed when they are communicating with a chatbot. Moreover, when AI systems are used to create photo, video, or audio content, the artificial origin of the product must be disclosed and indicated.


The Act will enter into force twenty days after its publication in the Official Journal of the European Union and will become fully applicable twenty-four months thereafter, with the following exceptions:

  • Provisions prohibiting specific practices/systems will take effect 6 (six) months after the Act's entry into force;

  • Codes of practice will take effect 9 (nine) months after the Act's entry into force;

  • Rules governing general-purpose AI will take effect 12 (twelve) months after the Act's entry into force;

  • Obligations imposed on high-risk AI systems will take effect 36 (thirty-six) months after the Act's entry into force.

The AI Act represents a significant milestone in establishing a global legal framework for artificial intelligence. It sets a precedent for addressing AI challenges at the legal level and aims to ensure the reliability of AI processes, safeguard fundamental human rights and freedoms, and uphold safety and ethical principles within and beyond the European Union.
