With the recent emergence of AI in everyday life, the EU has promulgated the EU AI Act (Regulation (EU) 2024/1689) to ensure that European Union citizens can trust AI. AI carries genuine risks, and addressing them is the main reason the EU created this legal framework. The aim of the Act is to lay out clear requirements and obligations for AI developers, providers and deployers.

Article 3(1) of the Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The EU AI Act takes a risk-based approach and defines four levels of risk:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal risk

Article 5 of the AI Act bans all AI practices that deploy subliminal techniques operating beyond a person’s consciousness, as well as purposefully manipulative or deceptive techniques with the objective or effect of materially distorting human behaviour. These practices are considered to pose an unacceptable risk: they are harmful, abusive and, most importantly, contrary to the values the EU upholds. The prohibition also extends to the use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject only to narrowly defined exceptions.

The Act classifies as high-risk AI technology used in critical infrastructure, education, the administration of justice, democratic processes, and essential private and public services. High-risk AI systems will be subject to strict obligations before they can be placed on the market and used by the public; these obligations are aimed at minimising the risk such systems pose to the public.

The term “limited risk” describes the dangers connected to a lack of transparency in the use of AI. In order to build confidence, the AI Act establishes explicit transparency requirements to guarantee that people are informed when necessary. For example, people should be told that they are communicating with a machine when using AI systems such as chatbots, so that they can decide whether to continue or stop. Providers will also need to ensure that AI-generated content is identifiable as such. In addition, AI-generated content published to inform the public on matters of public interest must be labelled as artificially generated; this also holds true for audio and video content constituting deep fakes.

The Act also tackles data governance issues related to artificial intelligence by mandating suitable data quality and data governance procedures to guarantee the fairness and accuracy of AI systems and their freedom from bias, particularly when they handle sensitive or personal data. It also highlights how crucial human oversight is for AI systems, particularly in high-risk situations, in order to maintain accountability, moral judgement, and legal and regulatory compliance. To guarantee adherence to its rules, the Act establishes enforcement and accountability procedures: where the standards are not met, penalties and fines may follow, alongside monitoring by the appropriate authorities. In general, the EU AI Act aims to strike a balance between safeguarding people’s rights and interests and advancing innovation and competitiveness in the AI sector, thereby increasing public confidence in AI technology while lowering potential hazards.

Regulation of GPAI Models

Chapter V of the EU AI Act focuses on the classification and regulation of general-purpose AI (GPAI) models. The Act defines a GPAI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.

If the provider of a GPAI model learns that the model qualifies as one with systemic risk, it must notify the Commission without delay and, in any event, within two weeks. Without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and Member State law, the Commission will publish and keep up to date a list of AI models with systemic risk.

Certain obligations are imposed on all providers of GPAI models, including the following:

  • drawing up and keeping up to date the model’s technical documentation, including its training and testing process;
  • making information and documentation available to AI system providers that intend to integrate the GPAI model into their own systems;
  • cooperating with the Commission and national competent authorities; and
  • putting in place a policy to comply with Union law on copyright and related rights.

Providers of GPAI models with systemic risk face additional requirements, including performing standardised model evaluations, assessing and mitigating systemic risks, tracking and reporting serious incidents, and ensuring adequate cybersecurity protection.

The Artificial Intelligence Act was approved by the Parliament in March 2024, and the Council followed in May of the same year; it entered into force on 1 August 2024. The Act will be fully applicable twenty-four months after its entry into force, although some provisions apply earlier:

Six months after entry into force, the bans on AI practices posing an unacceptable risk apply. The codes of practice must be ready nine months after entry into force. The rules on general-purpose AI systems that must comply with transparency requirements apply twelve months after entry into force.

The rules pertaining to high-risk systems will apply 36 months after entry into force, giving Member States an extended period in which to comply with the requirements.
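To put these staggered periods in concrete calendar terms, the short Python sketch below maps each milestone onto its application date, taking the entry-into-force date of 1 August 2024 as the starting point. It is purely illustrative: the milestone labels and the MILESTONES structure are our own shorthand rather than terms from the Act, and the Act’s own text prevails over any summary.

```python
from datetime import date

# Entry into force of the EU AI Act (published in the Official Journal
# on 12 July 2024; in force 20 days later).
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Staged application: months after entry into force, paired with the
# calendar date fixed by the Act (the deadlines fall on the 2nd of the
# month rather than on the exact anniversary of entry into force).
MILESTONES = {
    "Bans on unacceptable-risk practices": (6, date(2025, 2, 2)),
    "Codes of practice ready": (9, date(2025, 5, 2)),
    "GPAI transparency obligations": (12, date(2025, 8, 2)),
    "General application of the Act": (24, date(2026, 8, 2)),
    "High-risk systems in regulated products": (36, date(2027, 8, 2)),
}

for label, (months, applies_from) in MILESTONES.items():
    print(f"{label}: {months} months after entry into force "
          f"-> {applies_from:%d %B %Y}")
```

Running the sketch simply prints each milestone with its date, e.g. “Bans on unacceptable-risk practices: 6 months after entry into force -> 02 February 2025”.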

Article 70(2) of the EU AI Act instructs Member States to:

  • designate at least one notifying authority and one market surveillance authority; and
  • communicate to the Commission the identity of the competent authorities and the single point of contact.

They will also have to make publicly available, by 2 August 2025, information on how the competent authorities and the single point of contact can be contacted.

Furthermore, Article 57(1) makes it clear that every Member State is required to establish at least one operational AI regulatory sandbox at national level by 2 August 2026.

Julian Mifsud

Legal Intern

Mifsud & Mifsud Advocates

For more information you can contact one of our Team Members at Mifsud & Mifsud Advocates.