
Decoding the EU AI Act: What is a high-risk AI system?

Updated: May 28

Warning sign. Picture from Markus Spiske on Unsplash.

Written by Tiffany Cunha, GRC & Data Protection Specialist at Palqee Technologies

Updated by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies


 

The European Union has introduced the EU Artificial Intelligence Act, a comprehensive law regulating the use of AI systems in the EU. In our series 'Decoding AI: The European Union’s Take on Artificial Intelligence', we break down everything you need to know about the law.


Unsure whether your AI system is considered high-risk under the EU AI Act? Take the free High-Risk Category Assessment:



 

Taking a risk-based approach


The EU AI Act aims to set a standard for trustworthy AI systems. The challenge, however, is to balance compliance requirements across a wide range of AI use cases and industries.


Depending on the AI system and its intended purpose, it can carry different levels of risk.


While the complexity of these systems, often described as 'black boxes,' makes it challenging to pinpoint the reasons behind their decisions, this issue varies in significance. For AI applications that enhance productivity, the opacity might be less concerning. However, in sectors like law enforcement or healthcare, where decisions have profound implications on individual rights and safety, understanding AI behaviour becomes essential, especially to prevent biases against specific groups.


The key objective of the EU AI Act is to prioritise safety, provide legal clarity, enforce fundamental rights effectively, and prevent market fragmentation, without over-regulating AI systems that pose minimal risk.

 

To maintain this balance, the Act employs a risk-based, technology-neutral approach to evaluating and defining AI systems. It applies to providers and deployers alike within the EU, adjusting the scope of compliance obligations according to the risk level associated with each AI system.


Categorisation of AI Systems by Risk Level

 

  • Unacceptable risk: AI systems that pose significant dangers to safety or fundamental rights are prohibited. This includes systems that use subliminal manipulation, target vulnerable groups, allow social scoring by public authorities, or enable real-time remote biometric identification in public spaces for law enforcement purposes, except under very strict conditions.


  • High-risk: AI systems that have the potential to negatively affect people's safety or fundamental rights. Systems in this category must comply with all the requirements of the EU AI Act, including undergoing a conformity assessment to obtain a CE marking before the AI system is placed on the EU market.


  • Limited risk: AI systems that interact directly with individuals and where limited adverse impacts are possible. Providers must meet transparency obligations, for example making clear that people are interacting with AI. Typical examples include AI chatbots, deepfakes, and other AI-generated media.

  • Low or minimal risk: Most AI systems fall under this category, where the risk of harm is low. These systems are subject to minimal or no regulatory requirements, encouraging innovation and the broad use of AI in applications such as video games and spam filters. (A simplified sketch of this four-tier triage follows below.)
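To make the tiering concrete, here is a minimal, illustrative Python sketch of how the four tiers might be encoded in an internal triage tool. The RiskTier enum and the keyword rules are our own simplification for illustration only; they are not taken from the Act and are no substitute for a proper legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # e.g. social scoring by public authorities
    HIGH = "high-risk"            # e.g. CV screening for recruitment
    LIMITED = "limited-risk"      # e.g. customer-service chatbot
    MINIMAL = "minimal-risk"      # e.g. spam filter, video-game AI

# Hypothetical keyword rules -- our own shorthand, not the Act's wording.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH = {"recruitment", "credit scoring", "medical device safety"}
LIMITED = {"chatbot", "deepfake generation"}

def triage(use_case: str) -> RiskTier:
    """Rough first-pass triage of a use case into the Act's four tiers."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH:
        return RiskTier.HIGH
    if use_case in LIMITED:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("recruitment").value)  # high-risk
```

The point of the sketch is the ordering: prohibited uses are screened out first, then high-risk, then transparency-only cases, with everything else defaulting to minimal risk.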

High-Risk AI Systems


Compliance requirements for providers of high-risk AI systems are broad in scope and will require considerable resources to implement and maintain.


The EU AI Act classifies an AI system as high-risk if it falls into any of the following categories (a simplified sketch of the combined decision logic follows the list):


1. AI as a Safety Component of a Product, or a Product Itself, Subject to EU Harmonisation Legislation


AI systems designed to function as safety components of products, or that are products themselves, fall into the high-risk category. This includes systems integrated into products where safety is critical, such as automotive safety systems or medical devices. These technologies, whether they operate independently or as part of another product, must adhere to the Union harmonisation legislation listed in Annex I of the Act, ensuring they meet Europe-wide safety standards.


2. Third-Party Conformity Assessment


If an AI system or the product containing it requires a third-party conformity assessment, it is classified as high-risk. This assessment is necessary for products that have significant implications for health, safety, or environmental protection.


3. AI Systems Used in Any of the Following Areas

 

1. Biometric Identification and Categorisation of Natural Persons

The EU AI Act categorises AI systems used for 'real-time' and 'post' remote biometric identification of natural persons as high-risk. These systems are intended to recognise and categorise individuals based on unique biometric traits, such as fingerprints, facial features, or iris patterns. The potential misuse of such technology raises concerns about privacy infringement and the possibility of mass surveillance.

 

2. Management and Operation of Critical Infrastructure 

AI systems intended for use as safety components in the management and operation of critical infrastructure, such as road traffic and the supply of water, gas, heating, and electricity, are also classified as high-risk. Given the vital nature of these services, any malfunction or security breach in AI systems can lead to severe consequences for public safety and well-being.

 

3. Education and Vocational Training 

This area covers AI systems intended to determine access to, or assign individuals to, educational and vocational training institutions, as well as systems used to assess students' performance. These systems can affect a person's educational and career opportunities, potentially leading to biased decisions and unfair treatment.

 

4. Employment, Workers Management, and Access to Self-Employment 

This covers AI systems used for recruitment, candidate evaluation, and performance assessment in the workplace. Basing employment-related decisions on AI algorithms can result in discrimination, a lack of transparency, and biased outcomes.

 

5. Access to Essential Private and Public Services and Benefits 

AI systems that evaluate eligibility for public assistance benefits, assess creditworthiness, or dispatch emergency first-response services fall under the high-risk category. The use of AI in these areas requires strict safeguards to prevent discrimination, misinformation, or abuse of power.

 

6. Law Enforcement 

This covers AI systems employed by law enforcement authorities for individual risk assessments, detection of deepfakes, evaluation of evidence, and profiling of individuals. AI in this area can seriously affect personal liberties, which is why its accuracy and fairness must be ensured.

 

7. Migration, Asylum, and Border Control Management 

This area covers AI systems used to assess risks posed by individuals intending to enter, or who have entered, the territory of a Member State, to verify travel documents, and to examine applications for asylum and residence permits. Ensuring the ethical use of AI in migration and border control is crucial to prevent unjust treatment and potential human rights violations.

 

8. Administration of Justice and Democratic Processes 

AI systems designed to assist judicial authorities in researching and interpreting facts and the law are considered high-risk. The potential implications of AI in legal proceedings require careful oversight to maintain transparency and ensure fair outcomes.
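Taken together, the three routes above can be read as a short decision procedure: a system is high-risk if it is a safety component (or product) covered by Annex I, if it requires a third-party conformity assessment, or if it is used in one of the eight areas listed above. The Python sketch below assumes a hypothetical AISystem record; the field names and area labels are our own shorthand for the criteria described in this article, not terms defined in the Act.

```python
from dataclasses import dataclass
from typing import Optional

# The eight Annex III areas summarised above (our own short labels).
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

@dataclass
class AISystem:
    safety_component_under_annex_i: bool   # route 1: product safety legislation
    needs_third_party_conformity: bool     # route 2: third-party assessment
    area_of_use: Optional[str] = None      # route 3: Annex III area, if any

def is_high_risk(system: AISystem) -> bool:
    """Simplified reading of the three high-risk routes described above."""
    return (
        system.safety_component_under_annex_i
        or system.needs_third_party_conformity
        or system.area_of_use in ANNEX_III_AREAS
    )

# Example: a CV-screening tool used in hiring.
cv_screener = AISystem(False, False, "employment and worker management")
print(is_high_risk(cv_screener))  # True
```

Any one route suffices; a system does not need to satisfy all three to be classified as high-risk.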

 

Conclusion


The EU AI Act recognises the potential risks associated with certain AI systems and seeks to regulate their use to protect individuals and society at large. By identifying and categorising specific uses as high-risk, the EU is taking steps to ensure AI is developed and deployed responsibly.


This reflects the agreed-upon text following significant milestones in the legislative process, including the provisional agreement reached by the Parliament and the Council on December 9, 2023, and the endorsement by the Internal Market and Civil Liberties Committees on February 13, 2024. The unanimous support from the EU’s 27 member states further solidifies the political consensus achieved last December. However, some final details remain to be settled before the Act is fully enacted.


Meanwhile, startups and companies working with high-risk AI systems must prepare to comply with the new standards and requirements.


