The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence (AI). It aims to address the risks and opportunities of AI for health, safety, fundamental rights, democracy, rule of law and the environment in the EU. It also seeks to foster innovation, growth and competitiveness in the EU’s internal market for AI.
Given companies’ increased desire to use AI to drive efficiencies, particularly through Generative AI (GenAI), businesses must lay the foundations to implement AI in a responsible and controlled manner—but it’s hard to know where to start.
To manage the risks associated with AI, businesses first need to understand their organisation's AI exposure. Managing those risks and complying with the EU AI Act requires appropriate AI governance.
In this insight, we explain who the EU AI Act applies to and what you need to be aware of to manage the risks associated with AI.
The EU AI Act applies to businesses that create or use AI systems. It also affects those who sell, distribute or import AI systems. It applies to entities within the EU, as well as to developers, deployers, importers and distributors of AI systems outside the EU if the output of their systems is used within the EU.
The AI Act adopts a risk-based approach and classifies AI systems into risk categories based on their potential use, as well as the potential impact on individuals and society.
Obligations also apply to providers of general-purpose AI models, including large GenAI models such as ChatGPT and Bing Chat.
Providers of free and open-source models are exempt from most of these obligations. However, this exemption does not extend to providers of general-purpose AI models with systemic risks. Note also that obligations follow the use case: if you use a GenAI model as part of a process whose outputs would be deemed high-risk, that use case is treated as high-risk.
Obligations do not apply to research, development and prototyping activities preceding release on the market (for example, use cases still in development that have not entered production). The Act also does not apply to AI systems used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
To introduce a proportionate and effective set of binding rules for AI systems, the European Commission has defined a risk-based approach with four risk categories: unacceptable risk, high risk, limited risk and minimal risk.
These are determined by the intended purpose of the AI system, the risk of harm to people's fundamental rights, the severity of possible harm and the probability of its occurrence. The Act also calls out specific transparency requirements and systemic risks. Potential uses that fall under each of the four risk categories are outlined below.
Compliance obligations for limited-risk systems are lighter, focusing on transparency. Users must be informed when they are dealing with an AI system unless the outputs are obviously AI-generated at face value. Examples include chatbots, which must disclose that the user is interacting with an AI system, and AI-generated or manipulated content ("deepfakes"), which must be labelled as such.
AI systems that do not fall into the three categories above are not subject to additional obligations under the EU AI Act. The primary focus for technology providers will be the high-risk and limited-risk categories. All other AI systems can be developed and used under existing legislation without any additional legal obligations. An example of a minimal-risk system is an AI-enabled spam filter.
Other risks that must be considered include:
Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This allows them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). The assessment must be repeated if the system or its purpose is substantially modified.
Providers of high-risk AI systems will also have to implement sufficient AI governance, particularly around quality control and risk management of the AI system, to ensure compliance with the new requirements and minimise risks for users and affected persons—even after a product is placed on the market.
High-risk AI systems that are deployed by public authorities or entities acting on their behalf will have to be registered in a public EU database.
Following its adoption by the European Parliament and the Council, the EU AI Act will come into force 20 days after its publication in the Official Journal. It will become fully applicable 24 months after entry into force, with a staggered approach as follows:

- 6 months after entry into force: prohibitions on unacceptable-risk AI practices apply.
- 12 months: obligations for providers of general-purpose AI models apply.
- 24 months: most remaining obligations apply, including those for the high-risk AI systems listed in Annex III.
- 36 months: obligations apply for high-risk AI systems embedded in products already covered by EU product-safety legislation (Annex I).
Penalties for non-compliance are severe and will be enforced by the designated AI authority within a given EU member state.
The Act sets out thresholds as follows:

- up to EUR 35 million or 7% of worldwide annual turnover (whichever is higher) for breaches of the prohibited AI practices;
- up to EUR 15 million or 3% of worldwide annual turnover for non-compliance with most other obligations;
- up to EUR 7.5 million or 1% of worldwide annual turnover for supplying incorrect, incomplete or misleading information to authorities.
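The Act's penalties are expressed as the higher of a fixed amount and a percentage of worldwide annual turnover. A minimal sketch of that calculation, using the prohibited-practices tier (up to EUR 35 million or 7% of turnover) as an example; the function name and figures used in the call are illustrative:

```python
# Illustrative sketch: fines under the EU AI Act are capped at the HIGHER of
# a fixed amount and a percentage of worldwide annual turnover.
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the 'whichever is higher' rule."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A company with EUR 1bn worldwide annual turnover, prohibited-practices tier:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For large companies the turnover-based percentage will usually dominate, which is why the fixed amounts matter mainly for smaller organisations.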
To harmonise national rules and practices in setting administrative fines, the Commission will draw up guidelines with advice from the EU AI Board.
To evaluate the risks associated with the use of AI in your organisation, a baseline of your existing AI exposure is needed. Exposure can include native AI applications and systems, existing systems that have been updated to include AI, and AI used by third-party service providers, including Software-as-a-Service (SaaS) products. An AI exposure register will allow you to assess your exposure to all AI-related risks.
Apply the EU AI Act risk framework to the AI use cases identified in your AI exposure register. Take action to mitigate identified risks and ensure governance and appropriate controls are in place to manage these risks.
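A first-pass triage of use cases against the Act's risk tiers can be automated to flag entries for review. The tiers below are from the Act; the keyword mapping is an illustrative assumption and is no substitute for legal analysis of each use case.

```python
# Hedged sketch: tag a use-case description with a provisional EU AI Act risk
# tier for triage. Keyword lists are illustrative assumptions only.
HIGH_RISK_HINTS = {"recruitment", "credit scoring", "biometric", "education"}
LIMITED_RISK_HINTS = {"chatbot", "content generation"}

def classify_use_case(description: str) -> str:
    """Return a provisional tier: 'high', 'limited' or 'minimal'."""
    text = description.lower()
    if any(hint in text for hint in HIGH_RISK_HINTS):
        return "high"
    if any(hint in text for hint in LIMITED_RISK_HINTS):
        return "limited"
    return "minimal"

print(classify_use_case("CV screening for recruitment"))  # high
```

Any use case flagged as high or limited risk should then go through a proper assessment, with the corresponding controls recorded against the register entry.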
In line with the EU AI Act, appropriate AI governance and AI systems risk management must be implemented. AI governance is a shared responsibility across the organisation and requires a defined operating environment that aligns with existing enterprise governance structures. This will ensure that AI governance is embedded within the organisation.
Raising awareness of the capabilities and limitations of AI will ensure that your organisation reaps the benefits.
The EU AI Act is a comprehensive and intricate piece of legislation. Our Trust in AI team is ready to guide you in adopting AI practices that align with the EU AI Act. Whether it’s compiling your AI exposure register or implementing robust AI governance, we can help you effectively manage the risks associated with AI. In doing so, you can fully embrace the benefits AI brings. Reach out to our multidisciplinary team of experts to see how we can help.