Understanding the EU AI Act: scope and purpose
The EU AI Act is a comprehensive legislative framework that governs the development, deployment and use of AI within the European Union. Its scope is broad across several dimensions:
- All sectors: Regardless of industry, the Act applies uniformly, ensuring that every sector adheres to the established guidelines.
- Geographical reach: The Act applies both within the EU and beyond its borders, covering any entity that places AI systems on the EU market or whose AI output is used in the EU.
- AI value chain: From development to deployment, every stage within the AI value chain is covered, ensuring end-to-end compliance.
- AI systems and models: The legislation covers both AI systems and general-purpose AI models, giving it a comprehensive regulatory scope.
Purpose of the EU AI Act
- Safety and fundamental rights: The Act ensures that AI systems available in the EU are safe and uphold the fundamental rights of its citizens.
- Legal certainty for investment and innovation: By providing clear guidelines, the Act aims to create a stable environment that encourages investment and innovation in AI technologies.
- Single market development: The legislation seeks to foster a unified market for lawful, safe and trustworthy AI systems, facilitating cross-border collaboration and commerce.
- Risk-based approach: The Act employs a risk-based framework, ensuring regulations are proportional and do not stifle innovation unnecessarily.
Prohibited AI practices and business implications
In drafting the EU AI Act, the European Union identified certain AI uses as posing unacceptable risk, categorising them as prohibited within the risk framework. This section aims to provide clarity on these banned practices, detailing the specific AI systems that are off-limits and the repercussions for businesses that fail to comply. By recognising and adhering to these prohibitions, AI providers and deployers can contribute to a more trustworthy and responsible AI landscape.
For organisations, understanding these prohibitions is not just about compliance — it’s about aligning with ethical standards that ensure the responsible use of AI technologies. By doing so, businesses can mitigate risks and foster trust among consumers and stakeholders alike.
What are prohibited AI practices?
Under the EU AI Act, certain AI practices have been categorised as prohibited due to their potential to cause significant harm or infringe upon fundamental rights. Here’s a breakdown of these banned practices:
- Subliminal and manipulative techniques: AI systems that covertly manipulate human behaviour, impairing individuals’ ability to make informed decisions and causing significant harm, are strictly prohibited.
- Exploitation of vulnerable persons: AI systems designed to exploit vulnerabilities in specific groups, such as children or individuals with disabilities, to distort behaviour are banned.
- Social scoring: The use of AI systems to evaluate or classify individuals based on their social behaviour or personal characteristics is not permitted where it leads to detrimental or unfavourable treatment that is unrelated to the context in which the data was collected, or disproportionate to the behaviour itself.
- Emotion inference in sensitive areas: AI systems that infer emotions within sensitive settings, such as workplaces or educational institutions, are prohibited unless used for legitimate medical or safety reasons.
- Biometric data misuse: AI systems that misuse biometric data to deduce or infer sensitive attributes, such as race, political opinions, or religious or philosophical beliefs, are banned. However, this prohibition does not extend to the lawful labelling or filtering of biometric datasets for law enforcement purposes.
- Untargeted facial recognition: The creation or expansion of facial recognition databases through untargeted image scraping from the internet or CCTV footage is prohibited.
- Real-time remote biometric identification in public spaces for law enforcement: Generally prohibited, these systems may only be used in public spaces for law enforcement in specific, narrowly defined situations, such as:
  - Searching for specific victims (e.g. in cases of abduction or trafficking);
  - Preventing an imminent threat to life or safety, or a terrorist attack; or
  - Locating or identifying suspects in serious criminal investigations or prosecutions.
Understanding these prohibitions is essential for businesses to navigate compliance effectively and ensure their AI practices align with ethical standards.
How the EU AI Act impacts your business
Non-compliance with the EU AI Act’s prohibitions on AI practices can lead to severe consequences, including administrative fines of up to €35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. Beyond the financial penalties, the reputational damage from deploying prohibited and unethical AI systems could be irreparable, affecting stakeholder trust and brand integrity.
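To make the “whichever is higher” rule concrete, the sketch below computes the upper bound of the fine for two illustrative turnover figures. The function name and figures are illustrative only; this is not legal advice.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for prohibited AI practices:
    EUR 35 million or 7% of worldwide annual turnover for the preceding
    financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Turnover of EUR 200m: 7% is EUR 14m, so the EUR 35m figure applies.
print(max_fine_eur(200_000_000))    # 35000000.0
# Turnover of EUR 1bn: 7% is EUR 70m, which exceeds EUR 35m.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

Note that the percentage-based cap only bites for organisations with worldwide turnover above €500 million (where 7% first exceeds €35 million).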
While it’s unlikely that many businesses currently employ prohibited AI systems, every organisation must be vigilant in assessing their AI systems to ensure proper governance and compliance. Here’s how to proceed:
- Collate an inventory of AI systems: Document all AI systems in use within your organisation to establish a clear understanding of your AI landscape.
- Risk-assess use cases in line with the EU AI Act: Evaluate each AI use case to determine its compliance with the Act, focusing on potential risks and ethical considerations.
- Perform an AI readiness assessment: Identify governance gaps and areas requiring improvement by conducting a thorough assessment of your organisation’s AI capabilities and practices.
- Adopt Responsible AI principles: Integrate ethical guidelines and frameworks to ensure AI systems are developed and used responsibly, aligning with both legal requirements and societal expectations.
- Roll out an AI upskilling programme: Educate and train your workforce on the EU AI Act’s requirements and the principles of responsible AI, fostering a culture of compliance and ethical AI practice.
By implementing these actions, your organisation will not only achieve compliance with the EU AI Act but also promote the responsible and ethical use of AI technologies.
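The first two actions above — collating an inventory and risk-screening each use case — can be sketched as a simple data structure. This is an illustrative sketch under assumed names: the category labels paraphrase the prohibited practices listed earlier and are not official terminology from the Act, and any flagged system would still need proper legal review (some prohibitions carry narrow exceptions).

```python
from dataclasses import dataclass, field

# Assumed shorthand labels for the prohibited categories described above;
# not an official taxonomy from the EU AI Act.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
    "social_scoring",
    "emotion_inference_workplace_or_education",
    "biometric_inference_of_sensitive_attributes",
    "untargeted_facial_recognition_scraping",
    "realtime_remote_biometric_id_public_spaces",
}

@dataclass
class AISystemRecord:
    """One entry in the organisation's AI inventory."""
    name: str
    owner: str
    purpose: str
    practices: set = field(default_factory=set)

    def flagged_practices(self) -> set:
        # First-pass screen: which declared uses fall into a
        # prohibited category and need escalation to legal review?
        return self.practices & PROHIBITED_PRACTICES

inventory = [
    AISystemRecord("cv-screener", "HR", "assess candidate interviews",
                   practices={"emotion_inference_workplace_or_education"}),
    AISystemRecord("demand-forecast", "Ops", "forecast stock levels",
                   practices=set()),
]
for record in inventory:
    print(record.name, record.flagged_practices())
```

A screen like this is only a triage aid for the inventory step; the risk assessment itself must consider the full risk tiers of the Act (high-risk, limited-risk and minimal-risk systems), not just the prohibitions.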