EU AI Act: Prohibited AI practices rules now in effect

  • Insight
  • February 10, 2025
Keith Power

Partner, PwC Ireland (Republic of)

Prohibited AI use under the EU AI Act: Immediate action required

The EU AI Act’s provisions on prohibited AI systems came into force on 2 February 2025, and the stakes are high. Failure to comply could trigger fines of up to €35 million or 7% of global annual turnover for the preceding financial year, whichever is higher. While the risk of using prohibited AI may seem low, it’s crucial for organisations to thoroughly assess their existing AI inventory to ensure responsible use. Understanding these regulations is essential to navigate compliance effectively. Here are key insights and actionable steps to help you prepare and stay ahead in this new regulatory landscape.

Understanding the EU AI Act: scope and purpose

The EU AI Act is a comprehensive legislative framework that governs the deployment and use of AI within the European Union. It extends its reach across various dimensions, ensuring a broad impact:

  • All sectors: Regardless of industry, the Act applies uniformly, ensuring that every sector adheres to the established guidelines.
  • Geographical reach: The Act applies both within the EU and beyond its borders, affecting any entity that places AI systems on the EU market or whose AI outputs are used within the EU.
  • AI value chain: From development to deployment, every stage within the AI value chain is covered, ensuring end-to-end compliance.
  • AI systems and models: The legislation encompasses all types of AI systems and models, giving it a complete regulatory scope.

Purpose of the EU AI Act

  1. Safety and fundamental rights: The Act ensures that AI systems available in the EU are safe and uphold the fundamental rights of its citizens.
  2. Legal certainty for investment and innovation: By providing clear guidelines, the Act aims to create a stable environment that encourages investment and innovation in AI technologies.
  3. Single market development: The legislation seeks to foster a unified market for lawful, safe and trustworthy AI systems, facilitating cross-border collaboration and commerce.
  4. Risk-based approach: The Act employs a risk-based framework, ensuring regulations are proportional and do not stifle innovation unnecessarily.

Prohibited AI practices and business implications

In drafting the EU AI Act, the European Union identified certain AI uses as unethical, categorising them as prohibited within the risk framework. This insight aims to provide clarity on these banned practices, detailing the specific AI systems that are off-limits and the repercussions for businesses that fail to comply. By recognising and adhering to these prohibitions, AI providers and deployers can contribute to a more trustworthy and responsible AI landscape.

For organisations, understanding these prohibitions is not just about compliance — it’s about aligning with ethical standards that ensure the responsible use of AI technologies. By doing so, businesses can mitigate risks and foster trust among consumers and stakeholders alike.

What are prohibited AI practices?

Under the EU AI Act, certain AI practices have been categorised as prohibited due to their potential to cause significant harm or infringe upon fundamental rights. Here’s a breakdown of these banned practices:

  • Subliminal and manipulative techniques: AI systems that covertly manipulate human behaviour, impairing individuals’ ability to make informed decisions and causing significant harm, are strictly prohibited.
  • Exploitation of vulnerable persons: AI systems designed to exploit vulnerabilities in specific groups, such as children or individuals with disabilities, to distort behaviour are banned.
  • Social scoring: AI systems that evaluate or classify individuals based on their social behaviour or personal characteristics, where the resulting score leads to detrimental or disproportionate treatment, are not permitted.
  • Emotion inference in sensitive areas: AI systems that infer emotions within sensitive settings, such as workplaces or educational institutions, are prohibited unless used for legitimate medical or safety reasons.
  • Biometric data misuse: AI systems that misuse biometric data to deduce or infer sensitive attributes, such as race, political opinions, or religious or philosophical beliefs, are banned. However, this prohibition does not extend to the lawful labelling or filtering of biometric datasets for law enforcement purposes.
  • Untargeted facial recognition: The creation or expansion of facial recognition databases through untargeted image scraping from the internet or CCTV footage is prohibited.
  • Real-time remote biometric identification in public spaces for law enforcement: Generally prohibited, these systems may only be used in public spaces for law enforcement in specific, narrowly defined situations, such as:
    • Searching for specific victims (e.g. in cases of abduction or trafficking);
    • Preventing imminent threats to life, safety or terrorist attacks; or
    • Locating or identifying suspects in serious criminal investigations or prosecutions.
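The categories above can be captured as a simple screening aid for an AI inventory review. The sketch below is illustrative only: the enum labels and the keyword map are our own shorthand, not terms defined in the Act, and a keyword match is no substitute for legal assessment of each use case.

```python
from enum import Enum

class ProhibitedPractice(Enum):
    """Shorthand labels for the practices banned under the EU AI Act (illustrative)."""
    SUBLIMINAL_MANIPULATION = "subliminal and manipulative techniques"
    EXPLOITATION_OF_VULNERABLE = "exploitation of vulnerable persons"
    SOCIAL_SCORING = "social scoring"
    EMOTION_INFERENCE_SENSITIVE = "emotion inference in workplaces or education"
    BIOMETRIC_CATEGORISATION = "biometric inference of sensitive attributes"
    UNTARGETED_FACE_SCRAPING = "untargeted facial recognition database building"
    REALTIME_REMOTE_BIOMETRIC_ID = "real-time remote biometric ID in public spaces"

def screen_use_case(tags: set) -> list:
    """Return any prohibited categories matched by a use case's descriptive tags.

    A crude keyword screen for illustration only -- flagged items still need
    human legal review, and an empty result does not prove compliance.
    """
    keyword_map = {
        "social scoring": ProhibitedPractice.SOCIAL_SCORING,
        "emotion recognition at work": ProhibitedPractice.EMOTION_INFERENCE_SENSITIVE,
        "face scraping": ProhibitedPractice.UNTARGETED_FACE_SCRAPING,
    }
    return [practice for tag, practice in keyword_map.items() if tag in tags]
```

A screen like this is useful only as a first pass over a large inventory; any match should route the use case to a fuller, documented legal assessment.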

Understanding these prohibitions is essential for businesses to navigate compliance effectively and ensure their AI practices align with ethical standards.

How the EU AI Act impacts your business

Non-compliance with the EU AI Act’s prohibitions on AI practices can lead to severe consequences, including administrative fines of up to €35 million or 7% of the global annual turnover from the previous year, whichever is higher. Beyond the financial penalties, the reputational damage from deploying prohibited and unethical AI systems could be irreparable, affecting stakeholder trust and brand integrity.

While it’s unlikely that many businesses currently employ prohibited AI systems, every organisation must be vigilant in assessing its AI systems to ensure proper governance and compliance. Here’s how to proceed:

  1. Collate an inventory of AI systems: Document all AI systems in use within your organisation to establish a clear understanding of your AI landscape.
  2. Risk-assess use cases in line with the EU AI Act: Evaluate each AI use case to determine its compliance with the Act, focusing on potential risks and ethical considerations.
  3. Perform an AI readiness assessment: Identify governance gaps and areas requiring improvement by conducting a thorough assessment of your organisation’s AI capabilities and practices.
  4. Adopt Responsible AI principles: Integrate ethical guidelines and frameworks to ensure AI systems are developed and used responsibly, aligning with both legal requirements and societal expectations.
  5. Roll out an AI upskilling programme: Educate and train your workforce on the EU AI Act’s requirements and the principles of responsible AI, fostering a culture of compliance and ethical AI practice.

By implementing these actions, your organisation will not only achieve compliance with the EU AI Act but also promote the responsible and ethical use of AI technologies.
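The first two steps above can be sketched as a minimal AI exposure register. The fields and risk-tier labels below are assumptions for illustration, not a schema prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI exposure register (illustrative fields, not a standard schema)."""
    name: str           # system or use-case name
    owner: str          # accountable business function
    purpose: str        # what the system is used for
    risk_tier: str      # e.g. "prohibited", "high", "limited", "minimal"
    last_assessed: date # date of the most recent risk assessment

def flag_for_review(register: list) -> list:
    """Surface entries needing immediate attention: prohibited or high-risk systems."""
    return [r for r in register if r.risk_tier in {"prohibited", "high"}]
```

Even a register this simple gives an organisation a baseline: every AI system has an accountable owner, a documented purpose, and a risk tier that can be re-assessed as the Act’s remaining provisions take effect.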

Key actions businesses can take today

  1. Create an AI exposure register
    Establishing a baseline of AI usage within your organisation is crucial for understanding potential risks. An AI exposure register will help identify and catalogue all AI systems and processes, enabling you to assess your exposure to AI-related risks comprehensively.
  2. Risk-assess use cases in line with the EU AI Act
    Utilise the EU AI Act’s risk framework to evaluate each AI use case identified in your AI exposure register. Prioritise areas that are prohibited or deemed high-risk, taking necessary actions to mitigate these risks. Ensure that governance structures and controls are in place to manage these risks effectively.
  3. Complete an AI readiness assessment
    Conducting a thorough AI readiness assessment will highlight any existing gaps in your governance structures. Addressing these gaps is essential for managing the adoption and integration of AI technologies in your organisation responsibly.
  4. Adopt AI governance structures using a Responsible AI framework
    Implement appropriate AI governance and risk management processes in line with the EU AI Act. AI governance should be a shared responsibility across the organisation, requiring an operational environment that aligns with existing enterprise governance structures. This approach ensures that AI governance is embedded and consistently applied within the organisation.
  5. Roll out an AI upskilling programme
    Organisations must ensure that users of AI systems are adequately trained in their use. Investing in AI upskilling programmes will help your workforce understand the requirements of the EU AI Act and the principles of Responsible AI.

    By taking these key actions, businesses can not only achieve compliance with the EU AI Act but also bolster their commitment to ethical AI use.

Contact us

Keith Power

Partner, PwC Ireland (Republic of)

Tel: +353 86 824 6993

James Scott

Director, PwC Ireland (Republic of)

Tel: +353 87 144 1818