The EU AI Act and its impact on businesses

  • Insight
  • May 13, 2024

The EU AI Act is a groundbreaking piece of legislation that aims to regulate AI technologies within the EU’s borders. It balances innovation with ethical considerations and sets a global precedent in the governance of AI systems. At a recent executive briefing, PwC’s Head of Trust in AI, Keith Power, was joined by an expert panel including Microsoft’s Kieran McCorry and PwC’s Maria Axente, Moira Cronin, Neil Redmond and Jonathan Hayes to discuss the key things businesses need to know about the EU AI Act—from risk classifications to the consequences of non-compliance.


What is the EU AI Act?

Recognising the need for a harmonised regulatory approach to ensure the safe integration of AI into society, the EU embarked on a comprehensive initiative to draft legislation that balances innovation with user safety and fundamental rights. On Wednesday, 13 March 2024, MEPs gave final approval for the Act, putting the rules on track to take effect later this year.

At the start of the briefing, PwC’s Keith Power introduced the EU AI Act, which is anchored in the classification of AI systems according to their perceived level of risk. This classification scheme is a cornerstone of the legislation, he said, designed to apply a proportionate regulatory approach that varies in strictness depending on the potential impact of an AI system on users and society. It covers:

  • Unacceptable risk: AI applications that pose a clear threat to people’s safety, livelihoods or rights, such as manipulative or exploitative systems, are banned outright.

  • High risk: This category includes AI systems with significant implications for individual rights or public safety, such as those used in critical infrastructure, education, employment and law enforcement. These applications are subject to stringent compliance requirements.

  • Limited risk: AI applications that involve some level of interaction with users, such as chatbots, require specific transparency obligations to inform users that they are interacting with an AI system.

  • Minimal risk: Most AI applications fall into this category. The regulation imposes minimal requirements, allowing for the broad development and use of AI technologies.
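The four-tier approach above can be pictured as a simple lookup from tier to obligation. The sketch below is purely illustrative — the tier names mirror the Act's categories, but the one-line obligation summaries are our shorthand, not legal definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # stringent compliance requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few or no requirements

def obligations(tier: RiskTier) -> str:
    """Shorthand summary of the obligation attached to each tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "stringent compliance requirements",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "minimal requirements",
    }[tier]
```

In practice, classifying a real system against these tiers requires a legal assessment under the Act; the mapping above only captures the proportionality principle Keith Power described.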

Who and what is affected?

The EU AI Act impacts a broad range of entities and sectors. It applies to AI systems and models that are machine-based and operate with some autonomy. The Act is a horizontal regulation, meaning it applies across all sectors. Its jurisdiction extends to all AI systems operating within the EU market or impacting EU citizens, regardless of whether the system is based abroad.

What will happen, and when?

The EU AI Act will be implemented in stages:

  • Six months after implementation, prohibitions on unacceptable-risk AI systems will enter into force.

  • After a year, obligations on providers of general-purpose AI models will take effect, and member state competent authorities will be appointed. The Commission will also review and possibly amend the list of prohibited AI annually.

  • Eighteen months post-implementation, the Commission will adopt an implementing act on post-market monitoring.

  • Two years after implementation, obligations on high-risk AI systems listed in Annex III will apply, and member states will establish rules on penalties, administrative fines, and at least one national AI regulatory sandbox. The Commission will also review and possibly amend the list of high-risk AI systems.

  • Three years post-implementation, obligations for high-risk AI systems intended to be used as a safety component of a product, or if the AI is itself a product, will apply. These systems must undergo a third-party conformity assessment under existing specific EU laws.

  • By the end of 2030, specific AI systems that are components of the large-scale IT systems established by EU law in the areas of freedom, security and justice must be brought into compliance.
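The staged timeline above can be condensed into a single schedule keyed on months after entry into force. The milestone descriptions are summaries of the stages listed above, not the Act's own wording:

```python
# Months after entry into force -> obligation taking effect (summarised
# from the staged timeline described above).
TIMELINE = {
    6:  "prohibitions on unacceptable-risk AI apply",
    12: "general-purpose AI model obligations; competent authorities appointed",
    18: "Commission implementing act on post-market monitoring",
    24: "Annex III high-risk obligations; penalties and regulatory sandboxes",
    36: "obligations for high-risk AI that is a product or safety component",
}

def in_force(months_elapsed: int) -> list[str]:
    """Milestones already applicable a given number of months in."""
    return [v for k, v in sorted(TIMELINE.items()) if k <= months_elapsed]
```

For example, one year in, only the first two milestones apply; the 2030 deadline for large-scale EU IT systems sits outside this rolling schedule.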

Compliance requirements

The EU AI Act introduces a comprehensive set of compliance obligations for organisations developing, distributing or using artificial intelligence systems within the European Union. These obligations ensure that AI technologies are developed and deployed in a manner that is secure and prioritises user safety, data privacy and ethical considerations.

Importantly, companies outside the EU that target EU consumers or businesses must ensure their AI systems comply with the Act. This may require significant adaptations to data handling, system design and operational transparency.

The consequences of non-compliance with the EU AI Act are severe. Fines are tiered, and in each case the higher of the two amounts applies:

  • Non-compliance with prohibited AI practices or data obligations: up to €35 million or 7% of total worldwide turnover in the preceding financial year.

  • Non-compliance with any other requirement: up to €15 million or 3% of total worldwide turnover in the preceding financial year.

  • Supplying incomplete, incorrect or false information: up to €7.5 million or 1.5% of total worldwide turnover in the preceding financial year.
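Because each fine is "whichever is higher" of a fixed amount and a share of worldwide turnover, the exposure cap is a simple maximum. A minimal sketch (tier labels are our own; percentages are held in per-mille to keep the arithmetic exact):

```python
def max_fine_eur(turnover_eur: int, tier: str) -> int:
    """Upper bound of the fine for an infringement tier: the higher of a
    fixed amount or a share of total worldwide annual turnover."""
    tiers = {
        # tier label: (fixed cap in EUR, turnover share in per-mille)
        "prohibited_practices":  (35_000_000, 70),  # €35m or 7%
        "other_requirements":    (15_000_000, 30),  # €15m or 3%
        "incorrect_information": (7_500_000, 15),   # €7.5m or 1.5%
    }
    fixed, per_mille = tiers[tier]
    return max(fixed, turnover_eur * per_mille // 1000)
```

For a company with €1bn in turnover, 7% (€70m) exceeds the €35m floor, so the turnover-based figure sets the cap; for smaller firms, the fixed amount dominates.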

Panel discussion

Following Keith’s opening remarks, the panel discussed the many issues business leaders must consider to capitalise on the opportunity presented by AI while complying with the principles and spirit of the EU AI Act. Here are the key points:

  • The EU AI Act has significant implications for businesses, requiring the development of a responsible AI framework. This involves creating an AI exposure register or inventory of AI systems and categorising them in line with the EU AI Act.

  • The first step in working towards compliance is understanding where your organisation uses AI. Many organisations lack standards and rules for AI use, often using it simply because they can. A governance structure built on responsible AI principles should provide control over, and trust in, AI systems.

  • Dealing with unstructured data like documents and emails can be complex. Every company will have a governance and risk framework, but data in AI presents a unique risk. Harmonising and updating policies alone won’t instil confidence in the business; data governance also needs to be considered.

  • AI is already in organisations, in many cases through third-party partnerships. You need to understand how the outputs of third-party AI models flow through your organisation and how that data is used, to ensure appropriate safeguards are in place that are consistent with your AI governance policies.

  • Your role in the AI supply chain will help you determine your responsibility, but the most significant risk could be the misuse of AI by those in your own organisation.

  • Organisations should see the EU AI Act as a piece of consumer protection legislation that safeguards consumers’ fundamental rights and freedoms. If you are doing something that affects those rights and freedoms, you must take action to address this.

  • Boards need to ask questions to understand what AI systems are in use within their organisations and which third parties are using AI—and how.

  • Where organisations ban GenAI, employees often use it anyway in a quasi-personal capacity. Instead, they could use enterprise-grade tools with the proper boundaries for responsible usage. Boards and organisations must engage with the technology; banning it is a road to nowhere.

Conclusion

While the EU AI Act sets out rigorous standards and compliance requirements for businesses, it also paves the way for innovation within a framework of trust and safety. By embracing these regulations, businesses can navigate the complexities of AI development responsibly while differentiating themselves in a competitive and rapidly evolving digital marketplace.

Key actions businesses can take today

The EU AI Act sets strict guidelines for AI governance, meaning businesses must reassess their AI strategies and practices. Companies should take these three steps to comply with the regulations and use AI responsibly.

  1. Take action: To ensure compliance with AI regulations, the first step is to conduct an internal audit of your AI capabilities. This will help you understand your current position and identify the key personnel who will lead your AI initiatives. Assemble a team with diverse skills and knowledge, including technical expertise and an understanding of AI’s broader implications. At the same time, establish an AI literacy programme throughout your organisation to ensure everyone comprehends AI’s potential impact, benefits and risks. This foundational step will ensure that knowledgeable and capable individuals lead your AI journey and that your organisation moves forward with a shared vision.

  2. AI exposure register: Creating an AI exposure register is essential for any organisation. This register should document every AI application and third-party AI collaboration in detail. The aim is to understand how AI is used across various departments and projects. The register should provide a strategic overview that includes the types of AI technologies used, their purposes and the data they interact with. This register can aid in regulatory compliance and serve as a strategic asset for managing AI risks and opportunities.
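An AI exposure register is ultimately a structured inventory, so it lends itself to a simple record type. The field names below are illustrative assumptions about what such a register might capture, not a schema prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One AI system or third-party AI collaboration in the exposure register.
    Fields are illustrative; adapt them to your own governance framework."""
    system_name: str
    owner: str                  # accountable business unit or person
    purpose: str                # what the system is used for
    risk_tier: str              # e.g. "high", "limited", "minimal"
    third_party: bool = False   # sourced from an external vendor?
    data_categories: list = field(default_factory=list)  # data it interacts with

register: list[RegisterEntry] = [
    RegisterEntry(
        system_name="CV screening assistant",
        owner="HR",
        purpose="shortlisting job applicants",
        risk_tier="high",
        third_party=True,
        data_categories=["personal data"],
    ),
]

# A register structured this way supports the strategic overview directly,
# e.g. which entries need priority compliance review:
high_risk = [e.system_name for e in register if e.risk_tier == "high"]
```

Keeping the register queryable like this is what turns it from a compliance document into the strategic asset described above.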

  3. Gap analysis: Performing a gap analysis is essential to complying with the EU AI Act’s requirements. This involves evaluating your current AI governance and compliance practices to identify areas where they fall short of the Act’s standards. You must take a balanced approach to addressing these gaps—fix immediate issues where possible and plan strategic adjustments for more complex problems. This may involve reviewing your data handling practices, increasing transparency in your AI systems, or developing more robust oversight mechanisms. A comprehensive gap analysis will help you comply with regulations and strengthen your overall AI governance framework, ensuring your AI systems are built responsibly with trust and transparency in mind.

By taking these steps, businesses can not only navigate the complexities of AI regulation but also leverage it to refine their AI strategies and foster innovation within a framework of responsible use.

We are here to help 

In the rapidly changing AI regulation and compliance landscape, navigating your business towards a future where innovation meets responsibility can be overwhelming. Our deep expertise in AI governance and responsible AI, together with our thorough understanding of the regulatory landscape, makes us uniquely positioned to guide your business through these complex challenges. Our tech-powered team of experts can provide the strategic insights you need to comply with the latest AI regulations and drive your business forward. Contact us today.


Contact us

Keith Power

Partner, PwC Ireland (Republic of)

Tel: +353 86 824 6993

Moira Cronin

Partner, PwC Ireland (Republic of)

Tel: +353 86 377 1587

James Scott

Director, PwC Ireland (Republic of)

Tel: +353 87 144 1818
