Three no-regret moves to explore AI business potential and regulatory impact at the same time

  • Insight
  • July 22, 2024

Keith Power

Partner, PwC Ireland (Republic of)

The EU Artificial Intelligence Act is here: now what?

With the introduction of the EU Artificial Intelligence Act, exactly three years after its first draft, organisations now face the challenge of understanding the business impact of this new regulation and determining appropriate measures to take. What makes this challenging is that, for the majority of organisations, thinking about the risk and compliance implications of AI coincides with the exploration of its business potential. Here are three 'no-regret' moves to address both challenges at once.

Exactly three years after the first draft, the EU AI Act is finally here. The new European law aims to ensure responsible and ethical use of artificial intelligence while encouraging innovation and competition. Its introduction raises questions within organisations: for most companies, thinking about the risks and regulatory implications of AI coincides with exploring its business potential.

In everyday practice, this often leads to the question: "Which should be addressed first: risk and regulatory implications, or business potential?" In reality, they are two sides of the same coin. As an organisation, you can therefore benefit from some immediate no-regret moves while exploring both the opportunities and the risks of AI for your business operations.

No-regret move #1: Map your landscape of current and expected AI applications

  1. Top-down: define current and foreseeable business opportunities and issues and compare these with the potential that (generative) AI technology offers. The outcome: your top-down defined AI use cases.

  2. Bottom-up: hold a brainstorming session with appropriate representation from the relevant business functions to identify potential AI use cases. The success factor in brainstorming is not overthinking it. The outcome: your bottom-up defined AI use cases.

  3. Combine both categories of AI use cases and plot these against two dimensions: 

    • overall business impact and

    • implementation effort required. 

  4. Highlight your ‘quick wins’ (high business impact, low implementation effort) and ‘high potentials’ (high business impact, high implementation effort). The outcome: your strategic landscape of AI applications.

  5. Create an inventory of your current AI applications, in use and in development, and add them to the strategic landscape of AI applications. Don’t forget third-party applications.

The inventory should at least capture:

  • the purpose and intended use of each AI system

  • the data it uses

  • its core functionality / workings

  • the processes, functions and (in)direct stakeholders it affects

  • its risk categorisation under the EU AI Act.

Result: a robust starting point for an AI strategy and a regulatory impact analysis.
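As an illustration, the inventory fields and the impact/effort quadrant above can be captured in a simple data model. This is a minimal sketch: the class, field and function names and the example entry are assumptions for illustration, not part of the EU AI Act or any specific tooling; only the four risk tiers follow the Act's risk-based approach.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    """Risk tiers under the EU AI Act's risk-based approach."""
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    """One entry in the strategic landscape of AI applications."""
    name: str
    purpose: str                  # purpose and intended use
    data_used: list[str]          # the data it uses
    functionality: str            # core functionality / workings
    stakeholders: list[str]       # processes, functions and (in)direct stakeholders
    risk_category: RiskCategory   # categorisation consistent with the EU AI Act
    business_impact: int          # 1 (low) .. 5 (high)
    implementation_effort: int    # 1 (low) .. 5 (high)

def classify(system: AISystem) -> str:
    """Plot a use case on the impact/effort quadrant (step 4)."""
    high_impact = system.business_impact >= 4
    high_effort = system.implementation_effort >= 4
    if high_impact and not high_effort:
        return "quick win"
    if high_impact and high_effort:
        return "high potential"
    return "backlog"

# Illustrative (hypothetical) example: a contract-summarisation assistant.
example = AISystem(
    name="Contract summariser",
    purpose="Summarise supplier contracts for the procurement team",
    data_used=["supplier contracts"],
    functionality="LLM-based text summarisation",
    stakeholders=["procurement", "legal", "suppliers"],
    risk_category=RiskCategory.LIMITED,
    business_impact=4,
    implementation_effort=2,
)
print(classify(example))  # quick win
```

In practice the impact and effort scores come out of the workshop discussions in steps 1 and 2; the point of the sketch is only that one inventory record can serve both the strategy view and the regulatory impact analysis.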

No-regret move #2: Raise awareness and upskill employees

For every job, function or role out there, the question is not if AI will change it, but when. The absence of an AI strategy is no reason to postpone offering employees upskilling opportunities, or to delay creating a safe learning environment in which they can build skills in using AI and dealing with the risks of the technology. The latter is especially important because employees may start working with (generative) AI on their own initiative. Agility is the key word here. Applying the latest generation of AI technology is like learning to work with a new colleague: you have to spend time together to get attuned to each other.

For now, upskilling should focus on three topics:

  1. Introduction to (generative) AI and its principles: This topic provides an overview of (generative) AI and explains its fundamental principles and applications. Employees will learn to understand the potential benefits and challenges associated with using (generative) AI.

  2. Responsible use of (generative) AI: This topic highlights the importance of responsible and ethical AI use. Employees learn about risk considerations, including human impact, ethics, bias, fairness, privacy, and transparency, in the context of AI applications and the consequence(s) of their use. They will gain an understanding of the need to ensure that AI systems are developed and deployed in a responsible and accountable manner, in accordance with new legal requirements under the AI Act.

  3. Prompt engineering: This topic focuses on the concept of prompt engineering, which involves designing effective prompts or instructions to direct the behaviour of a generative AI model. Employees will learn how to craft prompts that produce desired outputs while avoiding unintended biases or undesirable outcomes. They will gain an understanding of the significance of prompt engineering for achieving reliable and ethical AI results.
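To make the third topic concrete: one common prompt-engineering practice is to separate the role, the context, the task and explicit constraints rather than writing a single unstructured request. The helper below is a generic sketch of that practice, not tied to any specific model, vendor or PwC methodology; all names and the example content are illustrative assumptions.

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt: role, context, task, then explicit constraints."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraint_lines}"
    )

# Hypothetical example in the spirit of this article.
prompt = build_prompt(
    role="an assistant supporting a compliance analyst",
    context="The company is assessing its AI use cases under the EU AI Act.",
    task="List questions to ask when determining a system's risk category.",
    constraints=[
        "Do not give legal advice; flag open questions for a qualified reviewer.",
        "State which part of the context each question relies on.",
        "Answer 'unknown' rather than guessing.",
    ],
)
print(prompt)
```

Explicit constraints such as "answer 'unknown' rather than guessing" are one simple way employees can reduce the risk of unreliable outputs, which is exactly the link between this topic and the responsible-use topic above.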

By covering these three key topics, organisations can provide employees with a comprehensive understanding of (generative) AI, responsible AI use, and the importance of prompt engineering for effective and ethical AI application.

Result: a workforce equipped to execute the (future) AI strategy, to handle AI responsibly, and to shape, implement and comply with legal requirements.

No-regret move #3: Implement responsible use guidelines

Responsible use of AI revolves around desired business conduct. First, it requires awareness and clarity about what that conduct is; second, the ability to recognise the associated risks in practice and to respond to them effectively. Organisations should establish simple but clear and workable responsible use guidelines. These guidelines address what should always be done and what should never happen (the 'non-negotiables') when it comes to the use of AI and data.

To determine the working principles for daily use, organisations can draw inspiration from ethical AI principles such as transparency, accountability, human oversight, and social and ecological well-being, as formulated in 2019 by the European Commission's High-Level Expert Group on AI. These principles provide broad guidance and usually need to be operationalised further to be workable in daily practice.

When developing these guidelines for responsible use, it is important to strike an appropriate balance between setting boundaries and offering freedom for innovation within the organisation. After all: no risk, no innovation.

Result: clear criteria to guide the AI strategy and its execution, end-to-end through the organisational AI lifecycle.

We are here to help you

As the opportunities and risks of AI evolve at an unprecedented pace, the motto 'progress over perfection' is more important than ever. The question for many organisations is not if they will be impacted by AI, but when. These three no-regret moves will help organisations get started as they navigate the risks associated with AI in a responsible and ethical way.

Responsible AI

Harness AI’s limitless potential while managing risks

Contact us

Keith Power


Partner, PwC Ireland (Republic of)

Tel: +353 86 824 6993

Moira Cronin


Partner, PwC Ireland (Republic of)

Tel: +353 86 377 1587

James Scott


Director, PwC Ireland (Republic of)

Tel: +353 87 144 1818

Laoise Mullane


Director, PwC Ireland (Republic of)

Tel: +353 87 160 6501
