The European Union's Artificial Intelligence Act (EU AI Act) came into force on 1 August 2024. Organisations must ensure their workforce attains an appropriate level of AI literacy by 2 February 2025. In this insight, we outline the benefits and challenges of AI literacy within the context of the EU AI Act and provide practical tips and best practices for organisations to comply.
Artificial intelligence (AI) is transforming the world of work in unprecedented ways, particularly following the growing adoption of Generative AI (GenAI) technologies.
According to the EU AI Act, an AI system is a machine-based system designed to operate with varying levels of autonomy and may exhibit adaptiveness after deployment. From the input it receives, it infers how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. GenAI is an AI system that uses machine learning algorithms to create content, including audio, code, images, text, simulations and videos.
The latest PwC GenAI Business Leaders Survey, which explores the perspectives of Irish executives on the impact, ownership and planned use of GenAI, shows that as of June 2024, a significant majority of respondents (83%) expect GenAI to have a substantial impact on their businesses over the next five years — a 9% increase from November 2023. The survey also shows that organisations have already begun to realise the value of GenAI, primarily in productivity, operational efficiency and enhanced experience.
The potential for AI to continue transforming the way people work is staggering. Organisations must immediately equip their people with the knowledge, skills and behaviours to leverage this transformative opportunity. However, they must also mitigate the ethical, societal and security risks that AI introduces.
The EU AI Act, which came into force on 1 August 2024, is a crucial piece of legislation aimed at safeguarding against these risks at an EU level, ensuring that AI is trustworthy, human-centric and protects fundamental rights. To learn more about the EU AI Act, read our recent article: ‘The EU AI Act and its impact on business’.
To comply with this legislation, one of the challenges organisations now face is ensuring the adequate AI literacy of their workforce. AI literacy, as defined in the EU AI Act, is the “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”. In other words, AI literacy can be interpreted as the ability to understand, use and critically evaluate AI applications and their outcomes in the context of their use and risk factors. AI literacy requirements will apply from early February 2025.
While AI literacy will now be a regulatory requirement, it is also expected to correlate strongly with an organisation's ability to realise AI's anticipated benefits. In our work with clients, we see first-hand the impact that adoption activities, including communication and upskilling, have on adoption levels and benefits realisation. There is a win-win opportunity: organisations can increase productivity, efficiency and creativity and improve customer and employee experience, while effectively understanding and managing AI risks and demonstrating their commitment to responsible AI practices.
The EU AI Act is open to interpretation, with Article 4 stating: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
Microsoft’s Work Trend Index Annual Report 2024 shows that 78% of employees bring their own AI (BYOAI) to work. This highlights a growing need for organisations to define and enforce clear business rules around using AI and what is appropriate and acceptable for their employees. Doing so provides a clear foundation for organisations to understand the AI systems in use, and the communication and understanding of these rules will be vital to minimising or eliminating the use of unauthorised AI systems.
Depending on the specific AI systems and applications, employees will need different skills and skill levels. There is no one-size-fits-all approach, as organisations will have different AI systems, roles and AI use cases. Organisations should determine distinct upskilling needs and group employees based on these needs. For some workers, a general overview of a broad range of AI tools will be sufficient; others will require deep technical training because of their roles and the tools they use.
At the broad overview level, employees who engage with AI systems will need upskilling on the organisation's business rules and the EU AI Act concerning the acceptable use of AI and the different risk categories and requirements. Those involved in developing AI systems, whose work directly affects how a system behaves and what it outputs, will require deeper, more detailed upskilling, including training in responsible AI, ethics and bias.
A clear understanding of the distinct upskilling needs across the workforce enables organisations to decide on the most appropriate upskilling interventions, now and in the future. In recent years, organisations have often addressed upskilling reactively, driven by disruption and immediate demands, and there may not have been the opportunity to develop a long-term view of the skills required. Balancing short-term needs with the skills needed in the future will be an ongoing challenge that is best tackled sooner rather than later.
Having clearly defined outcomes before developing relevant training will lay the foundation for effective training interventions, which should be specific to the employee group and their upskilling needs. Complying with the EU AI Act and ensuring AI literacy in your organisation will not be ‘finished’ in the next six months. Although compliance is achievable by the February 2025 deadline, monitoring, tracking and upskilling efforts will need to evolve continuously as the pace of AI change continues to accelerate. Adaptability and upskilling will become increasingly important, and those who embrace a growth mindset will unlock future opportunities.
Making access to AI systems contingent on an employee explicitly confirming their understanding and compliance is a practical way of incentivising engagement and of tracking AI system use and compliance across the organisation.
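As a minimal sketch of this gating approach, the snippet below records policy acknowledgements and grants access only to employees who have confirmed the current policy version. The function names, the in-memory store and the employee identifiers are all illustrative assumptions; a real implementation would sit within the organisation's identity and access management tooling.

```python
from datetime import date

# Hypothetical in-memory record of policy acknowledgements; a real system
# would back this with the organisation's identity and access tooling.
acknowledgements: dict[str, tuple[str, date]] = {}

def record_acknowledgement(employee_id: str, policy_version: str) -> None:
    """Store the policy version an employee has confirmed, with the date."""
    acknowledgements[employee_id] = (policy_version, date.today())

def can_access_ai_system(employee_id: str, required_policy_version: str) -> bool:
    """Grant access only if the employee confirmed the current policy version."""
    confirmed = acknowledgements.get(employee_id)
    return confirmed is not None and confirmed[0] == required_policy_version

record_acknowledgement("emp-001", "v2")
print(can_access_ai_system("emp-001", "v2"))  # True: confirmed current version
print(can_access_ai_system("emp-002", "v2"))  # False: no confirmation on record
```

Tying the acknowledgement to a policy version also means access can be re-gated automatically whenever the business rules are updated.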
Compliance with the EU AI Act will require organisations to maintain a clear record of the AI application landscape, capturing current and expected AI applications, including third-party applications. This raises critical questions for organisations: Who owns the AI landscape? Where will responsibility for EU AI Act compliance sit? How is the landscape assessed, and how are new AI systems approved and implemented? Establishing clarity around governance, roles and responsibilities is a necessary step in the compliance journey.
With these questions answered, organisations can develop an inventory of the current and expected AI landscape. Having a clear overview of their internal AI landscape allows organisations to address regulatory implications while leveraging the full potential of AI. Our recent article, ‘Three no-regret moves to explore AI business potential and regulatory impact at the same time’, outlines how organisations can start collating AI usage. When creating this inventory, organisations should also capture the purpose of these applications, their intended uses, the data they use, the risk classification (consistent with the EU AI Act), and the stakeholders they affect. Additionally, defining the users of these systems, their roles, responsibilities and levels of skills and knowledge will lay the foundation for upskilling interventions and monitoring their completion.
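The fields described above can be captured in a simple structured record. The sketch below is one possible shape for such an inventory entry; the class names, field names and the example system are illustrative assumptions, and the risk tiers are only broadly aligned with the EU AI Act's classification.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    # Risk tiers broadly aligned with the EU AI Act's classification.
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI application inventory."""
    name: str
    purpose: str
    intended_use: str
    data_sources: list[str]
    risk_class: RiskClass
    affected_stakeholders: list[str]
    users_and_roles: dict[str, str] = field(default_factory=dict)  # user -> role

inventory = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Shortlist candidates for interview",
        intended_use="Recruitment support with human decision retained",
        data_sources=["applicant CVs"],
        risk_class=RiskClass.HIGH,  # illustrative classification
        affected_stakeholders=["job applicants"],
        users_and_roles={"hr_team": "deployer"},
    ),
]

# Example query: which systems need the most detailed upskilling and oversight?
high_risk = [s.name for s in inventory if s.risk_class is RiskClass.HIGH]
print(high_risk)
```

Keeping users and roles on each record is what links the inventory to upskilling: it lets an organisation derive who needs which training and monitor completion per system.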
While it may be tempting to implement new and exciting AI systems, organisations should not introduce AI without clear business objectives and outcomes in mind. Organisations should define the benefits they hope to achieve and how to measure and track them over time. Tracking engagement with upskilling initiatives will be crucial in demonstrating EU AI Act compliance. However, organisations should take this a step further to offer data-driven insights into the impact of these initiatives on benefits realisation over time.
We know that employees are already using GenAI on their own initiative. Organisations cannot afford to wait until they have all the information and a detailed strategy before starting to upskill their employees. The upskilling journey for employees will be continuously iterated, and many areas can be focused on today, including:
GenAI and its principles: Understanding what this technology is and how it works is the first step in helping employees grasp the potential benefits and challenges associated with using it.
Responsible use of AI and EU AI Act risk classification: Ensuring compliance with the EU AI Act requires awareness and understanding of the different risk classifications, particularly prohibited AI and the requirements around high-risk AI systems. This topic also highlights the importance of responsible AI use, considering factors such as human impact, bias, fairness, privacy and transparency in the context of AI systems and the consequences of their use.
Prompt engineering and critical review of outputs: Designing effective prompts or instructions to direct GenAI systems and having the knowledge and skills to critically evaluate the output is becoming an increasingly important skill as people and organisations use GenAI technologies more extensively.
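To make the prompt engineering point above concrete, the sketch below fills a simple template covering role, task, constraints and output format. The template structure is an illustrative assumption, not a prescribed standard; the value of such a template is that requests to a GenAI system become explicit, consistent and easier to review critically.

```python
# A simple prompt template illustrating a structure often taught in
# prompt-engineering training: role, task, constraints and output format.
PROMPT_TEMPLATE = (
    "You are {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Answer format: {output_format}"
)

def build_prompt(role: str, task: str, constraints: str, output_format: str) -> str:
    """Fill the template so a request to a GenAI system is explicit and reviewable."""
    return PROMPT_TEMPLATE.format(
        role=role, task=task, constraints=constraints, output_format=output_format
    )

prompt = build_prompt(
    role="a compliance analyst",
    task="Summarise the AI literacy obligations in Article 4 of the EU AI Act",
    constraints="Use plain language and flag anything uncertain",
    output_format="Three bullet points",
)
print(prompt)
```

A shared template like this also supports the critical-review half of the skill: when everyone states constraints and expected format up front, it is easier to check whether an output actually met them.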
This upskilling journey will be more comfortable for some than others. Organisations should take steps to ensure people are not excluded or left behind. For example, the term “prompt engineering” mentioned above may be intimidating or off-putting to employees who do not consider themselves technologically advanced. In simpler terms, it involves asking a question or designing instructions for a GenAI model to produce an output — and, like most things, people get better at it with practice. GenAI also improves its understanding of our questions and instructions. Recognising that people are starting from different points in their unique learning journeys and offering the right upskilling opportunities to progress their learning will make a significant difference.
Our expert teams have extensive experience helping organisations build capability, upskill and meet regulatory requirements in practical and impactful ways. Please get in touch to find out more.