Some 83% of Irish business leaders expect GenAI to significantly impact their business. While some insurers are unsure how to leverage GenAI across their value chain, others have begun focusing on the AI wave’s potential competitive advantages. Regardless of their current position, the EU AI Act mandates action. This new law aims to ensure the responsible use of AI while encouraging innovation and competition. For most companies, AI’s risks and regulatory implications arrive at the same time as its business potential. Rather than prioritising one over the other, they should be addressed in tandem.
As outlined in ‘The EU AI Act: What you need to know’, insurers must take crucial steps to ensure compliance with the Act, which applies to all sub-fields of AI, including machine learning, deep learning and GenAI. Embarking on this journey requires a proactive approach to address the ‘who, what and when’ of the regulation. Here are three key actions insurers should take now:
The EU AI Act broadly defines an ‘AI system’ as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. This encompasses many systems that predict, classify or interpret data. For insurers, this may include forecasting tools, pricing models, fraud detection systems, chatbots and more.
Developing an AI exposure register based on this definition is a critical action businesses should take immediately. However, ensuring the completeness of such an inventory can be challenging, as it requires distinguishing between deployers and providers of AI systems. For instance, insurers may need to inventory the AI systems their managing general agents use to understand their AI exposure fully.
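To make the register idea concrete, here is a minimal sketch of what one entry in such an AI exposure register might capture. The field names, example systems and vendor name are illustrative assumptions, not a prescribed schema; a real register would follow the organisation's own data governance standards.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystemEntry:
    """One row of a hypothetical AI exposure register (illustrative only)."""
    name: str
    description: str
    role: str                     # "provider" or "deployer" under the Act
    vendor: Optional[str] = None  # set for third-party systems, e.g. MGA tools
    use_cases: list = field(default_factory=list)
    risk_category: str = "unclassified"  # to be assessed in the next step

# Example entries an insurer might record (hypothetical systems)
register = [
    AISystemEntry(
        name="Motor pricing model",
        description="ML model supporting motor premium pricing",
        role="provider",
        use_cases=["pricing"],
    ),
    AISystemEntry(
        name="Claims chatbot",
        description="Third-party GenAI chatbot for claims notification",
        role="deployer",
        vendor="ExampleVendor Ltd",  # hypothetical vendor
        use_cases=["customer service"],
    ),
]

# Simple completeness check: flag entries still awaiting risk classification
unclassified = [e.name for e in register if e.risk_category == "unclassified"]
```

Even a sketch like this surfaces the provider/deployer distinction the article highlights: the chatbot entry records a vendor, prompting the follow-up question of what obligations fall on the insurer as deployer.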
The EU AI Act categorises AI systems based on their risk characteristics, classifying them as permitted without restriction, permitted under certain conditions, or prohibited. This approach assesses the potential risks an AI system may pose to EU citizens’ health, safety or fundamental rights. High-risk AI systems, which are the focus of the regulation, are permitted but must adhere to stringent documentation, monitoring and quality requirements for development and use.
The Commission defines the group of high-risk AI systems as including “AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.”
Correctly allocating AI systems to the appropriate risk category under the EU AI Act is crucial to avoid substantial fines due to non-compliance. In some cases, insurers may face complex interpretations and need legal advice. For example, they may need to determine whether the legal obligation for motor insurance in Ireland is considered to provide “access to and enjoyment of essential private services and essential public services and benefits”, thus classifying the associated AI systems as high-risk.
Legally compliant risk classification is relevant not only in the EU AI Act context but also due to the proposed EU AI Liability Directive published in September 2022. This directive regulates the liability of deployers, distributors and manufacturers for damages caused by AI systems and aligns with the risk levels defined in the EU AI Act. Consequently, those who misclassify AI systems must expect fines and civil liability in the event of missing, inadequate or unreliable AI governance and documentation.
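A first-pass triage of the register can be sketched as a simple rule-based check. The indicator sets below are illustrative assumptions drawn from the examples in this article; the Act's actual Annex III criteria, and legal advice for borderline cases such as the motor insurance question above, must drive the final classification.

```python
def triage_risk(use_case: str) -> str:
    """Illustrative first-pass triage only; not a legal determination."""
    # Annex III names risk assessment and pricing for natural persons
    # in life and health insurance as high-risk (per this article).
    high_risk_indicators = {
        "life and health risk assessment",
        "life and health pricing",
    }
    prohibited_indicators = {"social scoring"}  # example of a banned practice

    if use_case in prohibited_indicators:
        return "prohibited"
    if use_case in high_risk_indicators:
        return "high-risk"
    # Everything else is routed to legal review rather than assumed safe,
    # reflecting the interpretation questions raised above (e.g. motor cover).
    return "needs legal review"
```

Defaulting to "needs legal review" rather than "low-risk" is deliberate: as the article notes, misclassification can attract both fines and civil liability.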
The EU AI Act imposes extensive requirements for high-risk AI systems, presenting complex challenges for organisations. Key requirement areas for high-risk AI systems include:
Risk management system
Data governance
Technical documentation
Record-keeping
Transparency obligations
Human oversight
Accuracy, robustness and cybersecurity
General-purpose AI (GPAI) models, such as those underpinning OpenAI’s ChatGPT and Microsoft Copilot, which can perform a wide range of intellectual tasks, were recently included in the regulation. GPAI model providers must create technical documentation, provide instructions for use, put in place a policy to comply with EU copyright law, and publish a summary of the content used for training. Providers of GPAI models that pose a systemic risk must also conduct standardised model evaluations (e.g. adversarial testing), mitigate systemic risks, track and report serious incidents, and ensure cybersecurity.
A key challenge is integrating these requirements into existing processes and structures, such as IT compliance systems or Solvency II model risk management processes. Failure to do so may increase personnel and IT-related expenses.
According to the 2024 PwC GenAI Business Leaders Survey, only 7% of Irish business leaders have an AI governance structure in place. Complying with the EU AI Act requires a team effort, with significant contributions from all three lines of defence and various functions and roles within the organisation.
The EU AI Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force 20 days later, on 1 August 2024. Most of its provisions become applicable 24 months after that, on 2 August 2026, following a staggered approach. Insurers should closely monitor the implementation timeline and ensure they are prepared to comply with the Act’s requirements as they become effective. For a complete outline of the various milestones, read our recent insight.
The EU AI Act mandates businesses to implement comprehensive AI governance and compliance management systems, which are crucial for the rapid, secure and efficient development and operation of AI systems. The foundation for GenAI success lies in developing an AI strategy that aligns with your corporate strategy. Businesses should then focus on the following four key areas:
Apply: Identify and prioritise opportunities for value creation along the insurance value chain to ensure investments are targeted towards growth or cost optimisation.
Protect: Assess and enhance cybersecurity readiness to safeguard AI systems, counter AI-related threats and implement AI-specific data privacy strategies.
Comply: Implement a robust Responsible AI framework and comply with emerging AI-related standards and regulations (including the EU AI Act), including assurance and audit requirements.
Adopt: Define and build the necessary skills and competencies to adopt and leverage GenAI effectively, and implement appropriate governance mechanisms. With 55% of Irish business leaders believing that GenAI will either increase jobs or have no net impact, investing in upskilling and change management is essential.
As Irish insurers harness AI to drive digital transformation, the EU AI Act will significantly impact AI systems’ development, use and commercialisation in the coming years. Proactive compliance is essential. Implementing comprehensive AI governance and compliance management systems early provides a competitive edge and accelerates time-to-market for high-quality AI solutions. However, the complex requirements pose new challenges that require time to address effectively. To navigate this landscape and unlock the full potential of AI in your organisation, contact a member of our team today for expert guidance and support.
This article incorporates insights from ‘EU AI Act: European AI regulation and its implementation’, an in-depth analysis published by PwC Germany, adapted here for the Irish insurance context.