Artificial Intelligence (AI) is poised to revolutionise the insurance industry, offering unparalleled opportunities to enhance customer service, reduce costs and streamline operations. However, the path to successful AI adoption demands a commitment to Responsible AI — ensuring ethical, transparent and accountable use of these powerful technologies. With 70% of CEOs anticipating significant changes from Generative AI (GenAI) within three years, insurers must act now.
The application of AI is crucial for insurers as it offers opportunities to enhance customer convenience, reduce costs and support trust, which is essential for the industry. AI is disrupting the entire insurance value chain, enabling new products and services and changing long-standing underwriting processes.
In May 2024, PwC conducted a Finance Transformation Survey of 17 global insurers to understand key themes in finance transformation. AI was a key topic, though most respondents noted they were at an early or “proof of concept” stage of deployment. The survey highlights that AI adoption is critical to actuarial and finance functions. Respondents anticipate using AI for management reporting and on-the-fly analytics, which automate data collection, analysis and visualisation, leading to more efficient and accurate reports. This will free up time for further analysis and provide real-time insights for decision-making.
However, the complexity of AI also introduces new risks:
Model risks related to the development and performance of AI systems;
Data risks concerning data management and usage;
Usage risks arising from the misuse and manipulation of AI systems; and
System and infrastructure risks associated with AI implementation and operation.
According to our Finance Transformation Survey, data and IT risks, costs, capability and data confidentiality were significant blockers for organisations considering AI adoption.
Reputational risk is another significant concern for insurers using AI. Managing this risk through Responsible AI practices is crucial. Trust in AI requires more than just compliant, secure systems. It means deploying the right solutions for the right situation with the appropriate data, policies and oversight to achieve relevant, reliable results.
To address these risks, insurers should implement a Responsible AI framework with robust data governance practices and explore cost-effective AI solutions. ‘Mastering the EU AI Act on Insurers’ Route to AI Success’ outlines key actions to design and implement robust governance practices.
Using PwC’s Responsible AI toolkit, we can help you evaluate and augment existing practices or create new ones to harness AI and prepare for upcoming regulations, giving your organisation a distinct competitive edge.
The PwC Responsible AI Toolkit is a suite of customisable frameworks, tools and processes designed to help organisations harness AI in an ethical and responsible way. The toolkit’s five key dimensions are:
Ethics and regulation;
Bias and fairness;
Interpretability and explainability;
Robustness, security, safety and privacy; and
Governance.
Insurers must consider the ethical implications of AI, ensuring systems are fair, transparent, and non-discriminatory. The Central Bank of Ireland (CBI) emphasises the importance of data ethics in the insurance sector, highlighting the need for robust data governance frameworks. It notes that the insurance industry has long been data-centric, with data-led decision-making at the heart of risk assessment, underwriting and claims management.
In its Data Ethics Within Insurance paper, the CBI also notes that advancements in big data and related technologies present increased opportunities for collecting and processing more granular and personalised data. This can result in more efficient business processes and benefits for consumers and firms. However, these advancements also bring potential risks, including the inappropriate use of data and technology, which could lead to unfair treatment and negative outcomes for consumers, such as bias, misuse of personal data and data privacy concerns.
Incorporating Responsible AI for insurers is crucial for the ethical and fair treatment of stakeholders, especially policyholders.
Insurers must uncover bias in the underlying data and the model development process, helping the business understand which practices may lead to unfairness. Bias is often identified as one of the biggest risks associated with AI, and it is especially significant in the insurance industry, where decisions can profoundly impact customers’ lives.
AI bias can arise from various sources, such as biased training data, flawed algorithms or systemic biases within the organisation. These biases can lead to unfair treatment of certain customer groups, resulting in legal liabilities and reputational damage. Bias is more than just a technical issue; it’s a human issue, reflecting the subjective nature of fairness and the societal context in which AI operates.
As more information becomes available and the model matures, it’s important to guard against unintended bias against particular groups. Transparency is critical in identifying these biases.
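One simple, transparent way to surface potential bias of the kind described above is to compare outcome rates across customer groups. The sketch below checks decisions against demographic parity, one common fairness measure; the group labels, decision data and 10% tolerance are purely illustrative assumptions, not a recommended standard.

```python
# Illustrative sketch: comparing approval rates across groups as a
# basic demographic-parity check. All data and thresholds are made up.

def approval_rate(decisions):
    """Share of positive (approve) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical claims-approval decisions (1 = approved) per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
}

gap, rates = demographic_parity_gap(decisions)
print(f"Approval rates: {rates}")
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.10:  # illustrative tolerance, to be set by governance policy
    print("Gap exceeds tolerance - investigate data and model for bias")
```

A check like this is only a starting point: a large gap does not prove unfair treatment, and a small gap does not rule it out, but monitoring such metrics over time makes unintended drift visible.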
Insurers must be able to explain to different stakeholders both how a model makes decisions overall and what drives its individual predictions. Our toolkit provides an approach and utilities that make AI-driven decisions interpretable and easily explainable, both by those who operate them and to those affected by them.
Explainability and AI governance are critical for building trust in AI applications. Models can act as ‘black boxes’, lacking insight into how and why they produce their outputs. New data sources, application areas and consumers of model output can introduce new risks that need to be managed. A lack of transparency in AI decisions is frustrating for end-users or customers and can expose an insurer to operational, reputational and financial risks. To instil trust in AI systems, people must be enabled to look ‘under the hood’ at their underlying models, explore the data used to train them, understand the reasoning behind each decision and provide coherent explanations to all stakeholders in a timely manner. Our Responsible AI toolkit is designed to help insurers build and maintain trust with stakeholders and society by ensuring AI-driven decisions are interpretable and explainable.
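To illustrate what looking ‘under the hood’ can mean in practice, the sketch below applies permutation importance, a model-agnostic technique that measures how much a model’s error grows when one input is shuffled. The toy premium model, the policyholder rows and the feature names are hypothetical; in practice the deployed pricing or claims model would be used.

```python
# Minimal sketch of permutation importance: shuffle one feature and
# measure how much prediction error increases. All data is illustrative.
import random

def toy_premium_model(row):
    """Hypothetical pricing rule in which prior claims dominate the premium."""
    age, prior_claims, vehicle_value = row
    return 100 + 2 * age + 150 * prior_claims + 0.01 * vehicle_value

def mean_abs_error(rows, targets, model):
    return sum(abs(model(r) - t) for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, model, feature_idx, seed=0):
    """Error increase when one feature's column is shuffled."""
    rng = random.Random(seed)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    shuffled_rows = [
        r[:feature_idx] + (v,) + r[feature_idx + 1:]
        for r, v in zip(rows, shuffled_col)
    ]
    baseline = mean_abs_error(rows, targets, model)
    return mean_abs_error(shuffled_rows, targets, model) - baseline

# Illustrative policyholder rows: (age, prior_claims, vehicle_value).
rows = [(30, 0, 20000), (45, 2, 15000), (52, 1, 30000), (28, 3, 10000)]
targets = [toy_premium_model(r) for r in rows]

names = ["age", "prior_claims", "vehicle_value"]
for i, name in enumerate(names):
    score = permutation_importance(rows, targets, toy_premium_model, i)
    print(f"{name}: importance {score:.1f}")
```

Because shuffling prior claims degrades predictions far more than shuffling age or vehicle value, the output makes the model’s main driver visible, which is the kind of coherent, stakeholder-ready explanation the paragraph above calls for.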
According to our Global Digital Trust Insights Survey 2025, 38% of respondents across various industries reported inadequate internal controls and risk management around GenAI.
Insurers must assess AI performance over time to identify potential disruptions or challenges to long-term performance, safety and consumer data privacy. Implementing proper security controls and policies for using GenAI is crucial to promote responsible use within the business and protect data. Ensuring robustness, security, safety and privacy in AI systems is essential for maintaining trust and compliance. Our Responsible AI toolkit offers frameworks and tools to help insurers achieve these goals and ensure Responsible AI implementation.
Insurers must introduce enterprise-wide and end-to-end accountability for AI applications, data and data use, ensuring consistency of operations to minimise risk and maximise return on investment. By implementing strong governance frameworks, insurers can ensure that AI systems are used responsibly.
We are committed to guiding insurers towards successful AI adoption that is ethical, fair and trusted. Our Responsible AI toolkit enables insurers to build high-quality, transparent and explainable AI applications that generate trust and inspire confidence. Investing in Responsible AI from the outset provides a solid foundation to kick-start your AI journey. To navigate this landscape and unlock the full potential of AI in your organisation, contact a member of our team today for expert guidance and support.