Responsible AI: How insurers can lay strong foundations to start their AI journey

  • Insight
  • December 04, 2024

Donna McEneaney

Director, PwC Ireland (Republic of)

Louise Murphy

Director, PwC Ireland (Republic of)

Learn how investing in Responsible AI from the outset can harness the full potential of AI while maintaining trust and compliance.

Artificial Intelligence (AI) is poised to revolutionise the insurance industry, offering unparalleled opportunities to enhance customer service, reduce costs and streamline operations. However, the path to successful AI adoption demands a commitment to Responsible AI — ensuring ethical, transparent and accountable use of these powerful technologies. With 70% of CEOs anticipating significant changes from Generative AI (GenAI) within three years, insurers must act now.

AI in insurance: opportunities and risks

The application of AI is crucial for insurers as it offers opportunities to enhance customer convenience, reduce costs and support trust, which is essential for the industry. AI is disrupting the entire insurance value chain, enabling new products and services and changing long-standing underwriting processes.

In May 2024, PwC conducted a Finance Transformation Survey of 17 global insurers to understand key themes in finance transformation. AI was a key topic, though most respondents noted they were at an early or “proof of concept” stage of deployment. The survey highlights that AI adoption is critical to actuarial and finance functions. Respondents anticipate using AI for management reporting and on-the-fly analytics, which automate data collection, analysis and visualisation, leading to more efficient and accurate reports. This will free up time for further analysis and provide real-time insights for decision-making.

However, the complexity of AI also introduces new risks. These include model risks related to AI systems’ development and performance, data risks concerning data management and usage, usage risks related to the misuse and manipulation of AI systems, and system and infrastructure risks associated with AI implementation and operation. According to our Finance Transformation Survey, data and IT risks, costs, capability and data confidentiality were significant blockers for organisations considering AI adoption.

Reputational risk is another significant concern for insurers using AI. Managing this risk through Responsible AI practices is crucial. Trust in AI requires more than just compliant, secure systems. It means deploying the right solutions for the right situation with the appropriate data, policies and oversight to achieve relevant, reliable results.

To address these risks, insurers should implement a Responsible AI framework with robust data governance practices and explore cost-effective AI solutions. ‘Mastering the EU AI Act on Insurers' Route to AI Success’ outlines key actions to design and implement robust governance practices.

Using PwC’s Responsible AI toolkit, we can help you evaluate and augment existing practices or create new ones to harness AI and prepare for upcoming regulations, giving your organisation a distinct competitive edge.

PwC's Responsible AI Toolkit

The PwC Responsible AI Toolkit is a suite of customisable frameworks, tools and processes designed to help organisations harness AI in an ethical and responsible way. The toolkit’s five key dimensions are:

  1. Ethics and regulation;

  2. Bias and fairness;

  3. Interpretability and explainability;

  4. Robustness, security, safety and privacy; and

  5. Governance.

1. Ethics and regulation

Insurers must consider the ethical implications of AI, ensuring systems are fair, transparent and non-discriminatory. The Central Bank of Ireland (CBI) emphasises the importance of data ethics in the insurance sector, highlighting the need for robust data governance frameworks. It notes that the insurance industry has long been data-centric, with data-led decision-making at the heart of risk assessment, underwriting and claims management.

In its Data Ethics Within Insurance paper, the CBI also notes that advancements in big data and related technologies present increased opportunities for collecting and processing more granular and personalised data. This can result in more efficient business processes and benefits for consumers and firms. However, these advancements also bring potential risks, including the inappropriate use of data and technology, which could lead to unfair treatment and negative outcomes for consumers, such as bias, misuse of personal data and data privacy concerns.

Incorporating Responsible AI for insurers is crucial for the ethical and fair treatment of stakeholders, especially policyholders.

2. Bias and fairness

Insurers must uncover bias in the underlying data and model development process, helping the business understand what processes may lead to unfairness. Bias in AI systems poses a significant risk, especially in the insurance industry, where decisions can profoundly impact customers’ lives. It is often identified as one of the biggest risks associated with AI.

AI bias can arise from various sources, such as biased training data, flawed algorithms or systemic biases within the organisation. These biases can lead to unfair treatment of certain customer groups, resulting in legal liabilities and reputational damage. Bias is more than just a technical issue; it’s a human issue, reflecting the subjective nature of fairness and the societal context in which AI operates.

As more information becomes available and the model matures, it’s important to guard against unintended bias against particular groups. Transparency is critical in identifying these biases.
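
To illustrate what this kind of transparency can look like in practice, the sketch below computes a simple demographic parity check on model outcomes, comparing positive-decision rates across groups defined by a protected attribute. The column names, the synthetic data and the 0.8 warning threshold are all illustrative assumptions, and demographic parity is only one of many fairness metrics an insurer might monitor; this is a minimal, generic sketch rather than part of any specific toolkit.

```python
import pandas as pd

def demographic_parity_ratio(df: pd.DataFrame,
                             outcome_col: str,
                             group_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    A value close to 1.0 suggests similar outcome rates between groups;
    a commonly cited (but context-dependent) warning threshold is 0.8.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical underwriting decisions: 1 = policy approved, 0 = declined.
decisions = pd.DataFrame({
    "approved": [1, 0, 0, 1, 0, 1, 1, 1, 0, 1],
    "age_band": ["18-30", "18-30", "18-30", "18-30", "18-30",
                 "31-60", "31-60", "31-60", "31-60", "31-60"],
})

ratio = demographic_parity_ratio(decisions, "approved", "age_band")
print(f"Demographic parity ratio across age bands: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold only, not a regulatory standard
    print("Outcome rates differ notably between groups; investigate further.")
```

A check like this is only a starting point: it flags a disparity but does not explain it, which is where monitoring of training data, features and decision processes comes in.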

3. Interpretability and explainability

Insurers must be able to explain both a model's overall decision-making and what drives individual predictions to different stakeholders. Our toolkit provides an approach and utilities to make AI-driven decisions interpretable and easily explainable, both by those who operate the systems and to those affected by them.

Explainability and AI governance are critical for building trust in AI applications. Models can act as ‘black boxes’, lacking insight into how and why they produce their outputs. New data sources, application areas and consumers of model output can introduce new risks that need to be managed. A lack of transparency in AI decisions is frustrating for end-users or customers and can expose an insurer to operational, reputational and financial risks. To instil trust in AI systems, people must be enabled to look ‘under the hood’ at their underlying models, explore the data used to train them, understand the reasoning behind each decision and provide coherent explanations to all stakeholders in a timely manner. Our Responsible AI toolkit is designed to help insurers build and maintain trust with stakeholders and society by ensuring AI-driven decisions are interpretable and explainable.
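
As a concrete illustration of looking 'under the hood', the sketch below uses permutation importance, which measures how much a model's performance drops when each input feature is shuffled, to show which inputs a model actually relies on. The model, feature names and synthetic data are assumptions for illustration, and permutation importance is just one of several explainability techniques (alongside approaches such as SHAP or LIME); this is a generic sketch, not a prescribed method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical claims data: three features, only the first drives the outcome.
n = 1_000
X = np.column_stack([
    rng.normal(size=n),   # claim_amount (scaled)
    rng.normal(size=n),   # policy_tenure (scaled)
    rng.normal(size=n),   # unrelated noise feature
])
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)
feature_names = ["claim_amount", "policy_tenure", "noise"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much accuracy falls when a feature is shuffled.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>15}: {importance:.3f}")
```

Output of this kind gives operators and reviewers a starting point for explaining model behaviour, though individual predictions typically also need local explanations that can be communicated to the affected customer.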

4. Robustness, security, safety and privacy

According to our Global Digital Trust Insights Survey 2025, 38% of respondents across various industries reported inadequate internal controls and risk management around GenAI.

Insurers must assess AI performance over time to identify potential disruptions or challenges to long-term performance, safety and consumer data privacy. Implementing proper security controls and policies for using GenAI is crucial to promote responsible use within the business and protect data. Ensuring robustness, security, safety and privacy in AI systems is essential for maintaining trust and compliance. Our Responsible AI toolkit offers frameworks and tools to help insurers achieve these goals and ensure Responsible AI implementation.

5. Governance

Insurers must introduce enterprise-wide and end-to-end accountability for AI applications, data and data use, ensuring consistency of operations to minimise risk and maximise return on investment. By implementing strong governance frameworks, insurers can ensure that AI systems are used responsibly.

Key actions businesses can take today

  1. Establish robust data governance to ensure data quality, privacy and security. This includes setting up frameworks for data management, usage and protection to mitigate risks associated with AI deployment.
  2. Regularly assess AI systems for biases and implement strategies to mitigate them. This involves scrutinising training data, algorithms and decision-making processes to ensure fairness and prevent discriminatory outcomes.
  3. Ensure that AI-driven decisions are transparent and explainable. Use tools and frameworks to make AI models interpretable, enabling stakeholders to understand and trust AI outputs.
  4. Introduce enterprise-wide governance structures to oversee AI applications. Establish clear accountability for AI systems, ensuring consistent operations and compliance with regulatory requirements.
  5. Assess readiness to adopt Generative AI (GenAI). Insurers are at various stages of AI adoption, and progress to the next stage of maturity depends on having the right foundations in place across AI strategy, data, technology, governance and operations. We have developed two readiness assessment tools, one for the EU AI Act and one for Responsible AI, which help clients understand where their current governance structures are fit for purpose and where adjustments are needed.

We are here to help you

We are committed to guiding insurers towards successful AI adoption that is ethical, fair and trusted. Our Responsible AI toolkit enables insurers to build high-quality, transparent and explainable AI applications that generate trust and inspire confidence. Investing in Responsible AI from the outset provides a solid foundation to kick-start your AI journey. To navigate this landscape and unlock the full potential of AI in your organisation, contact a member of our team today for expert guidance and support.


Contact us

Ronan Mulligan

Partner, PwC Ireland (Republic of)

Tel: +353 86 411 6027

Donna McEneaney

Director, PwC Ireland (Republic of)

Tel: +353 87 352 0970

Louise Murphy

Director, PwC Ireland (Republic of)

Ikhlaas Pohplonker

Manager, PwC Ireland (Republic of)
