The world of artificial intelligence (AI) took a massive leap forward with the emergence of ChatGPT in November 2022, widely deemed a game-changer in the world of technology. Organisations are no longer asking whether to use AI but how to adopt it to reduce operational costs, increase efficiency, grow revenue or improve customer experience. There has been a surge in the design and implementation of AI use cases across industries such as healthcare, retail, financial services, manufacturing and others.
While the emergence of AI is transformative, this powerful tool is not without its challenges, particularly the profound privacy concerns it raises. As organisations eagerly harness the potential of AI, it is important to explore the associated privacy risks.
Data collection: AI models rely on extensive datasets to learn and mimic human-like behaviour or provide valuable insights. The source of these datasets often includes diverse content from the internet, which might inadvertently contain personal or special category data, or sometimes even data exposed in a breach. As AI models evolve, their training datasets are likely to grow larger, increasing the risk of personal and special category data being included.
Algorithmic bias and discrimination: AI systems are trained on vast amounts of existing data, which may contain inherent biases. If left unchecked, biased algorithms may perpetuate those biases and lead to decisions that negatively impact certain groups of people, even where the organisation has no intention to discriminate.
Data subject requests: Once AI systems are trained and deployed, responding to certain data subject requests becomes increasingly difficult. Where data subjects request rectification or deletion of data used to train the AI system, organisations may need to retrain the model, take it down, or find an alternative resolution.
Transparency and informed consent: As AI systems become commonplace in organisations, users will increasingly interact with AI without realising it, including instances where they are subject to automated decision-making. Additionally, the consent provided by these individuals may not be valid if they are not aware of how their data will be used, especially by AI systems.
Data breaches: Multiple high-profile data breaches have occurred, including one at OpenAI (the company behind ChatGPT). Organisations must ensure that personal and special category data is stored and processed securely when collecting and using extensive datasets to train AI systems.
Regulatory requirements and industry standards: Even though AI is considered a novel technology, there are existing and upcoming regulations and standards that define and guide the use of AI systems, including the General Data Protection Regulation (GDPR), the upcoming EU AI Act, ISO/IEC 22989, the National Institute of Standards and Technology (NIST) AI Risk Management Framework and others. Organisations must demonstrate compliance with these regulations and standards to maintain customer trust and meet procurement standards in the market.
Misuse of personal data in AI-enabled cyberattacks: Malicious actors have begun using AI to create more sophisticated and effective attacks that are harder to defend against. This includes leveraging personal data, such as audio clips, to create deepfake content for advanced phishing attempts and other scams.
Inaccurate responses: AI is a powerful tool, but the quality of the data an AI system is trained on and its ability to identify correct responses are important considerations. Generative AI programs commonly respond based on probabilities identified within the datasets used to train the AI rather than actual, accurate data points. This can result in inaccurate responses and may cause issues if users do not verify the accuracy of the system's responses.
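To illustrate the point about probabilities, the minimal Python sketch below shows how a generative model picks the most statistically likely continuation rather than looking up a verified fact; the tokens and probabilities are invented purely for illustration and do not reflect any real model.

```python
import random

# Toy next-token distribution a model might have learned from its training data.
# The tokens and probabilities below are invented for illustration only.
next_token_probs = {
    "1969": 0.55,  # plausible continuation that happens to be correct
    "1968": 0.30,  # plausible but incorrect continuation
    "1972": 0.15,  # plausible but incorrect continuation
}

def sample_continuation(probs):
    """Pick a continuation in proportion to the learned probabilities."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 45% of the time this toy "model" produces a wrong answer that still
# looks plausible, which is why responses should be verified against source data.
print(sample_continuation(next_token_probs))
```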
To successfully navigate the concerns listed above while developing and integrating AI systems, organisations should consider the following best practices:
AI governance: The primary role of AI governance is to document and establish a system of rules, processes, structures, frameworks and technological tools (including setting up an AI committee) to ensure the ethical and responsible development and use of AI systems. To ensure success, the teams involved in developing AI governance should be interdisciplinary, drawing on AI development, legal, privacy, information security, customer success and other functions.
Privacy by design: The foundation of responsible AI lies in the concept of 'privacy by design', which requires that data protection and privacy considerations be embedded throughout the development lifecycle of any AI system. This includes incorporating privacy-enhancing technologies, ensuring appropriate security, complying with regulatory requirements and applying other privacy-specific principles. Some AI systems have a black box-like nature that makes it harder to detect and fix ethical, privacy and regulatory issues once deployed, increasing the need for privacy by design. Other processes may pose too high a risk to automate through AI and will require controls such as a 'human in the loop'.
Transparency: Users must be provided with clear and transparent communication, through privacy notices and other means, confirming that AI systems are used to process their data (including details of any automated decision-making), and explaining how their data is collected and processed, how long it will be stored and what rights they have. This information helps users provide informed consent and builds trust in AI systems and the organisation.
Fairness: An important step is to perform regular testing and audits of AI systems to evaluate their performance and ensure they do not produce biased or discriminatory outcomes for users. The review should cover any automated decision-making algorithm, and the process by which the algorithm makes decisions should be transparent and explainable. In some circumstances, fine-tuning learning models and incorporating ethical guidelines into AI training may also be needed.
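As one concrete example of such a test, the Python sketch below computes approval rates per group and a simple demographic parity gap over a handful of hypothetical decision records; the data, group labels and the choice of this particular metric are illustrative assumptions rather than a prescribed audit method.

```python
from collections import defaultdict

# Hypothetical audit sample: (group, decision) pairs, where True means the
# automated system approved the individual's application.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Compute the approval rate of the automated decision for each group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        approved[group] += int(decision)
    return {group: approved[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
# Demographic parity gap: difference between the highest and lowest group
# approval rates; a large gap is a prompt to investigate possible bias.
parity_gap = max(rates.values()) - min(rates.values())
print(rates, parity_gap)
```

A gap on its own does not prove discrimination, but it gives the audit a measurable signal to investigate, explain and document.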
Data management: Appropriate data management principles should be considered. These include ensuring the data ingested by the AI system during training is lawfully obtained and of high quality, and that rigorous vetting and anonymisation have been performed. Technologies such as pseudonymisation or data aggregation should be implemented to comply with the privacy principles of data minimisation and retention. Up-to-date records of processing activities (RoPA) should also be maintained to ensure data is managed effectively throughout its lifecycle. Organisations cannot use publicly available data to train AI systems without a valid lawful basis; the Italian Data Protection Authority's earlier decision to temporarily ban ChatGPT stemmed from OpenAI's lack of a legal basis for using publicly available datasets.
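As a minimal sketch of one such technique, the Python example below pseudonymises a direct identifier with a keyed hash before the record enters a training set; the key handling, record fields and field names are assumptions for illustration, and pseudonymised data remains personal data under the GDPR.

```python
import hashlib
import hmac

# Secret key held separately from the training data (e.g. in a secrets manager).
# With the key, records can still be re-linked to handle rectification or
# erasure requests; without it, pseudonyms are hard to tie back to individuals.
SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash before training use."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative customer record: the e-mail address is pseudonymised, while the
# non-identifying attribute is retained for model training.
record = {"email": "jane.doe@example.com", "purchase_total": 42.50}
training_record = {**record, "email": pseudonymise(record["email"])}
print(training_record)
```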
Risk management, compliance and information security: A risk-based approach, including a data protection impact assessment (DPIA), should be implemented to assess the level of risk involved before AI systems are deployed. The organisation should also sign off on the risk levels, controls and mitigations. AI compliance monitoring should be incorporated into the organisational regulatory compliance programme or the privacy programme. The wider organisational information security programme should also cover AI systems and their underlying data to prevent data breaches and other malicious attacks. Technical and organisational measures such as encryption, data masking, password management, access controls and network security should be implemented.
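As one illustration of such a technical measure, the Python sketch below encrypts a sensitive record before it is written to storage, using the Fernet recipe from the open-source cryptography library; in practice the key would be issued and stored by a managed key service, and the record contents here are invented.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in a real deployment this would come from a
# managed key service rather than being created inline with the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive record before it is written to the training data store.
plaintext = b'{"name": "Jane Doe", "diagnosis": "example"}'
ciphertext = fernet.encrypt(plaintext)

# Decrypt only inside the controlled training or inference environment.
assert fernet.decrypt(ciphertext) == plaintext
```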
Employee training: As AI is a new technology, employees must be trained periodically on responsible AI usage. Training should include the privacy impact of AI systems, compliance with data protection regulations while using AI, misuse of personal data in AI-enabled cyberattacks and how to guard against it, and data protection best practices.
The advent of AI may be compared to the invention of the combustion engine: while organisations can move faster, they also require stronger brakes. Those brakes take the form of addressing the multifaceted concerns above, which necessitates a holistic approach combining technological innovation, ethical practices, user empowerment and regulatory adherence. Organisations' responsibility will be not only to innovate but also to ensure that innovation aligns with the values of privacy, ethics and user trust.
PwC Ireland offers a wide range of consultative services to help organisations use AI as a business solution. As your trusted partner, we’ll help you leverage AI responsibly, confidently and effectively.