Privacy and AI: how do you balance innovation with protecting personal data?

Artificial Intelligence (AI) has become an increasingly hot topic in recent months. While some famous examples, such as the chatbot ChatGPT, have captured the public imagination about how AI might impact the lives of ordinary people, attention has also turned to how this technology can be used in a corporate setting.

AI can be an incredibly powerful tool for an organisation, with companies using AI solutions to analyse data sets, categorise complex sets of information, prevent fraud, and even mimic human decision making. However, trusting AI to process data, notably personal data, presents significant threats to privacy and security.

As is often the case given the pace of technological development, regulators are somewhat behind the curve in dealing with the privacy concerns thrown up by the use of AI. However, regulatory scrutiny of AI use is certainly increasing, and organisations need to apply rigour to demonstrate that what they are doing is lawful, proportionate and compliant with privacy regulations.

Regulators turn their gaze on AI

Companies’ uptake of AI solutions has forced the hand of privacy regulators, with European data protection authorities predictably at the forefront of efforts to increase oversight of AI use. Potential breaches of the European Union’s General Data Protection Regulation (GDPR) relating to AI have been raised with increasing regularity as supervisory bodies pay more attention to developments in this field and to misuse by companies.

Perhaps the most infamous case in this regard is that of Clearview AI, a US-based facial recognition software provider. In 2022, Clearview was fined millions of euros by data protection authorities across Europe for ‘brazenly’ gathering more than 20 billion images of individuals without their consent to build a global database that could be used for facial recognition. However, there are other examples. For instance, also in 2022, the Hungarian regulator fined Budapest Bank Zrt. 250 million forints (c. $682,000) for using artificial intelligence software to analyse emotions during customer calls without properly informing customers about the tool.

Regulators have also begun investing in their capabilities to aid any future investigations. For instance, France’s data protection regulator announced in January 2023 that it would create an artificial intelligence unit with five employees to investigate misuse. Spain also intends to create a similar AI agency, while the Dutch data protection body has already opened a new algorithm oversight division, with a budget of €1 million in 2023 that will increase to €3.6 million by 2026.

It also appears that more legal safeguards around AI will arrive in 2023, with the EU debating draft legislation, first proposed in 2021, that seeks to limit high-risk uses of AI such as biometric identification or the operation of critical infrastructure.

What do companies actually need to look out for?

Organisations face some difficult decisions with regard to using AI. On one hand, there is a clear incentive to innovate and pursue potential efficiencies through AI tools. On the other, increased interest from regulators, coupled with uncertainty about how to apply aspects of privacy law to AI technology, might make some companies hesitant to adopt new tools for fear of falling foul of the rules.

Increasingly, regulators have sought to explain their expectations and offer concrete guidance for organisations to follow when using AI. In this vein, in October 2022 the UK Information Commissioner’s Office (ICO) produced a comprehensive guidance document on meeting data privacy requirements when using AI.

The ICO guidance document raises some of the major concerns that AI algorithms pose for the rights of individuals. The accuracy of AI outputs has generated significant interest, particularly the risk that bias introduced into automated decision making leads to certain groups being discriminated against. An example of this could be an AI system at a bank which calculates credit scores used to approve or reject loan applications. If the system tends to give certain groups (e.g., women) lower scores, this may constitute discrimination.

The exact reasons why an AI system like the one in the above example might give a particular group low scores are varied. The cause could be imbalanced training data (e.g., due to historical imbalances where more men than women successfully received loans, the system has less data on women repaying loans and concludes they are less likely to pay back) or could simply reflect past discrimination (e.g., if women were rejected unfairly in the past, the system may reproduce this pattern), as the brief sketch below illustrates.
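The following sketch is purely illustrative and is not drawn from the ICO guidance: it uses synthetic data and an off-the-shelf logistic regression to show how a scoring model trained on historically biased lending decisions can reproduce that bias. All variable names, figures and thresholds are assumptions chosen for demonstration only.

```python
# Illustrative only: synthetic data showing how a model trained on historically
# biased lending decisions can reproduce that bias. All figures are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two applicant attributes: income in thousands (a legitimate factor) and group
# membership (e.g. gender, encoded 0/1), which should be irrelevant to risk.
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)

# Historical approvals were driven mainly by income, but group 1 was also penalised.
past_approved = (income + rng.normal(0, 5, n) - group * 10) > 45

# Train a scoring model on that biased history, including group as a feature.
X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, past_approved)

# Score two applicants with identical finances who differ only by group:
# the model assigns the group 1 applicant a noticeably lower approval score.
applicants = np.array([[50.0, 0.0], [50.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])
```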

This means that any AI tool needs to be rigorously tested and subject to some level of human oversight to make sure that individuals are treated fairly, which can be easier said than done given how complex and opaque some algorithms might be. However, something being difficult or inconvenient has never been a reason not to follow privacy rules, and organisations will need to get to grips with their obligations around AI decision-making, especially as Article 22 of the GDPR specifically governs such automated decision making.
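One simple form such testing could take is a disparity check on the tool’s outcomes across groups. The sketch below is an assumed approach rather than anything prescribed by the ICO or the GDPR, and the 0.8 (‘four-fifths’) threshold is only a common rule of thumb used here as an illustrative benchmark, not a legal test.

```python
# A minimal, assumed sketch of a disparate impact check on model outcomes.
# The 0.8 ("four-fifths") threshold is an illustrative rule of thumb, not a legal test.
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lower group's approval rate to the higher group's (1.0 = parity)."""
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    return float(min(rate_a, rate_b) / max(rate_a, rate_b))

# Hypothetical decisions produced by an AI tool for eight applicants in two groups.
approved = np.array([1, 1, 0, 1, 1, 0, 0, 0], dtype=bool)
group    = np.array([0, 0, 0, 0, 1, 1, 1, 1])

ratio = disparate_impact_ratio(approved, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Possible adverse impact: outcomes for one group warrant human review.")
```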

Practical advice on staying compliant

AI tools certainly have their complexities, and organisations might wonder what they can do to remain compliant with the GDPR and any other privacy regulations if they choose to use such tools.

The ICO advises companies to ensure that the Data Protection Officer (DPO) retains oversight of any AI implementation from a privacy standpoint, rather than simply deferring to AI specialists, and to consider the principle of accountability: whether the processing of any personal data is fair, lawful and transparent, and whether data is stored securely.

In short, what this guidance shows us is that many of the privacy concerns around AI are no different from those raised by any other technology when seen through the lens of privacy by design. Organisations need to think carefully about whether any AI solution is proportionate, lawful and necessary when processing personal data; it is not enough that AI solves a problem or increases efficiency if it negatively impacts data subject rights.

To do this, DPOs need to make sure that:
  • Data Protection Impact Assessments (DPIAs) are carried out before using any AI tools that could result in high-risk processing of personal data;
  • There is a clear understanding of what the tool does, what data is processed and why, and where it is stored;
  • Any security measures are adequate and have been properly considered;
  • Consent is gathered from data subjects where required and they are informed about how their data is being processed;
  • The processing using an AI tool is proportionate, necessary and lawful; and finally,
  • Any decisions around using AI tools and mitigating privacy risks are documented.

AI presents us with exciting new opportunities that can and will change the way the world works. However, organisations cannot disregard the basics of data privacy and security in managing these tools. Innovation brings challenges and risks, so organisations using AI must above all ensure that they adopt it responsibly and do not use it in a way that impinges on individuals’ rights and freedoms. Failure to do so is likely to be met with an increasingly heavy regulatory hand.

How we can help

If you would like to understand more about data protection, get in touch to arrange a free consultation with one of our experts today.