In this article we look at the key principles of current data privacy legislation in relation to AI.

Legal considerations 

Advances in AI in recent years highlight the importance of carefully reviewing the processing and sharing of personal data, as AI use cases often involve the use of personal data.

The current data protection legislation in the UK comprises the Data Protection Act 2018 (DPA 2018) and the UK GDPR, which is an amended version of the EU GDPR.

Keir Starmer stated in his introduction to the King's Speech that "we will harness the power of artificial intelligence as we look to strengthen safety frameworks." The only further detail given in the speech itself, though, was that the government will "seek to establish the most appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models."

Whilst we do not yet have any further details of what this legislation may involve, in the meantime there are elements of the UK GDPR and the DPA 2018 which already regulate AI in relation to personal data. We have set out below the key principles of current data privacy legislation in relation to AI, together with some practical tips.

UK GDPR – key principles 

Article 5 of the UK GDPR sets out key principles which lie at the heart of the general data protection regime. Applying these principles to AI:

  • Lawful, fair and transparent processing: Ensure you have a lawful basis for processing in relation to your use of AI, and be open about your use of AI-enabled decisions. This is normally achieved by way of a privacy notice.
  • Purpose limitation: Tell individuals what their data is being used for and do not process it in ways which they would not expect. Ensure that your use of AI is in line with what you have told individuals that the use of their data will entail.
  • Data minimisation (adequate, relevant and limited to what is necessary): When using AI to analyse data, review which data is actually necessary for the processing and do not analyse other data.
  • Accuracy: Make sure that the data being used is accurate and review it to ensure it is up to date.
  • Storage limitation: Personal data must be kept no longer than is necessary for the purpose for which it is processed. For example, if you collect data about job applicants and use AI to sift candidates, do not indefinitely keep the data of rejected applicants (see the retention sketch after this list).
  • Security: Personal data must be processed with appropriate security measures for the risks that arise from the processing. When you appoint a processor using AI, ensure that you carry out due diligence and are comfortable that they handle data in a secure manner.
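
To make the storage limitation point concrete, below is a minimal Python sketch of a retention purge for the recruitment example above. The record schema, field names and 180-day period are all hypothetical assumptions; the appropriate retention period is a policy decision for each controller, not something fixed by the UK GDPR.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention period for rejected applicants' data;
# the right period is a policy decision, not fixed by the UK GDPR.
RETENTION = timedelta(days=180)

def purge_rejected_applicants(records: list[dict]) -> list[dict]:
    """Drop rejected applicants whose decision is older than RETENTION.

    Each record is assumed (hypothetically) to look like:
    {"name": ..., "status": "rejected" | "hired", "decided_at": datetime}
    with timezone-aware decision timestamps.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [
        r for r in records
        if not (r["status"] == "rejected" and r["decided_at"] < cutoff)
    ]
```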

The UK's data protection regulator, the ICO, has published guidance to assist organisations in complying with data protection requirements in relation to AI. AI systems may involve processing personal data in different phases or for different purposes. This means you can be a controller or joint controller for some phases and a processor for others. The ICO's AI guidance includes more information on this point, which can be read in conjunction with its wider controller and processor guidance and checklists.

Privacy notices and data protection impact assessments (DPIAs)

Articles 13 and 14 of the UK GDPR set out the information that must be provided to individuals whose data is being processed. You must set out the purposes of the processing, the legal basis for processing and the individuals' rights in relation to the processing. Where AI is being used, it is important to assess whether this will result in any new form of processing, or in individuals' personal data being used to train AI models, as either could result in a requirement to update your privacy notice and inform individuals about the changes.

Article 35 of the UK GDPR provides that a DPIA must be undertaken where a type of processing (in particular using new technologies) is likely to result in a high risk to the rights and freedoms of individuals. The DPIA will include an assessment of the risks involved in the processing, and the mitigating steps which could be taken to reduce those risks.

Automated decision making and profiling

AI is frequently used with personal data to undertake automated decision-making and/or profiling. This is the process whereby decisions are made by an algorithm on the basis of uploaded data, without any human involvement in the decision (beyond setting parameters such as, in the recruitment context, specifying that the applicant must have a degree). Data controllers must be mindful of their obligations and carefully consider their responsibilities when using automated processing. The UK GDPR provisions on automated processing state that an individual has the right not to be subject to a decision based solely on automated processing where that decision has a legal or similarly significant effect on them.

There are limited exceptions under which automated decision-making and profiling can be undertaken even where the decision will have such an effect on an individual: where the decision is necessary for a contract (such as credit scoring for a loan application), where it is authorised by law (such as banks identifying potentially fraudulent activity), or where explicit consent has been provided.

This means that, for example, if AI is used to recommend holiday destinations an individual may like, this is not restricted; but if AI is used to assess whether an individual should be offered a heart transplant, human intervention will be required unless an exception is available.
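
One way to operationalise this rule in a system design is to gate significant decisions behind human review. The Python sketch below is a hypothetical illustration of that pattern, not a compliance mechanism in itself; whether a given decision has a "legal or similarly significant effect", and whether an exception applies, are legal judgements modelled here as simple flags.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    # Whether the decision has a legal or similarly significant effect
    # is a legal assessment; here it is a hypothetical flag.
    significant_effect: bool

def route_decision(decision: Decision, exception_applies: bool) -> str:
    """Route a decision under the solely-automated-processing rule."""
    if decision.significant_effect and not exception_applies:
        return "queue_for_human_review"  # meaningful human involvement needed
    return "apply_automatically"

# Illustrative usage: a significant decision with no exception available.
print(route_decision(Decision("applicant-42", "reject", True),
                     exception_applies=False))
# -> queue_for_human_review
```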

Where AI is used for direct marketing purposes, individuals have the right to object, after which their data can no longer be processed for that purpose.

Where AI is used to undertake profiling, individuals also have the right to object and, unless the controller can demonstrate compelling legitimate grounds for the processing, that individual's data must no longer be used for profiling.

Contracts with AI providers

AI providers usually act as a processor on behalf of the controller of the data. Where this is the case, article 28(3) of the UK GDPR sets out the minimum requirements for a contract between the two parties. These include requirements to process data only on documented instructions, to ensure staff maintain confidentiality, and to engage sub-processors only with the controller's general or specific consent. Most data controllers view the article 28(3) provisions as a starting point.

If data is to be transferred abroad for processing you will also need to consider whether a restricted international transfer is taking place. If so, you will need to have appropriate transfer documentation in place and undertake a transfer risk assessment.

Practical considerations

Compliance with data protection law 

Data privacy should be by design and by default, and transparency is key.

Privacy notices

If you use AI, you should review your privacy notices to ensure that they set out any changes to how individuals' personal data is processed through the use of AI. This review should include an assessment of whether any automated processing and profiling should be disclosed.

You should also, when using third-party AI, identify whether the third party is acting as a controller or a processor and be transparent about this. If a third party is acting as a controller, then its privacy notice should also be made available to individuals.

DPIAs

UK and EU regulatory guidance states that, in most cases, implementing AI technology will trigger the need for a DPIA.

A DPIA will need to make clear how and why you are going to use AI to process the data. It should detail:

  • how you will collect, store and use data;
  • the volume, variety and sensitivity of the data;
  • the nature of your relationship with individuals;
  • the nature, scope, context and purpose of the processing;
  • risks to individuals and any mitigating measures; and
  • the intended outcomes for individuals or wider society, as well as for you.

Contracts with AI providers

Where an AI provider is acting as a processor on behalf of a business, then that business, as a controller, has responsibility for the data processed. This means that a contract with an AI provider should set out very clearly what the obligations of the provider are and what it is permitted to do with the personal data in question.

The AI provider's processes should also be reviewed as part of due diligence, including an assessment of whether the provider acts as a controller in relation to its use of training data.

The risk of bias 

Data accuracy is key when training AI: the data used must be accurate, up to date and relevant. If the training data or input data is inaccurate or biased, an AI system is at risk of producing outputs that reflect this.

AI systems and algorithms must be regularly monitored and tested to detect and mitigate biases that could result in unfair treatment or discrimination. This could be done through regular bias audits or by using statistical methods and fairness metrics to evaluate and mitigate bias. 
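
As an illustration of what such a statistical check might look like, here is a minimal Python sketch that compares selection rates across groups and computes a disparate impact ratio. The data format, group labels and the 0.8 review threshold mentioned in the comments are illustrative assumptions, not requirements of UK data protection law.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the selection rate per group from (group, selected) pairs."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest selection rate (1.0 = parity).

    A common rule of thumb flags ratios below 0.8 for further review,
    although no such threshold is fixed in UK data protection law.
    """
    return min(rates.values()) / max(rates.values())

# Illustrative usage with hypothetical hiring decisions.
decisions = [("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", True)]
rates = selection_rates(decisions)
print(rates, disparate_impact_ratio(rates))
# -> {'group_a': 0.5, 'group_b': 1.0} 0.5
```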

You should assess, on an ongoing basis, whether the data you are gathering is accurate, representative, reliable, relevant, and up to date.

Data security and storage of data 

You must take appropriate security measures to protect the data collected, stored and used in AI systems, as well as the data produced by AI systems. This may involve the use of encryption or authentication technologies, or segregating the data in a separate system. You should ensure that AI providers detail their security measures and that you review these to ensure that they are sufficient.

Robust security measures must be in place to protect personal data and regular security audits must be conducted. 

You need to consider the impact of third parties accessing the data and whether personal data can be anonymised or pseudonymised to mitigate any risks. 
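
As a simple illustration of pseudonymisation, the Python sketch below replaces a direct identifier with a keyed hash. The field names and key handling are hypothetical assumptions, and pseudonymised data remains personal data under the UK GDPR, because whoever holds the key can re-identify individuals.

```python
import hashlib
import hmac

# Hypothetical key: in practice it should come from a secrets manager
# and be stored separately from the pseudonymised dataset, since anyone
# holding both the key and the data can re-identify individuals.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative record with a hypothetical schema.
record = {"email": "jane@example.com", "risk_score": 0.87}
record["email"] = pseudonymise(record["email"])
```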

If personal data is transferred or stored outside the UK, you must ensure compliance with the data protection rules on international data transfers.

Key takeaways 

  • Undertake due diligence on AI providers and check what training data they use, whether they act as a controller or a processor, and their data security measures. Undertake regular audits to check compliance.
  • If using AI to undertake automated decision-making or profiling, assess whether the decisions will have a legal or similarly significant impact on individuals. If so, human involvement must be built in unless an exception applies.
  • Review and risk assess contracts to ensure that protective measures are in place in relation to personal data and that the AI provider's obligations are clearly set out.
  • Review and assess your current privacy notices and ensure that you are being transparent about any new types of processing of personal data.
  • Undertake DPIAs where the processing is likely to result in a high risk to individuals.
  • Consider the location of the processing and if any restricted international transfers are to be undertaken as part of the processing.
  • Train your staff on data privacy compliance and confidentiality, and update data protection policies to take account of any usage of AI.