Legal considerations

Whilst many organisations are required by law to have certain policies in place, there is currently no legal requirement in the UK to have policies governing the development or usage of artificial intelligence (AI).

However, it is important to consider putting these policies in place in order to ensure that:

  • AI is being used consistently and transparently across your organisation;
  • suppliers are properly assessed;
  • data privacy legislation is being complied with; and
  • staff are fully aware of the risks of AI and how to use it safely.    

Practical considerations

Whilst there is no specific legislation in place in the UK governing the usage of AI, most organisations are starting to consider how they could use it. Because AI can transform business operations, it is important to consider and put in place appropriate internal governance procedures.

Internal governance regimes should account for the risks and opportunities that the usage of AI presents for an organisation. The key risks include inaccuracy, lack of transparency, bias, unclear accountability, data privacy and reliability issues. Some AI systems are prone to 'hallucination', and if the correct processes are not in place to review AI-generated information, this could lead to misuse of sensitive information, ethical issues and over-reliance, creating the potential for errors and a lack of clarity over liability.

It is useful to start with a review of AI usage within your business, looking at both the intended and current use of AI, with a focus on each system's scope (i.e., what it is trained to do and not do) in order to identify risks. Doing so determines the type and level of internal governance required and informs the creation of policies and procedures.

Even if an organisation has not formally adopted any AI, it is likely that employees have spotted opportunities and are already using AI for a variety of purposes, for example using generative AI to draft marketing material or other communications. On this basis, it is important to foster an open environment so that organisations know what tools their employees are using and can set sensible parameters based on the opportunities and risks presented by the use of AI.

Policies and procedures

Some organisations may prefer stand-alone AI policies, while others may prefer to update their existing IT usage or governance policies. Whichever approach is taken, it is important to ensure that clear guidance is in place setting out the organisation's principles and aims in relation to its usage of AI, and the internal rules to be followed when procuring or using AI.

For many organisations, the best route will be a combination of implementing new policies with a clearly defined set of rules and updating other internal and external documents, such as privacy notices and security policies. This is particularly the case where an organisation operates across multiple jurisdictions with differing legal and regulatory requirements.

In addition to having clear policies in place, we recommend creating procedures to support those policies, setting out the steps to be taken when considering adopting AI and giving employees clear directions on how they can use any approved AI.

Training and staff communications

It is vital to communicate policies and procedures to staff, in particular where new expectations and rules are being set. We suggest doing this through regular updates, meetings and training sessions, covering both how to use any approved AI safely and the risks involved in approved and publicly available AI tools. Technical information should be provided in a clear, easy-to-understand way, with demonstrations and examples of risks to ensure that staff understand the organisation's attitude to AI and its compliance requirements.

Clear parameters over the usage of widely and publicly available AI tools, such as ChatGPT, should also be communicated. A proportionate approach to the risks should be taken: there is, for example, a difference between employees using publicly available AI to help them draft short LinkedIn updates about non-confidential matters and employees using the same AI to summarise business-critical information.

Assessment of suppliers 

Given the wide variety of AI available on the market and the relatively low cost of licensed usage compared with building and developing in-house AI systems, organisations frequently use AI systems provided by external suppliers. To minimise risks, organisations should undertake supplier due diligence before using third-party AI systems. A sensible approach is to ask suppliers to complete a questionnaire detailing how both personal and non-personal data will be stored, along with other important security and ethical considerations.

However, despite thorough due diligence, adverse effects may still arise. It is therefore important that internal governance regimes provide for preventive measures, such as testing and human oversight, as well as risk mitigation, such as detailing how security breaches and incorrect or misleading outputs are to be handled.

Information security and testing

It is crucial that AI systems are as secure as possible, to reduce the risk of security breaches and to protect data, confidentiality and intellectual property rights. Suppliers should be asked to detail their strategies for ensuring information security and mitigating cyberattacks: for example, what network controls are in place and what physical security measures exist at their premises.

Suppliers should also be asked about their testing of the AI system. It is particularly important to ask whether penetration testing (a simulated cyberattack used to check for vulnerabilities) has been undertaken. The supplier should also be asked to provide details of any other testing and how often it has been, and will be, carried out. This is important to determine both that the AI system will work as intended (through functional testing) and that there will not be any unexpected issues (through exploratory testing).

In addition, supplier accreditation can provide reassurance, so it is worth asking whether potential suppliers can produce relevant certificates from accredited bodies: for example, ISO 42001 certification, which indicates that a supplier has robust processes in place to manage AI risks, and ISO 27001 certification, the international standard for information security management.

Data protection 

Supplier due diligence should also determine whether the supplier's provision of an AI system, and the organisation's proposed usage of it, comply with applicable data protection laws.

An assessment should be undertaken to establish what personal data is or will be used, how it is processed, how it is stored, what protective measures are in place, and whether there is any third-party processing or any restricted international transfers of personal data.

Generally, suppliers of AI act as processors of personal data, with the organisation that is the customer of the AI supplier being the controller of the data (and determining the purpose of the processing). However, this should be reviewed on a case-by-case basis by assessing the supplier's actual usage of the relevant personal data. If a supplier retains data without express instructions from the controller, or uses it to train its system, it will likely be deemed to be acting as a controller. This can bring additional complications and considerations, particularly where a supplier is being provided with sensitive personal data. The data processing activities undertaken by a supplier should therefore be carefully assessed for each proposed usage of AI.

Licence terms 

The supplier is usually the owner of the AI system and grants the customer a licence within a supply agreement. The terms of these agreements should be checked to ensure that the licence permits the organisation to use the AI system for its intended commercial purpose. There may also be certain restrictions, such as prohibitions on modifications, of which end-users will need to be made aware.

Insurance

When using AI, it is essential to review your insurance policies to ensure that the potential risks associated with AI use and misuse are covered. AI-associated risks may already be covered under standard policies, such as professional indemnity and directors' and officers' (D&O) insurance, but more specialised policies may also be required depending upon the AI product used.

In addition to reviewing your own coverage, it is important to check the relevant supplier's insurance and to request copies of its policies and certificates.

AI governance groups 

Having a specialised governance group within your organisation to oversee AI operations and strategy is beneficial. Given the extensive and interdisciplinary effects of AI, this group should be made up of individuals with a range of expertise from across your organisation. The group is likely to have broad responsibilities, from monitoring the legal and regulatory landscape to identifying commercially useful AI systems for the business.

Key takeaways 

Effectively governing the use of AI will require a framework that is bespoke to the obligations, risks and issues AI presents to each organisation. Once a framework is in place, the rapid development of both AI and the legal and regulatory landscape means it should be kept under continuous review. When devising or updating internal governance regimes to account for AI, there are several key considerations:

  • Clearly identify the legal and regulatory obligations and/or restrictions imposed on your organisation in connection with its use of AI systems.
  • Consider whether new or revised policies and procedures are required to provide clarity on, and adherence to, the rules relating to intended or existing use of AI.
  • Communicate the organisational rules on AI usage, and the risks it carries, effectively to staff, alongside training on correct and appropriate use of AI.
  • When procuring AI systems, ensure that thorough due diligence is carried out on suppliers and systems.
  • Review insurance policies to ensure that there is sufficient coverage for the risks associated with the use of AI.
  • Consider the potential benefits of an internal governance group with responsibility for overseeing AI operations and strategy.

Contact

For further advice on AI, the EU AI Act or internal AI governance, please contact Victoria Robertson. Victoria is a Commercial partner specialising in data privacy law and sits on the Trowers & Hamlins AI Governance Group.

Trowers also offers advice on cyber risk management.