Legal Considerations
The use of AI across businesses can result in a range of legal issues, and ultimately litigation, if the associated risks are not properly considered and addressed from the outset. Organisations across all sectors need to address this key issue before embarking on their AI journey.
The use of AI has already led to several high-profile copyright cases alleging infringement of intellectual property rights. Equally, AI-related claims could arise in a range of other contexts, such as data protection, equality and employment. For example, a US radio host and an Australian mayor have both threatened the AI research organisation OpenAI with defamation claims after its chatbot wrongly stated, respectively, that they had defrauded a charity and been found guilty of bribery. Claims might also arise for breach of consumer protection laws; for instance, where AI provides misleading information about products or services during interactions with customers.
AI has a potential use case in almost every area of business, and organisations should think about the kinds of liability that may arise from its use and deployment. However, given the sheer number of use cases and potential areas for dispute, this is not a straightforward task. Risks, and subsequent litigation, may arise both from the information that is input into the system and from the output it produces. It will therefore be important to monitor the development of the legal landscape as well as the technology.
The issue is compounded by a lack of clarity as to who should be responsible for any damage or harm caused and, specifically, whether liability should sit individually or jointly with the creator, supplier or user. Whilst the UK government announced that it would 'seek to establish the appropriate legislation to place requirements on those working to develop the most powerful AI models', at present there is no AI-specific legislation in the UK to address the range of AI-related issues. It follows that the existing law, most notably the law of tort and contract, governs this debate. Whilst it has been proposed that AI entities should have a separate legal personality, it remains to be seen whether this will be adopted in English law. In the meantime, an emerging body of case law will be required to fill in the gaps.
Furthermore, given the global nature and use of emerging technologies such as generative AI, the difficulty of establishing responsibility creates jurisdictional challenges. For example, where AI is developed and rolled out in different countries, which law governs the dispute? If it is the law of the jurisdiction in which the AI is developed, this gives rise to fears of 'forum shopping', whereby AI is deliberately created in permissive jurisdictions before being deployed, and its harmful effects felt, elsewhere.
Practical Considerations
Despite these difficulties, businesses can take proactive steps to manage litigation risks. Attention should be paid to the content of agreements for the supply or purchase of AI, where clarity of roles and responsibilities will be key. It would be desirable to include provisions, such as warranties and indemnities, to apportion liability. Depending on the circumstances, it may be appropriate to seek a warranty of non-infringement of third-party intellectual property rights, or an indemnity in respect of the same.
Equally, it is prudent to include clauses specifying the level of testing to which the AI has been, and will continue to be, subject throughout its use. There are also insurance policies, such as Technology Errors and Omissions insurance, which may offer coverage for certain AI-related claims. Robust internal governance can also mitigate litigation risk; however, litigation and risk management strategies must be bespoke to a business and the particular issues that arise.
A silver lining is that, where litigation does arise, AI can be used to streamline the process. eDiscovery platforms are already widely used to carry out document reviews, and generative AI has the potential to bolster their utility by providing summaries and translations. AI can even be used as a predictive tool, with algorithms examining past cases to help estimate a claim's chances of success. Understanding how these algorithms work, and how they can be deployed with confidence, will be key.
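By way of illustration only, the sketch below shows, at a toy level, what such a predictive tool might look like under the hood: a simple statistical model trained on features extracted from past cases, producing an estimated probability of success. The library (scikit-learn), the features and the data are hypothetical assumptions for illustration, not a description of any real litigation-analytics product.

```python
# Illustrative sketch only: a toy "claim outcome" predictor using
# scikit-learn's logistic regression. The features and data below are
# entirely hypothetical; real litigation-prediction systems are far
# more sophisticated and their outputs are no substitute for legal advice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical encoded features for past cases, e.g.
# [log of claim value, contract dispute flag, documentary evidence score]
X = np.array([
    [11.5, 1, 0.9],
    [13.2, 0, 0.4],
    [10.1, 1, 0.7],
    [12.8, 0, 0.2],
    [11.9, 1, 0.8],
    [14.0, 0, 0.3],
    [10.6, 1, 0.6],
    [13.5, 0, 0.5],
])
# 1 = claimant succeeded, 0 = claim failed (hypothetical labels)
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Hold out some past cases to check the model generalises at all
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Estimated probability of success for a new, hypothetical claim
new_claim = np.array([[12.0, 1, 0.75]])
print(f"Estimated chance of success: {model.predict_proba(new_claim)[0, 1]:.0%}")
```

Even in this toy form, the example highlights why understanding the model matters: the probability it outputs is only as good as the features chosen and the past cases on which it was trained.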
In criminal matters, algorithms have already been used to assess a person's risk of reoffending and to assist judges with sentencing decisions. However, the risk of bias and potentially discriminatory effects has been, and remains, a strong argument against the use of AI in such assessments. Nevertheless, as the technology develops, AI may also come to be adopted in civil matters to assist with the determination of certain categories of case. Litigators and judges alike should ensure that AI is approached and used with caution. The perils of not doing so are well illustrated by a recent American case in which a lawyer filed a court document citing cases that were entirely fictional, courtesy of AI.
Key Takeaways
The use of AI gives rise to a wide array of potential risks and claims that should be considered in advance. The types of claim a business may face will depend on its particular use of AI, and a business-specific assessment should be carried out to identify and mitigate the risks of litigation.