On 30 November 2022, San Francisco-based company OpenAI launched its latest creation, the ChatGPT chatbot. Within two months, it reached 100 million active users, making it the fastest-growing consumer application in history.
ChatGPT takes user prompts and generates near-instantaneous responses based on the statistical patterns of human language and internet content it was exposed to during training. For example, it can autocomplete sentences, come up with jokes, answer essay questions and even offer legal advice and draft solutions. This was demonstrated recently when a motorist successfully used ChatGPT to generate an appeal challenging a fine from NCP at Gatwick Airport. Part of the motorist's letter read: "I understand that it is my responsibility as a driver to be aware of the rules and regulations regarding driving through an airport…However, I never received the first notice of the penalty and therefore, did not have the opportunity to contest the charge or pay the fine in a timely manner…I believe that the debt collection process has been premature and I request that you reconsider the penalties imposed". The capabilities of ChatGPT are clearly beyond those of any AI system previously designed.
So far, so good. ChatGPT has taken the digital world by storm. Microsoft has already invested $10 billion in OpenAI and integrated GPT technology into its Bing search engine. The technology is designed to extract facts, figures and answers and format them into a comprehensive response to the search query. As for next steps, envisaged improvements include the ability to interact with ChatGPT as if speaking to a human, rather than via a mouse and keyboard. ChatGPT integration into Outlook, Teams, Excel and Word would likely revolutionise our working days and business automation.
Although companies are all vying for a slice of the AI pie, they will need to consider the potential risks before advocating the use of ChatGPT. For example, although it may produce responses of reasonable quality, such as the disputed parking ticket appeal mentioned above, ChatGPT lacks the human intelligence needed to pick up on nuance. It is sensitive to tweaks in language, with paraphrased questions eliciting entirely different responses, yet it presents its answers so convincingly that you may not notice when a response is inaccurate.
ChatGPT's training data extends only to 2021, so its answers may not be entirely up to date given the speed at which the digital world evolves. In fact, OpenAI expressly excludes any liability for its output, warning that the chatbot can produce "inaccurate, untruthful and otherwise misleading" responses. If a lawyer produced misleading advice, there could be consequences with the Solicitors Regulation Authority; no equivalent repercussions apply to ChatGPT. It therefore cannot offer the same level of protection for businesses, and any output should be checked by someone with the relevant expertise to assess the veracity of its answers. If any output is passed on to third parties, the applicable contractual liability provisions will need to be considered to ensure no reliance is placed on answers produced by ChatGPT.
Businesses will also need to think about regulation. The EU is considering an AI Act which would introduce regulation for AI products, classifying some as high risk and banning others entirely. ChatGPT should not be relied on to perform critical or important functions, as it may not adhere to specific regulatory requirements (for example, outsourcing rules for financial services firms regulated by the Financial Conduct Authority). Whether ChatGPT is compliant with the UK GDPR is still under debate. Presently, OpenAI offers no procedure for individuals to check whether their personal information is stored with the company, or to request the removal of their personal data (i.e. the right to erasure). Companies will need to keep their finger on the pulse to ensure they comply with any applicable UK and EU regulations if using ChatGPT, or any other AI technology for that matter.
And what about data privacy and cyber security? OpenAI's terms previously stated that it did not undertake to keep submitted questions confidential, and it advised against sharing sensitive information because the chatbot cannot delete specific prompts from users' histories. Its original privacy policy also reserved the right to use inputted questions for its own purposes, running the risk that any information submitted would reach a wider number of recipients. Addressing criticism and legal challenges, the company has since backtracked from its original position, saying that it will keep users' data for only 30 days and will no longer use customer data to train its models, although its General FAQs may not yet have been updated to reflect this position[1].
The safety of data inputted into the chatbot is far from watertight. There are already plenty of fake apps on the App Store which are not official ChatGPT apps; these may charge users to download and have even broader data-sharing policies. As an open tool, the billions of data sources ChatGPT has been trained on could be accessible to cyber criminals carrying out targeted attacks, and examples are already emerging of ChatGPT being used by underground hacking communities[2] and to craft more convincing emails for phishing scams[3]. With the surge in AI advancement, it seems protection for users such as the Online Safety Bill ("OSB") needs to come sooner rather than later. The OSB is set to affect over 25,000 tech companies, enacting a new set of laws to safeguard both children and adults online. It places greater onus on social media companies to protect users' safety, introducing offences relating to illegal content, paid-for advertising and fraud. The Bill is presently continuing its passage through Parliament in the House of Lords; however, it is not yet clear to what extent it could affect OpenAI, given that in-scope services must be linked to the UK.
Further, there is the risk that the chatbot will produce offensive and inappropriate responses to queries. OpenAI states that it has taken a number of steps to minimise bias; however, it acknowledges that a current limitation of the chatbot is that it "will sometimes respond to harmful instructions or exhibit biased behaviour". Therefore, at present, ChatGPT has not entirely removed bias, and companies will need to remain alert to this risk.
Although it costs OpenAI around $3 million per month to run, ChatGPT is currently free to use. This may be one reason why those trying to access the chatbot have occasionally seen an "at capacity" error message, with some waiting several hours before getting through. There is, however, a premium version (aptly named ChatGPT Plus) which provides access during peak times, faster responses and first access to new features. Although not yet generally available, its eventual price will be $20 per month. It will be interesting to see whether the free service becomes so saturated that users have no choice but to pay for the subscription, and whether that cost will rise as OpenAI invests more in improvements and pressure from investors to turn a profit mounts. At present, OpenAI expects to make $200 million in revenue in 2023 and $1 billion in 2024.
ChatGPT's current software presents some problems; however, as it is upgraded and developed, it is likely to have a profound impact on businesses. Rather than replacing workers and reducing the need for professional legal advice, businesses are more likely to use ChatGPT as a technology to enhance their work and streamline processes. In the meantime, the message is very much that users should be wary.
[1] ChatGPT General FAQ | OpenAI Help Center states "As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements." and "Your conversations may be reviewed by our AI trainers to improve our systems." as at 9 March 2023
[2] 'OPWNAI: Cybercriminals starting to use ChatGPT' (Check Point Research, 6 January 2023) accessed on 9 March 2023
[3] 'AI used to write phishing emails, claims Darktrace' (The Times, 9 March 2023) accessed on 9 March 2023