Artificial Intelligence (AI) tools can provide an efficient and low-cost solution to overcoming many of the obstacles faced by early-stage businesses with limited resources. For example, OpenAI’s ChatGPT can help with troubleshooting, processing large data sets and drafting terms and conditions, whilst Microsoft’s Clipchamp can edit and produce slick marketing videos.
However, it is important to understand the limitations of this technology in order to protect your business from unnecessary exposure to risk. We explore some of the limitations of these popular AI tools below.
1. Unreliable and unbalanced output
- AI tools such as ChatGPT are prone to “hallucinations”, meaning the generation of false information. In fact, the OpenAI Terms of Use specifically say that “Given the probabilistic nature of machine learning, use of our Services may in some situations result in Output that does not accurately reflect real people, places, or facts.” This could lead to liability and reputational issues where a business uses such output in a professional context, as in the case of the lawyers who were fined for citing fake case law generated by ChatGPT in their court filings.
- The tools also have the potential to reproduce biases present within society, because they are trained on imbalanced or discriminatory datasets. This can be particularly problematic when the tools are used to inform business decisions, for example during the recruitment process, where AI is used to review CVs and assess candidate suitability.
- To mitigate these risks, output generated by AI tools should never be taken at face value and should instead be independently verified before being used. Further, if such output is going to be incorporated into goods or services provided to customers, then the customer terms and conditions should make clear that an AI tool has been used in the process.
2. Intellectual property infringement and ownership issues
- AI models are trained using content produced by third parties, and there is a risk of infringing intellectual property rights in doing so. As we reported last year, there is a conflict between the UK Government’s push for rapid AI development and a lack of guidance for tech firms regarding their responsibility to obtain consent from rightsholders. Without adequate guidelines in place, litigation in this area is likely to increase. A recent landmark decision in the case of Kneschke v LAION at the District Court of Hamburg addressed the intersection of copyright and AI training when a photographer brought a claim against an AI research organisation for using his photographs in its training datasets without his consent. The court dismissed the claim, finding that LAION could rely on the text and data mining exception for scientific research under German copyright law.
- Users of Microsoft’s Clipchamp, which uses stock media such as audio, video and graphics in order to create user videos, have faced copyright infringement claims. Whilst Clipchamp provides guidance for customers on how to respond to such claims and assures customers that videos created with licensed stock media can be freely shared, there is a grey area in relation to stock media that Clipchamp has not licensed from its third-party media partner. In addition, the Microsoft terms and conditions do not provide customers with recourse against Microsoft in the event of a copyright infringement claim, stating that customers are “solely responsible for responding to any third-party claims regarding your use of the AI services in compliance with applicable laws (including, but not limited to, copyright infringement or other claims relating to content output during your use of the AI services).”
- A business uploading data into an AI tool may also breach the licence terms pursuant to which that data was obtained, resulting in a damages claim or an injunction to stop using such content.
- By using AI to develop a product, a business may also lose the ability to later protect its work. This was demonstrated in a recent case, in which technologist Dr Stephen Thaler sought to have his AI system, called DABUS, recognised as the inventor of a food container and a flashing light beacon. The UK Supreme Court held that an AI system cannot be an inventor under the Patents Act 1977: a human inventor is required, and the owner of an AI machine that devised an invention is not, by virtue of that ownership alone, entitled to a patent.
3. Breaches of data protection legislation
- AI tools process personal data as a result of scraping data such as names and images from websites. In doing so, the operator of the AI system must demonstrate compliance with the requirements of data protection legislation, such as having a lawful basis for carrying out such processing. In 2023, the Italian Data Protection Authority issued an interim emergency decision ordering OpenAI to immediately stop the use of ChatGPT for processing the personal data of Italian data subjects, on the basis that it violated several GDPR obligations.
- Similarly, businesses deciding to process personal data using AI systems as part of their operations will need to ensure compliance with their own data protection obligations.
4. Breaches of confidentiality
- AI tools use content provided by users to train their models, meaning information that is input can later appear in output generated for another user. The OpenAI Terms of Use specify that a user’s content will be used to develop and improve the product unless the user specifically opts out, indicating that they do not want their content used to train the model.
- This raises particular concerns if confidential information is input, as this could result in commercially sensitive information being unintentionally shared. For example, Samsung decided to ban employees from using ChatGPT after an employee leaked sensitive internal source code by inputting this into the system.
5. Punitive contractual terms
- The terms and conditions offered by the providers of AI tools often significantly limit the provider’s own liability and place the onus and risk on users of the platform. For example, the OpenAI Terms of Use limit OpenAI’s liability to the greater of the amount paid for the service or $100, and require the customer to indemnify OpenAI for losses arising from third-party infringement claims relating to the use of content generated using ChatGPT.
- Where such terms are offered by a large enterprise such as OpenAI or Microsoft, they are likely to be non-negotiable, and so a business seeking to use such services will need to take a risk-based decision as to whether or not to accept them.
Businesses using AI technology should do so with caution. Having a corporate policy in place setting out rules and procedures for the safe use of AI can help to ensure that employees are informed and do not expose the business to the risks that we have outlined above.
If you would like further insight into this topic or advice on AI agreements, licensing arrangements or claims relating to intellectual property rights, our Technology & Innovation and Intellectual Property teams are well-placed to advise you.