Whilst recent developments in Artificial Intelligence (‘AI’) services might have some of us fearing increasingly harmful cyber-attacks, ever more convincing deepfakes and even seeing Robert Patrick’s T-1000 chasing after us in our rearview mirrors, we cannot ignore that AI is already an important part of our day-to-day lives.
AI’s origins stretch back decades – Georges Artsrouni patented a mechanical translation machine in the 1930s – and the technology has gone through numerous evolutions, though it remains traditionally focused on detecting patterns, automation and generating insights. It is currently employed in the workplace – pun very much intended – to undertake tasks such as filtering spam, automated CV screening, task allocation and performance management. The use of this type of AI has long been widely accepted within the workplace.
Generative Artificial Intelligence (‘GenAI’) is a type of AI which learns from existing data patterns to produce new content, such as text, imagery, video, audio and synthetic data. It has been a common part of our day-to-day lives since the introduction of basic chatbots in the 1960s. However, with the introduction of OpenAI’s chatbot ChatGPT in November 2022 and, more recently, Microsoft’s Copilot in March 2023, GenAI has become far more advanced and can be used to solve complex problems, draft articles in seconds – unfortunately for me, not this one – and even prepare detailed and entertaining speeches and presentations. It has also become incredibly user-friendly and, without any sign-up costs, entirely accessible to the average person. It’s therefore hardly surprising that more and more people are using it. And that’s the rub!
According to the latest available data, ChatGPT currently has over 100 million users, and the website generated 1.6 billion visits in June 2023. It’s not hard to see why it is so popular: ChatGPT generates responses that are quick, contextually relevant and ‘human-like’. However, its function has a number of limitations, which mean that relying on its responses can be inherently risky. ChatGPT can learn from the data its users input, which may then be used to inform the responses given to other users. This means that if users input sensitive, fabricated, biased or, indeed, malicious data, that data may later be presented by ChatGPT to other users as fact. Now you can start to see why this would make employers and, well, any of us a little nervous…
Our recent article on AI discusses this and the possible ramifications for human roles within businesses more broadly.
Whilst there is no doubt that the use of GenAI can increase productivity and be an effective tool to aid employees in their roles, appropriate safeguards must be put in place to manage risk and protect businesses.
You may recall that in March 2023, the UK government’s White Paper confirmed that the UK did not intend to introduce specific legislation or a single governing body to regulate AI; instead, it would support existing regulators to regulate AI within their sectors. Following on from this, the House of Commons Library published a paper on 11 August 2023 on AI and employment law, which assesses how AI is currently used at work (and how it will be used in future), alongside the current legislation and policy developments.
How your employees use GenAI is likely to depend on the sector in which your organisation operates, and the type of work it carries out. As a medium-term option, we would encourage businesses to undertake a review of (1) how the people in their organisation are currently using ChatGPT and other GenAI tools and (2) how these tools might be used by their organisation and employees in the future, so that they can tailor their safeguards accordingly.
This, however, overlooks the immediate issue… employees are using ChatGPT and other GenAI tools now! With the staggering figures quoted above, it stands to reason that many of these users will be using ChatGPT et al. for work-related purposes. Therefore, employers need to work fast and get a GenAI policy in place as quickly as possible.
With GenAI growing in competence every day and user numbers similarly building, smart employers should be getting a basic policy in place immediately and then looking to finesse and tailor that policy to their business and sector needs over the coming weeks. Failure to do so puts businesses at risk of their employees sharing sensitive company and client data via ChatGPT, and of using it to obtain information and documents that may well contain fabricated, biased and/or malicious data.
Okay, well, what should this basic policy include?
When looking to introduce such a policy, consideration should be given to the following:
As well as having an effective policy, running training sessions and an awareness campaign should help embed expectations and encourage employee buy-in. Those of us who experienced the internet and then the social media revolutions in the workplace will know all too well that this is an incredibly fast-moving area, and your policy will need to be regularly monitored and updated to ensure it remains up to date and manages risk appropriately.
If you’d like help drafting a GenAI policy, or if you have any other AI-related employment or immigration queries, please do not hesitate to contact Lynsey Blyth.