Whilst recent developments in Artificial Intelligence (‘AI’) services might have some of us fearing increasingly harmful cyber-attacks and ever more convincing deepfakes, or even imagining Robert Patrick’s T-1000 chasing after us in our rearview mirrors, we cannot ignore that AI is already an important part of our day-to-day lives.
With early precursors such as Georges Artsrouni’s mechanical translation device of the 1930s, AI has gone through numerous evolutions and has traditionally focused on detecting patterns, automating tasks and generating insights. It is currently employed in the workplace – pun very much intended – to undertake tasks such as filtering spam, automated CV screening, task allocation and performance management. The use of this type of AI has long been widely accepted within the workplace.
Generative Artificial Intelligence (‘GenAI’) is a type of AI that learns from patterns in existing data to produce new content, such as text, imagery, video, audio and synthetic data. Its roots go back to the basic chatbots of the 1960s. However, with the introduction of OpenAI’s chatbot ChatGPT in November 2022 and, more recently, Microsoft’s Copilot in March 2023, GenAI has become far more advanced and can be used to solve complex problems, draft articles in seconds – unfortunately for me, not this one – and even prepare detailed and entertaining speeches and presentations. It has also become incredibly user friendly and, with no sign-up costs, entirely accessible to the average person. It’s therefore hardly surprising that more and more people are using it. And that’s the rub!
GenAI sounds awesome, so why not use it in the workplace?
According to the latest available data, ChatGPT currently has over 100 million users, and the website generated 1.6 billion visits in June 2023. It’s not hard to see why it is so popular: ChatGPT generates responses that are quick, contextually relevant and ‘human-like’. However, it has a number of limitations, which means that relying on its responses can be inherently risky. Data submitted by users may be used to train and refine the model, and ChatGPT can present inaccurate, fabricated, biased or even malicious content to other users as if it were fact. Now you can start to see why this would make employers and, well, any of us a little nervous…
Our recent article on AI discusses this and the possible ramifications for human roles within businesses more broadly.
Hang on, surely the government is going to legislate so that users and employers are protected, right?
Whilst there is no doubt that the use of GenAI can increase productivity and be an effective tool to aid employees in their roles, appropriate safeguards must be put in place to manage risk and protect businesses.
You may recall that in March 2023, the UK government’s White Paper confirmed that the UK did not intend to introduce specific legislation or a single governing body to regulate AI; instead, it would support existing regulators to regulate AI in their sectors. Following on from this, the House of Commons Library published a paper on 11 August 2023 on AI and employment law, which assesses how AI is currently used at work (and how it may be used in the future), alongside the current legislation and policy developments.
So we need to take steps to safeguard our businesses
How your employees use GenAI is likely to depend on the sector in which your organisation operates, and the type of work it carries out. As a medium-term option, we would encourage businesses to undertake a review of (1) how the people in their organisation are currently using ChatGPT and other GenAI tools and (2) how these tools might be used by their organisation/employees in the future, so that they can tailor their safeguards accordingly.
What’s the rush?
This, however, overlooks the immediate issue… employees are using ChatGPT and other GenAI tools now! With the staggering figures quoted above, it stands to reason that many of these users will be using ChatGPT et al for work-related purposes. Therefore, employers need to work fast and get a GenAI policy in place as quickly as possible.
With GenAI growing in competence every day and user numbers building just as quickly, smart employers should be getting a basic policy in place immediately and then looking to finesse and tailor that policy to their business and sector needs over the coming weeks. Failure to do so puts businesses at risk of their employees sharing sensitive company and client data via ChatGPT, and of relying on information and documents it produces which may well contain fabricated, biased and/or malicious data.
GenAI policy
Okay, well, what should this basic policy include?
When looking to introduce such a policy, consideration should be given to the following:
- Level of prohibition: will you prohibit the use of GenAI completely or will you put limitations on who can use it and/or what it can be used for?
- Guidelines for its use: if you are going to allow staff to use tools like ChatGPT for certain tasks, you should make sure the permitted functions are expressly listed so there can be no confusion.
- Set parameters for its use: general guidelines on how to use GenAI should be expressly set out in the policy, for example:
- It should only ever be used as a starting point, not as a finished product.
- All content must be proofread and checked for factual accuracy by a human with appropriate expertise before it is used.
- Confidential information and personal data should never be divulged. Even things like your company name and other identifying features should not be disclosed when using GenAI.
- Highlighting its limitations: your staff need to be aware of GenAI’s inherent restrictions. For example, ChatGPT’s responses are drawn from its training data, so if that data is incomplete, inaccurate or biased (even discriminatory), its responses may be too. Be mindful also that the training data has a cut-off date, so it may not reflect the most recent information available.
- Designated team or person: will you have a steering group or designated individual who will oversee the company’s approach to GenAI?
- Transparency: consider how you will go about ensuring that AI generated content is clearly identified as such – both internally to other employees and externally to clients.
- Data privacy and confidential information: as highlighted above, it is essential that no confidential information or personal data is shared with GenAI tools. ChatGPT, for example, is under no obligation to keep this information private and may use it to improve and develop its systems.
- Interaction with other policies: you will need to consider how your GenAI policy will interact with other relevant policies, such as data protection, IT and communications, privacy, recruitment (if you are using ChatGPT/AI to make recruitment decisions), your disciplinary policy and even your grievance policy if GenAI is misused or your GenAI policy is breached.
Additional points to consider
As well as having an effective policy, running training sessions and an awareness campaign should help embed expectations and encourage employee buy-in. Those of us who experienced the internet and then the social media revolutions in the workplace will know all too well that this is an incredibly fast-moving area, and your policy will have to be regularly monitored and updated to ensure it remains current and manages risk appropriately.
If you’d like help drafting a GenAI policy, or if you have any other AI-related employment or immigration queries, please do not hesitate to contact Lynsey Blyth.