More employers need to clarify how – and to what extent – they are happy for their staff to use artificial intelligence. Kate Wyatt looks at why employers must move faster to manage the use of generative AI in the workplace.
GenAI is increasingly being used to improve efficiency and manage workloads by taking on tasks in businesses and organisations across almost every sector. Its use is no longer theoretical or niche – it’s happening, and it’s happening fast.
But the pace at which artificial intelligence is being deployed in workplaces is not being matched by the way its use is controlled.
Employers in highly regulated sectors are actively controlling how employees use AI. More generally, however, there are organisations where the use of AI – particularly GenAI – has not been fully thought through and is managed on an ad-hoc basis.
As we all know, GenAI use has become normalised. But the measures that employers need to implement to set out the parameters for its use are not yet embedded. It is surprising that more employers are not being proactive in dealing with this issue.
Unmanaged AI use brings real risks
Failing to establish clear guidelines on when and how employees can use GenAI not only creates uncertainty but also exposes organisations to risk – in terms of quality and reputation, and legally, for example in relation to ownership of AI-generated work.
Research by PwC last year found that 57% of workers believe GenAI – such as ChatGPT – could improve their efficiency and help them manage their workload.
More than two-thirds of those surveyed (68%) said they were confident that GenAI will create opportunities to learn new skills. The study found that 64% thought it would make them more creative and 62% believed it would improve the quality of their work.
Globally, PwC found that 61% of employees had used GenAI at work.
Employees are already using GenAI – with or without permission
While the risks and rewards of using GenAI differ from industry to industry, we are not seeing a consistent or proactive approach from employers in managing its use.
There are, of course, standard contract terms and policies covering IT and confidentiality, for example – and there is general advice available on the use of GenAI – but there’s a question mark over whether this goes far enough.
We’re hearing more and more questions from clients about how much unauthorised use of GenAI may already be taking place within their organisations. Staff are often turning to it informally as a time-saving tool – sometimes without anyone else being aware. In that respect, many employers may not have a full picture of what’s happening on the ground.
Even in workplaces where robust IT policies are in place, employees may not always realise when they are crossing the line in their use of the technology. It’s important that everyone – from leadership to frontline staff – understands where the boundaries lie.
AI use in the workplace isn’t a future issue – it’s already here.
What’s needed now is clarity, communication, and control. Employers must stop viewing GenAI as an emerging challenge and start treating it as a current business reality. That means updating policies, setting expectations, educating staff, and monitoring use.
The organisations that get ahead of this now will be the ones best placed to harness the benefits – while avoiding the risks – of this powerful new technology.
Published 23 June 2025