Your staff are using AI in secret – here’s how smart leaders should respond
Hamed Amiri
Opinion & Analysis

Employees are already using AI to save time, improve output and get through the working day faster, yet many still feel safer hiding it than discussing it openly. Here, Hamed Amiri argues that leaders who want the gains must stop treating AI as a taboo subject and start giving staff the rules, confidence and backing to use it properly
Any leader with a bit of sense should recognise the commercial opportunity that AI presents. A workforce equipped with better tools can move faster, think more clearly and spend less time drowning in repetitive nonsense. That should excite any business serious about productivity and its bottom line.
The biggest institutions in the world already use AI as part of everyday work. NASA, as one example, says it uses AI in mission planning, data analysis and autonomous systems. NHS England, meanwhile, says AI is already being used in chest imaging, dermatology referrals, retinal screening and triage support.
But despite its benefits, many employees are using it on the sly. A 2025 Ipsos survey for The Guardian, based on more than 1,500 UK workers, found that 33 per cent do not tell managers or senior colleagues when they use AI tools at work. In fact, just 13 per cent said they openly discuss that use with senior staff. A separate 2025 study by KPMG and the University of Melbourne suggests the figure may be even higher. Its global research, covering 48,340 people across 47 countries, suggests that 57 per cent of employees hide their use of AI at work and pass off AI-generated content as their own.
That creates a genuine paradox for employers. Staff are using AI because it helps them work faster, improve quality and handle routine tasks more efficiently. Those are exactly the gains most businesses say they want. Yet the more useful the technology becomes, the more nervous some employees feel about admitting they are using it. The very thing that makes them more productive is also making them feel more exposed.
Bosses need to stop acting as though AI is a dirty secret, a moral failing or a passing fad. It is already part of working life. The sensible response is to bring it into the open, set clear rules, train people properly and decide where it genuinely adds value.
A company that does this well can get more output, better consistency and less wasted time. A company that leaves staff to improvise in silence is asking for confusion, uneven standards and unnecessary risk. Leaders who want the gains have to create the conditions for people to use the tools openly and responsibly. To do that, they should start with the following:
1. Assume AI use is already widespread
Leaders should begin from the reality that AI is already being used across the business, whether that use has been formally approved or not. It will be appearing in emails, note-taking, research, presentations, planning documents, data analysis and day-to-day administration. Treating AI as a future issue will only leave management behind the actual behaviour of the workforce. A sensible first step is to map where people are already using it, what tasks they are applying it to and which tools have become part of everyday working habits. That gives the business a starting point grounded in fact rather than guesswork.
2. Write a clear policy
Every business needs a written AI policy that people can understand and follow without ambiguity. Staff should know which tools are approved, which uses are permitted and which activities require sign-off. The policy should cover practical issues such as confidentiality, client information, intellectual property, accuracy, storage of prompts and outputs, and responsibility for final work. It should also make clear that AI is an assistive tool rather than a substitute for judgement. A short, vague statement will not do the job. People need enough detail to work confidently and enough clarity to know where the line is.
3. Make disclosure safe
If staff think that admitting AI use will damage their standing, they will keep quiet and the business will learn nothing. Leaders need to create an environment in which people can say openly that they used AI to support a task, explain how they used it and discuss whether it actually improved the outcome. That means removing any suggestion that the mere use of AI is shameful, improper or evidence of weakness. It also means making disclosure part of normal working conversation. Managers should ask sensible questions about process, judgement and quality rather than reacting as though they have caught somebody doing something suspect.
4. Protect confidential information
Data protection should sit near the top of any employer’s AI agenda. Staff must know exactly what information can never be placed into public or external AI tools. That will usually include confidential business information, client material, unpublished financial data, personal data, legal documents, health information and commercially sensitive communications. The policy should explain the risk in plain language and should not assume people will work it out for themselves. Businesses should also review whether they need enterprise-grade tools, internal controls or technical safeguards to reduce the chance of accidental disclosure. This is one of the areas where a casual approach can become very expensive very quickly.
5. Train people properly
A surprising number of organisations expect staff to use AI well without showing them how to do it. That approach wastes time and increases risk. Training should cover the strengths and limitations of the tools, the kind of work they can support, the errors they commonly produce and the checks people need to apply before relying on the output. Staff should also understand that a fluent answer is not necessarily a correct one. Good training improves quality, reduces misuse and helps people move past novelty into useful application. It also gives managers a more realistic picture of what AI can and cannot do inside their own business.
6. Require human accountability
AI can assist with drafting, analysis, structure and speed, but responsibility for the finished work must always sit with a person. That principle matters in every sector and becomes even more important in legal, financial, medical, regulated and reputation-sensitive work. Employees should know that they remain accountable for accuracy, judgement, tone, compliance and the final decision to use any AI-assisted material. Leaders should reinforce that the tool can support the process, while the human being owns the outcome. A business that blurs that line will invite weak judgement and poor habits.
7. Share good practice across teams
Useful AI habits often stay trapped within individual desks or small teams. One person finds a faster way to summarise meeting notes. Another improves reporting workflows. Someone else develops a reliable method for checking documents or organising research. Those gains are worth very little when they remain private. Employers should create ways for teams to share what works, what saves time and what should be avoided. That could include internal workshops, regular team discussions, short written guidance or a shared bank of approved use cases. The aim is to turn scattered experimentation into organisational learning.
8. Measure the gains honestly
AI discussions often become vague very quickly, which makes it harder for leaders to judge value properly. Businesses should identify where the tools are saving time, improving consistency, increasing throughput or reducing low-value administrative work. They should also pay attention to where the promised gains fail to materialise. Some uses will deliver real value and some will waste time. Honest measurement helps leaders invest in what works and stop indulging what does not. It also gives them a firmer basis for decisions about workflow, training, recruitment and service delivery.
9. Set standards for quality and attribution
Businesses need clear rules on what good AI-assisted work looks like and when its use should be declared. Internal disclosure may be appropriate in some settings so managers understand how work is being produced and reviewed. External disclosure may be necessary in certain client, editorial, professional or regulated contexts. Standards should also cover checking procedures, factual verification, source review, tone, legal risk and whether material needs substantial human revision before it can be used. These standards matter because AI can produce work that appears competent while containing errors, inventions or poor reasoning. A business without quality rules will eventually learn that lesson the hard way.
10. Lead the conversation from the top
Senior leaders set the tone for whether AI becomes a useful business tool or a hidden workplace habit. They should speak about it openly, practically and regularly. Staff need to hear that the organisation understands the technology, expects it to be used responsibly and is prepared to engage with it seriously. Leaders should not hand the entire subject to IT or treat it as a niche matter for technical specialists. AI use affects productivity, management, hiring, risk, service delivery and culture. It belongs in mainstream leadership conversation. A business becomes much more capable when people at the top address the issue clearly and give everyone else permission to do the same.

Hamed Amiri is a speaker and technology leader whose work explores leadership, resilience and identity in a rapidly changing world. A senior leader in technology and transformation at PwC, he combines corporate experience with lived perspective, informed by his journey from Afghanistan to the UK.