AI is advancing rapidly, and millions are using it daily. As of early 2025, over 378 million people use AI, and Microsoft reports that 75% of workers rely on it at work. But with this rise comes a growing concern that many organisations do not see coming: Shadow AI.
What is Shadow AI?
Shadow AI is the use of AI tools by employees without company approval or oversight. Examples include drafting reports containing sensitive data in ChatGPT or generating visuals with AI tools, often without security checks or informing the team.
The Appeal of AI in the Workplace
According to Microsoft, the use of AI helps employees enjoy their work more and feel more motivated. This is because AI helps to:
- Prioritise High-Value Work – AI handles repetitive tasks like emails or summaries, freeing time for strategic work and reducing burnout.
- Manage Overwhelming Workloads – It simplifies tasks like scheduling and data sorting, helping teams stay efficient under pressure.
- Boost Creativity – AI tools inspire fresh ideas and visuals, helping users overcome creative blocks and explore new directions.
Challenges of Shadow AI
AI tools are straightforward to use, but many employees do not realise that by using them they are bypassing company policies and putting data at risk.
- Lack of Data Control and Security – Using AI tools without approval makes it harder for companies to control and protect their data, increasing the risk of data misuse and leaks. In 2023, Samsung employees accidentally leaked sensitive information through ChatGPT, leading the company to ban generative AI over concerns about data storage and public exposure.
- No Compliance or Regulation Oversight – Without oversight, Shadow AI can lead to violations of regulations like GDPR or HIPAA. In 2023, facial recognition company Clearview AI scraped billions of images without user consent, resulting in lawsuits and regulatory action across Europe for unlawful data collection.
- False Sense of Confidence in AI (Hallucination) – AI can sound convincing but still be wrong. In 2023, a New York lawyer was fined $5,000 for submitting fake case law generated by ChatGPT, showing the risks of trusting AI without fact-checking.
Be Cautious of Shadow AI
Failure to control the use of AI in your workplace can lead to:
- Data Breaches and Loss of Control – Unauthorised AI tools can unintentionally store or expose sensitive information, and organisations lose control over how that data is managed. IBM reported that 35% of recent data breaches involved "shadow data": unmonitored information stored in unauthorised platforms.
- Regulatory and Compliance Violations – Shadow AI use can accidentally violate laws like GDPR or HIPAA. Misusing regulated data without safeguards can result in legal penalties, reputational damage, and significant financial loss.
- Vulnerability to Malicious Attacks – Unapproved AI tools may contain hidden malware or function as gateways for phishing and ransomware. Using these tools without security checks can compromise entire networks.
- Overreliance on Potentially Insecure AI Outputs – AI is a tool, not a truth machine. Depending too heavily on its outputs without fact-checking can lead to poor business decisions, lower effectiveness, and unintended consequences.
- Shadow AI Bypasses Security Policies – By operating outside IT systems, Shadow AI sidesteps encryption, monitoring, and access controls. This creates blind spots for security teams and raises the risk of undetected breaches or internal misuse.
Using AI Effectively
Though using AI comes with various risks, organisations can still enjoy its benefits safely:
- Establish clear AI guidelines and rules on what tools are allowed.
- Update policies and practices regularly as AI evolves.
- Educate and raise awareness on safe usage and verifying outputs.
- Offer vetted tools that meet security standards.
- Monitor and manage AI use to catch unauthorised activity early.
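The last point, monitoring for unauthorised activity, can start small. As a minimal sketch, a security team could scan web proxy logs for traffic to known AI services and flag anything outside an approved list. The domain names, log format, and approved list below are illustrative assumptions, not a real product integration:

```python
# Hypothetical sketch: flag proxy-log requests to AI services that are not
# on the company's approved list. Domains and log format are assumptions.

AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

# Assumption: only the vetted API endpoint has been approved by IT.
APPROVED_AI_DOMAINS = {"api.openai.com"}


def find_unapproved_ai_use(proxy_log_lines):
    """Return (user, domain) pairs for AI traffic outside the approved list.

    Each log line is assumed to look like: "<user> <domain> <path>".
    """
    flagged = []
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        user, domain = parts[0], parts[1]
        if domain in AI_SERVICE_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.append((user, domain))
    return flagged


sample_log = [
    "alice api.openai.com /v1/chat/completions",  # approved tool
    "bob claude.ai /chat",                        # unapproved AI tool
    "carol intranet.example.com /wiki",           # not an AI service
]
print(find_unapproved_ai_use(sample_log))  # [('bob', 'claude.ai')]
```

In practice this kind of check would live in a SIEM or secure web gateway rather than a script, but the principle is the same: visibility first, then policy enforcement.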
AI is Here; Let Us Use It Wisely
AI is now part of everyday work. With the right policies, tools, and awareness, organisations can turn Shadow AI from a risk into a powerful asset. When we stay informed and intentional, we protect both our data and our future.