Artificial intelligence (AI) is rapidly transforming the way organizations operate, offering tools that enhance efficiency, productivity and decision-making. According to IBM, the number of enterprise employees using generative AI grew from 74% in 2023 to 96% in 2024.
However, this swift adoption comes with its own set of challenges, one of which is shadow AI. Recent research found that 37% of employees share sensitive work information with AI tools without their employer’s permission.
Often overlooked, shadow AI poses significant risks to organizations, ranging from data breaches to escalating costs. In this blog, we’ll explore what shadow AI is, its risks and how you can effectively manage it.
What is Shadow AI?
Shadow AI refers to the unauthorised use of AI tools, platforms or models within an organization, often without the knowledge or approval of IT or governance teams. This could involve employees or departments adopting third-party AI applications, developing their own AI models or leveraging external AI platforms without proper oversight.
The rise of shadow AI can be attributed to the increasing availability of accessible AI tools and the pressure to achieve quick results. Employees may feel tempted to bypass formal approval processes to address immediate challenges or meet specific goals.
For instance, a report found that 46% of employees use their preferred AI tools without their IT department’s permission and refuse to give them up. One employee reasoned that they didn’t want to deal with the lengthy approval process, saying “it’s easier to get forgiveness than permission.”
How Does Shadow AI Happen?
Shadow AI typically emerges due to a combination of factors:
- Lack of Awareness: Employees or teams may not fully understand the risks associated with unapproved AI tools or the importance of adhering to governance policies. Research from the National Cybersecurity Alliance showed that 55% of survey participants who use AI for their jobs have not received any training regarding AI’s risks
- Decentralised Decision-Making: Departments often seek tailored solutions to their unique challenges, leading them to adopt tools independently of central IT oversight
- Pressure for Speed: In fast-paced environments, the urgency to deliver results can lead to bypassing lengthy approval processes
- Accessibility of AI Tools: Many AI tools are now available as easy-to-use software-as-a-service (SaaS) platforms, making it simple for employees to adopt them without technical expertise or IT involvement
Examples of shadow AI in action include a marketing team using AI-powered analytics tools to gain insights without consulting IT, or a data scientist running machine learning models on unauthorised cloud platforms. These scenarios may seem harmless but can lead to significant complications.
The Risks of Shadow AI
Shadow AI often appears harmless at first glance, but the hidden dangers can have far-reaching consequences. Here are some of the most pressing risks associated with shadow AI:
- Data Security and Privacy Breaches
One of the most significant risks of shadow AI is the potential for data breaches. Unapproved tools or platforms may not meet your organization’s security standards, leaving sensitive data exposed to unauthorised access.
This is particularly concerning if the tools store data in jurisdictions with weak data protection laws or those that conflict with regulations like the GDPR or Australian Privacy Act. A single breach could lead to financial losses, reputational damage and legal repercussions.
For instance, facial recognition company Clearview AI suffered a data breach in 2020, exposing its client list and internal files. The breach revealed that numerous law enforcement agencies and private companies were using the service, often without public knowledge or explicit approval. This case highlights the potential risks when AI tools are adopted without comprehensive oversight and governance.
- Inaccurate or Biased Outcomes
AI models created or implemented without proper oversight often rely on poorly curated or insufficient training data. This increases the likelihood of inaccuracies or biases in the model’s outputs.
For example, a recruitment tool developed without oversight might favour certain demographics due to unbalanced training data. Decisions based on these flawed outputs can erode trust in AI systems and lead to costly mistakes.
- Operational Inefficiencies
Shadow AI can disrupt workflows and create inefficiencies. Unapproved tools may not integrate seamlessly with existing systems, leading to data silos and compatibility issues.
Moreover, if multiple departments adopt different AI solutions for similar problems, the result is redundant effort and wasted resources. This fragmentation can hinder collaboration and slow overall organizational progress.
- Escalating Costs
The financial impact of shadow AI can quickly spiral out of control. Subscription fees for unauthorised tools, unexpected costs for fixing compatibility issues, and additional resources required to address problems caused by shadow AI all contribute to increased expenses. Furthermore, untracked spending on AI tools makes it difficult to manage budgets effectively.
Torii’s 2025 SaaS Benchmark Report notes that shadow AI now makes up most of new unmanaged applications, making it harder to track costs, stay compliant, and maintain visibility.
- Regulatory Non-Compliance
The regulatory landscape for AI and data protection is becoming increasingly stringent. Shadow AI significantly increases the risk of non-compliance with laws such as GDPR, CCPA or local Australian regulations. Unauthorised tools may not adhere to data handling and processing requirements, exposing your organization to hefty fines, legal challenges and reputational harm.
How to Prevent Shadow AI
Preventing shadow AI requires a proactive and strategic approach. Here are actionable steps to safeguard your organization:
- Establish the Right Policies
A robust governance framework is the foundation of preventing shadow AI. Start by:
- Defining Policies: Clearly outline the process for adopting and implementing AI tools, including approval workflows and usage guidelines
- Assigning Accountability: Designate specific roles and responsibilities for AI oversight, ensuring there is clarity on who monitors and enforces compliance
- Periodic Reviews: Regularly update governance policies to keep pace with advancements in AI technology and evolving regulatory requirements
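One way to make such policies enforceable rather than purely documentary is to encode them as data that tooling can check automatically. The sketch below is illustrative only: the tool names, the governance role and the 90-day review interval are hypothetical placeholders, not recommendations.

```python
# Illustrative sketch: an AI tool policy expressed as data so approval and
# review checks can be automated. All names and values here are hypothetical.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIToolPolicy:
    approved_tools: set[str]   # vetted AI tools employees may use
    owner: str                 # role accountable for oversight and enforcement
    last_reviewed: date        # supports the periodic policy reviews above
    review_interval_days: int = 90

    def is_approved(self, tool: str) -> bool:
        # Case-insensitive lookup against the approved list
        return tool.lower() in {t.lower() for t in self.approved_tools}

    def review_overdue(self, today: date) -> bool:
        # Flags the policy itself for review, keeping governance current
        return (today - self.last_reviewed).days > self.review_interval_days


policy = AIToolPolicy(
    approved_tools={"ApprovedChatTool", "ApprovedCodeAssistant"},
    owner="AI Governance Lead",
    last_reviewed=date(2025, 1, 15),
)

print(policy.is_approved("ApprovedChatTool"))   # True
print(policy.is_approved("RandomSaaSBot"))      # False
```

Keeping the approved list, the accountable owner and the review schedule in one structure mirrors the three steps above and gives IT a single source of truth to check tools against.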
- Encourage Collaboration Across Teams
Encourage open communication and collaboration between IT, data teams and business units. This ensures that departmental needs are understood and addressed through approved solutions. Create a centralized platform for requesting AI tools, allowing IT to evaluate and approve them while maintaining transparency and trust.
- Train Your Staff
Awareness is critical to curbing shadow AI. Conduct regular training sessions to:
- Highlight the risks of unauthorised AI usage, such as data breaches and operational inefficiencies
- Emphasise the importance of adhering to governance frameworks
- Equip employees with the knowledge to identify and report shadow AI activities
The Generative AI Sprint Series is a hands-on, two-part series that teaches professionals how to use AI tools productively without compromising security. Sign up for Gen AI Sprint 1, a complimentary training session that covers foundational Gen AI skills, enabling you to drive business impact immediately.
- Promote Your Approved AI
Make it easier for employees to choose approved tools by:
- Offering a curated list of vetted AI solutions that meet your organization’s security and compliance standards
- Providing user-friendly documentation and training to ensure smooth adoption
- Showcasing the effectiveness of approved tools in addressing departmental needs
- Monitor AI Usage
Regular monitoring helps identify unauthorised AI tools before they become a risk. Implement:
- AI Audits: Conduct periodic reviews of software and applications used across the organization to detect unapproved AI tools
- Usage Tracking: Leverage monitoring tools to track AI interactions and ensure compliance with company policies
- Feedback Loops: Encourage employees to report new AI tools they find useful so IT can assess and approve them, reducing the need for shadow AI
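As a rough illustration of what an AI audit can look like in practice, the sketch below compares an application inventory against an approved-tool list and flags anything that looks AI-related but unapproved. The inventory source (expense data, SSO logs, endpoint agents) and the keyword heuristic are assumptions for this example, not a production detection method.

```python
# Illustrative sketch: flagging potential shadow AI during a periodic audit.
# The keyword heuristic is deliberately crude and will miss tools (and can
# produce false positives); real audits would draw on SaaS discovery tooling.
AI_KEYWORDS = ("ai", "gpt", "copilot", "ml")


def flag_shadow_ai(inventory: list[str], approved: list[str]) -> list[str]:
    """Return apps that look AI-related but are not on the approved list."""
    approved_lower = {a.lower() for a in approved}
    flagged = []
    for app in inventory:
        name = app.lower()
        looks_ai = any(keyword in name for keyword in AI_KEYWORDS)
        if looks_ai and name not in approved_lower:
            flagged.append(app)
    return flagged


# Hypothetical inventory, e.g. assembled from expense reports or SSO logs
inventory = ["SlideMakerAI", "ExpenseTracker", "TeamCopilot", "ApprovedChatAI"]
approved = ["ApprovedChatAI"]

print(flag_shadow_ai(inventory, approved))  # ['SlideMakerAI', 'TeamCopilot']
```

Even a simple report like this gives IT a starting point for the feedback loop above: flagged tools can be assessed and either added to the approved list or retired.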
Take Control of the Shadow
Shadow AI is a growing challenge that organizations cannot afford to ignore. The risks—ranging from data breaches and biased outcomes to inefficiencies and regulatory penalties—are too significant to overlook.
However, with the right strategies in place, these risks can be mitigated. By establishing robust governance frameworks, fostering collaboration, educating employees and leveraging monitoring tools, your organization can create a secure and efficient environment for AI adoption.
Now is the time to take control of shadow AI. By addressing it proactively, you can ensure that your AI initiatives deliver value without compromising security, compliance or efficiency. Responsible AI adoption isn’t just about avoiding risks; it’s about building a foundation for long-term success.
To learn more about the use of Generative AI in the workplace, check out our eBook “How to Not Be Replaced by AI” or contact us today.