Shadow AI: Managing the Security Risks of Unsanctioned AI Tools

The explosion of generative artificial intelligence tools is sparking a wave of enthusiasm in workplaces, with employees eagerly embracing new applications to boost productivity and innovation. However, this adoption often leads to a new phenomenon known as shadow AI—the use of artificial intelligence tools within an organization without explicit approval or oversight from IT and security teams.

Unsanctioned use of AI creates significant (and often invisible) security blind spots. An employee utilizing an unapproved AI tool to boost their productivity could accidentally leak sensitive company data or inadvertently expose intellectual property, creating substantial risks for your organization.

Traditional security measures designed to protect corporate environments are struggling to contend with the rise of shadow AI. Organizations must understand this new threat and think critically about how to discover and monitor hidden AI usage. Read on to learn more, including effective approaches to address these challenges, and how these crucial insights can inform a holistic user risk posture to foster safer, more informed AI adoption across your business.

Why shadow AI is spreading

Most employees are unaware of the risks of shadow AI and use these tools without any malicious intent. So what’s driving this rapid adoption? Let’s explore why employees turn to unapproved AI tools and how those tools can permeate a corporate environment.

AI as the new efficiency tool

We are currently witnessing an unprecedented explosion of generative AI tools, with new applications, platforms, and models launching almost daily. These tools promise (and often deliver) leaps in productivity, creativity, and efficiency across even the most mundane of tasks. It makes sense that employees eager to optimize workflows, meet deadlines, and stay ahead of the curve are immediately drawn to these AI agents. A new frontier of personal and professional empowerment has arrived, offering automation and enhanced efficiency like never before.

Driving this adoption is the sheer accessibility of many of these tools. Gone are the days of lengthy procurement and deployment processes. For example, an individual user can start a chat with a generative AI model just by visiting a website. Furthermore, most of these tools operate on freemium models, inviting employees to experiment immediately without needing budget approval or any IT intervention. This low barrier to entry naturally encourages exploration and rapid, decentralized adoption, often before security teams are even aware these tools are being used within the organization.

The drivers of "underground" AI adoption

Even organizations with standard security protocols or internal processes for software adoption are facing new risks from shadow AI usage. Employees often sidestep these official protocols for practical reasons. Sanctioned alternatives might not meet their specific needs or may take too long to implement, so employees seek out immediate solutions they believe will help them perform their jobs more effectively, sometimes going as far as using personal accounts to access AI tools for work purposes.

Additionally, most employees underestimate (or may not fully perceive) the risk associated with these tools, especially for “minor” tasks or when handling data they believe is not confidential. For example, an employee might use a free online AI tool to paraphrase a report or generate ideas for a presentation without fully considering the tool’s data handling policies or where their input might be stored or used. Managers might generate performance reviews using free AI tools, unknowingly feeding sensitive employee data into models that lack secure data handling processes.

This disregard for security protocols and underestimation of risk contribute to the growth of “underground” AI usage. According to Microsoft’s 2024 Work Trend Index Annual Report, 78% of AI users are bringing their own AI tools to work, often due to a lack of guidance or clearance from higher-ups at their organizations. If official AI policies are overly restrictive or fail to provide viable alternatives, AI usage won’t stop. It simply becomes hidden, creating an even more challenging risk landscape for security teams to navigate.

Unpacking the risks of shadow AI

Unmonitored AI tools introduce a wide spectrum of risks to an organization. Beyond simple policy violations, unvetted AI usage impacts data security, regulatory compliance, and even the integrity of intellectual property.

Uncontrolled data flows and potential exposure

The most immediate risk of shadow AI is the uncontrolled flow of sensitive data into third-party systems that have not been approved by IT or security teams. When an employee inputs sensitive company data into an AI tool to summarize a report or analyze a dataset, that data may flow into a publicly accessible or poorly secured AI model. This data could include strategic plans, customer personally identifiable information (PII), unreleased financial figures, proprietary source code, or sensitive internal communications.

Once your data leaves your organization's controlled environment, several critical risks emerge:

  • Data used for model training: Many AI services, especially free or public versions, use the data they process to further train their models. This means your confidential information could become part of the AI's knowledge base, potentially discoverable by other users or even incorporated into responses for entirely different queries.
  • Data leakage from AI vendors: AI tool providers could suffer a data breach, directly exposing any sensitive information your employees have uploaded or processed through their platform. Your organization has little to no control over that vendor's security posture.

Compliance and regulatory nightmares

The use of unapproved AI tools for handling specific types of data can quickly lead to severe compliance violations and substantial penalties. Many industries operate under strict data protection regulations—such as GDPR for personal data of EU residents, HIPAA for patient health information in the U.S., or CCPA for California consumer data. These regulations dictate how sensitive information must be handled, processed, and secured. Shadow AI tools are rarely designed or certified with these specific compliance frameworks in mind.

When employees use unvetted AI applications to process regulated data, critical safeguards are often bypassed. There are typically no data processing agreements (DPAs) in place with these unsanctioned vendors, no audit trails to track data access and modifications within these external systems, and no way for your organization to enforce necessary access controls or data retention policies. This lack of oversight and documentation makes it nearly impossible to demonstrate due diligence or compliance, creating significant legal and financial liabilities.

Other security vulnerabilities

Alongside data exposure and compliance concerns, shadow AI also introduces other direct security vulnerabilities that could impact an organization, including:

  • Malicious AI tools and extensions: The growth of AI has led to a rise in fake or malicious AI applications and browser extensions, which may masquerade as helpful productivity enhancers but are designed to steal credentials, install malware, or spy on user activity once granted permissions.
  • Insecure AI platforms as attack vectors: Even seemingly legitimate AI platforms can have their own security flaws. If an employee has integrated such a platform which is then compromised, it could serve as an attack vector into your network or other corporate systems.
  • Intellectual property (IP) risks: Employees might feed proprietary algorithms, unpatented designs, confidential business strategies, or unpublished creative content into AI models. If the terms of service for these AI tools are unclear about data ownership, or if they grant the AI provider rights to use or learn from submitted data, your organization could inadvertently expose or lose control over valuable intellectual property.
  • Lack of data residency and sovereignty guarantees: Unvetted AI tools may process and store data in geographical locations that violate your company's data residency policies or regulatory requirements (like GDPR), leading to legal and compliance complications.

Illuminating the shadows: Discovering and assessing AI usage

Organizations can no longer afford to remain in the dark concerning shadow AI, and proactive approaches to discover and assess AI usage are the first step towards addressing this emerging risk. These strategies identify which AI tools are being used, by whom, and what level of risk they each present.

Why traditional IT asset management falls short

Asset management systems and network scanning processes are the typical tools IT teams use to maintain software and hardware inventory within a corporate environment. However, these approaches fall short when it comes to the nuances of modern shadow AI adoption. Most generative AI tools are web-based SaaS applications that require no formal installation, or browser extensions that users can add with just a few clicks. Established SaaS vendors are also embedding AI features directly into their products, giving employees streamlined access to AI within tools they already use.

Traditional IT asset management won’t register these cloud-based tools, and while network scanning can identify traffic, the sheer volume of web activity and the prevalence of encrypted connections make it difficult to pinpoint access to unapproved AI tools among legitimate web traffic. Add in the rapid growth and deployment of AI features within SaaS itself, and older detection methods simply cannot keep up. Together, these issues create a significant visibility gap, meaning shadow AI usage can go completely undetected by standard IT oversight.

The power of user-level monitoring and activity scanning

Organizations looking to uncover shadow AI need to supplement traditional methods with more specific, user-centric monitoring and activity scanning techniques. By focusing on user interactions with digital tools and services, security teams can gain valuable insights into the applications employees are actually using, regardless of whether they are officially sanctioned or installed. Consider the following approaches alongside your existing IT oversight:

  • Network traffic analysis: Monitoring outbound network traffic for connections to known AI service domains or APIs can indicate usage, even if the specific application isn't formally installed.
  • Browser activity and extension monitoring: With appropriate consent and adherence to privacy policies, analyzing browser activity logs or installed extensions can directly identify the web-based AI tools and plugins employees are utilizing.
  • SaaS app integration logs: Reviewing logs from sanctioned enterprise SaaS platforms (like Microsoft 365 or Google Workspace) can reveal third-party AI applications that users have authorized to access their accounts or data.
  • Endpoint activity analysis: Some endpoint detection and response (EDR) tools may offer visibility into application usage, including web applications, that can help spot unapproved AI tools.
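
To make the first approach above more concrete, here is a minimal Python sketch that scans an exported proxy or DNS log for connections to a small watchlist of AI service domains. The watchlist, column names, and file path are illustrative assumptions; a real deployment would source its domain list from a maintained feed and read from your actual logging pipeline.

```python
import csv
from collections import Counter

# Hypothetical watchlist of AI service domains; a real deployment would
# maintain and update this list from threat-intel or CASB feeds.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai",
              "gemini.google.com", "copilot.microsoft.com"}

def find_ai_usage(proxy_log_csv: str) -> Counter:
    """Count connections per (user, domain) that match the AI watchlist.

    Assumes a CSV export with 'user' and 'destination_host' columns;
    adjust the field names to your proxy or DNS logging format.
    """
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # "proxy_export.csv" is a placeholder path for an exported log file.
    for (user, domain), count in find_ai_usage("proxy_export.csv").most_common(10):
        print(f"{user} -> {domain}: {count} connections")
```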

Remember how quickly employees can adopt new AI tools: scanning and monitoring should be performed regularly, ideally continuously or daily, to detect new instances. It’s also vital that any user activity monitoring is implemented transparently, with clear communication to employees about what is being monitored and why, always respecting privacy regulations and fostering a culture of trust.

Cataloging and risk-rating AI tools

Once an unapproved AI tool is detected, security teams should work to understand its specific functionalities and, more importantly, its potential impact. Assessing the risks of each unapproved AI tool requires a systematic approach to cataloging and evaluating each tool against a consistent set of criteria. This assessment process should include:

  • Vendor security practices: The reputation of the AI vendor, their published security policies, any relevant certifications (e.g., SOC 2, ISO 27001), and their history regarding security incidents or vulnerabilities
  • Data handling and privacy policies: How the AI tool collects, processes, stores, and protects data. Specifically, whether input data is used for training public models and data retention policies
  • Terms of service (ToS): Clauses related to intellectual property ownership, data usage rights, and liability
  • Known vulnerabilities: Any publicly disclosed vulnerabilities associated with the AI application or its underlying platform
  • Type of data accessed/inputted: Understanding what kind of company data employees are using with the tool (e.g., public information, internal documents, sensitive PII, or proprietary code).

This assessment provides each tool with a risk rating, which security teams can then utilize to prioritize remediation efforts, develop usage guidelines, or decide whether to block the tool and explore better (and sanctioned) alternatives.
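
As a rough illustration of how the criteria above could be rolled up into a risk rating, consider the following Python sketch. The fields, weights, and thresholds are illustrative assumptions rather than a standardized scoring model; adjust them to your organization's risk appetite.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    name: str
    vendor_certified: bool      # e.g., SOC 2 / ISO 27001 attestation on file
    trains_on_input: bool       # vendor uses submitted data for model training
    unclear_ip_terms: bool      # ToS ambiguous about ownership of submitted content
    known_vulnerabilities: int  # count of publicly disclosed issues
    data_sensitivity: int       # 0 = public, 1 = internal, 2 = PII / proprietary

def risk_rating(tool: AIToolAssessment) -> str:
    """Return 'low', 'medium', or 'high' using illustrative weights."""
    score = 0
    score += 0 if tool.vendor_certified else 2
    score += 3 if tool.trains_on_input else 0
    score += 2 if tool.unclear_ip_terms else 0
    score += min(tool.known_vulnerabilities, 3)
    score += tool.data_sensitivity * 2
    if score >= 7:
        return "high"
    return "medium" if score >= 4 else "low"

# Example: a hypothetical free summarization tool handling internal documents
tool = AIToolAssessment("FreeSummarizer", vendor_certified=False,
                        trains_on_input=True, unclear_ip_terms=True,
                        known_vulnerabilities=0, data_sensitivity=1)
print(tool.name, risk_rating(tool))  # -> FreeSummarizer high
```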

Managing shadow AI: From risk insight to proactive control

True risk mitigation comes from integrating insights about shadow AI usage into your organization’s ongoing security operations and broader risk management framework. This integration is a balancing act: addressing unapproved AI use while shaping a more secure approach to AI adoption. Proactive control is about transforming risk insight into tangible, protective actions.

Building user-centric risk profiles with AI usage data

An employee's interaction with AI tools, particularly unapproved or high-risk ones, provides valuable data points for assessing their individual cyber risk profile. This profile can include the frequency of shadow AI use, the types of AI applications accessed, and the nature of data being input into AI models. When an employee consistently uses unvetted tools or inputs sensitive information into public AI models, their individual risk score naturally increases, as does the potential impact should a compromise occur through one of these channels. This AI usage data should be mapped back to specific employees and then correlated with other risk indicators to build a truly holistic view of each user's risk.
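
The sketch below illustrates one way AI usage data could be aggregated into a per-user score that feeds such a profile. The event fields and weights are assumptions for illustration only; in practice these events would come from the discovery techniques described earlier and be correlated with your other risk indicators.

```python
from collections import defaultdict

# Hypothetical shadow AI usage events, e.g. produced by proxy logs,
# SaaS integration logs, or endpoint activity analysis.
events = [
    {"user": "alice", "tool_risk": "high",   "data_class": "pii"},
    {"user": "alice", "tool_risk": "medium", "data_class": "internal"},
    {"user": "bob",   "tool_risk": "low",    "data_class": "public"},
]

# Illustrative weights: riskier tools and more sensitive data raise the score.
RISK_WEIGHT = {"low": 1, "medium": 3, "high": 5}
DATA_WEIGHT = {"public": 0, "internal": 2, "pii": 4}

def build_user_profiles(events):
    """Aggregate events into a per-user score that can later be combined
    with other risk indicators to form a holistic user risk profile."""
    profiles = defaultdict(int)
    for e in events:
        profiles[e["user"]] += RISK_WEIGHT[e["tool_risk"]] + DATA_WEIGHT[e["data_class"]]
    return dict(profiles)

print(build_user_profiles(events))  # -> {'alice': 14, 'bob': 1}
```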

A user-centric risk profile allows security teams to identify individuals who might pose a higher risk and prioritize targeted education, policy reinforcement, or other mitigating actions.

Developing adaptive policies and enabling safe AI adoption

The gut reaction to shadow AI might be to outright ban unapproved tools, but overly restrictive policies can often be counterproductive. Employees who feel their needs for efficiency and innovation are not being met through official channels will continue to drive their AI usage further underground, making it even harder to detect and manage. Instead, consider developing a clear, flexible, and adaptive AI usage policy that aims to encourage safe adoption rather than just block access. A balanced strategy includes several key components, such as:

  • Clear AI usage guidelines: Educate employees on acceptable use cases, data handling best practices, and how to identify potentially risky applications or permission requests when using AI tools.
  • Curated list of approved tools: Provide and actively promote a suite of sanctioned, vetted, and secure AI tools that meet common business needs, reducing the incentive for employees to seek out unknown alternatives.
  • Transparent vetting process: Establish a straightforward process for employees to request and have new AI tools evaluated and potentially approved by IT and security.
  • Real-time alerts and adaptive controls: Implement systems that can provide real-time alerts to security teams when employees access known high-risk AI sites or frequently use unapproved tools. Consider adaptive controls, such as prompting users for justification before accessing certain AI categories or (in high-risk scenarios) temporarily restricting access pending review.
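
As a minimal sketch of the real-time alerting and adaptive-control idea above, the following function maps a tool's risk category and a user's risk level to an illustrative action. The categories and actions are assumptions; an actual implementation would live in a proxy, CASB, or browser-management layer rather than a standalone script.

```python
def adaptive_control(tool_category: str, user_risk: str) -> str:
    """Decide how to handle a request to an AI tool.

    'tool_category' is the risk rating from the cataloging step;
    'user_risk' comes from the user's risk profile. Actions are illustrative.
    """
    if tool_category == "sanctioned":
        return "allow"
    if tool_category == "high" or user_risk == "high":
        return "block_and_alert"           # notify security, restrict pending review
    if tool_category == "medium":
        return "prompt_for_justification"  # ask the user to confirm a business need
    return "allow_and_log"                 # low risk: permit but record for visibility

print(adaptive_control("medium", "low"))   # -> prompt_for_justification
print(adaptive_control("high", "low"))     # -> block_and_alert
```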

The goal is to guide employees towards safer choices and provide them with the resources they need to utilize AI productively and securely.

Fostering a culture of responsible AI usage

At the end of the day, managing shadow AI requires fostering a strong organizational culture of security awareness, responsible innovation, and transparency. Begin with a foundation of continuous employee education: not just a one-time training session, but ongoing communication about the evolving AI landscape. Cover both the benefits of AI and the specific risks associated with unvetted AI tools, including practical guidance for safe AI usage. Remember that employees need to understand why certain precautions are necessary.

One of the strongest ways to foster a culture of responsible AI usage is to create open channels for dialogue. Encourage employees to discuss their AI needs, share tools they find useful, and bring potential new applications forward for review without fear of immediate reprisal. When IT and security teams partner with employees to vet and approve useful AI tools, it transforms users from potential sources of risk into active participants in the security process. A collaborative process balances the crucial need for security and compliance with the desire to leverage AI for business advantage, enabling your organization to innovate responsibly.

Harnessing AI securely in the modern workplace

Shadow AI is a natural consequence of the drive for efficiency and employee innovation, but it also introduces undeniable risks if left unmanaged. The path forward lies not in outright prohibition but in proactive discovery, user-level monitoring, and adaptive risk management strategies. These elements are integral to a comprehensive Human Risk Management framework, recognizing that employee choices and interactions with technology are pivotal to an organization's overall security posture. 

Gaining a clear understanding of actual AI usage (and how employees interact with all digital tools) is empowering and allows security teams to identify and mitigate threats while also guiding employees toward safer digital practices. Harnessing the power of AI securely and managing the wider spectrum of user-related risks requires a new depth of insight into user activities and the digital tools they embrace, paving the way for more intelligent and adaptive cybersecurity strategies.

If you’re interested in learning more about how UpGuard is helping organizations tackle Shadow AI usage and human risk management, visit https://www.upguard.com/contact-sales.
