The emergence of “shadow AI” highlights both the enthusiasm and challenges associated with the adoption of artificial intelligence (AI) in enterprises. Shadow AI occurs when employees utilize unapproved AI tools without the knowledge of their IT or security departments. While this practice may seem like initiative, it often reflects underlying dysfunction within organizations.
When teams resort to shadow AI, it typically indicates one of three issues: they lack access to appropriate AI tools, they possess tools but do not know how to use them effectively, or they are either unaware of or unconcerned about the security and compliance risks associated with unverified AI models. If ignored, shadow AI can lead to data breaches, compliance violations, and misguided decision-making arising from reliance on inaccurate or fabricated outputs.
Rather than viewing shadow AI solely as a threat, organizational leaders should recognize it as a diagnostic indicator. It exposes deficiencies in enablement, governance, and trust that must be addressed for effective AI adoption.
According to the McKinsey 2025 State of AI report, enterprise AI adoption is inconsistent. While some sectors—such as engineering, marketing, and support—widely utilize generative AI, others lag significantly. Even within departments that do employ generative AI, there can be vast differences in employees’ understanding and effective use of these tools.
This inconsistency poses risks. Employees who feel marginalized may turn to unapproved tools to keep pace, leading organizations to conflate adoption with proficiency. Furthermore, employees may become less productive if they waste time attempting to leverage AI for tasks where it is not effective. Just because an AI tool is being employed does not guarantee that it is being used effectively or delivering business value. Without clear guidance and policies, confusion persists regarding the true capabilities and advantages of AI tools.
Prohibiting access to certain tools is not an effective response, especially given the widespread availability of AI options. Employees often seek solutions that allow them to work more efficiently. Organizations can capitalize on this by implementing three strategies:
- Provide Clarity and Access: Confusion about policy often drives the use of shadow AI. If an organization's policies are unclear or outdated, employees will opt for convenient alternatives. Establishing clear AI policies that name the approved tools and the rationale behind their selection helps close that gap.
- Double Down on Enablement: Distributing licenses alone won't foster effective adoption; learning to use AI tools well requires dedicated effort. Organizations should cultivate environments where employees feel empowered to experiment, learn, and share best practices. Identifying and enabling advocates, hosting hackathons, and providing ongoing training builds both trust and skill across teams.
- Embed Security and Governance: Shadow AI thrives where governance and leadership are absent. Organizations must integrate security into each phase of the AI lifecycle, from design to deployment. Tools such as Microsoft Defender can help monitor the AI applications employees use and flag associated risks.
Employees often do not intentionally seek to create problems; usually, they are simply unaware of the risks involved. Educating them on the reasons behind safeguard measures can build understanding and accountability.
The uneven landscape of enterprise AI adoption offers an opportunity for growth. Small, well-managed advancements can yield significant productivity gains. For instance, saving 10-15 minutes daily on repetitive tasks can accumulate to substantial efficiency improvements across the workforce. Moreover, closing the gap between innovators and those lagging behind fosters a culture where all employees feel empowered to contribute to AI-driven transformation.
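As a rough illustration of how those daily minutes compound, the back-of-the-envelope arithmetic can be sketched as follows. The working-day count and workforce size are assumptions for the sake of the example, not figures from this article:

```python
# Illustrative estimate of cumulative time savings from small daily AI wins.
# Assumed inputs (not from the article): 250 working days/year, 1,000 employees.
MINUTES_SAVED_PER_DAY = 12.5   # midpoint of the 10-15 minute range cited above
WORKING_DAYS_PER_YEAR = 250    # assumption
EMPLOYEES = 1_000              # assumed workforce size

hours_per_employee = MINUTES_SAVED_PER_DAY * WORKING_DAYS_PER_YEAR / 60
total_hours = hours_per_employee * EMPLOYEES

print(f"~{hours_per_employee:.0f} hours saved per employee per year")
print(f"~{total_hours:,.0f} hours saved across the workforce per year")
```

Under these assumptions, the savings come to roughly 52 hours per employee per year, which is why even modest, well-managed improvements can add up to meaningful capacity at organizational scale.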
Ultimately, shadow AI serves as a symptom rather than a fundamental problem. It indicates areas where leadership should implement clearer policies, provide better tools, and enhance enablement. When approached thoughtfully, these insights can guide organizations toward cultivating a secure, inclusive, and transformational AI-focused culture.
This piece is authored by Ed Keisling, Chief AI Officer at Progress Software.
Disclaimer: The views expressed are solely those of the author, and ETCIO does not necessarily endorse them. ETCIO is not responsible for any damages caused directly or indirectly to any individual or organization.