Shadow AI: Three Critical Risks Organizations Face

Recent research reveals a concerning pattern: employees are increasingly sharing sensitive data with AI tools, delegating entire tasks to AI, and often doing this through external (shadow) accounts outside IT oversight.
These form three mutually reinforcing risks:
1. Use of personal AI tools for work purposes
When employees use personal AI accounts for work tasks, the organization loses visibility into what data and tasks are being handed over to AI. MIT research shows that employees at 90% of companies use AI tools for work through their own accounts – often without IT or security approval. As a result, a significant share of AI usage happens outside the company's control and oversight, complicating both risk management and regulatory compliance.
2. Risk of confidential information leakage
Using AI tools is not just a governance issue – it's also a security challenge. Cyberhaven found that nearly one-third of the data employees input into AI tools is sensitive, and about 4% is confidential information. When this happens through personal or other external accounts, the risks of data leakage increase significantly, and potential damage is difficult to trace and manage.
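One practical mitigation is screening prompts for sensitive patterns before they leave the organization. The sketch below is illustrative only: the regex patterns and pattern names are assumptions for demonstration, not a complete data-loss-prevention policy.

```python
import re

# Illustrative patterns for common sensitive data; a real DLP policy
# would cover far more (names, contracts, source code, credentials, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "api_key": re.compile(r"\b(sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def safe_to_send(text: str) -> bool:
    """Block the prompt if any sensitive pattern matches."""
    return not screen_prompt(text)
```

In practice such a check would sit in a gateway in front of approved AI tools, so that flagged prompts are redacted or routed for review rather than silently sent on.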
3. Risks of delegating entire tasks
AI usage is shifting from assisted work toward full task delegation — from co-pilot to autopilot. Anthropic's Economic Index shows that 77% of enterprise customers' API usage follows patterns in which entire tasks are handed to AI from start to finish. A Nature study (N ≥ 500 per experiment) found that goal-based delegation (e.g., "maximize profit") increased unethical outcomes: AI agents complied with problematic requests 60–95% of the time, versus 25–40% for human agents.
This development is directly at odds with Article 14 of the EU AI Act, which requires that high-risk AI systems be supervised by natural persons who have the ability to override or stop the system. The "set and forget" approach will soon be in conflict with EU requirements – and requires organizations to adopt a new model where efficiency and oversight go hand in hand.
This is not a technology problem, but a governance and design challenge. The solution is not to prohibit delegation, but to redesign it: maintain human checkpoints at key decision points and build clear oversight processes. This requires viewing AI governance as part of the organization’s management system and strategic decision-making — not merely an IT issue.
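The checkpoint idea can be sketched in a few lines: delegated steps run automatically, but steps flagged as high-impact pause for explicit human approval before execution. The `Step` structure and `approve` callback here are hypothetical illustrations, not part of any cited framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    description: str
    high_impact: bool  # e.g. spends money, contacts customers, deletes data

def run_with_checkpoints(steps: list[Step],
                         execute: Callable[[Step], str],
                         approve: Callable[[Step], bool]) -> list[str]:
    """Run delegated steps, pausing at high-impact ones for human approval."""
    log = []
    for step in steps:
        if step.high_impact and not approve(step):
            # The human retains the ability to override or stop the system,
            # in the spirit of Article 14 of the EU AI Act.
            log.append(f"BLOCKED: {step.description}")
            continue
        log.append(f"DONE: {execute(step)}")
    return log
```

In a real system, `approve` would route to a review queue or ticketing UI; the design point is that autonomy is bounded at defined decision points rather than prohibited outright.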
How does your organization balance AI efficiency with the oversight that both ethics and regulation now demand?
#AIStrategy #EUAIAct #ShadowAI #DataGovernance
Sources & Further Reading
- Köbis, N., Rahwan, Z., Rilla, R., Bonnefon, J.-F. & Rahwan, I. (2025). Delegation to artificial intelligence can increase dishonest behaviour. Nature, 646, 126–134. https://doi.org/10.1038/s41586-025-09505-x
- Anthropic. (2025). Anthropic Economic Index Report - September 2025. Retrieved October 7, 2025, from https://www.anthropic.com/research/anthropic-economic-index-september-2025-report
- MIT Project NANDA (2025). The GenAI Divide: State of AI in Business 2025 (PDF). https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
- Cyberhaven Labs (2024). Shadow AI: How Employees Are Leading the Charge in AI Adoption and Putting Company Data at Risk. https://www.cyberhaven.com/blog/shadow-ai-how-employees-are-leading-the-charge-in-ai-adoption-and-putting-company-data-at-risk
- Metomic (2025). Survey of Security Leaders. Referenced in Cybernews, October 2025. https://cybernews.com/ai-news/ai-shadow-use-workplace-survey/
- IBM (2025). Cost of a Data Breach Report 2024. IBM Think Insights, April 17, 2025. https://www.ibm.com/think/insights/hidden-risk-shadow-data-ai-higher-costs
- European Union (2024). Regulation (EU) 2024/1689 - Artificial Intelligence Act. Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
Marko Paananen
Strategic AI consultant and digital business development expert with 20+ years of experience. Helps companies turn AI potential into measurable business value.
Related Insights

Autonomous AI Agents: Benefits and Hidden Risks
Autonomous AI agents are shifting work from task iteration to goal definition and evaluation. The risk: convincing but shallow output may flood decision-making.

What 700 Million ChatGPT Users Actually Do with AI - And What It Reveals About Workplace Strategies
OpenAI's study with Harvard and Duke analyzed 1.5M ChatGPT messages. Results show AI is valued more for decision support than automation—what does this mean for workplace strategy?

AI Learns to Think Like Humans: The Hybrid Model Revolution
Nobel laureate Daniel Kahneman's 'Thinking, Fast and Slow' described how the human mind operates on two levels: fast, intuitive reactions to everyday situations and deep, analytical thinking for complex problems. Now AI is also learning to think like humans – sometimes fast, sometimes slow...