Half of Employees Will Use AI Even If Banned. How IT Teams Should Respond

This trend presents significant security and compliance risks that IT leaders must address with strategic approaches rather than blanket prohibitions.

February 26, 2025
7 Min Read

A recent study by Software AG revealed a concerning trend for compliance and security teams: many employees are using unauthorized AI tools, creating what some call "Shadow AI."

The results of the study are striking:

  • 75% of knowledge workers already use AI tools
  • 46% would refuse to give them up, even if their organization banned them completely
  • Most employees aren't blind to the risks: 72% recognize cybersecurity risks and 70% acknowledge data governance concerns.
  • Yet few take precautions: only 27% run security scans and 29% check data usage policies.

John Sviokla, founder of research firm GAI, notes that this creates "a massive security problem if rogue IT users share data with models and providers without review or approval." The upshot, he says, is that "just about half your knowledge workers are not going to go back to old ways of working – no matter what you do."


Why Employees Are Embracing AI Tools

Understanding the drivers behind AI usage is essential for developing effective responses. The data points to three key motivations:

  1. Productivity Gains: AI tools enable employees to complete tasks faster, automate repetitive work, and improve output quality. When these tools save hours per day, banning them feels like a direct threat to efficiency.
  2. Competitive Pressure: Nearly half (47%) of workers believe AI tools will help them get promoted faster. Workers see colleagues leveraging AI and feel they must keep pace or risk falling behind.
  3. Ease of Access: With consumer-grade AI tools available on personal devices, banning AI is virtually impossible to enforce. As VentureBeat reports, 74% of ChatGPT accounts are non-corporate ones that lack proper security controls.

Security Risks of Unmanaged AI Use

Unsanctioned and unmonitored AI usage introduces significant security and compliance risks that should concern every IT leader:

  • Data Leakage: According to security experts interviewed by VentureBeat, approximately 40% of AI tools default to training on any data you feed them, meaning your intellectual property can become part of their models.
  • Compliance Violations: With regulations like GDPR and the upcoming EU AI Act, organizations face potential penalties if private data flows into unapproved AI tools.
  • Runtime Vulnerabilities: AI tools create new attack vectors that traditional endpoint security and data loss prevention (DLP) systems aren't designed to detect and stop.
  • Intellectual Property Risks: Once proprietary data gets into a public-domain model, controlling it becomes nearly impossible, potentially exposing trade secrets and confidential information.

A Strategic Approach to AI Usage

Rather than implementing outright bans (which data shows are ineffective), forward-thinking organizations are adopting strategic approaches that balance security with productivity:

1. Conduct a Formal Shadow AI Audit

Establish a baseline by conducting a comprehensive audit that identifies unauthorized AI usage through proxy analysis, network monitoring, and software inventory reviews. The gap between perception and reality can be startling: as VentureBeat reports, "one security head of a New York financial firm believed fewer than 10 AI tools were in use. A 10-day audit uncovered 65 unauthorized solutions, most with no formal licensing."
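To make that baseline concrete, here is a minimal sketch of one audit input: scanning a proxy or DNS log export for traffic to well-known consumer AI services. It assumes a CSV export with user and destination-host columns; the column names and domain list are illustrative and should be adapted to your environment, and the output is a starting point to pair with software inventory and expense reviews, not a complete audit.

```python
import csv
from collections import Counter

# Illustrative (not exhaustive) list of consumer AI endpoints to look for.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "api.openai.com",
    "claude.ai", "gemini.google.com", "copilot.microsoft.com",
    "perplexity.ai",
}

def audit_proxy_log(path: str) -> Counter:
    """Count requests to known AI domains, per user, from a proxy log export.

    Assumes a CSV export with 'user' and 'dest_host' columns; adjust the
    column names to match your proxy's schema.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("dest_host") or "").lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Print the top 20 user/service pairs by request count.
    for (user, host), count in audit_proxy_log("proxy_export.csv").most_common(20):
        print(f"{user:<25} {host:<35} {count}")
```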

2. Create Clear AI Governance Policies

Develop clear policies that define acceptable AI use cases, approved tools, and data handling procedures. These policies should balance productivity needs with security requirements, acknowledging that employees will find ways to use these tools regardless of blanket prohibitions.
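One way to keep such a policy enforceable rather than purely aspirational is to express it in a machine-readable form that gateways, DLP rules, and audits can all consume. The sketch below is a hypothetical illustration, not a standard: the field names and data classifications are assumptions, chosen only to show how approved tools, allowed use cases, and prohibited data classes might be checked in code.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsePolicy:
    """Hypothetical machine-readable AI acceptable-use policy."""
    approved_tools: set[str] = field(default_factory=set)
    allowed_use_cases: set[str] = field(default_factory=set)
    prohibited_data: set[str] = field(default_factory=set)  # data classifications

    def is_allowed(self, tool: str, use_case: str, data_classes: set[str]) -> bool:
        """Allow a request only if the tool, use case, and data all pass the policy."""
        return (
            tool in self.approved_tools
            and use_case in self.allowed_use_cases
            and not (data_classes & self.prohibited_data)
        )

# Example: drafting copy with an approved assistant is fine;
# pasting customer PII into any tool is not.
policy = AIUsePolicy(
    approved_tools={"enterprise-assistant"},
    allowed_use_cases={"drafting", "summarization", "code-review"},
    prohibited_data={"pii", "credentials", "unreleased-financials"},
)
print(policy.is_allowed("enterprise-assistant", "drafting", set()))    # True
print(policy.is_allowed("enterprise-assistant", "drafting", {"pii"}))  # False
```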

3. Offer Secure Enterprise AI Alternatives

The most effective approach is to provide employees with secure, enterprise-grade AI tools that offer similar functionality to consumer options while maintaining proper security controls. Platforms like Userfront Workforce AI enable secure access to trusted AI assistants while also providing:

  • Restrictions on what AI uses for training
  • User, group, and policy-level access controls
  • AI prompt auto-categorization and flagging
  • Detailed audit trails with prompt logs for compliance reporting
  • Data upload restrictions to flag suspicious prompts or potential data leakage
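To illustrate the prompt-flagging and audit-trail ideas in the list above, here is a minimal, generic sketch of what a gateway-side check might look like. It does not reflect Userfront's actual API or any specific product; the regex patterns and log format are placeholders, and a real deployment would rely on proper DLP classifiers and a managed logging pipeline rather than a handful of regexes and a local file.

```python
import json
import re
import time

# Placeholder patterns for data that should never leave the organization.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def review_prompt(user: str, prompt: str, log_path: str = "ai_audit_log.jsonl") -> dict:
    """Flag suspicious prompts and append every request to a JSONL audit trail."""
    flags = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    record = {
        "timestamp": time.time(),
        "user": user,
        "prompt": prompt,
        "flags": flags,
        "action": "block" if flags else "allow",
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # audit trail for compliance reporting
    return record

result = review_prompt("jdoe", "Summarize Q3: card 4111 1111 1111 1111")
print(result["action"], result["flags"])  # block ['credit_card']
```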

4. Educate Employees on Secure AI Usage

Implement training programs that explain the risks of AI and demonstrate how to use approved tools effectively. As WinWire CTO Vineet Arora told VentureBeat, "The data confirms that once employees have sanctioned AI pathways and clear policies, they no longer feel compelled to use random tools in stealth."

Conclusion: Embrace Rather Than Ban

The evidence is clear: attempting to ban AI tools outright will likely drive usage underground, exacerbating security risks rather than mitigating them. As Steve Ponting, Director at Software AG, observes: "While 75% of knowledge workers use AI today, that figure will rise to 90% in the near future because it helps to save time (83%), makes employees' jobs easier (81%) and improves productivity (71%)."

The most successful approach is to provide secure enterprise alternatives that satisfy both productivity needs and security requirements. By implementing proper governance structures, offering sanctioned AI tools, and educating employees on secure usage, organizations can harness the benefits of AI while minimizing potential risks.

Remember that employee AI usage isn't going away—but with the right strategy, you can bring it into the light, where it can be properly secured, monitored, and leveraged for good.
