Nearly 20% of Healthcare Workers Are Secretly Using Unapproved AI Tools
A new report reveals a growing trend of clinicians turning to unauthorized AI to cope with burnout and workflow pressures, raising serious questions about data security and patient safety.
- Almost 20% of healthcare staff admit to using AI tools not vetted by their organizations.
- Clinicians are primarily motivated by a desire to speed up workflows and reduce administrative tasks.
- The use of “Shadow AI” creates significant liability, including potential data breaches costing an average of $7.4 million.
- A disconnect exists between administrators’ perceptions of AI policy awareness and providers’ actual understanding.
Nearly one in five healthcare workers are going rogue with artificial intelligence, turning to unapproved tools to keep up with increasingly demanding workloads. A new report reveals that 40% of healthcare staff have encountered these unauthorized AI tools in their workplaces, with almost 20% admitting to actively using them.
“Shadow AI isn’t just a technical issue; it’s a governance issue that may raise patient safety concerns,” warns Yaw Fellin, Senior Vice President at Wolters Kluwer Health. The data suggests that while health systems grapple with establishing AI policies, clinicians are already integrating these tools into their daily routines—often without official permission.
The Efficiency Desperation
What’s driving highly trained medical professionals to seek out “rogue” technology? The answer isn’t defiance, but exhaustion. The survey indicates that 50% of respondents cite “faster workflows” as their primary motivation.
In a healthcare system where primary care physicians would need 27 hours a day to provide guideline-recommended care, readily available AI tools offer a crucial lifeline. Whether it’s drafting an appeal letter or summarizing a complex patient chart, clinicians are prioritizing speed and efficiency, even if it means bending the rules.
“Clinicians and administrative teams want to adhere to rules,” the report notes. “But if the organization hasn’t provided guidance or approved solutions, they’ll experiment with generic tools to improve their workflows.”
The Disconnect: Admins vs. Providers
The report highlights a concerning gap in understanding between those who create policies and those who are expected to follow them.
- Policy Awareness: While 42% of administrators believe AI policies are “clearly communicated,” only 30% of providers agree.
- Involvement: Administrators are more than three times as likely to be involved in AI policy development (30%) as the providers actually using the tools (9%).
This “ivory tower” dynamic creates a dangerous blind spot. Administrators may believe they’ve established a secure environment, while providers feel compelled to bypass the system to effectively do their jobs.
The $7.4M Risk
The consequences of Shadow AI extend beyond workflow efficiency, encompassing both financial and clinical risks. The average cost of a data breach in healthcare has reached $7.42 million. When a clinician copies and pastes patient notes into a free, publicly accessible chatbot, that sensitive data potentially leaves the secure, HIPAA-compliant environment, inadvertently training a public model on private health information.
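To make that risk concrete, here is a minimal, purely illustrative sketch (not taken from the report) of the kind of guardrail a sanctioned tool might place between a clinician's note and any external model: obvious identifiers are scrubbed before the text is allowed to leave the secure environment. The patterns, function names, and sample note below are hypothetical; real de-identification relies on far more sophisticated clinical NLP services.

```python
import re

# Purely illustrative patterns for a few obvious identifiers (SSNs, phone
# numbers, emails, medical record numbers, dates). Real de-identification
# uses dedicated clinical NLP services; this sketch only shows the idea of
# scrubbing text *before* it can leave a HIPAA-compliant environment.
PHI_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN":   re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(note: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PHI_PATTERNS.items():
        note = pattern.sub(f"[{label} REDACTED]", note)
    return note

if __name__ == "__main__":
    raw = ("Pt Jane Doe, MRN# 00482913, seen 03/14/2025. "
           "Callback 617-555-0199. SSN 123-45-6789 on file.")
    print(scrub(raw))
    # Pt Jane Doe, [MRN REDACTED], seen [DATE REDACTED].
    # Callback [PHONE REDACTED]. SSN [SSN REDACTED] on file.
```

Even this toy example exposes the limitation that worries governance teams: the patient's name slips straight through a pattern-based filter, which is exactly why ad-hoc copy-and-paste into a public chatbot is so hard to make safe.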
Beyond privacy concerns, there is the risk of direct physical harm. Both administrators and providers identified patient safety as their top concern regarding AI. A flawed response, or “hallucination,” from a generic AI tool used for clinical decision support could lead to incorrect dosages or missed diagnoses.
From “Ban” to “Build”
Many CIOs’ initial instinct is to restrict access to popular AI platforms like ChatGPT, Claude, or Gemini. However, industry leaders argue that a purely prohibitive approach is unlikely to succeed.
“GenAI is showing high potential for creating value in healthcare but scaling it depends less on the technology and more on the maturity of organizational governance,” says Scott Simeone, CIO at Tufts Medicine.
The report suggests that instead of banning AI, health systems should focus on providing enterprise-grade alternatives. If clinicians are turning to Shadow AI to solve a specific workflow problem, the organization must offer a sanctioned tool that addresses the same need with equal speed and, crucially, safety.
As Alex Tyrrell, CTO of Wolters Kluwer, predicts: “In 2026, healthcare leaders will be forced to rethink AI governance models… and implement appropriate guardrails to maintain compliance.” The days of passively ignoring this trend are over.
