BOSTON, January 25, 2024 – Nearly one in five healthcare workers have used artificial intelligence tools at work that haven’t been vetted by their organizations, a practice known as “shadow AI,” raising concerns about patient safety and data security. A new survey reveals the extent of this trend and the potential risks it poses to healthcare institutions and patients.
The Rise of Rogue AI in Healthcare
A growing number of medical professionals are experimenting with unapproved AI tools, creating potential risks for patients and institutions.
- More than 40% of medical workers and administrators are aware of colleagues using shadow AI.
- Nearly 20% of healthcare employees have personally used an unauthorized AI tool.
- Accuracy and data privacy are major concerns surrounding the use of unapproved AI in healthcare settings.
- Lack of awareness regarding organizational AI policies is widespread among healthcare staff.
The survey revealed that over 40% of medical workers and administrators are aware of colleagues using these “shadow AI” products, while almost 20% admitted to using an unauthorized AI tool themselves. The trend highlights a growing gap between workers’ enthusiasm for AI and the safeguards organizations have in place to ensure it is used responsibly.
Security Risks and Data Breaches
Shadow AI poses a serious security risk across all industries, as the covert nature of these tools leaves organizations vulnerable to cyberattacks and data breaches. Healthcare organizations are especially susceptible, given the valuable data they hold and the high stakes associated with patient care. Cybercriminals frequently target the healthcare sector, exploiting vulnerabilities to access sensitive patient data.
Patient Safety Concerns
Dr. Peter Bonis, chief medical officer at Wolters Kluwer, emphasized the importance of vetting AI tools for safety and efficacy. “The issue is, what is their safety? What is their efficacy, and what are the risks associated with that?” he said. “And are those adequately recognized by the users themselves?” About a quarter of providers and administrators ranked patient safety as their top concern regarding AI in healthcare, according to the survey, which included responses from more than 500 individuals at hospitals and health systems.
AI tools, while promising, can sometimes provide misleading or inaccurate information, potentially harming patients. Even with human oversight, “there’s a whole variety of ways in which…these tools misfire, and those misfires may not be adequately intercepted at the point of care,” Bonis explained.
Why Healthcare Workers Turn to Shadow AI
Despite the risks, many healthcare workers are drawn to shadow AI tools for their perceived benefits. More than 50% of administrators and 45% of care providers reported using unauthorized products to expedite workflows. Nearly 40% of administrators and 27% of providers cited better functionality or a lack of approved alternatives as their reasons for using shadow AI. Curiosity and experimentation also played a role, with over 25% of providers and 10% of administrators admitting to using these tools simply to explore their capabilities.
Awareness of AI Policies Lags
A notable portion of healthcare workers are unaware of their organization’s AI policies. While administrators are more likely to be involved in AI policy development, only 29% of providers reported being aware of the main AI policies at their organization, compared to 17% of administrators. Many providers are familiar with policies surrounding AI scribes, tools that record conversations and draft clinical notes, but may not fully understand the broader scope of AI governance.
“So that might be why they are saying that they are aware, but they may not be fully aware of all the things that could be considered an AI policy,” Bonis said.
