AI-Powered Healthcare: Promise and Peril as ChatGPT Health Faces Regulatory Hurdles
The rise of artificial intelligence in healthcare offers unprecedented support for both patients and professionals, but concerns over data privacy and the potential for inaccurate diagnoses are raising red flags, notably in Europe.
OpenAI’s new AI tool, developed in collaboration with doctors, aims to augment, not replace, medical expertise. Its features include explaining test results, helping patients prepare for specialist appointments, and offering personalized guidance on nutrition and fitness. The path to widespread adoption, however, is fraught with challenges in the European Union, where regulations on artificial intelligence and medical data sharing are stricter than in the United States, where ChatGPT Health already has access to patients’ electronic health records. OpenAI maintains that robust data protection measures are in place and that conversations will not be used to train its AI models.
A Gap Filled by the Private Sector
For some, the emergence of AI-driven healthcare solutions highlights a critical failure of public systems to innovate. Professor Giovanni Briganti, Chair of Artificial Intelligence and Digital Medicine at UMons and professor of digital health at ULiège, believes ChatGPT Health is providing a service governments should have already delivered. “While Belgium continues to struggle to make health data usable for innovation, a private company has achieved what our health system should have: interoperability of health data,” he stated. “This allows patients to use an AI system that can potentially support them throughout their lives by integrating their medical reports and everyday health apps.”
The timing is particularly opportune, as healthcare systems grapple with growing demand and physician burnout. ChatGPT Health and similar models are arriving just as doctors are becoming less available to provide readily accessible health advice. This shift is not just about wait times; it marks a fundamental change in the patient-doctor relationship. “Young doctors no longer want to be available 24/7,” Professor Briganti explained. “We underestimate how much the health system relied on these informal exchanges, on the unpaid availability of physicians. This is where AI tools are gaining traction, offering round-the-clock access and advanced knowledge.”
The Shadow of “Hallucinations” and Patient Responsibility
Despite the potential benefits, experts warn of significant risks. A growing number of patients are already sharing health data with AI, and some are making critical decisions about their treatment, including altering or stopping medication, without consulting a physician. “We must acknowledge the reality: citizens haven’t been adequately prepared to use these tools,” one doctor lamented. “The use is happening, but these tools need to be properly regulated.”
A key concern is the potential for AI hallucinations, instances where the system generates inaccurate or misleading information. This risk is not necessarily mitigated by a health-focused model. For example, a patient describing flu-like symptoms who mentions recent travel to an exotic country might receive a diagnosis of a far more serious illness, because the AI lacks the nuanced understanding of a physician familiar with the patient’s specific context. Without that context, the model may gravitate toward rarer or more severe diagnoses.
Moreover, accessing healthcare through AI shifts responsibility onto the patient. Professor Briganti cautioned, “We must accept the consequences of consulting AI: if we modify a treatment based on its advice, we bear the responsibility.”
