The integration of generative artificial intelligence into medical training is shifting from simple chatbots to sophisticated, co-designed simulations. A new research initiative led by Zhiqi Gao, Jiahuan (Joanne) Pei, and Benyou Wang is exploring the development of AI standardized patients for medical education, focusing on a participatory design process that involves medical learners themselves in creating the tools they will use to practice clinical skills.
Traditionally, medical students rely on “standardized patients”—trained actors who simulate specific clinical conditions to help students hone their diagnostic and communication skills. While effective, the human-led model is often limited by cost, scheduling conflicts, and the difficulty of maintaining strict consistency across different student interactions. The work by Gao, Pei, and Wang seeks to bridge this gap by leveraging AI to create scalable, high-fidelity synthetic patients.
This research is slated for presentation at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2026) in Barcelona. As the premier global forum for Human-Computer Interaction (HCI), the conference serves as a critical venue for validating how AI can be implemented in high-stakes professional environments without sacrificing the nuance of human interaction.
The Shift Toward Participatory Co-Design
Unlike many AI tools developed in isolation by software engineers, this project emphasizes “co-design.” This methodology ensures that the AI’s behavior, dialogue patterns, and clinical accuracy are informed by the actual needs and observations of medical learners. By involving students in the design phase, the researchers aim to avoid the “uncanny valley” of medical simulation, where an AI might provide technically correct answers but fail to mimic the emotional or behavioral cues of a real patient.

The co-design process typically involves iterative feedback loops in which medical students test the AI’s responses against real-world clinical scenarios. This ensures the synthetic patients can simulate not only the symptoms of a disease but also the psychological barriers—such as anxiety, hesitation, or confusion—that students must navigate during a real patient encounter.
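To make the idea of simulating psychological barriers concrete, here is a minimal sketch of one plausible mechanism: a synthetic patient that withholds sensitive history until the student has built sufficient rapport. The class name, the rapport thresholds, and the idea of a pre-scored empathy signal are all illustrative assumptions, not details of the published system.

```python
from dataclasses import dataclass

@dataclass
class SyntheticPatient:
    """Toy synthetic patient: facts are disclosed only once rapport
    crosses each fact's threshold. Thresholds are illustrative."""
    guarded_facts: dict  # fact -> rapport threshold required to disclose
    rapport: float = 0.0

    def respond(self, empathy_signal: float) -> list:
        """Update rapport from a (pre-scored) empathy signal and return
        any facts the patient is now willing to share."""
        self.rapport += empathy_signal
        disclosed = [f for f, t in self.guarded_facts.items()
                     if self.rapport >= t]
        for fact in disclosed:
            del self.guarded_facts[fact]  # a fact is only volunteered once
        return disclosed

patient = SyntheticPatient(guarded_facts={
    "chest pain started 3 days ago": 0.2,
    "history of alcohol use": 0.8,  # shared only after strong rapport
})
print(patient.respond(0.3))  # low rapport: only the easier fact emerges
print(patient.respond(0.6))  # rapport now 0.9: the guarded fact emerges
```

A state machine like this is something student co-designers could tune directly—adjusting which facts are guarded and how quickly rapport accumulates—which is exactly the kind of behavioral calibration the participatory process is meant to surface.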
This approach addresses a primary concern in medical AI: the risk of “hallucinations,” in which the model fabricates clinically inaccurate details. By anchoring the AI’s development in the expertise of learners and educators, the team creates a safeguard that aligns the technology with established medical curricula and diagnostic standards.
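One simple way such a safeguard could work—sketched here purely as an assumption, since the article does not describe the team’s implementation—is to filter each generated reply against an educator-vetted case file, so the synthetic patient can never report a symptom outside its assigned case.

```python
# Hypothetical guardrail sketch: clinical claims in a generated reply are
# checked against a case file authored by educators before reaching the
# student. The case structure and names are illustrative assumptions.

VETTED_CASE = {
    "diagnosis": "community-acquired pneumonia",
    "allowed_symptoms": {"productive cough", "fever", "pleuritic chest pain"},
}

def filter_reply(claimed_symptoms: set, case: dict) -> set:
    """Drop any symptom the model invented that the vetted case
    does not support, keeping only curriculum-grounded claims."""
    hallucinated = claimed_symptoms - case["allowed_symptoms"]
    if hallucinated:
        # In a real deployment this would be logged for curriculum review.
        print(f"blocked unsupported claims: {sorted(hallucinated)}")
    return claimed_symptoms & case["allowed_symptoms"]

safe = filter_reply({"fever", "hearing loss"}, VETTED_CASE)
```

The design choice here—an allow-list derived from the curriculum rather than a free-running model—trades expressiveness for the traceability that high-stakes training demands.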
Comparing Traditional and AI-Driven Simulations
The move toward AI-augmented simulation represents a fundamental change in how medical schools can scale their training. While human actors provide unmatched empathy, AI offers a level of accessibility and repetition that was previously impossible.
| Feature | Traditional Standardized Patients | AI Co-Designed Patients |
|---|---|---|
| Availability | Limited by actor schedules | On-demand, 24/7 access |
| Consistency | Variable by actor performance | Strictly standardized logic |
| Scalability | High cost per student session | Low marginal cost per user |
| Nuance | High emotional authenticity | Improving via co-design feedback |
Implications for Clinical Competency
The ultimate goal of these AI standardized patients is to improve performance in Objective Structured Clinical Examinations (OSCEs), the rigorous practical tests medical students must pass to prove their competency. By practicing with AI that has been refined by their peers, students can engage in “low-stakes” failure—making mistakes and correcting them in a safe environment before interacting with actual patients.
Beyond the technical skill of diagnosis, the research highlights the importance of communication. The co-design element allows the researchers to program the AI to react to the student’s tone and empathy levels, providing a mirror for students to observe how their bedside manner affects the “patient’s” willingness to share critical information.
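How an AI might “read” a student’s tone is not specified in the article; as a hedged illustration only, a first approximation could score each utterance against lexical cues of empathy or abruptness. A production system would use a trained classifier—the cue lists below are toy assumptions.

```python
# Toy empathy scorer over the student's wording. Cue lists are illustrative
# assumptions; a real system would use a trained model, not keywords.

EMPATHY_CUES = {"i understand", "that sounds", "take your time", "i'm sorry"}
ABRUPT_CUES = {"just answer", "hurry", "irrelevant"}

def empathy_score(utterance: str) -> float:
    """Crude keyword-based score in [-1, 1] for how empathic an
    utterance reads; 0.0 when no cues are present."""
    text = utterance.lower()
    pos = sum(cue in text for cue in EMPATHY_CUES)
    neg = sum(cue in text for cue in ABRUPT_CUES)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

A score like this could then drive the synthetic patient’s willingness to share information, giving students the “mirror” on bedside manner the researchers describe.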
Stakeholders and Future Adoption
The adoption of such systems affects a wide array of stakeholders in the healthcare ecosystem:
- Medical Students: Gain more frequent, personalized practice and immediate feedback on their diagnostic pathways.
- Educators: Can deploy a wider variety of rare clinical cases that would be difficult or expensive to simulate with human actors.
- Healthcare Institutions: Reduce the overhead costs associated with hiring and training large cohorts of simulation actors.
- Patients: Ultimately benefit from physicians who have had more extensive, standardized training in both clinical logic and interpersonal communication.
Despite the promise, the transition to AI-led simulation is not without constraints. The researchers must continue to address the “black box” nature of some large language models, ensuring that the AI’s reasoning for a specific symptom can be traced back to verified medical literature.
Disclaimer: This article is for informational purposes only and does not constitute medical advice or a validation of specific medical training protocols.
The next major milestone for this research will be its peer-reviewed publication and formal presentation at CHI 2026 in Barcelona, where the team will share their findings on how co-design improves the efficacy of synthetic patient interactions.
We invite readers to share their thoughts on the role of AI in medical training in the comments below.
