UCL Develops New Tool to Assess Nutrition Misinformation Risks

by Grace Chen

For years, the fight against online health misinformation has relied on a binary approach: a claim is either true or false. But in the nuanced world of diet and nutrition, the most dangerous content rarely consists of outright lies. Instead, it thrives in the gray area of half-truths, omitted context and exaggerated promises—the kind of selective framing that can lead a person to abandon life-saving medicine for an unproven supplement.

To address this gap, researchers at University College London (UCL) have developed the Diet-Nutrition Misinformation Risk Assessment Tool (Diet-MisRAT), which moves beyond simple fact-checking. Rather than merely flagging whether a post is "wrong," it estimates the potential for real-world harm on a graded scale.

The tool arrives at a critical juncture in public health. According to the World Health Organization (WHO), health misinformation spread online is a major public health threat that can lead to disastrous, and sometimes fatal, outcomes. From the promotion of extreme fasting to the misuse of dietary supplements, the digital landscape has become a breeding ground for advice that contradicts established science and endangers vulnerable populations, particularly adolescents.

Moving Beyond Binary Fact-Checking

Most existing misinformation detectors operate like a light switch—on or off. However, nutrition misinformation often operates through selective framing that masks potential risks while highlighting a single, perhaps true but misleading, benefit. This allows harmful content to bypass traditional fact-checkers until a high-profile tragedy makes the headlines.

Diet-MisRAT is a rule-based content analysis model that adopts the WHO’s framework for assessing hazardous exposures in digital environments. Rather than a simple “true or false” verdict, the tool analyzes content through four distinct dimensions to determine its risk level:

  • Inaccuracy: Whether the core facts presented are objectively wrong.
  • Incompleteness: Whether critical information or necessary caveats are missing.
  • Deceptiveness: Whether the content is framed in a way that intentionally misleads the reader.
  • Health Harm: Whether the advice could reasonably lead a user to engage in dangerous behavior.

By posing structured questions about whether a claim is exaggerated or contradicts established science, the tool generates a risk score. The higher the score, the more likely the content is to be harmful, allowing regulators and educators to rank and prioritize interventions.
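The scoring logic described above can be sketched in code. The four dimensions are taken from the article, but the specific questions, weights, and thresholds below are purely illustrative assumptions, not the actual Diet-MisRAT rules:

```python
# Hypothetical sketch of a rule-based risk scorer in the spirit of
# Diet-MisRAT. Dimension names come from the published tool; the
# weights and band thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class ClaimAssessment:
    # Each field records an assessor's yes/no answer to one
    # structured question about a piece of content.
    contradicts_evidence: bool   # Inaccuracy
    omits_caveats: bool          # Incompleteness
    uses_hyperbole: bool         # Deceptiveness
    could_cause_injury: bool     # Health Harm

# Assumed weights: harm-related answers count more than framing ones.
WEIGHTS = {
    "contradicts_evidence": 2,
    "omits_caveats": 1,
    "uses_hyperbole": 1,
    "could_cause_injury": 3,
}

def risk_score(a: ClaimAssessment) -> int:
    """Sum the weights of every dimension the assessor flagged."""
    return sum(w for field, w in WEIGHTS.items() if getattr(a, field))

def risk_band(score: int) -> str:
    """Map a raw score onto a graded scale for triage (thresholds invented)."""
    if score >= 4:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

# Example: a claim that omits side effects and could cause injury
# scores 1 + 3 = 4, landing in the "high" band under these thresholds.
claim = ClaimAssessment(False, True, False, True)
print(risk_band(risk_score(claim)))
```

A graded output like this, rather than a true/false verdict, is what lets regulators rank content by potential harm and prioritize interventions accordingly.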

The Physical Cost of Digital Advice

The motivation for this tool is rooted in alarming clinical trends. The research team highlights that restrictive diets and the unmonitored use of dietary supplements are significant contributors to drug-induced liver injuries in the United States. The danger is not merely theoretical; it is documented in medical records.

The study, published in Scientific Reports, notes that misinformation has been implicated in decisions to abandon life-saving treatments. In some cases, patients with curable cancers have opted for unproven dietary alternatives, an approach linked to mortality rates twice as high as those of patients who followed standard medical care.

Other clinical reports cited in the research illustrate the extremes of these trends. One man developed severe cholesterol-induced skin lesions after following a “carnivore diet,” a trend often amplified within “manosphere” online subcultures. Another individual suffered hazardous metallic layering in the colon after ingesting colloidal silver drops promoted in some naturopathic circles. Most tragically, the researchers point to the death of an adolescent girl who adhered to a water-only fasting regime discovered online.


Expert Calibration and the AI Challenge

To ensure the tool reflects professional medical judgment rather than algorithmic bias, the researchers calibrated Diet-MisRAT through five verification rounds. This process included input from nearly 60 specialists across the fields of public health, dietetics, and nutrition.

“It is essential to include specialist expertise when assessing misinformation risk. Our tool was calibrated and validated with feedback from nearly 60 subject-matter experts. This helps ensure that assessments of potential harm reflect appropriate professional judgment,” says co-author Professor Anastasia Kalea of UCL’s Division of Medicine.

The urgency of this tool is magnified by the rise of generative AI. When AI chatbots deliver health advice with absolute confidence, users often assume the information is safe. This “confidence gap” makes it easier for misleading advice to be accepted as fact.

Alex Ruani, a doctoral researcher at UCL and the lead developer of the tool, argues that the same logic used to assess environmental risk factors should be applied to digital information. By measuring the potential harm of a piece of advice before it is widely consumed, safeguards can be built directly into AI agents and models.


Scaling the Response to the ‘Infodemic’

The implications of the Diet-MisRAT model extend beyond software. The authors suggest that the tool’s risk assessment criteria can be used for professional training and public education, helping people develop a “misinformation inoculation” to better identify deceptive framing on their own.

This scalable approach is necessary given the reach of digital “super-spreaders.” A previous study indicated that following the health advice of just 53 high-reach social media influencers could potentially put up to 24 million people at risk of serious health consequences.

Diet-MisRAT Assessment Dimensions

  Dimension        Key Focus             Risk Indicator
  Inaccuracy       Factual correctness   Direct contradictions of scientific evidence
  Incompleteness   Context and caveats   Omission of side effects or contraindications
  Deceptiveness    Framing and intent    Use of hyperbole or selective data reporting
  Health Harm      Behavioral outcome    Potential for acute or chronic physical injury

As public health authorities and policymakers debate how to regulate digital health content, the goal is to create responses proportionate to the risk. According to Ruani, the more severe the potential harm, the stronger the regulatory or educational response should be.

Disclaimer: This article is for informational purposes only and does not constitute medical advice. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition or dietary changes.

The research team intends for the tool to serve as a foundation for more robust content oversight and the development of safer AI health agents. Further integration of these risk-stratification models into social media moderation systems remains a key area for future public health policy and technical development.
