The internet has developed into a primary source of health information for many, but this accessibility comes with a significant risk: the proliferation of nutrition misinformation. A new tool developed by researchers at University College London (UCL) aims to address this growing problem, going beyond simply labeling content as “true” or “false” to assess the potential harm it could cause. This nuanced approach is crucial, as misleading information – even if not demonstrably false – can have serious consequences for individual and public health.
From restrictive diets and dangerous fasting regimes to the unsafe use of dietary supplements, the World Health Organization (WHO) recognizes health misinformation as a major public health threat. In the United States alone, it’s estimated that improperly used supplements account for 20% of drug-induced liver injuries. The problem is compounded by the fact that harmful content often slips past traditional fact-checkers, operating through selective framing that obscures potential risks, according to Alex Ruani, lead author and developer of the new tool.
Assessing Risk, Not Just Truth
The tool, formally named the Diet-Nutrition Misinformation Risk Assessment Tool, or Diet-MisRAT, takes a novel approach to identifying and evaluating misleading information. Unlike existing methods that focus on factual accuracy, Diet-MisRAT adapts the WHO’s established framework for assessing hazardous exposures – typically used in physical environments – to the digital realm. It treats online content as the “medium” and misleading traits as “risk agents” that can increase a user’s susceptibility to harm.
The tool doesn’t simply flag inaccurate claims; it analyzes the context, framing, and potential impact of the information. It then ranks content as green (low risk), amber (moderate risk), or red (critical risk) based on a weighted misinformation risk score. This scoring system considers not only the content itself, but also how it’s presented and the likelihood that a user will be misled. For example, a claim stating “it is safer to give your child high-dose vitamin A than the MMR vaccine” would be immediately classified as critical risk due to its false safety framing, omission of the dangers of excessive vitamin A, and undermining of public health guidance.
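To make the traffic-light idea concrete, here is a minimal sketch of how a weighted misinformation risk score could feed into green/amber/red bands. The trait names, weights, and cutoffs below are invented for illustration; they are not the actual Diet-MisRAT values, which the paper derived through expert calibration.

```python
# Hypothetical sketch of a weighted traffic-light risk score.
# Trait names, weights, and thresholds are invented for illustration;
# they are NOT the actual Diet-MisRAT parameters.

TRAIT_WEIGHTS = {
    "inaccuracy": 3.0,           # demonstrably false claim
    "hazardous_omission": 2.5,   # leaves out a known risk
    "manipulative_framing": 2.0, # false safety/efficacy framing
    "high_prominence": 1.0,      # widely amplified online
}

def risk_score(traits):
    """Sum the weights of the misleading traits detected in the content."""
    return sum(TRAIT_WEIGHTS[t] for t in traits)

def classify(score, amber_cutoff=2.0, red_cutoff=5.0):
    """Map a weighted score onto green (low), amber (moderate), red (critical)."""
    if score >= red_cutoff:
        return "red"
    if score >= amber_cutoff:
        return "amber"
    return "green"

# The vitamin A vs. MMR claim combines several high-weight traits,
# so it lands in the critical band under this toy scheme:
claim_traits = ["inaccuracy", "hazardous_omission", "manipulative_framing"]
print(classify(risk_score(claim_traits)))  # red
```

The point of the weighting is that the same factual error can carry very different risk depending on framing and reach, which is why a single trait rarely determines the band on its own.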
A Rigorous Validation Process
The development of Diet-MisRAT wasn’t solely a theoretical exercise. Researchers rigorously tested and calibrated the tool through five rounds of verification, incorporating feedback from nearly 60 specialists in dietetics, nutrition, and public health. This collaborative process ensured that the tool’s assessments reflect professional judgment and accurately identify the core traits of misinformation: inaccuracy, hazardous omissions, and manipulative framing. The testing also pinpointed indicators that amplify risk potential, such as the method and conditions of content consumption and its prominence online.
Professor Anastasia Kalea, a co-author of the study from the UCL Division of Medicine, emphasized the importance of specialist expertise in assessing misinformation risk. “It is essential to include specialist expertise when assessing misinformation risk,” she said. “Our tool was calibrated and validated with feedback from nearly 60 subject-matter experts. This helps ensure that assessments of potential harm reflect appropriate professional judgement.”
Real-World Examples of Harm
The dangers of online nutrition misinformation are not hypothetical. Researchers cite several recent cases illustrating the potential for real-world harm. In 2025, a man was diagnosed with cholesterol-induced skin lesions after adopting a carnivore diet, a trend amplified by social media algorithms, particularly within online communities known as the “manosphere.” More alarmingly, a person was hospitalized after following AI-generated advice to replace sodium chloride (table salt) with sodium bromide, a toxic substance. Misinformation has also contributed to individuals abandoning life-saving cancer treatments in favor of unproven dietary alternatives.
Implications for Platforms and Policymakers
The development of Diet-MisRAT comes at a critical time, as digital platforms, public health authorities, and policymakers grapple with the growing influence of misleading health advice online, particularly on social media, in search summaries, and through generative AI. Ruani argues that misleading health information should be treated like any other public health risk. “In public health we assess exposure to risk factors. We believe misleading health information should be treated in the same way. Some misinformation can lead to serious harm, so mitigation strategies should be proportionate to the level of risk,” she explained.
The tool’s ability to measure the degree of misleading information and potential harm could be instrumental in building stronger safeguards into AI models and agents before they are deployed, rather than reacting after harm occurs. Professor Michael Reiss, a co-author from the UCL Institute of Education, added that the tool’s risk assessment criteria can be incorporated into educational programs and professional training, equipping individuals to recognize and challenge misinformation.
The research, published in Scientific Reports, offers a valuable framework for addressing the complex challenge of nutrition misinformation. As AI-powered tools become increasingly prevalent, the need for robust risk assessment mechanisms like Diet-MisRAT will only continue to grow. The next step for the researchers involves exploring collaborations with digital platforms to integrate the tool into content moderation systems and further refine its capabilities.
Have you encountered nutrition misinformation online? Share your experiences and thoughts in the comments below.
