Google AI Safety Pledge: UK Lawmaker Accusations

by Priyanka Patel

U.K. Parliamentarians Accuse Google DeepMind of AI Safety Pledge Violations

A cross-party group of 60 U.K. parliamentarians has formally accused Google DeepMind of breaching international commitments designed to ensure the safe development of artificial intelligence. The criticism, detailed in an open letter shared exclusively with TIME ahead of its public release on August 29, centers on the March launch of Google’s Gemini 2.5 Pro without simultaneous publication of complete safety testing data.

The letter, released by the activist group PauseAI U.K., argues that Google’s approach “sets a perilous precedent” and undermines efforts to proactively manage the risks associated with increasingly powerful AI systems. Signatories include prominent figures such as digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne, who emphasized the urgency of the situation.

For years, AI experts – including Google DeepMind CEO Demis Hassabis – have cautioned about the potential for catastrophic risks stemming from unchecked AI development. These concerns range from aiding malicious actors in creating biological weapons to facilitating cyberattacks on critical infrastructure. In May 2024, at an international AI summit co-hosted by the U.K. and South Korean governments, Google, OpenAI, and other leading AI developers signed the Frontier AI Safety Commitments. These pledges included a commitment to “publicly report” system capabilities, risk assessments, and details regarding external testing by independent AI safety institutes.

However, the parliamentarians contend that Google failed to uphold its end of the bargain with the release of Gemini 2.5 Pro, which the company touted as outperforming rival AI systems by “meaningful margins.” A detailed safety report was not made public for more than a month after launch, raising concerns about transparency and accountability. According to the letter, this delay represents a “failure to honor” international safety commitments and jeopardizes the fragile norms promoting safer AI development. “If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards,” Browne stated.

Google DeepMind maintains it is fulfilling its obligations. “We’re fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments,” a spokesperson told TIME via email. “As part of our development process, our models undergo rigorous safety checks, including by UK AISI and other third-party testers – and Gemini 2.5 is no exception.”

The open letter specifically calls on Google to adhere to its safety commitments and demands greater transparency from AI developers. It also highlights concerns about the safety records of other leading AI companies. “We’re seeing examples of AI models being released with limited information about their potential harms – for example, when they bump into pedestrians and when the brakes don’t work.”

Croft questioned the allocation of Google’s ample AI investment, asking, “How much of [Google’s] huge investment in AI is being channeled into public safety and reassurance and how much is going into huge computing power?”

Google is not alone in facing scrutiny. Elon Musk’s xAI has yet to release any safety report for its Grok 4 model, launched in July. Similarly, OpenAI’s February release of its Deep Research tool lacked a same-day safety report, with the company publishing it 22 days later despite claiming to have conducted “rigorous safety testing.”

Joseph Miller, director of PauseAI U.K., expressed concern about a pattern of apparent violations, explaining that the focus on Google stemmed from its prominent position and the fact that DeepMind, acquired by Google in 2014, remains headquartered in London. While U.K. Secretary of State for Science, Innovation and Technology Peter Kyle pledged during the 2024 election campaign to “require” AI companies to share safety tests, plans for AI regulation were reportedly delayed in February as the U.K. sought alignment with the more laissez-faire approach favored by the U.S. administration.

Miller argues that the time for voluntary commitments has passed, stating, “voluntary commitments are just not working,” and advocating for “real regulation” to ensure responsible AI development.
