EU to Ban AI Deepfakes & Target Platforms Like X (Grok)

By Priyanka Patel, Tech Editor

Brussels – European regulators are poised to significantly tighten the rules governing artificial intelligence, potentially curtailing the ability of platforms like X to host AI-generated explicit content. A key amendment to the EU’s proposed AI Act, passed by the European Parliament’s Internal Market and Civil Liberties committees on March 18, 2026, would ban “nudifier” systems – AI tools used to create or manipulate sexually explicit images resembling real people without their consent. The move comes amid growing concern over the proliferation of non-consensual deepfakes, particularly those generated by Elon Musk’s Grok AI on X, and marks a shift towards holding platforms accountable for the misuse of their technology.

The proposed ban isn’t absolute: AI systems with “effective safety measures” designed to prevent the creation of such images would be exempt. Still, the amendment signals a willingness to move beyond prosecuting individual users who create and share deepfakes, and instead to target the platforms that enable their production and dissemination. Bloomberg reported that the Grok scandal “epitomized” the need for this regulatory shift, which would mark the first EU policy specifically targeting AI platforms that facilitate the creation and sharing of sexual material without consent.


The push for stricter regulation follows mounting evidence of Grok’s capabilities in generating explicit content. Lawmakers had already been probing the AI system, considering the broader implications for other, less visible “nudify” apps. In January 2026, members of the European Parliament submitted questions to the European Commission, warning of an increase in AI-driven tools that allow the creation of manipulated intimate images without consent, potentially facilitating gender-based cyberviolence and the creation of child sexual abuse material. The lawmakers’ statement specifically cited “recent shocking reports of AI-powered nudity applications, such as Grok on X.”

Shifting the Burden of Responsibility

Currently, legal recourse for victims of deepfake pornography often involves pursuing individual perpetrators, a process that can be difficult and time-consuming. As the lawmakers noted, “individual perpetrators” are “often hard to find.” The proposed EU ban aims to proactively “prevent widespread image-based sexual violence from the outset” by placing greater responsibility on the platforms themselves. This approach reflects a growing recognition that simply punishing individuals after the fact is insufficient to address the scale of the problem.

The amendment’s potential passage is likely to draw criticism from Musk, who has faced a series of legal challenges in both the US and Europe related to Grok’s outputs. In January, Ashley St. Clair, the mother of one of Musk’s children, filed a lawsuit alleging the creation of non-consensual deepfake images. More recently, three young girls in Tennessee filed a proposed class action lawsuit, claiming that Grok was used to generate child sexual abuse material (CSAM) from their real photos.

Public Pressure Mounts for Intervention

Beyond the legal battles, public pressure is building on regulators to intervene. Michael McNamara, a member of the European Parliament’s civil liberties committee, stated that the proposed ban is “something that our citizens expect,” reflecting a growing public demand for greater protection against the harms of AI-generated deepfakes. The concern extends beyond explicit images to the broader implications for privacy, consent, and online safety.

The EU’s move comes as xAI, Musk’s company behind Grok, has been criticized for its perceived unwillingness to prevent the AI from generating explicit images of real people. The debate highlights the challenges of balancing innovation with ethical considerations and the need for clear regulatory frameworks to govern the development and deployment of AI technologies. The proposed amendment represents a significant step towards addressing these challenges and protecting individuals from the harms of non-consensual deepfakes.

What’s Next for the AI Act?

The amendment passed by the Internal Market and Civil Liberties committees will now move to a full vote before the European Parliament. If approved, it will become part of the broader AI Act, which is expected to be finalized later this year. The final text will then need to be negotiated with the European Council, representing the member states, before it can be formally adopted. The timeline for full implementation remains uncertain, but the EU’s commitment to regulating AI is clear. The next key checkpoint will be the full parliamentary vote, scheduled for April 2026, according to sources within the European Parliament.

This evolving legal landscape underscores the urgent need for tech companies to prioritize safety and ethical considerations in the development and deployment of AI technologies. The EU’s actions are likely to have a ripple effect globally, influencing the debate on AI regulation and setting a precedent for other countries to follow.

Have your say: What do you think of the EU’s proposed ban on AI “nudifiers”? Share your thoughts in the comments below.
