Brussels – European regulators are poised to enact sweeping restrictions on artificial intelligence tools capable of generating non-consensual intimate imagery, a move spurred by growing concerns over the misuse of platforms like X, formerly Twitter, and its AI chatbot, Grok. A key vote on Wednesday within the European Parliament’s Civil Liberties Committee advanced the “AI Omnibus” legislation, which includes a prohibition on AI systems creating realistic images depicting sexual activity or intimate body parts of identifiable individuals without their explicit consent. This represents a significant escalation in the EU’s efforts to regulate artificial intelligence and protect citizens from online harms.
The proposed ban isn’t absolute, however. Companies demonstrating robust safeguards against misuse, such as effective deepfake detection mechanisms, would be exempt. This carve-out reflects a desire to balance safety with the continued development of AI technology. The Parliament’s approval aligns it with the stance of European governments, increasing the likelihood of the measure becoming law later this year, according to officials involved in the process.
Addressing a Growing Threat
The impetus for this legislative push stems from a surge in reports of AI-generated sexual imagery, particularly following the release of Grok. The chatbot, developed by xAI, an entity linked to Elon Musk, was reportedly exploited to create thousands of non-consensual images of women and children, often based on existing photographs. Although xAI subsequently restricted the feature in response to widespread criticism, the incident highlighted the potential for AI to be weaponized for malicious purposes. One report found that Grok produced sexualized images of identifiable people at a rate 85 times higher than comparable services.
“It’s no longer enough to simply address the actions of individuals who create and share these images,” explained a European Parliament source familiar with the negotiations, speaking on background. “We need to target the tools themselves and hold developers accountable for preventing their misuse.”
Building on Existing Legislation
The proposed AI restrictions build upon existing European laws designed to combat online sexual abuse and the non-consensual sharing of intimate images. A 2024 directive addressing violence against women already criminalizes the use of AI to produce sexual imagery without consent. The Digital Services Act (DSA) imposes obligations on social media platforms to remove illegal content, including child sexual abuse material. This new legislation aims to broaden the scope of these protections to encompass the technology enabling the creation of such content.
The focus on the technology itself, rather than solely on individual users, reflects a recognition of the rapidly evolving capabilities of AI models. These models are becoming increasingly adept at generating realistic and convincing depictions of real people, making it more difficult to detect and remove harmful content.
Investigation into X and xAI
The case of Grok has also prompted investigations by authorities in both the European Union and the United Kingdom into X and xAI. These investigations, launched in January, seek to determine whether the platforms violated laws related to content moderation and online safety. xAI did not immediately respond to a request for comment, according to reports.
Navigating the Challenges of Consent
While the proposed ban is widely supported, questions remain about its practical implementation. A key challenge lies in verifying consent. If the legislation is enacted, companies developing AI systems will be required to demonstrate that their technologies incorporate safeguards against the creation of non-consensual imagery. However, the precise mechanisms for verifying consent remain unclear.
“How do you prove that someone has not consented to their image being used?” asked a legal expert specializing in AI regulation. “That’s a complex question that will require careful consideration and the development of robust verification protocols.”
Broader AI Legislation on the Horizon
The current proposal is part of a broader effort to update the EU’s AI legislation. A more comprehensive overhaul of the rules is underway, with plans to simplify regulations and postpone implementation deadlines for certain aspects of the law. The implementation of rules related to high-risk applications of AI, originally slated for August 2026, has been pushed back to December 2027 and August 2028. This delay is intended to allow specialized organizations to develop detailed guidance on compliance and provide greater clarity to businesses.
The EU is also working to address concerns about the potential impact of the new regulations on innovation. The revised timeline aims to strike a balance between protecting citizens and fostering the responsible development of AI technologies.
The evolving landscape of artificial intelligence demands constant vigilance and adaptation. As AI models become more sophisticated, the EU is committed to ensuring that these technologies are used ethically and responsibly, safeguarding the rights and freedoms of its citizens. The next key step will be final approval of the “AI Omnibus” legislation by the European Council, expected in the coming months.
This is a developing story.
