AI Chatbots and Photo Manipulation: The “Bikini Stripping” Problem

by Priyanka Patel

Some users are exploiting popular chatbots to create sexually explicit deepfakes of women, often without their knowledge or consent.

AI Tools Used to Generate Nonconsensual Images

The rise of generative AI is raising serious ethical concerns about the creation and spread of intimate, nonconsensual imagery.

  • Generative AI tools are being misused to create deepfake images, specifically targeting women.
  • A now-deleted Reddit post detailed how to manipulate Google’s Gemini model to generate revealing images.
  • Reddit banned a subreddit, r/ChatGPTJailbreak, with over 200,000 followers for violating its policies against nonconsensual intimate media.
  • As AI imaging models improve, the potential for realistic deepfakes and circumvention of safety measures increases.

A disturbing trend has emerged where individuals are using generative AI chatbots to fabricate sexually suggestive images of women, often starting with existing photos of them fully clothed. The practice raises significant privacy and ethical concerns, as these images are created and shared without the subjects’ permission.

Reddit Discussions and Policy Violations

The issue came to light following a now-deleted post on Reddit, titled “gemini nsfw image generation is so easy.” Users exchanged advice on how to leverage Google’s Gemini to produce pictures of women in revealing attire. One particularly troubling request involved a user submitting a photo of a woman in a traditional Indian sari and asking for it to be altered to depict her wearing a bikini. Another user promptly fulfilled the request with a deepfake image.

After being alerted to the posts, Reddit’s safety team removed the request and the resulting deepfake. A Reddit spokesperson stated, “Reddit’s sitewide rules prohibit nonconsensual intimate media, including the behavior in question.” The subreddit where this discussion took place, r/ChatGPTJailbreak, which had amassed over 200,000 followers, was subsequently banned for violating the platform’s “don’t break the site” rule.

Proliferation of Harmful AI Websites

The misuse of generative AI extends beyond Reddit. Millions of users have visited websites specifically designed to “nudify” images, allowing them to upload photos of individuals and request AI-generated depictions of them undressed. This highlights a broader pattern of harassment and exploitation facilitated by readily available AI technology.

Evolving AI Capabilities and Safety Measures

While most mainstream chatbots, with the exception of xAI’s Grok, typically restrict the generation of explicit content, users are continually finding ways to bypass these safeguards. Google released Nano Banana Pro in November, an imaging model capable of sophisticated photo manipulation and realistic image generation. OpenAI responded last week with its own updated imaging model, ChatGPT Images.

As these tools become more advanced, the resulting deepfakes are becoming increasingly realistic, making it harder to distinguish genuine images from fabricated ones. In tests, basic prompts were enough to transform images of fully clothed women into deepfakes depicting them in bikinis using both Gemini and ChatGPT.

Circumventing AI Guardrails

In a separate Reddit thread, a user sought advice on how to modify prompts to avoid AI-imposed restrictions, specifically asking how to make a skirt appear tighter on a subject in an image. This underscores the ongoing challenge of preventing the misuse of AI technology for malicious purposes.

What are deepfakes? Deepfakes are synthetic media—images, videos, or audio—that have been manipulated to replace one person’s likeness with another. They are often created using artificial intelligence and can be used to spread misinformation or create nonconsensual intimate imagery.
