OpenAI’s Risky Pivot: Balancing Safety Concerns with the Lure of AI Erotica
OpenAI, the artificial intelligence giant behind ChatGPT, is navigating a treacherous path, simultaneously rolling out new protections for teenage users while preparing to launch a feature generating explicit sexual content. This dual strategy has sparked outrage among safety advocates and mental health experts, who warn the move undermines the company’s stated commitment to responsible AI development and could expose vulnerable individuals to significant harm.
Last week, the company announced a new age-estimation model designed to identify users under 18 and automatically apply stricter safeguards. This initiative, OpenAI claims, reflects a dedication to user safety. However, that commitment appears sharply at odds with the impending introduction of erotica within ChatGPT, a feature slated for release this quarter.
Details surrounding the new content remain scarce. OpenAI has not clarified whether the feature will encompass solely text-based interactions or extend to AI-generated images and videos, nor how it will be segregated from the standard ChatGPT experience. The company has stated only that access to erotica will be restricted to adult users and subject to additional safety measures.
This juxtaposition has drawn fierce criticism. “The shift to erotica is a very dangerous leap in the wrong direction,” says Jay Edelson, an attorney representing the family of Adam Raine, a 16-year-old from Orange County, California, who died by suicide in 2025. In the weeks leading up to his death, Raine reportedly spent four hours a day interacting with ChatGPT, including making inquiries related to self-harm. “The problem with GPT is attachment; this will only exacerbate that,” Edelson adds. OpenAI has expressed condolences to the Raine family but maintains it bears no responsibility for the tragedy.
The move represents a significant departure for a company founded in 2015 with the stated mission of developing AI “for the benefit of all humanity.” Initially a non-profit organization, OpenAI has rapidly evolved into a commercial enterprise, boasting approximately 800 million weekly users and an estimated valuation of $500 billion. This transformation has been accompanied by a shift away from open-source research and a focus on capital generation and revenue growth.
The financial pressures facing OpenAI are substantial. Despite raising over $60 billion in funding, the company reportedly incurred a $9 billion loss in 2025, according to leaked documents reported by The Wall Street Journal. Analysts project losses could climb to $74 billion by 2028, driven by the immense costs of training and operating its advanced AI models.
This financial strain appears to be influencing strategic decisions. In September 2025, OpenAI launched Sora 2, a video-generation social media platform, in an attempt to attract new users, despite acknowledging the platform’s limited benefit to humanity and the unsustainable economics of video model processing. More recently, the company began testing advertising for some US users, reversing a previous stance articulated by CEO Sam Altman, who once described ads combined with AI as “uniquely unsettling.” He stated in May 2024, “I kind of think of ads as a last resort for us for a business model.”
The introduction of erotica, therefore, appears as a logical, albeit controversial, next step. The digital adult content industry is a lucrative market, estimated to be worth $81.86 billion in 2025, a figure that is likely an underestimate. OpenAI’s leadership has framed the decision as a matter of user freedom, with Altman asserting on X last October, “We are not the elected moral police of the world. Allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission.”
However, experts warn that AI-generated pornography presents unique risks. Unlike traditional pornographic content, these systems respond and adapt to user requests, potentially fostering prolonged engagement and deepening vulnerability. While currently “niche,” the technology is poised to be built into the mainstream chatbots that people rely on for work and study.
The rise of “AI companions” – chatbots designed to simulate friendship or romance – further complicates the landscape. Researchers have observed users forming strong emotional bonds with these systems, sometimes leading to emotional dependence and even simulated marriage. Tech CEOs are “cashing in” on systemic loneliness, according to Meetali Jain, executive director of the Washington-based Tech Justice Law Project (TJLP). “We saw the first iteration with social media,” she says.
Psychiatrists have expressed concern that the always-available, sycophantic nature of chatbots can create unhealthy codependencies and exacerbate existing mental health conditions. “It seems like really not the right time to introduce this sexual charge to these conversations, to users who are already struggling,” one psychiatrist noted. OpenAI’s own data from October 2025 indicates that as many as 560,000 users per week (roughly 0.07% of its weekly user base) exhibit “possible signs of mental health emergencies related to psychosis or mania.”
OpenAI is currently facing eight lawsuits in the US, filed by TJLP, alleging inadequate safety guardrails led to mental health crises and deaths among teenagers and adults. While OpenAI denies causing harm, it has since implemented additional safeguards. The company maintains that even in the erotic mode, it “will still not allow content that harms others.”
Children remain particularly vulnerable to manipulation, but concerns extend to adults as well. Jain warns that generative AI’s sophistication renders “all of us vulnerable.” Fidji Simo, OpenAI’s chief executive of applications, stated in December that the company intends to refine its age-estimation capabilities before launching the erotic feature, with a planned rollout by April.
On January 20, OpenAI released its new age-prediction system for consumer plans, a model that analyzes user behavior – account age, activity times, and usage patterns – to estimate age without requiring sensitive documentation. While child-safety experts acknowledge this as a positive step, they caution that age-prediction models remain largely untested at scale. OpenAI claims its model outperforms industry standards, but has not publicly released supporting data.
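How such a behavior-based estimator might work is easiest to see in miniature. The sketch below is purely illustrative: OpenAI has not published its model’s features, weights, or thresholds, so the signal names, coefficients, and cutoff here are all invented. It shows only the general shape of the approach, a classifier over usage signals that routes borderline accounts into the stricter under-18 experience.

```python
# Hypothetical sketch of a behavior-based age estimator. Every feature,
# weight, and threshold below is invented for illustration; OpenAI has
# disclosed none of the details of its actual system.
from dataclasses import dataclass
import math


@dataclass
class UsageSignals:
    account_age_days: int      # how long the account has existed
    late_night_ratio: float    # share of sessions between 23:00 and 05:00
    after_school_ratio: float  # share of sessions between 15:00 and 18:00


def minor_probability(s: UsageSignals) -> float:
    """Return a rough probability that the user is under 18.

    A hand-tuned logistic score stands in for whatever learned model a
    production system would use; the signs of the weights are guesses.
    """
    score = (
        -0.002 * s.account_age_days    # long-lived accounts assumed to skew adult
        - 1.0 * s.late_night_ratio     # assumed to skew adult
        + 2.5 * s.after_school_ratio   # assumed to skew under-18
        + 0.2                          # bias term
    )
    return 1.0 / (1.0 + math.exp(-score))  # squash the score into [0, 1]


def route_experience(s: UsageSignals, threshold: float = 0.5) -> str:
    """Err on the side of caution: apply teen safeguards at or above the cutoff."""
    if minor_probability(s) >= threshold:
        return "teen-safeguards"
    return "adult"


if __name__ == "__main__":
    user = UsageSignals(account_age_days=40, late_night_ratio=0.05,
                        after_school_ratio=0.55)
    print(route_experience(user))  # -> teen-safeguards
```

The one design choice worth noting is the direction of the default: because misclassifying a minor as an adult is the costlier error, any such system would plausibly be tuned so that ambiguous accounts fall into the stricter under-18 experience, which matches OpenAI’s stated behavior of automatically applying safeguards to users it estimates are under 18.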
Sam Stockwell, a researcher at the Alan Turing Institute specializing in online safety, describes the tool’s release as “promising to see,” but expresses concern that behavior-based age prediction may not accurately reflect a user’s true age. Leanda Barrington-Leach, executive director of the children’s rights group 5Rights Foundation, highlights potential legal issues under UK and EU data-protection rules, specifically GDPR and the UK’s Age Appropriate Design Code, which restrict the processing of children’s data without parental consent.
The situation is further complicated by the legal battles initiated by Elon Musk, who left OpenAI’s board in 2018 and is now suing the company, alleging it has abandoned its original mission. However, Musk’s own AI chatbot, Grok, integrated into his social media platform X, has been implicated in generating non-consensual sexualized images and illegal child sexual abuse material, as reported by the Internet Watch Foundation.
Zoe Hawkins, co-founder of the Tech Policy Design Institute, emphasizes that the harm caused by Grok serves as a “cautionary tale,” illustrating the rapid deployment of AI products outpacing regulatory efforts. Enforcement, she notes, is often a game of “whack-a-mole.”
While ChatGPT is currently overseen by the UK regulator Ofcom as a “search service,” the addition of an erotic feature will undoubtedly challenge the existing regulatory framework. An Ofcom spokesperson affirmed that sites offering pornographic content, including AI-generated material, must employ “highly effective age assurance” to prevent access by children, and that the regulator “will not hesitate to use the full force of our powers” against non-compliant services.
Jain and Edelson argue that the risks to adults alone should prompt OpenAI to reconsider. “What I fear is that erotica is simply too enticing as a revenue stream.” The company’s trajectory, from a non-profit dedicated to benefiting humanity to a commercial enterprise prioritizing growth and monetization, raises fundamental questions about its values and its responsibility to protect its users.
