Discord, the popular communication platform boasting over 150 million monthly active users, is moving forward with plans to verify the ages of its users. While framed as a safety measure to protect children and teens, the initiative is raising serious privacy concerns among digital rights advocates and highlighting a broader trend of platforms demanding more personal data in the name of security. The Free Software Foundation (FSF) is among those sounding the alarm, arguing that Discord’s approach relies on “nonfree, invasive programs” and erodes user trust.
The core of Discord’s new policy is an “age inference model”: a machine learning system that estimates a user’s age from behavioral and account signals. According to a recent press release, the model operates in the background, and only when it is highly confident in its assessment will it categorize a user. If confidence is low, users are prompted to provide further verification, potentially including biometric data or government identification. Discord states that message content is not used in the age estimation model.
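Discord has not published the model itself, but the flow it describes follows a familiar pattern: score the account, act silently only above a confidence threshold, and escalate to explicit verification otherwise. The sketch below illustrates that general pattern; every name, type, and threshold in it is hypothetical, not Discord's actual implementation.

```python
# Hypothetical sketch of a confidence-gated age-inference flow.
# All names and the threshold value are illustrative assumptions;
# Discord has not disclosed how its system works internally.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # assumed; Discord has published no value


@dataclass
class AgeEstimate:
    is_adult: bool      # the model's best guess
    confidence: float   # model confidence in [0, 1]


def resolve_age_status(estimate: AgeEstimate) -> str:
    """Decide what happens to an account based on model confidence."""
    if estimate.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: categorize silently in the background.
        return "adult" if estimate.is_adult else "minor"
    # Low confidence: escalate to explicit verification
    # (e.g. ID upload or biometric check, per Discord's policy).
    return "verification_required"


print(resolve_age_status(AgeEstimate(is_adult=True, confidence=0.99)))   # adult
print(resolve_age_status(AgeEstimate(is_adult=False, confidence=0.60)))  # verification_required
```

Note that in such a design the privacy question hinges entirely on what feeds the confidence score, which is precisely the part Discord has not disclosed.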
However, the lack of transparency surrounding the “age inference model” is a key point of contention. Discord has not detailed which data points will be analyzed, whether the criteria will be consistent across all users, or how much data the system will examine before requesting more sensitive information. Reports suggest the system could analyze a significant portion of a user’s activity on the platform. Critically, Discord’s current policy offers no opt-out from this background scanning short of deleting the account entirely.
The Risks of Proprietary Age Verification
The FSF’s concerns stem from the fundamental nature of Discord’s software. As a nonfree platform, its code is closed-source, meaning users have no way to independently verify how their data is being collected, used, or protected. This lack of transparency is particularly troubling given the sensitive nature of the information potentially being analyzed. The Electronic Frontier Foundation (EFF) has similarly highlighted the dangers of age verification policies, arguing they inevitably lead to the collection of personal data and increased surveillance.
“Discord and its vendors looking through our data isn’t just creepy, it’s dangerous,” the FSF wrote in a recent blog post. Once data is collected, it is vulnerable to misuse, leaks, or hacks. The reliance on third-party vendors further complicates the issue, as data is shared beyond Discord’s direct control.
A Troubled Track Record
Discord’s history offers little reassurance. Just months after age verification measures were implemented in the UK, the platform disclosed a data breach affecting approximately 70,000 users. The breach, which impacted a third-party customer service provider, exposed sensitive information including login details, payment information, IP addresses, and even messages sent to support agents. This incident underscores the risks inherent in storing large amounts of user data, especially when relying on external partners.
Prior to delaying the rollout, Discord briefly partnered with Persona, an identity verification service backed by Peter Thiel, co-founder of the data analytics firm Palantir, which the FSF has identified as a tool for mass surveillance. User reports indicated prompts for age verification through Persona, raising concerns about broad data collection. An uncompressed version of Persona’s code was even discovered on a federally authorized server, fueling speculation about government access to user data. While Discord ultimately ended its partnership with Persona, it has not ruled out working with similar vendors.
Public Pressure and Future Steps
Facing significant public backlash, Discord announced a delay in the full rollout of its age verification policy, pushing it back to the latter half of 2026. In a revised statement, the company pledged to “expand verification options, increase vendor transparency, and publish detailed technical documentation, while continuing to meet regulatory requirements where needed.”
This delay demonstrates the power of public pressure, but advocates argue that more is needed. The FSF is calling on Discord to release its code, allowing independent security audits and ensuring transparency. Without open-source code, users are left to trust Discord’s assurances, which are insufficient given the platform’s track record and the inherent risks of data collection. Discord has promised that its vendor partners will delete data quickly, but the lack of transparency makes it impossible to verify whether data is truly removed, how quickly, or how it is handled in the interim.
The debate surrounding Discord’s age verification policy is part of a larger conversation about data privacy and the trade-offs between security and freedom online. As more platforms adopt similar measures, the need for transparency, user control, and open-source alternatives becomes increasingly critical.
Discord’s next step, as promised, is to publish a detailed blog post explaining the mechanics of its age verification process. However, the platform’s commitment to transparency will be truly tested by the level of detail it provides and its willingness to address the legitimate concerns raised by privacy advocates and users alike. If Discord truly values user trust, it must prioritize openness and empower users with control over their own data.
What are your thoughts on Discord’s new age verification policy? Share your comments below.
