YouTube’s Deepfake Tool Raises Privacy Concerns as Google Weighs AI Training
A new safety feature designed to combat deepfakes on YouTube may inadvertently grant Google access to creators’ biometric data for training its own artificial intelligence models, sparking alarm among online safety experts. The tool, launched in October and slated for full rollout to the YouTube Partner Program by the end of January, allows creators to submit videos of their faces to flag unauthorized deepfakes using their likeness. While YouTube insists the data is solely for identity verification and powering the feature, its privacy policy raises questions about potential secondary uses.
YouTube introduced the deepfake detection tool as a proactive measure to help creators address the growing threat of AI-generated impersonations. Creators can request the removal of AI-generated doppelgangers identified through the system. However, the policy governing the tool also states that Google reserves the right to utilize publicly submitted content – including biometric data – to “help train Google’s AI models and build products and features like Google Translate, Gemini Apps, and Cloud AI capabilities,” as reported by CNBC.
A spokesperson for YouTube, Jack Malon, attempted to assuage concerns, stating, “The data creators provide to sign up for our likeness detection tool is not – and has never been – used to train Google’s generative AI models.” He further clarified that the data is “used exclusively for identity verification purposes and to power this specific feature.” Despite this assurance, YouTube acknowledged it is reviewing the wording of its sign-up policy to address potential confusion, though the core policy will remain unchanged.
The debate highlights a broader struggle among tech giants to balance innovation in AI with maintaining user trust. The rollout of increasingly complex AI models has prompted scrutiny over data privacy and the potential for misuse.
Currently, the system relies on creators to identify and request the removal of deepfakes. Amjad Hanif, YouTube’s head of creator product, told CNBC that takedown requests remain relatively low, with many creators simply expressing relief that the tool exists. “By and far the most common action is to say, ‘I’ve looked at it, but I’m OK with it,'” Hanif stated.
However, online safety experts suggest this low rate of takedowns may stem from a lack of clarity surrounding the tool’s functionality, rather than widespread acceptance of deepfakes. Companies specializing in protecting digital likeness rights, such as Vermillio and Loti, have reported a surge in demand for their services as AI technology becomes more prevalent.
“As Google races to compete in AI and training data becomes strategic gold, creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves,” warned Dan Neely, CEO of Vermillio, in a statement to CNBC. “Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back.” Luke Arrigoni, CEO of Loti, echoed these concerns, describing the risks associated with YouTube’s policy as “enormous” and advising clients against utilizing the deepfake detection tool.
The rise of accessible AI tools like OpenAI’s Sora and Google’s Veo 3 has amplified the urgency of addressing deepfake threats. YouTube creator Mikhail Varshavski, known as “Doctor Mike” and boasting over 14 million subscribers, recently encountered a deepfake of himself promoting a dubious health supplement on TikTok. “It obviously freaked me out,” Varshavski told CNBC, “because I’ve spent over a decade investing in garnering the audience’s trust and telling them the truth… To see someone use my likeness in order to trick someone into buying something they don’t need or that can potentially hurt them, scared everything about me in that situation.”
Currently, creators have no established mechanism to profit from the unauthorized use of their likeness in deepfake content. YouTube previously allowed creators to permit third-party firms to utilize their videos for AI training, but without any form of compensation. The situation underscores the complex ethical and legal challenges posed by the rapid advancement of AI technology and the need for clear guidelines regarding biometric data and digital identity.
