The current era of generative artificial intelligence has been characterized by a digital land grab. For years, large language models have been “fed” on a diet of nearly the entire public internet—scraping billions of words, images, and lines of code without the knowledge or permission of the original creators. This approach, often described as a “Wild West” of data acquisition, has sparked a wave of litigation and a growing outcry from the creative community over copyright infringement and the erosion of intellectual property.
Amidst this tension, a new proposal known as the Human Consent Standard (HCS) is emerging as a potential framework to shift the power balance back toward human creators. The initiative aims to replace the current culture of unauthorized scraping with a systematic, machine-readable method of granting or denying permission for AI training.
The framework is championed by RSL Media, an organization associated with actress Cate Blanchett. The effort represents a significant escalation in the fight for digital autonomy, moving beyond simple legal challenges toward a technical infrastructure that AI companies could theoretically integrate into their crawling processes.
A Technical Bridge for Digital Rights
At its core, the Human Consent Standard functions similarly to a robots.txt file—the long-standing industry standard that tells search engine crawlers which parts of a website to index and which to ignore. However, while robots.txt is primarily about visibility in search results, the HCS is designed specifically to govern the “feeding” of AI models.
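For comparison, the opt-out mechanism robots.txt already offers works only per-site and per-crawler: each site owner must name each crawler individually in a file on their own server. A site opting out of AI training today might publish something like the following (GPTBot and Google-Extended are real opt-out tokens; the file shown is an illustrative fragment, not a complete policy):

```
# robots.txt — per-site, per-crawler opt-out
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

The limitation is visible in the format itself: the rules bind only this one domain, and only the crawlers the owner knew to list. The HCS proposal aims to lift consent out of individual site files and attach it to the creator instead.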

The HCS is an evolution of the Really Simple Licensing (RSL) system. Whereas RSL governed permissions for specific content accessible at a particular URL, the Human Consent Standard is designed to be more holistic. According to Eckart Walther, a co-founder of RSL Media, the HCS is intended to apply to the “work, identity, character, or underlying brand, regardless of where it appears.”

This distinction is critical for public figures and artists whose work is often mirrored across thousands of websites. Under a URL-based system, a creator would have to set permissions on every single page where their image appears; under the HCS, consent is tied to the identity of the creator or to the work itself, creating a portable shield that follows the content across the web.
| Feature | Really Simple Licensing (RSL) | Human Consent Standard (HCS) |
|---|---|---|
| Scope | URL-specific content | Identity, Brand, and Work |
| Application | Individual web pages | Global presence across platforms |
| Primary Goal | Content licensing | Comprehensive training consent |
The Roadmap to Identity Verification
The initiative is not merely a set of guidelines but a planned infrastructure project. A central component of the rollout is the creation of a verification database, scheduled for release in June 2026. This database would allow individuals to verify their identity and explicitly log the permissions they grant—or withhold—from AI developers.

By centralizing these permissions, the HCS seeks to eliminate the “plausible deniability” often cited by AI firms, who argue that the scale of the internet makes individual consent impossible to track. A verified database would provide a single source of truth for AI companies to check before incorporating specific identities or works into their training sets.
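The lookup the article describes can be sketched in a few lines. Everything below is hypothetical: the HCS database is not due until June 2026 and has published no schema, so the registry class, field names, and policy values are invented for illustration. The one design choice worth noting is the default: a crawler honoring the standard would treat the *absence* of a verified consent record as a denial, which is exactly what removes the “plausible deniability” argument.

```python
# Hypothetical sketch of an HCS-style consent check before training
# ingestion. All names here (ConsentRecord, ConsentRegistry, the
# "subject_id" scheme) are invented for illustration.
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    subject_id: str          # verified identity of the creator or work
    allow_training: bool     # blanket permission flag
    exceptions: tuple = ()   # uses denied even if training is allowed


class ConsentRegistry:
    """In-memory stand-in for the planned verification database."""

    def __init__(self):
        self._records = {}

    def register(self, record: ConsentRecord):
        self._records[record.subject_id] = record

    def may_train_on(self, subject_id: str, use: str) -> bool:
        record = self._records.get(subject_id)
        if record is None:
            # No verified record: a compliant crawler treats
            # missing consent as denial, not as permission.
            return False
        if use in record.exceptions:
            return False
        return record.allow_training


registry = ConsentRegistry()
registry.register(ConsentRecord("creator:jane-doe", allow_training=True,
                                exceptions=("voice-synthesis",)))

print(registry.may_train_on("creator:jane-doe", "text-model"))      # True
print(registry.may_train_on("creator:jane-doe", "voice-synthesis")) # False
print(registry.may_train_on("creator:unknown", "text-model"))       # False
```

Because the key is an identity rather than a URL, the same record would govern every mirror of the creator's work, which is the portability the standard promises.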
The project has already garnered significant support from high-profile figures in the entertainment industry, including George Clooney, Tom Hanks, Kristen Stewart, and Meryl Streep. This coalition reflects a broader movement in Hollywood to protect the “digital twin”—the likeness and voice of an actor—from being synthesized without compensation or consent, a central issue in recent SAG-AFTRA labor disputes.
The Challenge of Industry Adoption
Despite the technical elegance of the Human Consent Standard, its success depends entirely on a factor outside the creators’ control: the willingness of AI developers to play by the rules. For a standard like HCS to work, companies like OpenAI, Google, and Anthropic must agree to honor the HCS tags and consult the verification database during their data collection phases.
Historically, the AI industry has leaned on the concept of “fair use” to justify the use of copyrighted data. However, regulatory pressure is mounting. The European Union AI Act has already begun introducing transparency requirements for generative AI, forcing companies to be more explicit about the data they use. The HCS arrives at a moment when the legal tide may finally be turning in favor of mandatory transparency.
For the creative class, the stakes extend beyond money; they are a question of agency. If the HCS is adopted, it would mark the first time the “human” in “human-centric AI” has had a functional, technical kill switch for their own intellectual property.
The next major milestone for the initiative will be the development and testing of the verification database leading up to its projected June 2026 launch. Whether the tech giants will respect this digital boundary remains the defining question of the next two years.
Do you believe AI companies should be legally required to follow a consent standard like HCS? Share your thoughts in the comments or pass this story along on social media.
