Esquire Singapore Faces Backlash Over AI-Generated Mackenyu Interview

by Sofia Alvarez

The boundary between creative experimentation and journalistic integrity has become a primary flashpoint of the digital age. When Esquire Singapore published a feature on actor Mackenyu, best known for his role in the live-action adaptation of One Piece, the publication didn’t just use AI to polish a few lines of prose. It used the technology to simulate an entire interview.

The resulting backlash has sparked a wider industry conversation about the lessons of the episode, highlighting a growing tension between the desire for “creative” content and the fundamental promise of truth in reporting. While the magazine attempted to frame the move as an artistic choice, the reaction from readers and industry peers suggests that the traditional “disclosure” disclaimer is no longer a sufficient shield against accusations of deception.

The controversy centered on a story for which the actor was unavailable for a traditional sit-down. Rather than pivoting to a profile built on existing archives or a standard feature, the publication opted to generate a “fake” interview using AI. Although the piece included a disclosure that AI was used, the decision to mimic a direct conversation with a living person created a rift in trust that a simple footnote could not bridge.

The Friction Between Creativity and Truth

In a subsequent response to the outcry, Esquire Singapore described the use of AI as a “deliberate creative decision.” The publication argued that the intent was to push boundaries and explore new forms of storytelling. However, for many in the media world, the “creative” label is an insufficient defense when applied to the format of an interview—a genre of writing built entirely on the premise of authentic human exchange.

The fallout underscores a critical shift in audience expectations. For years, the industry standard for emerging tech has been “disclose and proceed.” But as generative AI becomes more sophisticated, the act of disclosure can feel like a legal loophole rather than a gesture of transparency. When a reader sees a Q&A format, they are looking for the subject’s voice, not an algorithm’s approximation of that voice.

This incident highlights the specific risks facing legacy brands attempting to pivot toward “Gen Z” sensibilities or “experimental” digital formats. While younger audiences are often more tech-savvy, they are also acutely sensitive to authenticity. The attempt to bypass the “busy” schedule of a celebrity through synthetic media was viewed by many not as an innovation, but as a shortcut that undermined the subject’s agency.

A Timeline of the Contention

The sequence of events reveals a gap between the editorial room’s perception of “innovation” and the public’s perception of “fabrication.”

Timeline of the Esquire SG AI Controversy

Phase | Action/Event | Outcome
Publication | Esquire SG releases a feature on Mackenyu using AI-generated interview content. | Immediate reader backlash over “fake” dialogue.
Initial Response | Publication points to the AI disclosure included in the piece. | Critics argue disclosure does not justify synthetic interviews.
Official Statement | Magazine labels the move a “deliberate creative decision.” | Widespread debate on journalistic ethics vs. creative license.

Who is Affected by the Shift?

The implications of this backlash extend beyond a single magazine. Several key stakeholders are now navigating the fallout of this “creative” experiment:

  • The Subjects: Celebrities like Mackenyu find their likeness and perceived “voice” co-opted by algorithms, raising questions about digital consent and the right to one’s own words.
  • The Journalists: Staff writers and editors are facing a precarious balance between meeting aggressive digital KPIs and maintaining the editorial standards that give a brand its authority.
  • The Audience: Readers are becoming increasingly skeptical of “exclusive” content, leading to a climate where even genuine interviews may be questioned as synthetic.
  • The Industry: PR agencies and media houses are now re-evaluating their AI guidelines, realizing that “disclosure” is a baseline, not a ceiling, for ethical AI use.

Why Disclosure Is No Longer a Sufficient Defense

At the core of the Esquire SG backlash is the realization that transparency is not the same as legitimacy. In the early days of AI-assisted writing, a disclaimer like “This text was assisted by AI” was enough to satisfy the reader. However, there is a qualitative difference between using AI to summarize a meeting and using AI to invent a conversation.

When a publication simulates a person, it is not just using a tool; it is creating a persona. This crosses the line from “assistance” to “impersonation.” The backlash suggests that the public distinguishes between generative productivity (efficiency) and generative representation (mimicry). The former is generally accepted; the latter is viewed as a breach of the social contract between a journalist and their audience.

The decision to proceed with a synthetic interview because a subject was “too busy” sets a dangerous precedent. If the value of a celebrity interview lies in unique insight and human connection, removing the human element removes the value of the piece entirely. It transforms a journalistic endeavor into a piece of fan fiction, regardless of how “creative” the intent may have been.

The Path Forward for Digital Media

As media houses continue to integrate large language models into their workflows, the “Esquire model” serves as a cautionary tale. The industry is moving toward a framework where AI must be used to enhance the human element, not replace it. This means using AI for data analysis, research, or formatting, while leaving the “voice” and the “interview” to the humans involved.

The next steps for publications involve establishing rigorous internal “AI Ethics Charters” that explicitly forbid the synthesis of human speech or interviews. The goal is to move from a culture of “Can we do this?” to “Should we do this?”

The broader conversation now turns to whether industry bodies or regulators will step in to define “synthetic journalism” and whether mandatory labeling will be enough to protect the integrity of the press. For now, the consensus among critics is clear: the trust of the reader is far more valuable than the novelty of a generated interview.

As the industry watches for further updates on how legacy titles adapt their AI policies, the focus remains on whether these “creative decisions” will lead to a new standard of transparency or a permanent decline in editorial trust.
