Microsoft’s AI-Generated Poll Damages The Guardian’s Reputation: Publisher Outraged

by time news

Date: Tue 31 Oct 2023 15.00 CET

The Guardian has lodged a complaint against Microsoft, alleging that the tech giant's publication of an AI-generated poll next to one of its articles caused significant reputational damage. The poll, created by an AI program, speculated on the cause of death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney last week.

Microsoft’s news aggregation service featured the automated poll alongside the Guardian’s article reporting on James’ death. The poll offered three options – murder, accident, or suicide – and prompted readers to share their opinions. The poll has since been removed, but critical reader comments about the deleted survey remained online, fueling public outrage.

The Guardian condemned Microsoft’s decision to display the AI-generated poll next to its journalism, asserting that it had caused distress to James’ family and inflicted significant harm on the publisher’s reputation. Readers also voiced their disdain for the poll, with some calling for the dismissal of a Guardian reporter who had no involvement in its creation.

Anna Bateson, Chief Executive of the Guardian Media Group, sent a letter to Microsoft President Brad Smith, expressing concerns about the incident. In her letter, Bateson criticized the inappropriate use of generative AI technology by Microsoft in relation to a sensitive public interest story. She emphasized the importance of a strong copyright framework that enables publishers to negotiate the terms under which their journalism is utilized.

Microsoft holds a license to publish content from the Guardian, and the article along with the controversial poll appeared on Microsoft Start, the company’s news aggregation website and app. Bateson sought assurances from Smith, urging Microsoft not to deploy experimental AI technology alongside Guardian journalism without prior approval. She also requested that Microsoft explicitly notify users when AI tools are used to create additional units and features adjacent to trusted news brands like the Guardian.

Furthermore, Bateson suggested that Microsoft add a note to the article accepting responsibility for the poll’s creation. She highlighted the importance of addressing how platforms prioritize authentic information, ensure fair remuneration for licensing journalism, and establish greater transparency and safeguards for consumers in the use of AI.

Microsoft has yet to comment on the matter. Bateson’s remarks come amid discussions at the AI safety summit, underscoring the need for platforms to prioritize trusted content, protect journalistic integrity, and establish guidelines surrounding the use of AI for news-related purposes.

The Guardian and Microsoft are expected to engage in further discussions regarding the publication practices and responsibilities surrounding AI-generated content.
