WASHINGTON, January 30, 2026 — The U.S. Department of Homeland Security is now using artificial intelligence video generators from Google and Adobe to create public-facing content, a development that underscores the growing influence of AI in government communication and raises questions about transparency.
AI-Generated Content and the Erosion of Trust
The use of AI by government agencies and news organizations is sparking debate about the future of truth and the public’s ability to discern fact from fiction.
- The Department of Homeland Security is employing AI to produce videos shared with the public.
- The White House previously shared a digitally altered image during an ICE protest.
- A news network aired an AI-edited image without initially recognizing its manipulation.
- Concerns are growing that tools designed to verify truth are proving insufficient.
The confirmation of DHS’s use of AI video tools comes as the agency has increased its social media presence to support the administration’s policies, including those related to increased deportations. A video shared on X (formerly Twitter) by DHS (@DHSgov) on December 25, 2025, depicting a somber “Christmas after mass deportations,” has drawn scrutiny for potentially being AI-generated.
What are the implications of government agencies using AI to create and disseminate information? The practice marks a significant shift in how the government shapes public perception, raising concerns about potential manipulation and underscoring the need for greater transparency.
The reactions to this news revealed a broader anxiety about the current information landscape. Some readers expressed little surprise, citing a digitally altered photograph posted by the White House on January 22. The image, depicting a woman arrested at an ICE protest, was manipulated to portray her as more distraught. Kaelan Dorr, the White House’s deputy communications director, declined to comment on whether the White House intentionally altered the photo, but wrote, “The memes will continue.”
Others questioned the value of reporting on DHS’s AI use, arguing that news organizations themselves were engaging in similar practices. They pointed to an incident involving MS Now (formerly MSNBC), which aired an image of Alex Pretti that had been AI-edited to enhance his appearance. The altered image quickly went viral, even prompting discussion on Joe Rogan’s podcast. A spokesperson for MS Now told Snopes that the outlet aired the image unaware it had been altered.
However, equating these two instances is a misstep. The White House shared a demonstrably altered image and avoided addressing questions about intentional manipulation, while MS Now aired an altered image unknowingly and acknowledged the mistake once it was discovered. The latter, while still problematic, represents a different level of accountability.
These responses highlight a fundamental flaw in how we have approached the “AI truth crisis.” The prevailing assumption has been that the inability to distinguish reality from fabrication would be catastrophic, and that tools for independent verification would be the remedy. But those tools are proving inadequate, and verifying the truth alone has not restored societal trust as anticipated.
The challenge now isn’t just about identifying falsehoods; it’s about rebuilding a shared understanding of reality in an age where manipulation is increasingly sophisticated and readily available.
