Trump Aide ChatGPT Data Breach: Sensitive Documents Leaked

by Priyanka Patel, Tech Editor

Acting Director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, reportedly uploaded sensitive government documents to the public version of ChatGPT last summer, triggering security alerts and an internal review.

The incident raises serious questions about data security protocols within federal agencies as artificial intelligence tools become increasingly integrated into professional workflows. It also underscores the risks of sharing even seemingly innocuous information with AI platforms.

Tech leaders, including OpenAI CEO Sam Altman, have already warned users that messages sent to ChatGPT aren’t entirely private, and that sharing sensitive data with these tools could have unintended consequences.

Cyber Chief’s Data Slip-Up Raises Security Concerns

The incident highlights the potential for data breaches when government officials utilize public AI platforms, even with limited access.

  • Madhu Gottumukkala, the acting director of CISA, uploaded contracting materials marked ‘for official use only’ to ChatGPT.
  • The upload triggered automated security alerts and prompted an internal investigation by Department of Homeland Security officials.
  • Gottumukkala reportedly secured special access to ChatGPT despite it being off-limits to most DHS staff due to security concerns.
  • The documents could potentially be exposed to ChatGPT’s user base, estimated at one billion people, through the model’s future responses.

The sensitive documents, marked “for official use only,” were uploaded to the OpenAI platform during the summer of 2025, according to a report by Politico. This breach prompted automated security alerts, initiating an internal review by top Department of Homeland Security (DHS) officials.

Gottumukkala, who served under President Trump, had reportedly pushed for special access to ChatGPT shortly after joining CISA in May. At the time, the AI tool was off-limits to most DHS personnel due to concerns that sensitive information could be compromised outside of secure federal systems.

However, the former chief information officer for South Dakota’s Bureau of Information and Technology allegedly “forced CISA’s hand into making them give him ChatGPT,” as one official told Politico, and then “abused it.” The core issue is that data entered into the public version of ChatGPT can be incorporated into future responses, potentially exposing the information to its vast user base, estimated at around one billion people.

What are the risks of using public AI platforms for sensitive data? Public AI platforms like ChatGPT store and process user inputs, which may be used to train the underlying model or accessed by unauthorized parties, meaning sensitive material uploaded today can resurface in responses to other users later, resulting in data breaches and privacy violations.

Marci McCarthy, CISA’s director of public affairs, acknowledged the incident in a statement to Politico, stating that Gottumukkala received “permission to use ChatGPT with DHS controls in place” and that his use was “short-term and limited.”

“Acting Director Dr. Madhu Gottumukkala last used ChatGPT in mid-July 2025 under an authorized temporary exception granted to some employees,” McCarthy wrote. “CISA’s security posture remains to block access to ChatGPT by default unless granted an exception.”

McCarthy also emphasized the agency’s commitment to advancing America’s leadership in AI, aligning with President Trump’s January 2025 executive order on the subject.

Gottumukkala’s tenure as CISA’s acting director, which began in May after DHS Secretary Kristi Noem appointed him deputy director, has already been marked by controversy. Politico also reported that he failed a polygraph examination he himself had initiated.
