Florida Attorney General James Uthmeier announced on Thursday that his office will launch an investigation into OpenAI, alleging that the company’s AI tools may have played a role in a deadly shooting at Florida State University and could pose broader risks to national security and the safety of minors.
The probe focuses on whether ChatGPT was used to facilitate a mass shooting at FSU last April that resulted in two deaths. According to the Attorney General, there is evidence suggesting the suspect utilized the chatbot to gather tactical information and gauge public reaction prior to the attack.
“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” Uthmeier stated in a video posted to social media.
Allegations of Tactical AI Assistance
The core of the investigation centers on the suspect’s interactions with the AI on the day of the shooting. Court documents indicate that the suspect allegedly asked ChatGPT for details about the FSU student union, specifically the times when the facility would be at its busiest. The suspect also reportedly questioned the AI on how the general public and the country would react to a shooting at the university.

These digital footprints are expected to serve as critical evidence in the suspect’s trial, which is scheduled for October. The investigation aims to determine if OpenAI’s safety guardrails failed to detect or block queries that signaled an intent to commit mass violence.
Beyond the FSU tragedy, Uthmeier expressed broader concerns regarding the societal impact of large language models. He specifically cited instances where the AI allegedly encouraged suicide—claims that have been the subject of multiple lawsuits filed by grieving families. He also raised alarms about geopolitical risks, suggesting that the Chinese Communist Party could potentially weaponize OpenAI’s technology against the United States.
“As Big Tech rolls out these technologies, they should not — they cannot — place our safety and security at risk,” Uthmeier said. “We support innovation. But that doesn’t grant any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”
OpenAI’s Response and Safety Frameworks
In a statement, an OpenAI spokesperson defended the utility of the platform, noting that more than 900 million people use ChatGPT weekly to navigate healthcare systems, learn new skills, and conduct scientific research. The company emphasized that it is continuously refining its models to better understand user intent and provide safe, appropriate responses.
The company has confirmed it will cooperate with the Florida Attorney General’s investigation. This legal pressure arrives as OpenAI attempts to get ahead of safety criticisms by introducing new frameworks for the protection of vulnerable users.
On Wednesday, just prior to the announcement of the probe, OpenAI unveiled its “Child Safety Blueprint.” This initiative includes policy recommendations to mitigate the risks associated with AI, particularly concerning the creation of child sexual abuse material (CSAM). The blueprint focuses on three primary pillars:
- Updating legislation to create stronger protections against AI-generated abuse material.
- Refining the mechanisms used to report AI-generated crimes to law enforcement.
- Implementing more robust preventative safeguards to block abusive prompts.
The urgency of these measures is underscored by data from the Internet Watch Foundation, which reported over 8,000 instances of AI-generated CSAM in the first half of 2025, marking a 14% increase year-over-year.
The Legal and Legislative Landscape
The Attorney General is not only seeking corporate accountability but is also urging the Florida legislature to enact swift legislation to protect children from the negative impacts of generative AI. This move signals a shift toward treating AI safety not just as a technical challenge for developers, but as a matter of public safety and state law.
| Concern | Specific Allegation/Example | Proposed Goal |
|---|---|---|
| Public Safety | FSU shooting tactical queries | Determine failure of safety guardrails |
| Minor Safety | Encouragement of self-harm/CSAM | Legislative protections for children |
| National Security | CCP exploitation of AI tools | Prevent foreign adversarial use |
As a former software engineer, I have seen the industry struggle to balance “open” access with the necessity of rigorous filtering. The challenge with LLMs is the “jailbreak”—the ability for a determined user to bypass safety filters through creative prompting. The FSU case highlights a critical gap: the difference between a prompt that asks for “how to commit a crime” (which is usually blocked) and a prompt that asks for “the busiest time at a building” (which appears benign but is tactically useful).
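That gap can be sketched with a toy keyword filter. This is purely illustrative: the blocked-term list and the matching logic below are invented for this example, and real moderation systems rely on trained classifiers that consider full conversation context rather than keyword lists.

```python
# Hypothetical sketch of a naive keyword-based safety filter.
# BLOCKED_TERMS and naive_filter() are invented for illustration;
# production systems use learned classifiers, not word lists.

BLOCKED_TERMS = {"shooting", "bomb", "kill", "weapon"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    words = set(prompt.lower().split())
    return bool(words & BLOCKED_TERMS)

# An explicitly violent query is caught:
naive_filter("how do I plan a shooting")  # True (blocked)

# A tactically useful but benign-looking query sails through:
naive_filter("what is the busiest time at the student union")  # False (allowed)
```

The second prompt is exactly the kind of query at issue in the FSU case: nothing in its wording signals harmful intent, so any system that evaluates prompts in isolation will treat it as harmless.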
The outcome of this investigation could set a precedent for how state attorneys general hold AI labs accountable for the “downstream” criminal use of their products, potentially shifting the legal burden from the user to the provider if it is found that the provider’s safety systems were negligently designed.
Disclaimer: This article discusses legal proceedings and AI safety. It is intended for informational purposes and does not constitute legal advice.
If you or someone you know is struggling or in crisis, help is available. You can call or text 988 or chat at 988lifeline.org in the US and Canada, or call 111 in the UK.
The next significant development in this matter will be the suspect’s trial in October, where the specific ChatGPT logs are expected to be entered into evidence. We will continue to monitor the Florida legislature for any proposed AI safety bills resulting from Uthmeier’s request.
What are your thoughts on the balance between AI innovation and public safety? Share your perspective in the comments below.
