Remarks at the Special Competitive Studies Project AI+ Expo – SEC.gov

Gary Gensler, the chair of the U.S. Securities and Exchange Commission, is not suggesting that the financial industry stop using artificial intelligence. In fact, he acknowledges the immense potential for AI to streamline operations and uncover market insights. But during his recent remarks at the Special Competitive Studies Project (SCSP) AI+ Expo, Gensler issued a pointed warning: the very tools designed to optimize the markets could, if left unchecked, trigger a systemic collapse.

The core of Gensler’s concern isn’t a single “rogue” AI, but rather the phenomenon of algorithmic convergence. In a market where a handful of dominant AI models provide the underlying intelligence for thousands of different trading firms, the industry risks creating a digital “herd.” When these models all identify the same signal and trigger the same sell order simultaneously, the result isn’t a balanced market; it’s a flash crash on a scale the financial system may not be equipped to handle.

For the SEC, the mission is clear: the agency must ensure that the integration of AI into finance does not compromise the fundamental goal of maintaining fair, orderly, and efficient markets. As the U.S. races to maintain its competitive edge in AI (the primary driver behind the SCSP’s mission), Gensler is arguing that true competitiveness requires a foundation of stability and transparency, not just raw speed.

The Danger of the Digital Herd

The “herd behavior” Gensler described is a sophisticated evolution of the flash crashes seen in the early 2010s. In those instances, simpler algorithms reacted to one another in a feedback loop. Today, the risk is more centralized. If a significant portion of the market relies on a small number of Large Language Models (LLMs) or predictive analytics suites, the diversity of opinion—which is the bedrock of price discovery—disappears.


When everyone is using the same “black box” to determine value, the market ceases to be a collection of independent actors and becomes a monolithic entity. Gensler emphasized that this concentration of intelligence creates a single point of failure. If a dominant model develops a bias or misinterprets a geopolitical event, the ensuing market reaction could be instantaneous and indiscriminate, wiping out billions in value before human oversight can intervene.
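The convergence risk Gensler describes can be made concrete with a toy simulation (purely illustrative, not a model of any actual trading system): when every firm consumes the same signal from one shared model, sell orders land in the same instant; when firms run independently noisy models, the same average bearishness is spread out and diluted.

```python
import random

def correlated_sell_orders(n_firms, shared_model):
    """Count firms selling in one tick when all subscribe to a single model."""
    signal = shared_model()  # one model produces one signal for everyone
    return sum(signal < -0.5 for _ in range(n_firms))

def independent_sell_orders(n_firms, seed=0):
    """Count firms selling when each runs its own independently noisy model."""
    random.seed(seed)
    return sum(random.uniform(-1, 1) < -0.5 for _ in range(n_firms))

# A pessimistic shared model: every subscriber sees the same strong sell signal.
shared = lambda: -0.9

print(correlated_sell_orders(1000, shared))  # all 1000 firms sell at once
print(independent_sell_orders(1000))         # roughly a quarter of firms sell
```

Under the shared model the entire “herd” acts simultaneously, while independent models produce only the statistically expected fraction of sellers; the diversity of opinion that Gensler calls the bedrock of price discovery is what spreads selling pressure out over time.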

Solving the ‘Black Box’ Problem

A recurring theme in the SEC’s current approach to AI is the demand for transparency. For decades, the financial industry has operated on the principle of disclosure. Investors have a right to know the risks associated with their holdings and the logic behind the advice they receive. AI, however, often operates as a “black box,” where even the developers cannot fully explain why a model reached a specific conclusion.


Gensler argued that this lack of explainability is incompatible with fiduciary duty. If an AI-driven advisor recommends a high-risk portfolio, the firm must be able to explain why that recommendation was made. Without that transparency, the SEC warns that firms may inadvertently hide conflicts of interest behind the veil of “algorithmic complexity.”

The agency is particularly focused on predictive data analytics (PDA). The concern is that firms might program AI to prioritize the firm’s own profits—such as pushing a proprietary product with higher fees—over the best interests of the client, all while claiming the AI is simply “optimizing for the user.”

AI Implementation in Finance: Risks vs. Regulatory Objectives

| AI Application      | Primary Market Risk          | SEC Regulatory Objective            |
|---------------------|------------------------------|-------------------------------------|
| Algorithmic Trading | Systemic “herd” behavior     | Market stability & circuit breakers |
| Robo-Advising       | Hidden conflicts of interest | Fiduciary transparency              |
| Risk Management     | Model over-reliance/bias     | Stress testing & validation         |
| Data Analysis       | Lack of explainability       | Audit trails & disclosure           |

The Competitive Stakes

The venue for these remarks, the SCSP AI+ Expo, is significant. The Special Competitive Studies Project is tasked with ensuring the U.S. remains the global leader in critical technologies to counter the rise of strategic competitors, most notably China. Gensler’s warnings are not meant as a brake on innovation, but as a safeguard for it.

The argument is that a financial system plagued by AI-driven instability would be a strategic liability. If U.S. markets are seen as volatile or opaque due to unregulated AI, global capital may seek safer harbors. Establishing a “gold standard” for AI governance in finance is, in itself, a competitive advantage. By creating a framework where AI is used responsibly, the U.S. can attract more sustainable investment and foster a more resilient economic infrastructure.

Who Is Most at Risk?

The impact of these regulatory shifts will be felt across several layers of the financial ecosystem:

  • Retail Investors: Those relying on AI-driven “fintech” apps may find that the transparency of the advice they receive increases as the SEC pushes for clearer disclosures.
  • Institutional Hedge Funds: Firms using high-frequency trading (HFT) and LLMs will likely face stricter requirements regarding how they test their models for “herd” tendencies.
  • AI Developers: Companies providing the underlying infrastructure (the “model providers”) may eventually be required to provide more granular data to regulators to ensure their tools aren’t creating systemic vulnerabilities.

Disclaimer: This article is provided for informational purposes only and does not constitute financial, legal, or investment advice.

The road ahead involves a complex balancing act. The SEC is currently weighing new rules regarding the use of predictive data analytics, and the industry is lobbying heavily to ensure these rules don’t stifle the efficiency gains AI provides. The next major checkpoint will be the SEC’s forthcoming updates on rule-making regarding conflicts of interest in AI-driven investment advice, which will signal whether the agency intends to move toward formal mandates or continue relying on guidance and enforcement actions.

What do you think about the balance between AI innovation and market stability? Share your thoughts in the comments.
