Remarks by Chairman Atkins on AI Innovation, Capital Markets, and Regulatory Flexibility – The Harvard Law School Forum on Corporate Governance

The tension between the breakneck speed of artificial intelligence and the deliberate, often sluggish pace of financial regulation has reached a tipping point. For years, the debate has been framed as a binary choice: either stifle innovation with rigid rules or risk systemic instability by letting the technology run unchecked. In recent remarks delivered to the Harvard Law School Forum on Corporate Governance, however, Chairman Atkins proposed a third path—one rooted in “regulatory flexibility.”

Atkins argues that the traditional regulatory playbook is ill-equipped for the AI era. In a world where software evolves weekly, a rule-making process that takes years to finalize is not just inefficient; it is a liability. For capital markets to remain competitive and transparent, Atkins suggests that regulators must shift from a posture of static prohibition to one of adaptive oversight, ensuring that the guardrails protect investors without boxing in the innovators.

This perspective arrives as the industry grapples with a profound paradox. While AI promises to unlock unprecedented efficiencies in asset management and risk analysis, it also introduces “black box” risks that could trigger flash crashes or algorithmic biases on a global scale. The challenge, as Atkins frames it, is creating a framework that is firm enough to prevent fraud but flexible enough to allow the technology to breathe.

Beyond ‘Regulation by Enforcement’

A central theme of Atkins’ discourse is the critique of “regulation by enforcement”—the practice of using lawsuits and penalties to signal new rules rather than issuing clear, written guidance. For the AI sector, where capital investments are measured in billions, this ambiguity is a significant deterrent. When companies do not know where the line is drawn, they either stop innovating or risk catastrophic legal exposure.


Atkins suggests that the SEC and other governing bodies should prioritize clarity over combat. By establishing broad, principle-based goals—such as transparency and fairness—rather than hyper-specific technical requirements, regulators can allow firms to implement AI tools that meet the spirit of the law, even as the method of delivery changes. This shift would move the burden of compliance from guessing the regulator’s mood to meeting a transparent standard of conduct.

The need for this clarity is not limited to AI. Recent moves by the SEC to provide more definitive guidance on crypto vaults illustrate a broader realization within Washington: the “wait and see” approach is failing. Whether it is a digital asset vault or a generative AI trading bot, the market is demanding a rulebook that is written in plain English and updated in real time.

The Anxiety of the AI+ Expo

The theoretical flexibility advocated by Atkins is being tested in the real world, as evidenced by the atmosphere at the recent AI+ Expo in Washington, D.C. While the official rhetoric often focuses on growth, the conversations behind the scenes are fraught with apprehension. Industry leaders at the expo expressed a recurring set of fears that mirror the concerns of the general public: the displacement of human labor, the erosion of data privacy, and the potential for AI to hallucinate critical financial data.


These anxieties highlight the stakes of the regulatory debate. If regulators are too flexible, they may ignore the systemic risks that industry leaders themselves are worried about. If they are too rigid, they may push the development of these tools offshore to jurisdictions with no oversight at all. The “regulatory flexibility” Atkins describes is intended to be a bridge between these two extremes—a way to monitor the risks discussed at the AI+ Expo without killing the technology that creates the value.

Stakeholders and the Impact of Flexibility

  • Institutional Investors: Stand to gain from AI-driven alpha but fear “model drift” and the lack of a legal safety net when algorithms fail.
  • Fintech Startups: Require regulatory predictability to secure VC funding; “regulation by enforcement” is often a death knell for early-stage firms.
  • Retail Investors: The most vulnerable party, requiring a regulator that can detect AI-driven market manipulation before it wipes out small accounts.
  • Regulators: Facing a talent gap, as they struggle to hire experts who understand the code as well as they understand the law.

Mapping the Regulatory Shift

To understand the transition Atkins is proposing, it is helpful to compare the traditional regulatory approach with the proposed flexible model.

Comparison of Regulatory Frameworks for Emerging Tech
Feature       | Traditional Approach           | Flexible Approach (Atkins)
Rule Creation | Lengthy public comment periods | Adaptive, principle-based guidance
Enforcement   | Penalty-first / litigation-led | Compliance-first / dialogue-led
Tech Pace     | Lagging (years behind)         | Concurrent (iterative updates)
Primary Goal  | Risk elimination               | Risk management and innovation

The Knowns and the Unknowns

While the vision for regulatory flexibility is compelling, several critical unknowns remain. First, there is the question of who defines the principles. If the industry has too much influence over the “flexible” guidelines, the result could be regulatory capture, where the rules are written by the very companies they are meant to govern.

Second, the technical ability of the SEC to monitor AI in real time is still unproven. Flexibility requires a high level of competence; a regulator cannot be “flexible” if it does not understand the underlying technology it is overseeing. Without a massive infusion of technical talent into the public sector, “flexibility” could simply become a euphemism for “lack of oversight.”

Despite these hurdles, the momentum is shifting. The intersection of AI and capital markets is too volatile to be managed with 20th-century tools. The move toward clarity—seen in the crypto vault discussions and Atkins’ Harvard remarks—suggests a maturing understanding that the goal of regulation is not to stop the future, but to ensure the future is sustainable.

Disclaimer: This article is provided for informational purposes only and does not constitute financial, legal, or investment advice.

The next critical checkpoint for this regulatory evolution will be the upcoming series of SEC public comment periods regarding digital asset frameworks and AI integration, expected in the coming quarters. These filings will reveal whether the “flexibility” discussed in academic forums is being codified into actual policy.

We want to hear from you. Should regulators prioritize innovation or investor safety in the age of AI? Share your thoughts in the comments, or pass this piece along to your network.
