California is taking a significant step toward regulating artificial intelligence, a technology rapidly reshaping industries and raising concerns about potential risks. Governor Gavin Newsom, a Democrat, issued an executive order on October 11, 2023, directing state agencies to prioritize safety and privacy when contracting with companies developing or deploying artificial intelligence systems. This move aims to establish guardrails for AI within state government, focusing on responsible innovation and mitigating potential harms. Understanding California’s executive order on AI requires a look at the specifics of the directive, its implications for businesses, and the broader context of AI regulation.
The order doesn’t represent a complete overhaul of AI policy, but rather a focused effort to ensure the state itself isn’t inadvertently contributing to the spread of unsafe or biased AI. It’s a response to growing anxieties about the technology’s potential for misuse, including concerns about algorithmic bias, data privacy violations, and job displacement. The directive comes as the federal government also grapples with how to regulate AI, with President Biden issuing his own executive order on the topic just weeks later, on October 30, 2023.
What the Order Requires
The core of Governor Newsom’s order centers on requiring state agencies to adopt specific criteria when procuring AI technologies. According to the official statement, agencies must now prioritize vendors who demonstrate a commitment to:
- Transparency: AI systems should be explainable, allowing users to understand how decisions are made.
- Fairness: Algorithms must be evaluated for potential biases, with steps taken to mitigate discriminatory outcomes (a hypothetical sketch of one such check follows this list).
- Privacy: Data used by AI systems must be handled securely and in compliance with privacy regulations.
- Security: AI systems must be protected against cyber threats and unauthorized access.
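The order itself prescribes no particular methodology, but the fairness criterion is easiest to picture with a concrete case. As a purely hypothetical sketch, a vendor might screen a model’s decisions for disparate impact across demographic groups using the widely cited four-fifths heuristic. The data, group labels, and threshold below are invented for illustration and are not drawn from the order.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of favorable (1) outcomes per group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += int(pred)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Lowest group selection rate divided by the highest; 1.0 means parity."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Invented audit sample: 1 = favorable decision, 0 = unfavorable.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(preds, groups)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.67 for this sample
if ratio < 0.8:  # the common four-fifths benchmark
    print("Potential adverse impact; investigate before deployment.")
```

In practice, a real audit would combine many such metrics with documentation and human review; a single ratio is only a screening signal, not a verdict.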
The order also directs the California Department of General Services to develop a set of standardized evaluation criteria for AI systems, to be used across all state agencies. This aims to create a consistent and rigorous process for assessing the risks and benefits of AI technologies. The Department is expected to release these criteria within the next six months. In addition, the order mandates that agencies train their employees on the responsible use of AI.
Impact on AI Companies
The executive order primarily affects companies seeking to contract with the state of California. Those hoping to secure state business will need to demonstrate that their AI systems meet the new safety and privacy standards. This could involve undergoing independent audits, providing detailed documentation about their algorithms, and implementing robust data security measures. The order doesn’t apply to companies operating in California but not contracting with the state, though it could set a precedent for future regulations.
Some industry observers believe the order could create a competitive advantage for companies already prioritizing responsible AI development. Others express concern that the new requirements could increase costs and complexity, potentially discouraging smaller AI startups from bidding on state contracts. The California Chamber of Commerce has not yet issued a formal statement on the order, but is reportedly monitoring its implementation closely.
Broader Context: The Push for AI Regulation
California’s move is part of a growing global trend toward AI regulation. The European Union is currently finalizing the AI Act, a comprehensive set of rules governing the development and deployment of AI systems. The United Kingdom, by contrast, has adopted a pro-innovation approach to AI regulation, focusing on sector-specific guidelines.
In the United States, the debate over AI regulation is ongoing. While there’s broad agreement on the need for some level of oversight, there’s disagreement on the best approach. Some lawmakers advocate for a comprehensive regulatory framework, while others prefer a more sector-specific approach. The Biden administration’s recent executive order signals a commitment to addressing the risks of AI, but the details of its implementation remain to be seen.
What This Means for Consumers and Citizens
While the immediate impact of California’s order is on businesses contracting with the state, it has broader implications for consumers and citizens. By prioritizing safety and privacy in its own AI procurements, the state is sending a signal that these values are essential. This could encourage other organizations, both public and private, to adopt similar standards.
The order also highlights the importance of transparency and accountability in AI systems. As AI becomes more pervasive in our lives, it’s crucial that we understand how these systems work and how they’re making decisions that affect us. The push for explainable AI is a key part of this effort.
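To make “explainable” less abstract, here is a deliberately simple, hypothetical sketch: a transparent scoring model that reports how much each input contributed to its decision. The feature names, weights, and threshold are invented for illustration; real deployed systems are far more complex and often require dedicated explanation techniques.

```python
# Hypothetical transparent model: every decision comes with a per-feature
# contribution breakdown, so a reviewer can see why it was made.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}
THRESHOLD = 0.5

def score_with_explanation(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    return decision, total, contributions

decision, total, parts = score_with_explanation(
    {"income": 1.2, "debt_ratio": 0.8, "years_employed": 1.5}
)
print(f"Decision: {decision} (score {total:.2f})")
for feature, value in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.2f}")
```

A breakdown like this is the kind of output that lets an affected person, or a regulator, ask pointed questions about a specific decision rather than treating the system as a black box.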
The state’s Department of Technology is tasked with providing regular updates on the implementation of the executive order, including progress on developing the standardized evaluation criteria. The next update is scheduled for April 2024. Citizens can find more information about the order and its implementation on the Governor’s website: https://www.gov.ca.gov/2023/10/11/governor-newsom-issues-executive-order-advancing-responsible-artificial-intelligence/.
As artificial intelligence continues to evolve, California’s executive order represents a proactive step toward ensuring that this powerful technology is used responsibly and ethically. The state’s actions will be closely watched by policymakers and industry leaders across the country as they navigate the complex challenges and opportunities presented by AI.
Have your say: What are your thoughts on California’s new AI regulations? Share your comments below and let us know how you think AI should be governed.
