AI Needs Public Control to Avoid Repeating Social Media’s Mistakes

by Priyanka Patel

Justin Rosenstein, a name perhaps less familiar than Mark Zuckerberg’s, carries a weight of experience that speaks directly to the anxieties surrounding artificial intelligence today. He was there at the beginning of Facebook, a 22-year-old engineer convinced by a 21-year-old Zuckerberg to join a project aimed at connecting people. What he helped build, however, morphed into something far different – a system optimized not for connection, but for addiction. And the driving force, Rosenstein argues, wasn’t malice, but a simple, chilling calculation: addiction is more profitable. That same logic, he warns, is now taking root in the development of AI, and the stakes are exponentially higher.

The core problem, as Rosenstein and others see it, is a pervasive belief within the tech industry: “If we don’t do it, someone else will.” This justification, a race to deployment regardless of consequences, fueled the rapid and often unchecked growth of social media. Now, it’s driving the breakneck pace of AI development, with potentially catastrophic results. The question isn’t whether AI will reshape our lives – it already is – but whether we can steer its development toward abundance and empowerment, or succumb to a future we can’t control. Recent proposals from the White House, focused on shielding the AI industry from liability, suggest a familiar path: letting companies self-regulate, a strategy that proved disastrous with social media.

But a different path is possible, one where the public, not corporations, dictates the terms of AI’s evolution. This isn’t about halting progress, but about ensuring that progress serves the public interest. If a technology is poised to fundamentally alter society, the argument goes, those most affected should have a say in how it’s deployed. That’s not just a matter of fairness; it’s the very definition of democracy.

AI’s Invisible Governance

The influence of artificial intelligence is already pervasive, operating largely behind the scenes. Algorithms determine what information we see online, influence job opportunities, impact loan applications, and, increasingly, even inform decisions in areas like criminal justice and military targeting. As Brookings Institution experts have detailed, the use of AI in defense raises particularly acute ethical concerns. Yet, for most people, these systems are opaque, their workings hidden from view, and their impact felt without any opportunity for input or recourse. Companies are locked in a fierce competition to deploy AI as quickly as possible, often prioritizing speed over safety and ethical considerations. The CEOs leading this charge – Sam Altman of OpenAI, Dario Amodei of Anthropic, Demis Hassabis of Google DeepMind, Elon Musk, and even Mark Zuckerberg – all face the same pressure: fall behind, and risk being left behind.

However, public opinion suggests a growing awareness of these risks and a desire for greater control. Polling conducted by Blue Rose Research reveals that 66% of Americans support the creation of citizen panels to help establish AI regulations. This support transcends political divides, holding steady across voters who supported Donald Trump, Joe Biden, and those who identify as swing voters. Meanwhile, 79% of Americans express concern that the government lacks a comprehensive plan to address potential job losses resulting from AI-driven automation. These numbers demonstrate that public apathy isn’t the issue; rather, people feel excluded from the conversation.

The Power of Citizens’ Assemblies

What does “public control” actually look like? It’s not simply about elections, which can be easily influenced by money and lobbying. Rosenstein and others advocate for citizens’ assemblies: carefully selected groups of everyday people, representative of the broader population, who are given access to expert briefings, facilitated deliberation, and the authority to set binding goals and constraints for AI development.

The role of citizens isn’t to write code, but to define the purpose of that code. Technical experts would remain responsible for implementation, but they would be accountable to the public’s priorities. This model isn’t new. Ireland successfully utilized citizens’ assemblies to break political stalemates on deeply divisive issues like marriage equality and abortion, as detailed by the Constitution Unit at University College London. Similar assemblies are currently shaping AI policy in Taiwan, the UK, and Belgium, offering recommendations on topics ranging from facial recognition to disinformation and the future of work. Unlike elected officials, citizens participating in these assemblies have no donors to appease and no need to worry about reelection, allowing them to focus solely on the public good.

The benefits of public governance are clear. Left unchecked, AI will inevitably optimize for engagement, profit, and efficiency, potentially at the expense of human well-being. Democratic governance, however, provides a lever to prioritize learning, patient health, and worker empowerment.

Building the Infrastructure for Change

The infrastructure for this kind of public participation is already being developed. Organizations like One Project, founded by Rosenstein, are building participatory platforms designed to facilitate democratic governance at scale. These platforms aim to make it easier for citizens to engage in informed deliberation and contribute to policy-making.

The concept of public ownership isn’t radical. We already treat essential resources like airwaves, waterways, and beaches as public trusts, recognizing that they belong to everyone. This isn’t about nationalization, but about ensuring that resources vital to the common good are managed in the public interest. AI, with its potential to generate trillions of dollars in new wealth, arguably falls into this category. The future where everyone benefits, however, requires that the public – not just shareholders – control its development, directing resources toward critical areas like childcare, elder care, retraining programs for workers displaced by AI, and innovative educational models.

A Closing Window of Opportunity

In Washington, the prevailing sentiment is often one of urgency and pragmatism. Pundits argue that the public is too divided, the issues too complex, and the competition with China too intense to allow for democratic oversight. But this argument misses the point. Democratic oversight isn’t a hindrance to progress; it’s the only way to prevent a dangerous AI arms race and ensure that AI serves humanity.

The demand for change is already present, the infrastructure is being built, and the public is ready to engage. The critical question now is whether we will demand democratic governance before AI follows the path of social media, prioritizing profit over people. If AI is going to reshape all our lives, we, the people, should decide how. That’s not radical; it’s self-governance, and it’s more crucial now than ever before.


The next key development to watch is the ongoing debate within Congress regarding AI regulation, with several committees expected to hold hearings in the coming months. Stay informed and make your voice heard.
