For years, the conversation around artificial intelligence in the American workplace has been dominated by two extremes: the utopian promise of unprecedented productivity and the dystopian fear of mass unemployment. But for the people actually punching the clock, the reality is less about a robotic takeover and more about who holds the remote control.
A new poll released by the AFL-CIO, the largest federation of labor unions in the United States, reveals a striking consensus among the workforce. American workers are not necessarily calling for a ban on AI, but they are overwhelmingly demanding that it be governed by human oversight and transparency—and they believe labor unions are the only entities capable of enforcing those boundaries.
The survey, conducted by David Binder Research between April 14 and April 22 among 1,588 respondents, paints a picture of a workforce feeling exposed. While companies race to integrate large language models and algorithmic management to cut costs, workers are pushing back against the “black box” approach to employment, where decisions about their livelihoods are made by code they cannot see and managers who cannot explain the logic behind the output.
As a former software engineer, I’ve seen how “efficiency” is often used as a shorthand for removing human friction. In a corporate setting, that “friction” is often a person’s judgment, intuition, or wellbeing. The data suggests that workers are now acutely aware of this trade-off and are turning toward collective bargaining to ensure that AI serves as a tool for the employee, rather than a weapon for the employer.
The Demand for a ‘Human-in-the-Loop’
The most resounding finding in the AFL-CIO poll is the insistence on human agency. An overwhelming 95 percent of surveyed workers support a requirement that a human being must be the final decision-maker on any issue affecting an individual’s employment. This reflects a growing anxiety over “algorithmic management”—the practice of using AI to track productivity, assign tasks, and, in some cases, trigger terminations without human intervention.
This desire for a “human-in-the-loop” is not just about job security; it is about accountability. When an AI makes a mistake—whether it is a hallucinated fact in a report or a flawed performance metric—there is no moral or professional accountability inherent in the software. Workers are signaling that they refuse to be managed by a system that cannot be reasoned with or held responsible for its errors.
Beyond final decision-making, the poll highlights a broad appetite for systemic guardrails:
- 92 percent of workers support guardrails against harmful uses of AI, along with transparency and accountability from employers.
- 78 percent believe it is extremely or very important that immediate action be taken to protect workers from potential AI harms.
- 75 percent support expanding opportunities to form unions specifically to protect their jobs from AI displacement.
The Transparency Gap and Workplace Surveillance
One of the most concerning aspects of the report is the disconnect between how companies use AI for surveillance and how much they tell their employees about it. The poll found a staggering transparency gap: only 7 percent of workers said their employers have disclosed how and when AI is used to monitor their work. Meanwhile, 70 percent stated their employers have remained silent on the matter, and 23 percent were unsure.

This lack of disclosure is a point of significant contention, as 94 percent of workers believe they should be informed if AI is monitoring their performance. This isn’t just about privacy; it’s about the psychological impact of “invisible” surveillance, which can lead to burnout and a culture of distrust.
The trust deficit extends beyond the immediate employer to the broader political and corporate landscape. When asked who they trust most to protect them from the harms of AI, workers pointed away from the halls of power and toward the picket line:
| Entity Trusted to Protect Workers | Percentage of Workers |
|---|---|
| Labor Unions | 38% |
| Democrats | 17% |
| Republicans | 10% |
| Employers | 6% |
| None of the above | 18% |
From Polling to Policy: AI in Collective Bargaining
These sentiments are already translating into concrete contractual wins. Rather than waiting for federal legislation, which has been slow to materialize, workers are using collective bargaining to write AI protections directly into their employment contracts.
Anna Iovine, former unit chair of the Ziff Davis Creators Guild, noted that her guild secured protections in 2024 that specifically address the fears highlighted in the poll. Their contract includes mandates for editorial integrity, transparency about when AI is used, and a guarantee that AI implementation will not lead to layoffs or reduced pay. For Iovine, this is a necessary defense against companies that might use AI to “cut corners,” even when the resulting work is inferior.

The stakes are even higher in the healthcare sector, where AI errors can be fatal. Hannah Drummond, a registered nurse in North Carolina and member of National Nurses United, fought for language in her 2024 contract ensuring that no new technology affecting patient care can be implemented without union approval.
“We should not be experimenting on our patients,” Drummond said, pointing to the dangers of using flawed statistical models to predict patient deterioration. By requiring union sign-off, nurses are attempting to ensure that AI supports clinical judgment rather than replacing it or “de-skilling” the profession.
A Mandate for the Labor Movement
For Liz Shuler, president of the AFL-CIO, the poll results validate the federation’s “Workers First Initiative on AI.” Shuler argues that the data represents a clear mandate, suggesting that workers view the labor movement as the only viable shield against the unchecked deployment of Big Tech tools in the workplace.
The broader implication is a shift in the labor movement’s strategy. AI is no longer just a futuristic threat; it is a primary point of contention in current contract negotiations across diverse industries, from journalism and nursing to automotive manufacturing and logistics.
As the US continues to grapple with the rapid integration of generative AI, the next critical checkpoint will be the ongoing federal discussions regarding AI safety and labor standards, as well as the upcoming wave of contract renewals for major unions in late 2024 and early 2025, where AI language is expected to be a central pillar of negotiations.
