In the high-pressure environment of Big Tech, the distance between a productivity metric and actual productivity can be vast. At Amazon, that gap is currently being filled by an internal AI tool designed to save time, which some employees say they are now using to waste it.
According to a report from the Financial Times, workers are using an in-house AI agent tool called “MeshClaw” to automate tasks that don’t actually need automating. The goal isn’t efficiency, but visibility. Employees claim they are creating “busy work” for the AI to inflate their token consumption—the primary metric used to track how often a developer interacts with generative AI.
As a former software engineer, I’ve seen this pattern before. When a company shifts from measuring outcomes to measuring activity, engineers often find the most efficient way to satisfy the metric, even if it means doing less meaningful work. At Amazon, this has manifested as a quiet arms race to appear “AI-forward” in the eyes of management.
The friction stems from a push to integrate generative AI into the developer workflow at scale. Amazon reportedly instituted targets for more than 80% of its developers to use AI tools on a weekly basis. To track this, the company implemented systems to monitor token usage—the units of text that LLMs process. While Amazon describes these as “dashboards” for cost and efficiency tracking, some employees have characterized them as “leaderboards,” creating a competitive atmosphere where high usage is equated with high performance.
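To make the tracked unit concrete: production systems count tokens with a model-specific tokenizer, but a crude rule of thumb (roughly four characters of English text per token—an approximation used here for illustration, not Amazon’s actual accounting) shows how even trivial automated prompts accumulate. The prompt text and run count below are hypothetical:

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token is a common
    rule of thumb for English text run through LLM tokenizers."""
    return max(1, len(text) // 4)

# A trivial prompt consumes tokens every time an agent fires it.
prompt = "Summarize this one-line commit message."
runs_per_day = 500  # hypothetical agent triggering busywork
daily_tokens = estimate_tokens(prompt) * runs_per_day
print(daily_tokens)
```

Scripted across a workday, even a throwaway prompt like this registers thousands of tokens on a usage dashboard—which is precisely why the metric is easy to game.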
The Metric Trap: Tokens vs. Productivity
The core of the issue lies in the “perverse incentive” created when a tool for efficiency becomes a KPI (Key Performance Indicator). When developers are told that AI adoption is a priority, but are not given clear guidelines on how to use it meaningfully, they often default to the most measurable behavior.
MeshClaw was designed to allow workers to create AI agents that can complete tasks on a user’s behalf, theoretically freeing up humans for more strategic problem-solving. However, sources told the Financial Times that some staff are using these agents to trigger unnecessary AI activity. By automating trivial or redundant tasks, workers can keep their token counts high, ensuring they stay above the 80% usage threshold.
The psychological pressure is compounded by a lack of clarity regarding performance reviews. While Amazon has officially stated that AI token statistics are not used in employee evaluations, the culture of the “leaderboard” suggests otherwise to those on the ground. “Managers are looking at it,” one employee noted, suggesting that even if it isn’t a formal metric, it is a perceived one.
Amazon’s Stance on AI Adoption
Amazon has pushed back against the notion that it is forcing a mandate on its developers. In response to inquiries, the company emphasized that MeshClaw was developed by a small team to help employees automate repetitive tasks and solve larger customer problems more effectively.
The company maintains that its tracking is purely operational. According to Amazon, token usage is monitored to understand the cost of the infrastructure and the general efficiency of the tools, not to act as a benchmark for individual developer performance. They further noted that they welcome employee feedback to improve the quality of these internal tools.
To clarify the discrepancy between corporate policy and employee perception, the following table outlines the two competing narratives regarding AI tracking at the company:
| Feature | Amazon Corporate Position | Employee Reports |
|---|---|---|
| Tracking Purpose | Cost and efficiency analysis | Performance monitoring (“Leaderboards”) |
| AI Mandates | No central mandate for tool use | Pressure to meet 80% weekly usage |
| Performance Link | Not used in performance reviews | Perceived as a metric for “AI-readiness” |
The Broader Crisis of ‘AI Anxiety’
The situation at Amazon is a microcosm of a larger trend across the corporate landscape. Many organizations are rushing to deploy generative AI without providing the necessary managerial context, leaving employees to guess how their roles are evolving.
This ambiguity often fuels “AI anxiety”—the fear that if a worker doesn’t appear to be mastering the new technology, they will be replaced by it. Drew Edwards, CEO of Ingo Payments, recently highlighted this tension, noting that workers often hear fragmented commentary about job losses and assume the worst when managers fail to provide a roadmap.
When employees are told that their jobs will be “impacted” by AI but are not told how to successfully navigate that impact, they may resort to “AI theater.” This is the act of performing the appearance of using AI to signal alignment with corporate goals, regardless of whether the tool actually adds value to the product.
What This Means for the Future of Work
The “MeshClaw” incident highlights a critical lesson for the C-suite: you get exactly what you measure. If a company measures “AI usage” rather than “AI-enabled outcomes,” it will inevitably see an increase in usage, even if that usage is meaningless.

For developers, the pressure to automate the unessential is a survival mechanism. For the company, the risk is a degradation of code quality and a waste of expensive compute resources. The challenge moving forward will be shifting the conversation from how much AI is being used to how effectively it is being applied to solve actual customer pain points.
Amazon continues to iterate on its internal AI suite, and the company’s ability to refine these tools based on employee feedback will be a key indicator of whether they can move past the “leaderboard” culture. Further updates on Amazon’s internal AI policies are expected as the company continues its broader integration of generative AI across its AWS and retail ecosystems.
Do you feel pressure to use AI in your current role even when it doesn’t add value? Share your experiences in the comments below.
