“Big Short” Investor Michael Burry Bets Against the AI Boom, Citing Accounting Concerns
The investor who famously predicted the 2008 financial crisis, Michael Burry, is now taking a contrarian position on the booming artificial intelligence sector, arguing that major tech companies are employing questionable accounting practices that could signal an impending market correction. Burry’s concern centers on the depreciation schedules used by hyperscalers (companies like Google, Microsoft, Meta, and Amazon) for their expensive GPU chips, the engines powering the AI revolution.
The Depreciation Debate: A Key to Uncovering AI’s True Costs
The core of Burry’s argument lies in the lifespan assigned to these critical components. Hyperscalers are currently depreciating their GPU investments over periods exceeding three years, a practice Burry believes is too long. This extended depreciation lowers reported expenses and artificially boosts earnings, possibly masking the true capital-intensive nature of AI infrastructure. “If depreciation schedules don’t align with real-world replacement cycles, companies may be overstating their profitability,” one analyst noted.
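The arithmetic behind this argument is simple straight-line depreciation: the same capital outlay produces a much smaller annual expense when spread over a longer useful life. The sketch below uses a hypothetical $10B of GPU capex (an illustrative figure, not one from any company’s filings) to show how stretching the schedule from three to six years roughly halves the yearly expense hitting the income statement.

```python
def annual_straight_line_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line method: spread the asset's cost evenly over its useful life."""
    return cost / useful_life_years

# Hypothetical example: $10B of GPU capex, all else held constant.
capex = 10_000_000_000
for life in (3, 6):
    expense = annual_straight_line_depreciation(capex, life)
    print(f"{life}-year schedule: ${expense / 1e9:.2f}B annual depreciation expense")
```

On a 3-year schedule the annual expense is about $3.33B; on a 6-year schedule it is about $1.67B. The difference flows straight into reported pretax income each year, which is exactly the effect Burry argues may be flattering earnings.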
How Hyperscalers Are Stretching the Rules
Traditionally, companies depreciated general-purpose servers over roughly three years. However, many major hyperscalers now publicly estimate a useful life of five to six years for their AI server equipment, including GPUs. Microsoft and Oracle have been specifically cited as utilizing depreciation schedules extending up to six years for their new AI chips and servers. Even CoreWeave, a cloud GPU rental company, lengthened its depreciation period to six years in 2023, up from four. While Amazon (AWS) tends toward shorter, four-year schedules, Meta has adopted exceptionally long timelines of 11-12 years. Microsoft, Google, and Oracle generally fall within the four- to five-year range.
Rapid Obsolescence: The Counterargument
Critics contend that the actual economic lifespan of GPUs is far shorter, potentially just one to three years. This is driven by Nvidia’s aggressive one-year product cycle, which consistently delivers new chip generations, such as Blackwell and Rubin, offering meaningful performance and efficiency gains. This rapid innovation renders older chips economically obsolete for demanding AI training workloads much faster than a five- or six-year schedule suggests. Furthermore, high utilization rates, often between 60% and 70% in intensive AI applications, contribute to faster physical degradation of the hardware.
The “Value Cascade” Defense
Hyperscalers defend their longer depreciation schedules by arguing for a “value cascade” model. They maintain that older GPUs, once replaced in top-tier training jobs, are repurposed for less computationally intensive tasks like inference (running the model) or other applications, continuing to generate economic value for years. They also point to ongoing improvements in software and data center operations that extend the hardware’s lifespan and efficiency.
Is an AI Bubble Brewing?
The discrepancy between reported depreciation and actual replacement cycles raises concerns about a potential AI bubble. If companies are underestimating the true cost of AI infrastructure, they may be overstating their profitability, creating a distorted view of the market’s health. This could lead to inflated valuations and a subsequent correction.
Time.news Analysis: Why the Hyperscalers Likely Have It Right
Despite Burry’s concerns, our analysis sides with the hyperscalers. Data centers were already well-established before the surge in AI interest following the introduction of ChatGPT in late 2022. As many as 4,000 data centers operated in the US as of 2021, fueled by the growing demand for cloud computing, and many continue to function with their original chipsets. Crucially, the revenues and earnings of these hyperscalers continue to demonstrate rapid growth, suggesting that their current accounting practices are not masking fundamental weaknesses.
