Deep Quarry

Depreciation of GPUs: between useful lives and useful myths

A data-driven look at how Big Tech revises depreciation estimates, what the changes signal, and why today’s disclosures leave investors piecing together an incomplete picture.

Olga Usvyatsky
Dec 07, 2025

Michael Burry, the prominent investor portrayed in “The Big Short,” has turned Nvidia into the centerpiece of his broader argument that the AI boom rests on aggressive accounting. In a series of X posts and Substack essays over the last few weeks, he claims that the large cloud providers that operate hundreds of data centers - often referred to as hyperscalers - flatter earnings by depreciating Nvidia-based data center hardware over five or six years, even though Nvidia’s fast chip cycle puts the real economic life closer to two or three years. By his estimate, that will produce about $176 billion of understated depreciation and overstated profits across the industry between 2026 and 2028.

Burry has also accused Nvidia of destroying “owner’s earnings” through massive stock-based compensation. He likened today’s setup not to Enron but to Cisco at the peak of the dot-com bubble, where exuberant capital spending and optimistic assumptions later unraveled and left many companies bankrupt and their stock options worthless.

Nvidia responded to Burry’s critique and the pile-on that came with it with a detailed memo to Wall Street analysts, disputing his claims and insisting that it does not resemble historical accounting frauds such as Enron or WorldCom, that its business is economically sound, that its financial reporting is transparent, and that its strategic investments and compensation practices do not undermine shareholder value.

Does the company protest too much?

I’ve written about the depreciable life of GPUs twice now - first in July 2024, when I argued that extending server lives to six years might be unsustainable in the long run as data centers shift from CPUs to GPUs, and again in February 2025, when Amazon shortened the useful life of a subset of its servers and networking equipment to five years while Meta extended the useful life of most of its servers and network assets to five and a half years. The contrast was especially notable because Amazon explicitly cited the rapid pace of AI and machine-learning innovation as the reason for shortening useful lives - precisely the opposite direction taken by Meta.

On November 24, 2025, Bloomberg quoted me in an article discussing the implications of GPU depreciation. I commented on the impact of a change in the useful life of GPUs on the financial statements:

“Even a small change of several months in the depreciation policies can change earnings in a given quarter by billions,” says Olga Usvyatsky, who writes about accounting issues such as server depreciation in her newsletter, Deep Quarry.

We can clearly see the impact in Amazon’s disclosure - a reduction in the useful life of a subset of servers led to a reduction in net income of $298 million and $677 million for the three and nine months ended September 30, 2025, respectively, based on Amazon’s 10-Q filing (emphasis added):

Effective January 1, 2025 we changed our estimate of the useful lives of a subset of our servers and networking equipment from six years to five years. The shorter useful lives are due to the increased pace of technology development, particularly in the area of artificial intelligence and machine learning. The effect of this change in estimate for Q3 2025, based on servers and networking equipment that were included in “Property and equipment, net” as of June 30, 2025 and those acquired during the three months ended September 30, 2025, was an increase in depreciation and amortization expense of $392 million and a reduction in net income of $298 million, or $0.03 per basic share and $0.03 per diluted share, which primarily impacted our AWS segment. The effect of this change in estimate for the nine months ended September 30, 2025, based on servers and networking equipment that were included in “Property and equipment, net” as of December 31, 2024 and those acquired during the nine months ended September 30, 2025, was an increase in depreciation and amortization expense of $889 million and a reduction in net income of $677 million, or $0.06 per basic share and $0.06 per diluted share, which primarily impacted our AWS segment.
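The mechanics behind numbers like these are simple straight-line arithmetic: shortening the assumed life raises the expense recognized each year. A minimal sketch, using purely hypothetical figures (not Amazon’s actual fleet data):

```python
# Illustrative only: hypothetical fleet cost, not Amazon's actual figures.
# Under the straight-line method, annual expense = (cost - salvage) / useful life,
# so a shorter life means a larger expense in each remaining year.

def annual_straight_line(cost: float, useful_life_years: int, salvage: float = 0.0) -> float:
    """Annual depreciation expense under the straight-line method."""
    return (cost - salvage) / useful_life_years

cost = 12_000_000_000  # hypothetical server fleet cost, $12B

six_year = annual_straight_line(cost, 6)   # $2.0B per year
five_year = annual_straight_line(cost, 5)  # $2.4B per year

print(f"Extra annual expense from 6 -> 5 years: ${five_year - six_year:,.0f}")
```

On this hypothetical $12 billion fleet, trimming a single year off the assumed life adds $400 million of annual depreciation expense - which is why, as the Bloomberg quote above notes, even a change of several months can move quarterly earnings by billions at hyperscaler scale.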

(See Appendix A at the end of this piece for a history of the changes in the useful lives of servers and networking equipment, along with the dollar impact of those changes, for Amazon, Alphabet, Meta, Microsoft, Oracle, and CoreWeave, Inc.)

But it took public criticism of Nvidia by Michael Burry - someone who made the right call during the last financial crisis - and his broader claim that hyperscalers were overstating earnings by depreciating compute equipment over unrealistically long periods, to turn what had been a niche accounting concern into an industry-wide debate. Burry’s comments did what even Amazon’s reduction of server useful lives could not: they pushed useful life assumptions - and the billions of dollars of depreciation expense riding on them - into the center of investor and media attention.

The heightened visibility also, inevitably, introduced several accounting-related myths and oversimplifications into the discussion: misunderstandings of how depreciation estimates are determined under Generally Accepted Accounting Principles (GAAP) and tax rules, of how changes in useful life flow through the financial statements, and of what these adjustments imply about the underlying economics of AI infrastructure. Those points are worth clarifying, both to ground the conversation in the actual mechanics of GAAP and to separate legitimate analytical concerns from the oversimplifications that have flooded the media as the topic has gone mainstream.

Let’s start with what GAAP actually prescribes.

Under US GAAP, the “useful life” of property, plant, and equipment is an accounting estimate: it is management’s best judgment about the period over which the asset is expected to provide economic benefits to that specific entity.

From the PwC accounting guide:

4.1 Depreciation and amortization overview

ASC 360-10-35-4 defines depreciation accounting as “a system of accounting which aims to distribute the cost or other basic value of tangible capital assets, less salvage (if any), over the estimated useful life of the unit (which may be a group of assets) in a systematic and rational manner.” Depreciation accounting is “a process of allocation, not of valuation.” It is intended to allocate an asset’s cost as equitably as possible to the periods during which the reporting entity benefits from the use of the asset.

And also:

Although not defined, we believe the use of the term “useful economic life” in ASC 360-10-35-4 is intended to have the same meaning as “useful life,” as defined in the ASC Master Glossary. The useful life assessment of a long-lived asset is based on entity-specific assumptions about how the entity intends to use the asset, which may be different from market-participant assumptions. Accordingly, the useful life could be different than the economic life or actual physical life of the asset.

Common Myths About Depreciable Life and What GAAP Actually Says

Myth 1: Extending the useful life of GPUs is “fraud” that companies will be forced to correct as an error.

A more extreme version of the current debate is the idea that using five- or six-year useful lives for GPU-rich server fleets is inherently fraudulent and will inevitably lead to restatements. That is not how GAAP works. As discussed earlier, under US GAAP, depreciation must reflect a systematic and rational pattern of economic consumption (ASC 360), and changes in useful life are treated as changes in accounting estimate, applied prospectively - not as automatic corrections of error. A change in accounting estimate does require the company’s auditor to sign off. To recharacterize a useful life as an “error” would require showing that everyone - management and auditors alike - made a mistake: that the original estimate was not reasonable based on information available at the time, or that the company failed to follow the applicable accounting literature. It is not enough that, with hindsight or with new data, a different estimate turns out to be more accurate.
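Prospective application has a concrete mechanical meaning: prior periods are left untouched, and the remaining net book value is simply spread over the revised remaining life from the date of the change forward. A sketch with hypothetical numbers:

```python
# Sketch of prospective application of a change in accounting estimate:
# prior periods are not restated; the remaining net book value is
# depreciated over the revised remaining life going forward.
# All figures are hypothetical (straight-line, no salvage value).

def revised_annual_expense(cost: float, original_life: int,
                           years_elapsed: int, new_total_life: int) -> float:
    """Annual expense after a prospective useful-life change."""
    net_book_value = cost - (cost / original_life) * years_elapsed
    remaining_life = new_total_life - years_elapsed
    return net_book_value / remaining_life

# A $600M server pool, originally on a 6-year life ($100M/year),
# revised to a 5-year total life after 2 years of service:
print(revised_annual_expense(600_000_000, 6, 2, 5))
# Net book value of $400M spread over 3 remaining years -> ~$133.3M/year,
# versus $100M/year under the original estimate.
```

No restatement, no error correction - just a higher run-rate expense from the change date onward, which is exactly the pattern visible in Amazon’s 10-Q disclosure above.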

It is also important to separate the server-level changes companies actually made from the GPU-specific story now being told about them. When companies such as Amazon and Alphabet extended useful lives from four to five or six years in 2022–2024, those changes were typically framed and modeled at the level of servers and networking equipment as a class, in data centers that were in many cases heavily CPU-dominated. The policies were likely not a specific decision to “extend the life of GPUs”; they were broad server-fleet assumptions that were later applied to a growing mix of CPU and GPU configurations. Thus, in my view, the debate should be less about “you extended GPU lives” and more about “you did not revisit GPU-based server lives once the fleet mix shifted toward GPUs.”

Since changing those estimates - whether to shorten or lengthen them - requires evidence that will stand up to an external audit, the question for GPUs specifically is whether there is already sufficient, reliable data to justify a change. Should the depreciable life of GPU servers be, say, two or three years, or does the evidence support keeping a five- or six-year server life for a mixed CPU/GPU environment?

Reasonable people can disagree about where the estimate should land within that range - I personally expect companies using a six-year useful life for GPU servers to follow Amazon’s lead and shorten it. But GAAP anticipates exactly this kind of uncertainty: it expects estimates to evolve as data accumulates, not to be retroactively treated as errors or fraud simply because new information, new chip generations, or new economic perspectives emerge.

Myth 2: Useful life is a fixed, universal number set by the manufacturer or by industry norms.

Under US GAAP, useful life is explicitly an entity-specific estimate, not a universal number that applies equally across an industry. As PwC notes, the useful life of a long-lived asset reflects management’s assumptions about how the entity intends to use the asset, which may differ from how a market participant - or even a direct competitor - would use similar equipment.

For example, an AI company running GPUs at high utilization for continuous model training may experience faster wear due to thermal stress than a firm using GPUs primarily for inference, where workloads are less power-intensive. Likewise, a company that keeps GPUs within tighter, more stable temperature ranges - using liquid cooling or other sophisticated thermal management - is likely to experience fewer failures and potentially longer service lives than a data center that relies on basic air-cooled racks or operates in edge environments with less precise temperature control.

The “useful life” of GPU servers is therefore best understood as a range, shaped by each company’s workloads, data center design, and maintenance practices. This means that two companies operating GPU-based data centers - such as Meta and Amazon - can reasonably arrive at different useful life estimates for their servers.
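Because useful life is a range rather than a single correct number, it helps to see how sensitive annual expense is across the plausible estimates in the current debate. A quick sketch, again with a hypothetical fleet cost:

```python
# Sensitivity of annual straight-line expense to the assumed useful life,
# for a hypothetical $10B GPU server fleet (no salvage value).
cost = 10_000_000_000

for life in range(2, 7):  # plausible lives of 2 through 6 years
    print(f"{life} years -> ${cost / life:,.0f} per year")
```

The spread between a two-year and a six-year assumption is roughly a factor of three in annual expense - which is why an outlier assumption can make one company’s margins look very different from a peer’s, even with an identical fleet.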

What to watch for: outliers with useful lives that fall significantly outside the industry range.


© 2026 Nonlinear Analytics