
Osmary Lisbeth Navarro Tovar


Proof-of-Work as a Hidden Subsidy

How Cryptocurrency Mining (2013-2022) Facilitated the Physical Infrastructure of Large Language Models


Author's Note: This article proposes an interpretive thesis on the material foundations of contemporary Artificial Intelligence. It starts from the premise that physical infrastructure (hardware, energy, data centers) is a determining, though often overlooked, factor in technological progress. The narrative that follows draws connections between two seemingly distant industries, arguing that private investment in cryptocurrency mining functioned as an indirect funding mechanism for the critical infrastructure of AI. The claims are based on public market data, documented case studies, and an economic framework of capital depreciation and reuse.

Executive Summary

Large Language Models (LLMs) like GPT-4 or Llama 3 are often explained as a consequence of algorithmic advances, but their industrial-scale deployment depends on a hardware infrastructure that was not originally built for AI. This infrastructure emerged, to a large extent, from Proof-of-Work (PoW) cryptocurrency mining. Between 2013 and 2022, competition for block rewards generated the first globally massive, privately funded demand for parallel computing and energy, creating a distributed network of high-performance capacity [1].

This work argues that mining, particularly Ethereum mining on GPUs, acted as a hidden subsidy for the AI ecosystem by:

1. Expanding and maintaining the supply chain of general-purpose, high-bandwidth GPUs, creating a surplus of hardware on the secondary market [2].
2. Accelerating the maturation of software ecosystems and operational practices for managing massive fleets of GPUs, know-how that transferred directly to AI [3].
3. Building and amortizing energy-dense data centers, whose physical infrastructure (electrical connections, cooling, space) is being repurposed to train LLMs [4][5].

An economic framework is proposed where the depreciation of capital invested by miners and the resulting excess capacity reduced the marginal cost for new AI players to access critical resources. Under conservative assumptions, the magnitude of this value transfer is estimated to be in the range of several billion dollars, a subsidy that lowered the effective CAPEX and accelerated the industrial viability of models with hundreds of billions of parameters [2].

1. Introduction: The Hidden Debt of AI

State-of-the-art LLMs are presented as achievements of deep learning theory, but they are also material artifacts anchored in a prior history of investment in computational infrastructure. The central hypothesis is that Proof-of-Work cryptocurrency mining, and particularly Ethereum mining run on GPUs, functioned as the first global market for massively parallel computing driven solely by private economic incentives. This industry bore the capital and energy costs of creating an installed base that was subsequently repurposed for AI [3][2].

Central thesis (restated): LLMs are not just algorithms; their economic viability at an industrial scale relied on a decade of private investment in GPU farms for PoW. Ethereum, by maintaining sustained demand for general-purpose GPUs, helped expand and lower the cost of an installed base of hardware and data centers. Once amortized by mining, these assets could pivot to AI at a significantly lower marginal cost. The magnitude of this effect is modeled here as a "hidden subsidy," initially absorbed by miners and later captured by AI labs and infrastructure providers [5][3][2].

2. Proof-of-Work as the First Global HPC Market

2.1 SHA-256 and Ethash: Two Hardware Trajectories

Bitcoin (SHA-256) introduced the concept of an incentivized global computing network, but its algorithm quickly migrated to Application-Specific Integrated Circuits (ASICs), displacing GPUs from the core of its network. In contrast, Ethereum, with its Ethash algorithm, was designed with moderate resistance to ASICs, depending more heavily on the memory and bandwidth of general-purpose GPUs [6][2].

Market reports confirm that the crypto mining rush caused a significant disruption in the supply chain. In 2017 alone, miners are estimated to have purchased approximately 3 million GPUs worth $776 million, creating shortages and inflating prices for other buyers, including researchers [2][7].

Table 1. PoW Algorithms and Their Qualitative Impact on Hardware (2013-2022)

| Algorithm | Cryptocurrency | ASIC Resistance | Typical GPU Role | Plausible Impact on AI |
|---|---|---|---|---|
| SHA-256 | Bitcoin | Low (rapid migration to ASICs) | Relevant only in the early phase | Initial demand for parallel hardware, but largely non-reusable later [6]. |
| Ethash | Ethereum | Moderate (emphasis on memory) | High-bandwidth, high-VRAM GPUs | Expanded the supply of GPUs suited to matrix operations (GEMM) and model training [2]. |
| RandomX | Monero | CPU-oriented | Marginal | Limited impact; did not drive GPU farms at scale [6]. |

The robust point is not an exact market percentage, but the qualitative effect: mining introduced an additional demand peak that expanded production and, crucially, created a massive stock of hardware that would later flood the secondary market [7][2].

2.2 Global Energy Ledger (2013-2022)

The scale of this distributed computing experiment is measured in energy consumption. The Cambridge Bitcoin Electricity Consumption Index (CBECI) estimates that the cumulative consumption of the Bitcoin network is measured in hundreds of terawatt-hours (TWh), comparable to the annual electricity consumption of medium-sized countries [1][6]. Ethereum, before its transition to Proof-of-Stake in 2022, added tens of additional TWh to this footprint [6].
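
To make that kind of estimate concrete, here is a minimal bottom-up sketch in the spirit of CBECI's methodology: average network hashrate divided by an assumed fleet-average hardware efficiency gives power draw, which accumulates into annual energy. Both input figures below are illustrative assumptions, not CBECI's actual parameters.

```python
# Minimal bottom-up energy estimate in the spirit of CBECI.
# Both inputs are illustrative assumptions, not CBECI's actual parameters.

HOURS_PER_YEAR = 8760

def annual_energy_twh(hashrate_eh_s: float, efficiency_j_th: float) -> float:
    """Annual network energy (TWh) from hashrate and fleet efficiency.

    hashrate_eh_s   -- average network hashrate, exahashes per second
    efficiency_j_th -- assumed fleet-average efficiency, joules per terahash
    """
    hashrate_th_s = hashrate_eh_s * 1e6            # EH/s -> TH/s
    power_watts = hashrate_th_s * efficiency_j_th  # (J/TH) * (TH/s) = W
    return power_watts * HOURS_PER_YEAR / 1e12     # Wh -> TWh

# Illustrative: ~150 EH/s average at ~50 J/TH fleet efficiency
print(f"~{annual_energy_twh(150, 50):.0f} TWh/year")  # ~66 TWh/year
```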

Beyond the exact figure, the relevant fact is material: PoW mobilized massive private investments in energy and cooling infrastructure to sustain intensive and constant computing loads. This infrastructure, once built, represents a physical substrate ready for reuse [4].

3. GPU Mining as AI's Prototype Factory

3.1 The Ethereum Factor: GPUs for GEMM

The Ethash algorithm specifically favored GPUs with large amounts of video memory (VRAM) and high bandwidth, specifications that translate directly to General Matrix Multiplication (GEMM) operations, fundamental for neural network training. The wave of mining purchases not only increased production but also steered hardware design towards features useful for AI [2][7].
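
To see why those specifications transfer, consider a roofline-style back-of-the-envelope calculation, sketched below with illustrative matrix sizes. Large square GEMMs are compute-bound, but the skinny shapes common in real training and inference push arithmetic intensity down into the bandwidth-bound regime, exactly where the high-bandwidth memory that Ethash rewarded pays off.

```python
# Roofline-style arithmetic intensity of C = A @ B (illustrative sizes).

def gemm_stats(m: int, n: int, k: int, dtype_bytes: int = 4):
    """Return FLOPs, memory traffic in bytes, and FLOPs per byte."""
    flops = 2 * m * n * k                                # multiply-adds
    bytes_moved = dtype_bytes * (m * k + k * n + m * n)  # read A, B; write C
    return flops, bytes_moved, flops / bytes_moved

# Large square GEMM: high intensity, typically compute-bound.
print(f"square: {gemm_stats(4096, 4096, 4096)[2]:.0f} FLOPs/byte")

# Skinny GEMM (small batch): low intensity, bandwidth-bound, which is
# where high VRAM bandwidth makes the difference.
print(f"skinny: {gemm_stats(8, 4096, 4096)[2]:.1f} FLOPs/byte")
```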

While it is difficult to trace the origin of every GPU in a modern AI cluster, the mechanism is well documented: cyclical collapses in mining profitability (as in 2018) released large volumes of used GPUs onto the secondary market at prices far below their original cost. This second-hand supply drastically reduced the capital barrier to entry for startups and academic labs [2].

3.2 Industrial Farms: From Mining to AI

The repurposing of mining infrastructure is no longer a hypothesis, but a market trend. As documented by Wired, the largest Bitcoin miners in the United States are transforming their "farms" into AI factories, repurposing buildings, high-capacity electrical connections, and cooling systems [4].

The paradigmatic case is CoreWeave. Founded in 2017 as an Ethereum mining operation (Atlantic Crypto), the company pivoted in 2019 to providing cloud infrastructure for AI [3][8]. Today, valued in the billions of dollars, it is a key provider of GPU capacity for players like OpenAI and Microsoft, demonstrating how the operational know-how and amortized assets from mining can underpin a leading AI business [5][9]. Its trajectory encapsulates the hidden-subsidy thesis: capital (physical and human) depreciated in one industry financed the takeoff of another.

4. The Software Subsidy: CUDA, NCCL, and the DL Ecosystem

Large-scale GPU mining functioned as a real-world stress test for parallel computing software stacks. The need to operate millions of GPUs stably, with efficient thermal management and effective communication between devices, drove optimizations in drivers, kernels, and libraries like NVIDIA's CUDA and NCCL [3][2].

This debugging and optimization process, funded by miners in their pursuit of efficiency, generated a more mature and robust software ecosystem. When generative AI demanded scaling to thousands of GPUs, it found a partially prepared software ground, reducing the time and risk of R&D for deep learning players [3].
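
As a concrete example of the collective communication this ecosystem matured, the sketch below averages a tensor across GPUs with an NCCL-backed all-reduce, the operation at the heart of data-parallel training. It is a minimal illustration only, assuming PyTorch on a single node with multiple CUDA GPUs and a torchrun launch; it is not drawn from any miner's or lab's actual codebase.

```python
# Minimal NCCL all-reduce sketch (data-parallel gradient averaging).
# Assumes a single node with multiple CUDA GPUs; launch with:
#   torchrun --nproc_per_node=<num_gpus> this_script.py
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")  # NCCL handles GPU-to-GPU comms
    rank = dist.get_rank()
    torch.cuda.set_device(rank)              # one GPU per process (single node)

    # Each rank holds a local "gradient"; all-reduce sums it across devices.
    grad = torch.full((1024,), float(rank), device="cuda")
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= dist.get_world_size()            # sum -> mean

    if rank == 0:
        print("averaged gradient[0]:", grad[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```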

5. Big Data and Storage: AI on PoW Infrastructure

Training LLMs requires vast volumes of data. Projects like Common Crawl have grown from terabytes to petabytes over the last decade, requiring distributed processing and storage pipelines [1]. While mining farms were not designed for this purpose, the technical culture of blockchain, with its emphasis on decentralized synchronization, verification, and data replication, contributed to a broader ecosystem of distributed storage tools and protocols that underlie some AI data infrastructures [6].

6. The Reuse Event: LLM Training on ex-PoW Capacity

The boom-and-bust cycle of mining created windows of opportunity. Sudden drops in profitability released hardware onto the secondary market at steep discounts. Industry reports document how these sharp declines in used GPU prices allowed new AI players to assemble training clusters at a fraction of the cost of new hardware [7][2].

In parallel, an infrastructural conversion took place. Providers like CoreWeave not only bought GPUs but leased or acquired entire mining data centers, transforming "hash farms" into "FLOPS farms" for AI. This allowed tech giants to access accelerated computing capacity without the long design and construction cycle of a data center from scratch, outsourcing and thus mitigating a large part of the initial CAPEX [4][5][3].

7. Quantifying the "Hidden Subsidy" (2013-2025)

To estimate the magnitude of this value transfer, a three-step methodological framework is proposed:

1. Estimate cumulative expenditure on PoW infrastructure: Aggregate the CAPEX on GPUs, power systems, and data center infrastructure (electrical connections, cooling, real estate) attributable to mining, using financial reports from public miners and market studies [4][2].

2. Estimate the fraction repurposed: Determine what percentage of that hardware and infrastructure was reused for AI, either through resale on the secondary market or through the direct conversion of facilities [5][9][4].

3. Model the transferred depreciation: Calculate the difference between the book value that miners had depreciated and the residual value at which AI players accessed it. This difference represents the implicit economic subsidy.

Applying conservative assumptions about hardware volumes, reuse rates, and depreciation, this model yields a plausible range of several billion dollars in CAPEX and OPEX that were assumed by mining but facilitated the expansion of AI. This range should be understood as the result of an original estimation model, built on public data, and not as a figure directly observed in financial reports [3][2].
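
A minimal sketch of that three-step model is shown below. Every input is an illustrative assumption chosen to expose the mechanics, not an observed figure; varying them across defensible ranges is how the "several billion dollars" estimate above is produced.

```python
# Minimal sketch of the three-step "hidden subsidy" model.
# All parameters are illustrative assumptions, not observed figures.

def hidden_subsidy_usd(
    mining_capex: float,          # step 1: cumulative PoW CAPEX from mining
    reuse_fraction: float,        # step 2: share of that capital repurposed for AI
    depreciated_fraction: float,  # step 3: fraction of book value miners wrote off
    residual_discount: float,     # discount AI buyers obtained vs. replacement cost
) -> float:
    """Implicit value transfer: depreciated capital accessed cheaply by AI."""
    repurposed = mining_capex * reuse_fraction
    return repurposed * depreciated_fraction * residual_discount

# Illustrative run: $15B cumulative CAPEX, 30% reused, 70% depreciated,
# acquired at 60% below replacement cost.
estimate = hidden_subsidy_usd(15e9, 0.30, 0.70, 0.60)
print(f"~${estimate / 1e9:.1f}B implicit subsidy")  # ~$1.9B under these inputs
```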

8. Unintended Consequences

8.1 Environmental Ledger

PoW's carbon footprint is substantial, with emissions estimated in tens of millions of tons of CO₂ [1][6]. The conversion of this infrastructure to AI does not erase that historical environmental debt. However, it raises a complex ethical and economic discussion: can the future social benefits of AI (medical, scientific advances) partially amortize that already incurred environmental cost? This article does not justify PoW's energy consumption, but points out that, having occurred, the repurposing of its assets can maximize the social return on that energy investment [4].

8.2 The Geopolitics of AI-FLOPS

Strategic competition has shifted from hashrate to AI-FLOPS. Export restrictions on advanced GPUs, such as those imposed by the United States, and subsidies for "green" data centers define the new geopolitical chessboard. Countries with experience in large-scale mining possess an infrastructural and knowledge asset that can be pivoted towards AI sovereignty, provided it is accompanied by talent-development policies and regulatory stability [4][10].

This represents an opportunity for economies with energy resources and mining expertise: they can stop being mere technology consumers and become providers of strategic computing capacity, converting their infrastructure to produce the new commodity of the 21st century, AI computing power [10].

9. Discussion: Infrastructure as a Limiting Factor

The scaling laws of AI models indicate that, beyond a certain point, performance improvements critically depend on access to massive fleets of compute and vast volumes of data [3]. In this context, the infrastructure generated by PoW acted as an accidental and distributed "pre-funding" of the industrial phase of AI, reducing the costs and timelines needed to reach the necessary scale.
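
For intuition on what those scaling laws imply, the sketch below applies commonly cited Chinchilla-style rules of thumb, training compute C ≈ 6·N·D and roughly 20 tokens per parameter, to split a fixed compute budget between model size and data. Neither constant comes from this article's sources; they are stated here purely for illustration.

```python
# Chinchilla-style compute-optimal allocation (rules of thumb, illustrative).
# C ~ 6 * N * D (training FLOPs) and D ~ 20 * N (tokens per parameter).

def compute_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Split a training-compute budget into parameters N and tokens D."""
    # C = 6 * N * (tokens_per_param * N)  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = (compute_flops / (6 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

n, d = compute_optimal(1e24)  # ~1e24 FLOPs of training compute
print(f"~{n / 1e9:.0f}B parameters, ~{d / 1e12:.1f}T tokens")
```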

The prospective question is whether this pattern—where an industry with speculative economic incentives finances reusable, general-purpose infrastructure—is replicable. Could the metaverse, quantum simulation, or scientific computing finance the next generation of infrastructure? Understanding these cycles is key to designing R&D policies that capture social benefits more directly and with less volatility [4][10].

10. Conclusion: The Material Legacy of AI

This article proposes a material reading of the AI revolution. Behind every LLM there are not only brilliant algorithms, but also racks of GPUs, electrical transformers, and cooling systems that once mined cryptocurrencies. Proof-of-Work mining, in its pursuit of block rewards, functioned as an "unintentional donor" of physical capital, building an infrastructural foundation upon which AI could scale faster and with less direct investment than it would have required in its absence [4][3].

Recognizing this lineage is not to celebrate PoW nor to absolve its environmental costs, but to understand that technological history is also a history of reusing and reinventing inherited infrastructures. The future of AI, therefore, will be linked not only to ideas, but to the capacity to repurpose the material substrates of the past to build those of the future.


References

[1] Cambridge Centre for Alternative Finance, Cambridge Bitcoin Electricity Consumption Index (CBECI), 2023. [Online]. Available: https://ccaf.io/cbeci
[2] J. P. Pereira, "Cryptocurrency miners bought 3 million GPUs in 2017," ZDNet, 2018. [Online]. Available: https://www.zdnet.com/article/cryptocurrency-miners-bought-3-million-gpus-in-2017/
[3] Introl, "CoreWeave: The AI Infrastructure Revolution - How a Crypto Mining Startup Became the $23 Billion Backbone of Artificial Intelligence," 2025. [Online]. Available: https://introl.com/blog/coreweave-openai-microsoft-gpu-provider
[4] "America's Biggest Bitcoin Miners Are Pivoting to AI," Wired, 2025. [Online]. Available: https://www.wired.com/story/bitcoin-miners-pivot-ai-data-centers/
[5] Fortune Crypto, "CoreWeave’s $9 billion acquisition of Core Scientific provides an AI road map for struggling Bitcoin miners," Fortune, 2025. [Online]. Available: https://fortune.com/crypto/2025/07/09/coreweaves-9-billion-acquisition-of-core-scientific-gives-an-ai-roadmap-for-struggling-bitcoin-miners/
[6] Cambridge Centre for Alternative Finance, Cambridge Blockchain Network Sustainability Index, 2023. [Online]. Available: https://ccaf.io/cbnsi/cbeci
[7] J. Anderson, "Cryptocurrency miners bought 3 million graphics cards worth $776 million in 2017," PC Gamer, 2018. [Online]. Available: https://www.pcgamer.com/cryptocurrency-miners-bought-3-million-graphics-cards-worth-776-million-in-2017/
[8] Wikipedia contributors, "CoreWeave," Wikipedia, The Free Encyclopedia, 2025. [Online]. Available: https://en.wikipedia.org/wiki/CoreWeave
[9] The Economic Times, "CoreWeave to acquire crypto miner Core Scientific," 2025. [Online]. Available: https://economictimes.com/tech/artificial-intelligence/coreweave-to-acquire-crypto-miner-core-scientific/articleshow/122299957.cms
[10] Insights4VC, "Bitcoin miners’ pivot to AI data centers," Substack, 2024. [Online]. Available: https://insights4vc.substack.com/p/bitcoin-miners-pivot-to-ai-data-centers

Research and analysis conducted by Osmary Lisbeth Navarro Tovar
