Shrijith Venkatramana

The Economics of Training Frontier Models

I'm Shrijith Venkatramana, the founder of Hexmos. Presently, I am building LiveAPI, a super-convenient engineering productivity tool. LiveAPI processes your code repositories at scale and automatically produces beautiful API docs in minutes.

As I build LiveAPI, I am also making an effort to learn about various economic matters and share what I learn here with you.


ChatGPT was released to the world on 30 November 2022.

I am reading the following report, published on 31 May 2024, almost 1.5 years after the original "ChatGPT moment".

The rising costs of training frontier AI models

As they say, money makes the world go round, so let's look at some data and insights on the cost of developing serious (or frontier) models.

Components of Model Training Cost

  1. Hardware
  2. Energy
  3. Cloud rental
  4. Staff Expenses

Rough Estimates for GPT-4 and Gemini

  1. AI Accelerator Chips (29.5-37%)
  2. Staff Costs (29.5-37%)
  3. Server Components (15-22%)
  4. Cluster-Level Interconnect (9-13%)
  5. Energy Consumption (2-6%)

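To make these ranges concrete, here is a small illustrative sketch (my own, not from the report) that applies the midpoint of each share to a hypothetical $40M total training cost, roughly the GPT-4 figure cited below.

```python
# Illustrative only: apply the cost-share ranges listed above to a
# hypothetical $40M total training cost (roughly the GPT-4 figure).
TOTAL_COST_USD = 40_000_000  # assumed total, for illustration

# (low %, high %) shares as listed above
cost_shares = {
    "AI accelerator chips": (29.5, 37),
    "Staff costs": (29.5, 37),
    "Server components": (15, 22),
    "Cluster-level interconnect": (9, 13),
    "Energy consumption": (2, 6),
}

for component, (low, high) in cost_shares.items():
    midpoint = (low + high) / 2
    dollars = TOTAL_COST_USD * midpoint / 100
    print(f"{component}: ~${dollars / 1e6:.1f}M ({low}-{high}% of total)")
```
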
Since 2016, the absolute cost of training frontier models has increased about 2.4x every year. If this trend continues, training the largest models will cost more than $1 billion by 2027.
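
As a quick sanity check on that projection, the sketch below compounds a cost in the tens of millions at 2.4x per year; the starting cost (~$40M, the GPT-4 estimate cited below) and starting year (2023) are my assumptions for illustration.

```python
# Rough projection of training cost growth at 2.4x per year.
# Assumed starting point: ~$40M in 2023 (the GPT-4 estimate cited below).
GROWTH_FACTOR = 2.4
START_YEAR, START_COST_USD = 2023, 40_000_000

cost = START_COST_USD
for year in range(START_YEAR, 2028):
    print(f"{year}: ~${cost / 1e9:.2f}B")
    cost *= GROWTH_FACTOR
```

Compounding from that starting point crosses the $1 billion mark around 2027, consistent with the report's projection.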

The Data: GPT-4 Training Cost $40M, Gemini Ultra Cost $30M

Hardware + Energy Cost Evolution

Cloud Compute Cost Evolution

Hardware Acquisition Cost

Energy/Hardware Costs Breakdown for All the Major Models

Conclusions From the Study

  1. About half of the amortized hardware capex plus energy cost goes to AI chips
  2. After hardware and energy, the next biggest cost is employing R&D staff
  3. Training costs are increasing exponentially, year over year
  4. Securing chips and power will be bottlenecks for future AI development
