I'm Shrijith Venkatrama, the founder of Hexmos. Presently, I am building LiveAPI, a super-convenient engineering productivity tool. LiveAPI processes your code repositories at scale and automatically produces beautiful API docs in minutes.
As I build LiveAPI, I am also making an effort to learn about various economic matters and share them here with you.
ChatGPT was released to the world on 30 November 2022.
I am reading the following report, published on 31 May 2024, almost 1.5 years after the original "ChatGPT moment".
The rising costs of training frontier AI models
As they say, money makes the world go round, so let's try to learn some data and insights about the costs of developing serious (or frontier) models.
Components of Model Training Costs
- Hardware
- Energy
- Cloud rental
- Staff Expenses
Rough Estimates for GPT-4 and Gemini
- AI Accelerator Chips (29.5-37%)
- Staff Costs (29.5-37%)
- Server Components (15-22%)
- Cluster-Level Interconnect (9-13%)
- Energy Consumption (2-6%)
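To get a feel for what these shares mean in dollar terms, here is a small sketch that applies the midpoint of each range to a hypothetical $100M training run. The $100M total is illustrative only, not a figure from the report.

```python
# Midpoints of the cost-share ranges listed above, applied to a
# hypothetical $100M training run (the total is illustrative, not a
# figure from the report).
shares = {
    "AI accelerator chips": (0.295, 0.37),
    "Staff costs": (0.295, 0.37),
    "Server components": (0.15, 0.22),
    "Cluster-level interconnect": (0.09, 0.13),
    "Energy consumption": (0.02, 0.06),
}

TOTAL = 100e6  # hypothetical $100M run, in USD

breakdown = {
    name: TOTAL * (lo + hi) / 2  # midpoint of each share range
    for name, (lo, hi) in shares.items()
}

for name, cost in breakdown.items():
    print(f"{name}: ${cost / 1e6:.2f}M")
```

Conveniently, the midpoints of the five ranges sum to exactly 100%, so the breakdown accounts for the whole total.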
Since 2016, the absolute cost of training frontier models has increased by about 2.4x every year. If this trend continues, the largest training runs will cost more than $1 billion by 2027.
The Data: GPT-4 Training Cost $40M, Gemini Ultra $30M
Hardware + Energy Cost Evolution
Cloud Compute Cost Evolution
Hardware Acquisition Cost
Energy/Hardware Costs Breakdown for All The Major Models
Conclusions From the Study
- About half of the amortized hardware capex plus energy cost goes to AI chips
- After hardware and energy, the next biggest cost is employing R&D staff
- Training costs are increasing exponentially year over year
- Securing chips and power will be bottlenecks for future AI development