(I need to put this somewhere, probably towards the end or in research part)
Note on proper research
(paraphrasing Keller Jordan; I will shorten this or move it elsewhere):
We need a venue dedicated to AI model speedruns—structured competitions where researchers must train tiny LLMs or similar models under strict constraints.
The goal is to create a fair environment where new methods (like optimizers or architectures) can be tested against fully optimized baselines. Without this, many papers claim "state-of-the-art" results simply because they didn't push existing methods to their limits, not because the new idea is actually better.
This wastes a lot of time for other researchers and teams, who implement the method only to find out it isn't actually an improvement.
It could also motivate companies like OpenAI and Google to open-source algorithms they want optimized by the open-source community.