For years, AdamW has been the default optimizer for training large language models. It's reliable, well-understood, and works out of the box for almost everything.
But as models scale, optimizer choice starts to matter a lot more, especially for memory and compute.
→ That's where Muon is getting attention.
→ Instead of storing second-moment statistics like AdamW, Muon uses momentum plus orthogonalized updates (via a Newton–Schulz iteration), which makes it roughly 50% lighter on optimizer memory.
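To make the contrast with AdamW concrete, here is a minimal sketch of what "momentum plus an orthogonalized update" can look like for a single 2-D weight matrix. Assumptions: PyTorch, the classic cubic Newton–Schulz iteration (Muon's reference implementation uses tuned polynomial coefficients, but the idea is the same), and hypothetical names like `muon_style_step` and illustrative hyperparameters that are not the reference API.

```python
import torch


def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize a 2-D matrix without an explicit SVD.

    Cubic Newton-Schulz iteration: X <- 1.5*X - 0.5*(X @ X^T) @ X,
    which pushes the singular values of X toward 1. (Muon's reference
    code uses a tuned quintic polynomial; this is a simplified sketch.)
    """
    X = G / (G.norm() + 1e-7)          # normalize so the iteration converges
    transposed = X.size(0) > X.size(1)
    if transposed:                      # iterate on the smaller Gram matrix
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = 1.5 * X - 0.5 * A @ X
    return X.T if transposed else X


def muon_style_step(param, grad, momentum_buf, lr=0.02, beta=0.95):
    """One hypothetical Muon-style update for a single 2-D weight matrix.

    Only a momentum buffer is kept per parameter (no second-moment tensor),
    which is where the optimizer-memory saving versus AdamW comes from.
    lr and beta here are illustrative values, not prescribed defaults.
    """
    momentum_buf.mul_(beta).add_(grad)                  # heavy-ball momentum
    update = newton_schulz_orthogonalize(momentum_buf)  # orthogonalized direction
    param.add_(update, alpha=-lr)
    return param, momentum_buf


if __name__ == "__main__":
    torch.manual_seed(0)
    W = torch.randn(256, 128)        # toy weight matrix
    g = torch.randn_like(W)          # toy gradient
    buf = torch.zeros_like(W)        # momentum state (the only optimizer state)
    W, buf = muon_style_step(W, g, buf)
    print("update applied, param shape:", tuple(W.shape))
```

The memory claim follows from the state each optimizer carries: AdamW stores two extra tensors per parameter (first and second moments), while a Muon-style update of this form keeps only the momentum buffer, so the optimizer state is roughly halved for the matrices it applies to.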
🔗 Blog link: https://www.linkedin.com/pulse/day-16-21-days-building-small-language-model-choosing-lakhera-lj3jc
I've covered all the concepts here at a high level to keep things simple. For a deeper exploration of these topics, feel free to check out my book "Building A Small Language Model from Scratch: A Practical Guide."
→ Gumroad: https://plakhera.gumroad.com/l/BuildingASmallLanguageModelfromScratch
→ Amazon: https://www.amazon.com/dp/B0G64SQ4F8/
→ Leanpub: https://leanpub.com/buildingasmalllanguagemodelfromscratch/