I automated my documentation screenshots a while back. Every week, one command regenerates 30+ PNGs. No more manual screenshotting. Life was good.
Then I checked the repo size.
412MB. For a docs project.
The problem
Git stores binary files as complete copies. Every time you regenerate a screenshot — even if just one pixel changed — Git saves the entire file again. Thirty screenshots at roughly 100KB each, updated weekly, adds up to about 156MB of history per year. And history only grows: Git keeps every old version forever.
New contributors clone the repo and wait. CI pipelines download hundreds of megabytes of old PNGs nobody will ever look at again.
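If you want to check how bad it is in your own repo, you can total up the image blobs across all of history. This is a generic sketch using plain Git plumbing (no LFS needed); adjust the `.png` pattern to your file types:

```shell
# Rough audit: how much of this repo's history is PNG blobs?
# rev-list emits "<sha> <path>"; cat-file resolves each object's
# type and size, keeping the path in %(rest).
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '$1 == "blob" && $4 ~ /\.png$/ { total += $3 }
         END { printf "%.1f MB of PNG blobs in history\n", total / 1024 / 1024 }'
```

This counts every historical version of every PNG, which is exactly the weight a fresh clone has to download.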
Git LFS fixes this
Git LFS replaces your images with tiny pointer files (about 130 bytes each). The actual images live on a separate server. Checkout still works normally — you don't notice the difference.
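For reference, a pointer file is just a few lines of text in the Git LFS pointer format (the oid hash and size below are made up for illustration):

```
version https://git-lfs.github.com/spec/v1
oid sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4
size 412893
```

That's the whole thing — which is why the Git history stays tiny no matter how often the real image changes.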
Setup is a one-time thing:
git lfs install
git lfs track "heroshots/*.png"
git add .gitattributes
git commit -m "configure Git LFS"
That's it. Your normal Git workflow doesn't change. git add, git commit, git push — all the same. LFS handles the routing behind the scenes.
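To sanity-check the setup, LFS can report what it's tracking (these are standard `git lfs` subcommands, run from inside the repo):

```shell
# List the patterns LFS is tracking (read from .gitattributes):
git lfs track

# List the files currently stored as LFS objects
# (empty until you commit a file matching a tracked pattern):
git lfs ls-files
```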
Already have a repo full of PNGs?
Migrate them:
git lfs migrate import --include="heroshots/*.png"
git push --force-with-lease
That repo I mentioned? Went from 412MB to 28MB.
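Since the import rewrites history, it's worth previewing first. `git lfs migrate info` reports what the matching files occupy without changing anything:

```shell
# Dry run: show how much data the matching files take up
# in history, without rewriting any commits.
git lfs migrate info --include="heroshots/*.png"
```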
CI/CD
If you're running screenshot updates in GitHub Actions, add one line:
- uses: actions/checkout@v4
  with:
    lfs: true
Without it, your CI gets the pointer files instead of the actual images. Then your screenshot tool tries to compare against 130-byte text files and things get weird.
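If your CI system's checkout step has no LFS option (or you can't change it), you can fetch the real files yourself afterwards — a generic fallback, assuming git-lfs is installed on the runner:

```shell
# Set up LFS hooks, then replace pointer files in the
# working tree with the actual objects from the LFS server:
git lfs install
git lfs pull
```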
When to bother
If you have a handful of screenshots that rarely change, don't bother. Plain Git is fine.
But if you're running automated screenshots — 30+ images, weekly updates, team of contributors — LFS pays for itself on the first clone.
Quick rule of thumb:
- < 10 screenshots, monthly updates → skip it
- 30+ screenshots, weekly automation → set it up
- Open source with many cloners → definitely set it up
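A quick way to see which bucket you're in — this assumes your screenshots are tracked PNGs, and uses GNU xargs/du:

```shell
git ls-files '*.png' | wc -l                      # how many are tracked
git ls-files '*.png' | xargs -r du -ch | tail -1  # total size on disk
```

Multiply that size by your update frequency and you have a rough estimate of yearly history growth.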
More details in the heroshot docs.
Top comments (2)
412MB to 28MB is a wild difference. We ran into this exact same thing on a project where Playwright was generating visual regression screenshots on every PR. The repo got so fat that fresh clones were taking 5+ minutes and CI was burning through bandwidth.
One extra tip for anyone setting this up: if you're on GitHub, check your LFS storage quota. Free tier only gives you 1GB of storage and 1GB bandwidth/month. For active projects with lots of screenshots that can sneak up on you fast. We ended up self-hosting the LFS server with lfs-test-server to avoid the costs.
Also, git lfs migrate import is great, but heads up — it rewrites history, so coordinate with your team before running it on a shared branch. Learned that one the hard way.

Yeah, wondered about this the other day. It's fine for now, but it will become a problem before long. Thanks for the heads up, will check out Git LFS this week.