This seems utterly impractical and error-prone to me.
A better approach would be, imho, to just commit as often as you can.
Ideally, after every micro-iteration (at each stage when something is working).
Why is this approach better?

- Broke something? No worries: you committed.
- Better time-travel possibilities.
At the end of the day you can squash all of your micro-commits into one big juicy commit that includes every change made to implement a function.
It's not hard: it's just git rebase.
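A minimal sketch of how that end-of-day squash might look (branch name, commit messages, and the commit count are just placeholders):

```bash
# Commit after every micro-iteration on a throwaway/feature branch
git checkout -b my-feature
git add -A && git commit -m "wip: parse input"
git add -A && git commit -m "wip: handle edge case"
git add -A && git commit -m "wip: add tests"

# At the end of the day, squash the last 3 micro-commits into one.
# In the interactive editor, keep the first entry as "pick" and change
# the others to "squash" (or "fixup" to discard their messages).
git rebase -i HEAD~3
```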
Totally agree: merging should be planned; committing should be done frequently. Anyone who has ever lost work to a hardware failure knows this. At any point when you have work you would not want to lose, commit.
Exactly: commit early and often to a user/feature branch, then rebase/squash as needed before merging to a 'real' branch, as sketched below.
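A rough sketch of that workflow, assuming `main` is the 'real' target branch and `feature/login` is the working branch (both names are placeholders):

```bash
# Commit early and often on a feature branch
git checkout -b feature/login
# ... many small commits ...

# Before merging, tidy the history against the target branch
git fetch origin
git rebase -i origin/main        # squash/fixup the micro-commits

# Then merge the cleaned-up branch into the 'real' branch
git checkout main
git merge --ff-only feature/login
```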