This seems utterly impractical and error-prone to me.
A better approach would be, imho, to just commit as often as you can.
Ideally, after every micro-iteration (i.e. at each stage when something is working).
Why is this approach better?
- Broke something? No worries: you committed, so you can roll back.
- Better time-travel possibilities.
At the end of the day you can squash all of your micro-commits into one big juicy commit that includes every change made to implement a feature.
It's not hard: it's just git rebase.
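To make that concrete, here is a minimal sketch of squashing micro-commits with an interactive rebase, run non-interactively in a throwaway repo (file names and commit messages are made up; assumes git and GNU sed are installed):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/main     # pin the branch name regardless of git defaults
git config user.email "you@example.com"
git config user.name "You"

# One base commit, then three micro-commits (hypothetical feature work).
echo base  >  app.txt; git add app.txt; git commit -qm "initial commit"
echo step1 >> app.txt; git commit -qam "wip: parsing works"
echo step2 >> app.txt; git commit -qam "wip: edge cases handled"
echo step3 >> app.txt; git commit -qam "wip: tests pass"

# Squash the last three commits into one without opening an editor:
# the sequence editor rewrites the rebase todo list (mark all lines "squash",
# then restore "pick" on the first); GIT_EDITOR=true accepts the combined
# commit message as-is. The sed -i syntax here is GNU sed.
GIT_SEQUENCE_EDITOR='sed -i -e "s/^pick/squash/" -e "1s/^squash/pick/"' \
GIT_EDITOR=true git rebase -qi HEAD~3
git log --oneline
```

Interactively you would just run `git rebase -i HEAD~3` and edit the todo list by hand; the environment-variable trick only exists so the sketch runs unattended.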
Totally agree: merging should be planned, committing should be done frequently. Anyone who has ever lost work to a hardware failure knows this. At any point when you have work you would not want to lose, commit.
Exactly: commit early and often to a user/feature branch, then rebase/squash as needed before merging to a 'real' branch.
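That branch workflow can be sketched end to end; this uses a soft reset to collapse the branch rather than an interactive rebase, in a throwaway repo with made-up names (assumes git ≥ 2.23 for `git switch`):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/main     # pin the branch name regardless of git defaults
git config user.email "you@example.com"
git config user.name "You"

echo base > app.txt; git add app.txt; git commit -qm "initial commit"

# Work on a personal feature branch with as many micro-commits as you like.
git switch -qc feature/parser
echo step1 >> app.txt; git commit -qam "wip: tokenizer"
echo step2 >> app.txt; git commit -qam "wip: parser passes tests"

# Collapse the branch into a single commit before merging:
# reset --soft moves the branch tip back to main but keeps all changes staged.
git reset -q --soft main
git commit -qm "Add parser"

# Fast-forward the 'real' branch so its history is one clean commit per feature.
git switch -q main
git merge -q --ff-only feature/parser
git log --oneline
```

The `--ff-only` flag makes the merge fail loudly if main has moved on, which is exactly the "merging is planned" discipline described above.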