Top comments (27)
da best answer fosho
Lol
I think any time anybody did anything before the current tool existed, they did it, by and large, the same way they do it now: by stretching the boundaries of what is possible and having a lot of frustrating experiences as a result.
As we make progress in the software industry, things don't necessarily get "easier"; we just start attempting more challenging things.
There were many revision/version control systems prior to git, but practices have matured and standardized since then. It can be as simple as a tarball with a timestamp going to tape backup if you're into 80s style.
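If you're curious, the tape-backup "VCS" really can be that simple. Here's a minimal sketch in Python (the "project" and "backups" paths are placeholders I made up): every "version" is just a full archive named by its timestamp.

```python
# Minimal sketch of "a tarball with a timestamp" as version control.
# The "project" and "backups" paths are placeholders for illustration.
import tarfile
import time
from pathlib import Path

def snapshot(src: str = "project", dest: str = "backups") -> Path:
    """Archive the whole source tree; the timestamp is the version."""
    Path(dest).mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(dest) / f"{src}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src)
    return archive

if __name__ == "__main__":
    print(f"wrote {snapshot()}")
```

Restoring an old "version" means extracting the right tarball; diffing means extracting two of them and comparing by hand.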
One place I worked at earlier in my career didn't use source control; instead, it had a development server on the office network.
We mapped its drive, opened the code files directly in our editors, made changes, and saved them straight back.
I remember that one or two files in the codebase of a site on there were quite popular, and we always had to ask around the office whether anyone was working on those files to avoid overwriting each other's work (and yes, that happened from time to time).
It was a pretty awful way of working. Fine if only one person was on the codebase at a time, but as soon as several of us worked on it simultaneously, we ran into lots of issues.
I was glad when we finally switched to source control.
Like Google Drive?
First off, `git` is actually relatively young by VCS standards at only 15 years old. By comparison, one of the first widely used VCS tools, SCCS, is currently 48 years old (though it's hard to say whether it's still in active use anywhere, as it was only ever designed for local operation). So version control has been around for quite some time.
Before that, though, most of the development process involved keeping proper backups and properly documenting the code so that it could be understood what was going on (and sometimes also documenting what was changed and why).
Exactly. At the beginning of my career I worked on a project where we had no VCS. Every single file had a comment at the top with a list of changes, where we documented who changed what (and often why).
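For illustration, a header like that might have looked something like this (the names, dates, and function are all made up):

```python
# ----------------------------------------------------------------------
# CHANGE LOG (kept by hand at the top of every file)
# 2003-04-12  J. Kowalski  Initial version of the invoice module.
# 2003-05-02  A. Nowak     Fixed rounding bug in the VAT calculation.
# 2003-06-19  J. Kowalski  Added CSV export (requested by sales).
# ----------------------------------------------------------------------

def calculate_vat(net: float, rate: float = 0.22) -> float:
    """Return the VAT amount, rounded to whole cents."""
    return round(net * rate, 2)
```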
Here's a REALLY interesting piece of background about the "ancient" history of version control systems; it goes back even to the time BEFORE computing started, describing the processes that were used in industrial engineering (managing technical drawings and designs):
red-gate.com/blog/database-devops/...
Of course, none of those "version control" processes were automated. I guess there were "librarians": people who archived versions of the drawings in file cabinets, drawers with codes and numbers where they kept the old versions of the drawings.
When computing started with mainframes and punch cards (when a computer with the processing power of your contemporary PC or smartphone would be as big as your living room), "version control" used these same manual processes: as a programmer you would give your "deck" of cards to a librarian, who would "archive" a version of it in the "library" (just a storage room with cabinets and drawers containing physical copies of the card decks, organized and numbered so that people could find and retrieve them).
I think truly automated source control started after UNIX "minicomputers" were invented, which no longer used punch cards and printers but keyboards and "terminals" (monitors). The oldest source control system was SCCS, invented in 1972:
en.wikipedia.org/wiki/Source_Code_...
None of this was networked or multi-user, and it must have been clumsy and difficult to use. Here you can read about how laborious it was:
ericsink.com/vcbe/html/author_back...
So, from that time onwards "source control" existed, but I'm pretty sure that for less important programs people didn't bother with it; they just dumped backups of their "spaghetti code" programmed in dBase III Plus or whatever onto a 360 KB floppy disk and called it a day :-) ... source control was probably just for the "high end".
Around 1994 Microsoft introduced "SourceSafe" (acquired from One Tree Software), and I think from then on source control really became "mainstream". CVS was already around by then (and better than SourceSafe), then SVN came (better than CVS), then Git came (WAY better than all the others), and the rest is history :-)
Document control processes are still used today to control versions and read/write access. I've even seen them implemented as part of ISO 9001 compliance in industry in recent years.
Or FileZilla.
... it was a mess. I don't even want to speak about Subversion. Git is a weird tool, pretty difficult to learn, even more difficult to master. But it brings so much flexibility.
Another approach that was around was that you'd simply have one person actually write the software, with a team around them in some kind of "pair programming", and there wasn't actually any source code in files. Instead, the program was a "living thing", an image, and source code a mere serialization of its state. That was mainly done on Lisp machines, but other languages followed that approach too, e.g. Smalltalk; Pharo is a modern example.
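As a very loose analogy in Python (just an illustration of the idea; real Smalltalk/Lisp images snapshot the entire running system, not one object graph): the "image" is serialized live state that you resume from, instead of re-running source files. The `world.image` file name is hypothetical.

```python
# Loose analogy of image-based development, sketched with pickle.
# The program's state lives in the image; source files are secondary.
import pickle
from pathlib import Path

IMAGE = Path("world.image")  # hypothetical image file name

class World:
    def __init__(self) -> None:
        self.counter = 0

    def tick(self) -> None:
        self.counter += 1

# Resume the "living" program from its image if one exists...
world = pickle.loads(IMAGE.read_bytes()) if IMAGE.exists() else World()
world.tick()

# ...and snapshot the new state back to disk instead of editing files.
IMAGE.write_bytes(pickle.dumps(world))
print(f"counter is now {world.counter}")
```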
How do people develop software now without using version control? Yes, it happens (even in teams, not just solo devs).