PARTHIB KUMAR DEB

The Bigbang 💥

Introduction 🏁

Welcome, fellow explorers 👋, to the inaugural post of Linux Empire. This isn't just a series of articles; think of it as a fascinating web series where each blog post is an episode, gradually unveiling the remarkable story that shaped the open-source operating system we rely on today. In this first episode, The Big Bang, we're heading back to the 20th century 🕰️. We'll explore the early rivalries among tech giants – a significant period that laid the groundwork for modern computing.

I encourage you to accompany me on this voyage ⛵ to truly understand the motivations behind Linux and its profound impact on computing, from the past century to future inventions.

Mainframes and Multics 🏬

In the mid-20th century, computing was dominated by massive mainframes and time-consuming punch card batch processing, with programmers waiting hours for results. This inefficient, solitary way of working inspired the concept of interactive time-sharing, and out of it grew the enormous Multics (MULTiplexed Information and Computing Service) project, a large "computer utility" proposed by MIT, Bell Labs, and General Electric.

Multics was designed to store and retrieve large data sets, to be used by many different people at once, and to help them communicate. However, its overwhelming complexity doomed it to failure. It suffered from the second-system effect, which occurs when too many revolutionary innovations are packed in at once, resulting in a bloated and constantly delayed system. Development was extremely slow, and the ongoing need for more powerful, costlier hardware rendered it financially unsustainable for Bell Labs. The frustration of working with a system that was constantly behind schedule and difficult to manage eventually drove Bell Labs to withdraw in 1969, leaving a critical void and a clear lesson in over-engineering.

The Bell Minds 💡

During that era, the telecommunications giant AT&T, often called Ma Bell, held a virtual monopoly over telephone services in the USA. Its extensive research arm, Bell Labs, located in Murray Hill, New Jersey, had been part of the ambitious Multics project. After that project's failure, Bell Labs researchers Ken Thompson and Dennis Ritchie struck out on their own, setting out to create an operating system that would offer a simpler, more efficient environment for developing software: UNIX.

The UNIX Era 🌐

The UNIX (originally UNICS, the UNiplexed Information and Computing Service) operating system was born out of a collaborative, developer-driven environment, heavily funded by the Department of Defense's Advanced Research Projects Agency (then known as ARPA). Its inception was not motivated by commercial needs or corporate requirements, but by a sheer desire to remove the real constraints programmers faced while building software. AT&T made Unix available to universities for next to nothing in the 1970s, and it quickly became the favorite of researchers and students: it was cost-effective, adaptable, and ran on low-cost hardware, meeting their needs better than any other available option. The operating systems that could have competed with Unix were either not "free" or were still being developed in labs, and universities lacked the hardware they demanded.

In his seminal 1980 lecture, "The Evolution of the UNIX Time-sharing System," Dennis Ritchie eloquently summed up the spirit that ignited UNIX. He stated:

"What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing as supplied by remote-access, time-shared machines is not just to type programs into a terminal instead of a keypunch, but to encourage close communication."

UNIX's foundation included many crucial elements:

  • The Shell 🔥

    From its inception, UNIX provided a rudimentary command-line interpreter known as the shell, which served as the primary interface for user interaction. Rather than a modern graphical user interface, users typed commands directly into the shell, which translated them into actions for the kernel to perform. The first of these was the Thompson shell (sh), written by Ken Thompson and shipped with Version 1 of UNIX in 1971. However, when people emphasize the shell's potential for automation and complex scripting, they are typically referring to the Bourne Shell (also sh), created by Stephen Bourne and introduced with UNIX Version 7 in 1979. The Bourne Shell brought a revolution to UNIX, transforming the shell into a full-fledged programming language with numerous advanced capabilities.

  • The Unix filesystem 📂

    The innovative filesystem was a key feature of UNIX's architecture. Its hierarchical structure, with any number of nested subdirectories, provided a remarkably simple way to organize files and directories. Furthermore, UNIX made device administration significantly easier by representing devices such as disks and tapes as special device files that could be read and written like any other file in the directory tree.

  • Redirection ⤴️

    Early UNIX was known for its elegant command-line interface, which included I/O redirection and pipes. The > sign lets users redirect a command's output into a file. Subsequently, the pipe (|) operator was developed, allowing the output of one command to be fed directly into another and chaining simple programs into powerful data pipelines. (A short C sketch after this list shows roughly how a shell wires up such a pipeline.)

  • Portability 🔦

    UNIX's outstanding portability came from its elegant design. UNIX abstracted hardware details by exposing devices as files within the filesystem tree, giving applications a uniform interface; a second C sketch after this list shows that uniform interface in action. This meant that only the drivers needed to be rewritten to move UNIX to a new platform, while the application programs stayed untouched, a big step forward in cross-platform compatibility. Fully realizing this potential required a high-level programming language, so Dennis Ritchie created the C programming language, and in a watershed event in 1973, UNIX was rewritten in C.
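
To make the redirection and pipe ideas a little more concrete, here is a minimal sketch in modern POSIX C (not anything from the original UNIX source) of roughly what a shell does behind the scenes for a pipeline like ls | wc -l: create a pipe, fork a child per command, and rewire each child's standard input or output before calling exec.

    /* Illustrative only: a modern POSIX sketch of what a shell does for "ls | wc -l". */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                        /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                 /* child 1: the left-hand command */
            dup2(fds[1], STDOUT_FILENO);   /* its stdout now feeds the pipe */
            close(fds[0]); close(fds[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp"); _exit(1);
        }

        if (fork() == 0) {                 /* child 2: the right-hand command */
            dup2(fds[0], STDIN_FILENO);    /* its stdin now drains the pipe */
            close(fds[0]); close(fds[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("execlp"); _exit(1);
        }

        close(fds[0]); close(fds[1]);      /* the parent keeps no pipe ends open */
        while (wait(NULL) > 0) {}          /* wait for both children to finish */
        return 0;
    }

Output redirection (command > file) is the same trick with one step fewer: the shell opens the target file and uses dup2() to place it on standard output before running the command.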
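
The "everything is a file" idea behind the filesystem and portability points can be shown just as briefly. Again, this is only a modern sketch: the paths below are examples from a present-day Linux system rather than anything from 1970s UNIX, but the point stands, in that the very same open/read/close calls work unchanged on an ordinary file and on a device file.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative only: read a few bytes from any path, regular file or device alike. */
    static void peek(const char *path)
    {
        char buf[16];
        int fd = open(path, O_RDONLY);          /* same call for files and devices */
        if (fd < 0) { perror(path); return; }

        ssize_t n = read(fd, buf, sizeof buf);  /* same call again */
        printf("%s: read %ld bytes\n", path, (long)(n < 0 ? 0 : n));
        close(fd);
    }

    int main(void)
    {
        peek("/etc/hostname");   /* an ordinary text file (example path) */
        peek("/dev/urandom");    /* a character device, handled identically */
        return 0;
    }

Because applications only ever see this uniform file interface, porting UNIX to new hardware meant rewriting the drivers underneath it, not the programs on top of it.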

USA vs AT&T ⚡

For most of the 20th century, AT&T had a near-absolute, government-sanctioned monopoly on the United States telephone system. This extensive control included local and long-distance services, as well as the manufacture of most telephone equipment through its subsidiary, Western Electric. Indeed, companies that would later become giants, such as Verizon and Qwest, were once integral parts of this unified system. Given AT&T's enormous telecommunications market strength, the US government became increasingly concerned that an unconstrained "Ma Bell" would dominate the massive computer sector. As a result, the US Department of Justice filed a broad antitrust lawsuit against AT&T in 1974. Before its divestiture, AT&T was barred from selling computer systems in the US. For this reason, the UNIX source code found its way into universities, licensed for only a nominal fee, and because AT&T distributed UNIX only as source code, users had to compile it themselves. After the 1984 divestiture, however, AT&T was finally free to sell UNIX in earnest. It spun the group that had been building UNIX from the beginning into a separate UNIX Laboratory, which moved out of Murray Hill and down the road to Summit, New Jersey.

Berkeley Software Distribution 😄

UNIX Version 6 was the first version made generally accessible outside of Bell Laboratories in 1975. The University of California at Berkeley developed the first significant UNIX variation using this early UNIX source code. They referred to it as the Berkeley Software Distribution (BSD). Over the next decade, the BSD and Bell Labs versions of UNIX diverged. BSD maintained the free-flowing, share-the-code approach of the early Bell Labs UNIX, while AT&T focused on commercializing UNIX.

Commercializing UNIX 💰

Initially, Unix spread widely because AT&T licensed its source code cheaply to universities. This freedom, however, led to many different versions of Unix being created, sparking the UNIX Wars as these versions often didn't work well together.

Once the 1984 divestiture had removed the earlier constraints, AT&T felt it was time to profit from Unix and reclaim control. Its plan was sound: it attempted to persuade major computer manufacturers such as HP, Sun Microsystems, and IBM to use AT&T's version of Unix (System V) as their primary operating system. To make this happen and bring order to the chaos, AT&T began defining clear rules for what a system had to do to still be called "UNIX." This effort led to key standards like the Portable Operating System Interface (POSIX) and AT&T's own UNIX System V Interface Definition (SVID). These specifications allowed different vendors to create compatible Unix systems.

Interestingly, despite all this, AT&T continued to distribute Unix source code. It wasn't until 1992, when AT&T formed a joint venture with Novell called Univel (later sold entirely to Novell), that a ready-to-use commercial version of UNIX, UnixWare, was finally produced directly from that source code. Ultimately, these very same standardization documents (POSIX and SVID) also served as crucial guides for the creation of Linux.

AT&T vs BSD 😞

For a time, the legendary BSD project was the strongest contender to become the most popular open-source kernel, possibly even ahead of Linux. By the late 1980s, developers at the University of California, Berkeley (UC Berkeley) had rewritten most of the UNIX source code they had acquired a decade earlier. Berkeley's independently built UNIX-like code was released as Net/1 in 1989, followed by Net/2 in 1991. In 1992, just as they were about to distribute a complete UNIX-like operating system free of all AT&T code, AT&T filed a major lawsuit against them.

The suit aggressively claimed that BSD's software was developed using trade secrets stolen from AT&T's proprietary UNIX system. It's crucial to understand that BSD developers had meticulously rewritten the code to avoid direct copying, since copyright was then the main legal protection for the UNIX code. Some legal and historical observers even suggest that if AT&T had instead protected its core UNIX concepts with patents, the landscape might be entirely different today – perhaps there would be no Linux or any other UNIX clone.

While the lawsuit was eventually dropped when Novell acquired UNIX System Laboratories from AT&T in 1994, the damage was done. During that critical two-year period, the pervasive fear and doubt surrounding the legality of the BSD code caused it to lose significant momentum within the burgeoning open-source community. This unfortunate legal battle ultimately created the vacuum that Linux would soon fill.

GNU Appears 😇

Long before the pivotal lawsuit against BSD, another crucial player entered the market: The GNU Project, launched in 1984. This ambitious undertaking was started by Richard M. Stallman, and its name itself is a recursive acronym: "GNU is Not UNIX."

GNU's key objective was a comprehensive re-coding of the entire UNIX operating system, with the basic idea that it could be freely shared with anyone. Developers soon discovered that they could achieve the same outcomes with wholly new code, and in many cases, their freshly developed components even outperformed the original UNIX versions. This open development strategy was groundbreaking: because everyone could openly inspect the code, poorly written pieces could be quickly improved or even replaced with better alternatives over time.

It's important to note the evolving terminology here. Over time, the term "free software" – which is still preferred by the Free Software Foundation (https://www.fsf.org/) – has largely been supplemented by open source software. The latter term is championed by the Open Source Initiative.

The GPL License 💫

The GNU project developed the GNU General Public License (GPL) specifically to dictate how free software should be treated and distributed. Although various other licenses exist, offering different methods for preserving software freedom, the GPL stands out as the most prominent, and it is the license under which the Linux kernel is released. Key principles of the GNU GPL include:

  • Author's Rights Retained: The original creator of the software retains their copyright and ownership, even while granting extensive freedoms to others.

  • Freedom to Distribute and Modify: Users are free to use, modify, and redistribute the GNU software, even incorporating it into their own projects. Crucially, any distribution of the software (modified or not) must include its source code or make it easily accessible.

  • Copyleft Principle (Preserved Freedoms): If you distribute or even sell software under the GPL, you must ensure that all future recipients receive the same freedoms you did. This means the GPL's terms must accompany the software, giving everyone the right to access, change, and redistribute the source code.

The GNU GPL explicitly states there is no warranty on the software. This means the original developers are under no obligation to fix any issues that may arise. However, numerous organizations, ranging from small businesses to large corporations, do offer paid support services (often through subscriptions) for GNU software.

Open Source Kernel ⭐

Despite its remarkable success in developing thousands of essential UNIX utilities, the GNU Project critically struggled to produce its own working kernel. Its ambitious efforts to build an open-source kernel, known as the GNU Hurd (link: www.gnu.org/software/hurd), proved largely unsuccessful in gaining widespread adoption, preventing it from becoming the dominant open-source kernel.

MINIX and the void 😕

At the beginning of the 1990s, a truly free and open-source operating system, free of proprietary licenses and legal entanglements, remained frustratingly out of reach. Into this terrain of imperfect solutions stepped MINIX, a tiny Unix-like operating system built by Andrew Tanenbaum in 1987. Created especially for academic purposes, MINIX soon became a crucial teaching tool. Its primary innovation was a microkernel architecture: unlike the monolithic kernels of earlier systems, the tiny microkernel handled only basic tasks such as inter-process communication (IPC), memory management, and process scheduling, while other services ran as distinct, isolated processes. However, when Linus Benedict Torvalds, then a student at the University of Helsinki, delved into MINIX, he quickly identified its limitations for practical, cutting-edge development. He recognized that while MINIX was great for learning, it wasn't designed for the robust, high-performance computing he sought.

Conclusion 🎌

This unique convergence of factors – the GNU kernel's continued absence, BSD's legal battle, and the practical and licensing constraints of MINIX – created a palpable void and a clear opportunity. It directly inspired Linus Torvalds to embark on creating what would become Linux: a truly free, powerful, and unencumbered Unix-like kernel.
