Kevin Mbanugo

The Linux Audacity

GNU/Linux is perceived as a secure, stable and mostly lightweight operating system and a worthy replacement for Microsoft Windows. But how safe and secure is Linux, really? And how immune to malware? Let’s look at recent times:

  1. Many didn’t realize that CrowdStrike’s Falcon security platform, which crashed about 8.5 million Microsoft Windows workstations due to a faulty update to its security software, also brought Linux to its knees, despite the kernel’s eBPF subsystem for running sandboxed programs.

https://news.ycombinator.com/item?id=41005936

https://www.theregister.com/AMP/2024/07/21/crowdstrike_linux_crashes_restoration_tools/

  2. This year (2024), Linux was exploited through the very essence of its fortress: being open source and transparent. A malicious implant in the compression library liblzma (part of xz-utils) was used to create a backdoor in the OpenSSH server. The attacker smuggled the payload in as two “test files” in the project’s public GitHub repository, which are used when building (compiling) the program from source. Read the full, comprehensive account of the tactics by Andres Freund who, like almost anyone else, would never have noticed (but for the unusual CPU usage of the SSH server) given how stealthy and deeply entrenched the malware was, escalating to root privileges.

https://openwall.com/lists/oss-security/2024/03/29/4

  3. Sysrv (2024), a ferocious cryptojacking botnet.
  4. RansomEXX (2022), ransomware targeting Linux servers.
  5. Drovorub (2020), a rootkit built around a malicious kernel module.
  6. EvilGnome (2019), malware that poses as an extension for the GNOME Desktop Environment.
  7. Linux.BackDoor.Fgt (2017), used to create backdoor access into Linux desktops and servers.
  8. Rex.1 (2015), a malicious rootkit.
  9. HiddenWasp (2019), a rootkit and backdoor into Linux machines.
  10. QNAPCrypt (2019), encrypts Linux file systems and demands a ransom to decrypt them.
  11. Hand of Thief (2013), a banking Trojan targeting Linux workstations; sniffs and steals sensitive banking details.
  12. Ebury SSH backdoor (2011), a backdoor into SSH servers.
  13. Dirty COW (2016), a kernel vulnerability (CVE-2016-5195) widely exploited to escalate user privileges to root.
  14. Exim vulnerabilities (2019), remote code execution leading to privilege escalation.
  15. XOR DDoS (2016), rootkit/backdoor.
  16. Linux.Rex.1 (2018), server malware.
  17. LokiBot (2021), steals sensitive information.
  18. Mirai (2016), botnet malware affecting desktops, servers and IoT devices.
  19. Lippol (2015), botnet creator.
  20. Linux.Darlloz (2013), a worm targeting servers and PCs (including routers), initiating remote code execution.

The list is far from exhaustive. What worries more is that new threats and vulnerabilities keep emerging with each new version of the Linux kernel and the GNU “extras”.
Contrary to the popular swipe that the Linux kernel is a monstrosity because of its monolithic architecture, neither Linux nor any other *nix or Windows operating system is more or less superior by virtue of its architecture (monolithic or microkernel). Both architectures have their advantages and disadvantages, as well as their use cases. Neither can be credited with whatever success an operating system may have achieved (at least, not directly).
Rather, the problem with the Linux kernel, as far as its security and stability are concerned, is, well, philosophical. Linux is gradually drifting away from the UNIX PHILOSOPHY which, in a nutshell, is to keep things SIMPLE and SMALL, to BUILD SINGLE-PURPOSE PROGRAMS THAT DO ONE THING AND DO IT WELL, and to WRITE PROGRAMS THAT WORK TOGETHER (stdin and stdout joined by pipes “|” to exchange data easily).
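As a minimal illustration of that composability, using standard tools only, here is a pipeline in which each small program does one job and hands its output to the next:

```sh
# List the five most common login shells in /etc/passwd:
# cut extracts a field, sort groups, uniq -c counts, sort -rn ranks,
# head trims -- five single-purpose tools cooperating over pipes.
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn | head -n 5
```

The following “reinventions”, critiqued as deviating from the Unix Philosophy, include: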

  1. Systemd - You can boot a kernel on its own, but when you do, you discover it just sits there waiting for something to initialize. The kernel starts doing useful work once an init is running, e.g. spawning a tty session; in fact, the “init” process is the first process started by the kernel during boot. It has a process ID (PID) of 1 and remains running until the system is shut down. That’s where systemd comes in, but before systemd we had SysV init, a service and system initializer for Unix/Unix-like systems, adopted by Linux. SysV comprises shell scripts for starting, stopping and managing services, kept in the /etc/init.d directory or sometimes /etc/rc.d/init.d. Alongside those scripts are symlinks pointing to the scripts in init.d, prefixed with S (for start) or K (for kill) followed by a number that dictates the order in which the scripts are executed. SysV was extremely flexible and could be modified as deemed fit.
    In 2010 came systemd, developed by Lennart Poettering (while still an employee of Red Hat), and it gradually replaced SysV across major distributions. Systemd made initialization easier and better: it initializes faster than SysV, since systemd starts services in parallel whereas SysV executes its scripts sequentially. Systemd is built to be modular, consisting of several separate but integrated components for managing services (units, in systemd’s parlance). As such, it is easier for distributions to adopt uniformly than divergent SysV scripts, and it is managed with integral binary components like systemctl, journald, logind, networkd etc. (see the first sketch after this list). The problem with “easier and better” is the propensity to introduce unwarranted vulnerabilities. Critics have argued, and rightfully so, that systemd is one centralized program that controls the entire operating system, incorporating many functions that were originally handled by separate, simpler tools. It’s no wonder that Patrick Volkerding, the developer and guardian of Slackware Linux (a very powerful distribution and among the most flexible of the thousands out there), has thought it wise not to incorporate systemd into his ecosystem.

  2. Binary Logging - The drift towards binary log formats (e.g. the systemd journal) instead of plain-text logs, which any normal Unix tool can parse as one program’s output becomes another program’s input, is a major cause for worry and is deemed unnecessary (see the second sketch after this list).

  3. Desktop Environments - Between developers’ desperate bids to ship a fantastic UI/UX desktop experience and random users’ constant thirst to customize their desktops with extra features from downloadable extensions comes an unwarranted breeding ground for common malware, as seen in the GNOME DE (see above).

  4. Oversaturated Network Managers - There are complex, oversaturated, unnecessary network managers doing what the normal, standard Unix tools have done efficiently for decades (see the third sketch after this list). True, new user needs arise over time, along with new hardware. But the underlying concepts are always the same, and old programs can be improved to keep abreast of the times (evolution) rather than throwing new things into Linux (revolution). LINUX NEEDS TO EVOLVE, NOT TO BE REVOLUTIONIZED.

  5. Addition of Non-Essential Features - Many distributions come laden with unnecessary, non-essential features, leading to systems that are bloated by default. Linux boasts of its ability to revive old hardware, running efficiently and optimally on even less than 1 GB of RAM. Yet I have seen many Linux distributions that, by default, sit on approximately 4 GB of RAM. Some are even worse than Microsoft Windows. With more bloat comes more affinity for vulnerabilities.

  6. Linux Kernel Size & Complexity - Over time, the Linux kernel has grown into a huge mess, in both size and complexity, as more and more features and subsystems are added. The result is a huge monolithic binary with vulnerabilities sprouting all over it.
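To make item 1 concrete, here is a minimal, hypothetical sketch; the daemon name and paths are illustrative, not taken from any real distribution. A SysV-style init script is plain shell, while systemd replaces the script with a declarative unit driven by systemctl:

```sh
#!/bin/sh
# Sketch of a SysV-style init script, e.g. /etc/init.d/mydaemon
# ("mydaemon" is a hypothetical service, used purely for illustration).
case "$1" in
  start)
    /usr/sbin/mydaemon &               # launch the daemon in the background
    echo $! > /var/run/mydaemon.pid    # record its PID for later
    ;;
  stop)
    kill "$(cat /var/run/mydaemon.pid)"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac

# Under systemd, the script gives way to a declarative unit file,
# and day-to-day management goes through one binary:
#   systemctl start mydaemon
#   systemctl status mydaemon
```

The script is transparent and editable with any text editor; the unit is terser, but it pulls the whole systemd machinery in with it.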
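Item 2 in one glance; note that log paths and unit names vary by distribution (/var/log/auth.log and the ssh unit are the Debian-style names):

```sh
# Plain-text logs compose directly with the standard tools:
grep "Failed password" /var/log/auth.log | tail -n 5

# The systemd journal is binary: journalctl must decode it before
# the output can re-enter an ordinary pipeline.
journalctl -u ssh --no-pager | grep "Failed password" | tail -n 5
```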
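And for item 4, the standard iproute2 tools already handle everyday configuration; the interface name and addresses below are assumptions for illustration:

```sh
# Plain iproute2 commands, one small operation each:
ip link set eth0 up                     # bring the interface up
ip addr add 192.168.1.10/24 dev eth0    # assign an address
ip route add default via 192.168.1.1    # set the default gateway

# A network manager wraps the same kernel operations in a daemon,
# a state database and yet another CLI, e.g.:
nmcli device connect eth0
```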

In my 300 Level of Computer Science, I took a course (PRINCIPLES OF OPERATING SYSTEMS) whose textbook was MODERN OPERATING SYSTEMS by Andrew S. Tanenbaum, who built the MINIX operating system, a Unix-like OS for educational/teaching purposes. In chapter 9 (SECURITY), under the subheading TRUSTED SYSTEMS, Tanenbaum rightly answers the questions of whether it is possible to build a secure OS and, if so, why it hasn’t been done. He wrote:

“1. Is it possible to build a secure computer system?

2. If so, why is it not done?

The answer to the first one is basically yes. How to build a secure system has been known for decades. MULTICS, designed in the 1960s, for example, had security as one of its main goals and achieved that fairly well. Why secure systems are not being built is more complicated, but it comes down to two fundamental reasons. First, current systems are not secure but users are unwilling to throw them out. If Microsoft were to announce that in addition to Windows it had a new product, SecureOS, that was guaranteed to be immune to viruses but did not run Windows applications, it is far from certain that every person and company would drop Windows like a hot potato and buy the new system immediately. The second issue is more subtle. The only way to build a secure system is to keep it simple. Features are the enemy of security. System designers believe (rightly or wrongly) that what users want is more features. More features mean more complexity, more code, more bugs, and more security errors.”

This confirms that the pair “easier and better”, achieved through ease-of-use features, is not always a real solution. Something easier may not always be better, and something better may not always be easier. Unix was extensively criticized in the ’70s and ’80s for its lack of a GUI and its technical learning curve; it wasn’t that user-friendly at the time, but it was rock solid and secure, and it powered enterprises and institutions. A balance between easier and better has to be struck. We have to accept this fact in software and systems development.

A review of the inclusions into the Linux kernel, and the corresponding vulnerabilities they introduced, sheds more light:

1.  Drivers - The Linux kernel includes a large number of drivers to support a wide range of hardware devices, which in turn results in a massive, complex code base.

Resultant Vulnerabilities:
CVE-2019-14615: A vulnerability in the Intel graphics driver that could allow information disclosure to local users

CVE-2020-12888: A vulnerability in the VFIO PCI driver that mishandles access to disabled memory space, allowing denial of service
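For a rough, hedged sense of that driver surface on a running system (paths are typical for mainstream distributions):

```sh
# Kernel modules shipped for the running kernel -- the bulk are drivers:
find /lib/modules/"$(uname -r)" -name '*.ko*' | wc -l

# Modules actually loaded right now:
lsmod | wc -l
```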

2.  Virtualization - The Linux kernel can act as a hypervisor thanks to the Kernel-based Virtual Machine (KVM) subsystem, which allows it to run virtual machines.

Resultant Vulnerabilities:
CVE-2020-2732: A KVM vulnerability on Intel processors that can allow a VM user to crash the host OS or escalate privileges
CVE-2021-22543: A KVM flaw in nested virtualization that could potentially allow a guest user of the VM to crash the host operating system
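A quick check of whether the KVM hypervisor feature is present on a given machine:

```sh
# The KVM core module plus the vendor one (kvm_intel or kvm_amd):
lsmod | grep -E '^kvm'

# The device node that userspace VMMs such as QEMU open:
ls -l /dev/kvm
```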

3.  File Systems - The Linux kernel supports an array of file systems including FAT32, ext4, Btrfs, NTFS, XFS and a host of others. Typically, throwing in new file-system implementations instead of improving existing ones has proven to be a problem with Linux.

Resultant Vulnerabilities:
CVE-2019-19816: A file-system vulnerability, triggered by mounting a crafted image, that allows an attacker to execute arbitrary code
CVE-2020-8992: An ext4 vulnerability that can enable a local attacker to crash the operating system (denial of service)
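The breadth of in-kernel file-system support is easy to see directly:

```sh
# File systems the running kernel currently knows about
# ("nodev" entries are virtual; the rest are on-disk formats):
cat /proc/filesystems

# File-system drivers shipped as loadable modules:
ls /lib/modules/"$(uname -r)"/kernel/fs/
```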

4.  Containerization - Namespaces and cgroups are kernel features that provide sandboxed, isolated, resource-controlled environments. Many containerized user applications rely on these features.

Resultant Vulnerabilities:
CVE-2019-5736: A runC container-runtime vulnerability that lets a malicious container overwrite the host runC binary, escaping the namespace/cgroup sandbox to execute arbitrary code on the host
CVE-2020-14386: A memory-corruption vulnerability (in the packet-socket code) that allows local users to escalate privileges to root
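The primitives themselves can be poked at without any container runtime; a minimal sketch using util-linux’s unshare (requires root):

```sh
# Give a shell its own PID and mount namespaces, then list processes:
sudo unshare --pid --fork --mount-proc sh -c 'ps aux'
# Inside, ps sees only PID 1 (the shell) and ps itself: a "container"
# is just a kernel-enforced view of the process table.
```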

5.  Tracing & Debugging Tools - The Linux kernel is laden with more than a dozen debugging and tracing facilities like eBPF, dmesg, SystemTap, kgdb etc. While these tools may have different use cases, they attest to the complexity of the Linux kernel.

Resultant Vulnerabilities:
CVE-2020-14331: A perf vulnerability that enables a local user to escalate privileges to root
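A small taste of that tracing surface (the last line assumes the bpftrace package is installed):

```sh
dmesg | tail -n 3                            # kernel ring-buffer log
sudo perf stat -e context-switches sleep 1   # event counting via perf

# An eBPF probe: print the name of each process that opens a file.
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s\n", comm); }'
```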

6.  Memory Management - Advanced memory-management techniques, including Non-Uniform Memory Access (NUMA), huge pages and many other allocation schemes, add to the complexity of the Linux kernel.

Resultant Vulnerabilities:
CVE-2019-19319: A memory-management vulnerability that could enable superuser privileges if exploited
CVE-2020-29374: A flaw in the kernel’s copy-on-write handling enabling local privilege escalation
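Both features named above are visible from userspace:

```sh
grep -i huge /proc/meminfo    # huge-page accounting
numactl --hardware            # NUMA topology (needs the numactl package)
```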

7.  Power Management - This area includes CPU frequency scaling, hibernation and the like, necessary for modern computing but adding to the already messy code base.

Resultant Vulnerabilities:
CVE-2019-9506: A weakness in Bluetooth key negotiation (the KNOB attack) that can allow an attacker to sniff data packets being transmitted
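The cpufreq subsystem, for instance, exposes its state through sysfs (present only where frequency scaling is supported):

```sh
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # active policy
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq   # current frequency in kHz
```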

Though many of these vulnerabilities have been patched, it goes to show just how “features” can be the enemy of a secure and rock-solid operating system. BSD (Berkeley Software Distribution), a family of Unix-like operating systems with a conservative approach to maintenance and a considerable balance between “easier and better”, is seen as much more attuned (to a certain degree) to the Unix Philosophy. FreeBSD always places more emphasis on improving its existing programs rather than throwing new additions into the barn.
Early, traditional Unix, the Valhalla of security and rock-tight stability, was developed at AT&T’s Bell Labs in the early 1970s. It was adopted commercially and became a success in research institutes, academia and other high-end institutions after the failure of Multics (mid-’60s, by some of the same people who later built UNIX), which had collapsed under its own complexity and failed to gain traction on the hardware of those times. Unix was a much simpler and more versatile enterprise solution, so it didn’t take long for the University of California, Berkeley (mid-’70s) to get hold of a copy, modify it and develop the first BSD. BSD was the progenitor of innovative tools like the C shell (csh) and the vi editor, and of networking code like the TCP/IP stack, which became fundamental to the growth of the internet. Impressed by Unix, Andrew S. Tanenbaum built MINIX in the ’80s, officially released in 1987. It was Unix-like but with a different architecture (microkernel), as opposed to Unix’s monolithic kernel. MINIX was developed with security and performance at the forefront while staying minimal, solely for educational purposes.
A barrier with Unix and these early Unix-like operating systems was the licensing of those times. Licenses were unapologetically expensive, and these operating systems required high-end hardware for enterprise and academic usage. This led Richard Stallman to launch the GNU Project, which includes the Hurd kernel, intended to be the kernel of the GNU operating system. Stallman was deeply concerned by the increasing prevalence of proprietary software, which restricted users’ freedom to modify or share. The same frustration met Linus Torvalds, who was hemmed in by MINIX’s licensing and limitations, which resulted in him writing the Linux kernel in 1991.
The mid-’70s to the ’80s were critical times due to the booming market for personal computers aimed at direct end users. Several personal computers had already gained prominence from the mid-’60s to the ’80s, but only a handful of people could boast of owning one. IBM, seizing the opportunity, began scouting for an operating system that would run on its IBM 5150, also called the IBM PC. The IBM PC was built around the Intel 8088 microprocessor, a relatively simple and low-cost CPU. Of course, IBM couldn’t approach Unix or BSD, which ran on more powerful processors like those in minicomputers and workstations (besides, Unix and BSD were enterprise operating systems on high-end, commercial hardware, with licensing costs end users could never have afforded had IBM purchased them and passed the cost on to consumers). MINIX, an educational tool, did not even exist yet (it arrived in 1987), GNU/Hurd never gained traction due to the complexity of microkernel development and the hardware compatibility of the time, and Linux was technically not “born” yet. The dominant operating system for the earlier PCs was CP/M (Control Program for Microcomputers), developed in 1974 and dominant on 8-bit microcomputers during the late 1970s and early 1980s, so it wasn’t a hard choice for IBM to approach Gary Kildall, founder of Digital Research, Inc. and developer of CP/M. Kildall’s unavailability when the IBM reps came visiting, along with other factors, led IBM to approach Microsoft (then Micro-Soft, led by Bill Gates and Paul Allen), whose only real product had been the BASIC interpreter for the Altair 8800 in 1975. Without an operating system of their own, Gates and Allen acquired QDOS (Quick and Dirty Operating System) from Seattle Computer Products and adapted it into PC DOS (IBM’s version). Thus MS-DOS (later Windows) became widely accepted as the de facto operating system for low-cost, affordable end-user PCs.
We cannot ignore that corporate takeovers and centralized ownership can totally derail the purpose and ambitions of an operating system and its community of users. The takeover of CentOS in 2014 - a stable Linux release with millions of users and servers - by Red Hat came as a rude shock to the GNU/Linux community. In 2020, Red Hat decided to “convert” CentOS into CentOS Stream, a rolling release and an upstream of Red Hat Enterprise Linux (RHEL). This move meant the end of life of CentOS Linux: its support services ended, and thousands of servers migrated to RHEL’s subscription service.

https://www.redhat.com/en/blog/centos-linux-going-end-life-what-does-mean-me

The fact that developers and community managers can decide one day to fold a long-lived, community-driven project into a profit-making corporate institution can be quite scary. Linux has one guardian. His name is Linus Torvalds, and he dictates what additions go into the kernel, which is fine by many of us. The only anxiety is that he hasn’t named a successor, which leaves the community worried about the future.

In conclusion, and in addition to all that’s been written, community managers need to stick it through any difficulty, technical or managerial, faced by a community’s distribution or tool. Internal disagreements are common and normal, but they shouldn’t be a reason to throw talent overboard. Blockings and kick-outs are sadly frequent in the BSD community. OpenBSD is an offshoot of NetBSD that emerged in 1995 after disagreements and fallouts: Theo de Raadt, a co-founder of NetBSD, was locked out of its repository over development disagreements, which led him to fork NetBSD and build upon it to create OpenBSD. BSD work has repeatedly been stalled by such fracas over the years. Also, veteran BSD developers tend to migrate and abandon projects when the going gets tough, aligning with projects that have larger communities or, perhaps, where the whistle blows louder for more end-user features.
Developers and managers of open-source communities should improve on existing tools and modules instead of just throwing new stuff in. Doug McIlroy, inventor of the Unix pipe and longtime head of the Bell Labs department where Unix was born, once said:

“adoring admirers have fed Linux goodies to a disheartening state of obesity.”

Also,

“Everything was small... and my heart sinks for Linux when I see the size of it… The manual pages, which really used to be a manual page, is now a small volume, with a thousand options... We used to sit around in the Unix Room saying, 'What can we throw out? Why is there this option?' It's often because there is some deficiency in the basic design — you didn't really hit the right design point. Instead of adding an option, think about what was forcing you to add that option” - Doug McIlroy [cited from Wikipedia]

System design should be kept simple, small and modular. The pair “easier and better” is not always ideal for a secure, performance-driven system.
