<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kevin Mbanugo</title>
    <description>The latest articles on DEV Community by Kevin Mbanugo (@skysoft501).</description>
    <link>https://dev.to/skysoft501</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1882407%2F06c68ae6-a303-4a44-8cda-78c8cfeddc23.png</url>
      <title>DEV Community: Kevin Mbanugo</title>
      <link>https://dev.to/skysoft501</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/skysoft501"/>
    <language>en</language>
    <item>
      <title>Computer Science is Not Science</title>
      <dc:creator>Kevin Mbanugo</dc:creator>
      <pubDate>Mon, 19 Aug 2024 10:52:31 +0000</pubDate>
      <link>https://dev.to/skysoft501/computer-science-is-not-science-45km</link>
      <guid>https://dev.to/skysoft501/computer-science-is-not-science-45km</guid>
      <description>&lt;ol&gt;
&lt;li&gt;Computer Science is not science &lt;/li&gt;
&lt;li&gt;Software Engineering is not engineering &lt;/li&gt;
&lt;li&gt;Computer Science (which isn’t science) has very little to do with computers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is obvious that many professionals, students and academics will be provoked by these statements. But with the pent-up provocation against statements that seem to outrightly challenge a discipline and career we have known and loved for generations come inquisitions like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Why not?&lt;/li&gt;
&lt;li&gt;If true, what should the discipline be called?&lt;/li&gt;
&lt;li&gt;Also, if true, why has the misconception lasted for generations, and why has the failure to address these disciplines persisted?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Concrete, definitive answers will be given to address these inquisitions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Why Not? Let’s first have a clear picture of what science really is.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;“Science is the study of the natural and physical world using theoretical models and data from observations and experiments” - [Princeton’s WordNet]&lt;/p&gt;

&lt;p&gt;Science follows the scientific method: observation, formulating a falsifiable hypothesis and testing that hypothesis in the hope of establishing empirical evidence for or against it.&lt;br&gt;
  Empirical evidence is simply data or information from observation, experiment or experience rather than theory or logical reasoning. It is evidence that can be directly measured and observed in the real world, and this forms the very foundation of science.&lt;br&gt;
  A scientific theory usually connotes a theory that is backed by empirical evidence, but theories can be classified by how strongly or weakly the empirical evidence supports them. These classifications are:&lt;/p&gt;

&lt;p&gt;A.  Theories Strongly Supported by Empirical Evidence - e.g The Big Bang Theory (cosmic microwave background, red shift etc), The Theory of Natural Selection (fossil records, homologous structures etc)&lt;/p&gt;

&lt;p&gt;B. Theories with Indirect or Partial Empirical Evidence - e.g Dark Matter (supported by the gravitational effects of the galaxies but the exact nature of dark matter remains largely unknown)&lt;/p&gt;

&lt;p&gt;C.  Theories with Limited or No Empirical Evidence - These are mere hypotheses and speculations like String Theory, Multiverse hypotheses.&lt;/p&gt;

&lt;p&gt;D. Discredited Theories - Some theories that were once considered feasible have been discredited or replaced by more credible theories as new and better empirical evidence emerged, e.g. the Steady State theory, which was replaced by the Big Bang theory.&lt;/p&gt;

&lt;p&gt;It is very evident that science and its methods are only viable for physical, natural (evolving) processes, not for something abstract or logical. Software is considered abstract because it consists of logic, instructions and algorithms, which generally lack empirical evidence.&lt;br&gt;
Consequently, because software, programs and algorithms are abstract, lack empirical evidence, exist neither in the natural nor the physical world (as something “always there” waiting to be “discovered” and “investigated”, as is typical of science), and cannot be probed with scientific methods, it makes perfect sense that Computer Science cannot be science. This consequently explains why software engineering cannot be engineering.&lt;/p&gt;

&lt;p&gt;According to the American Engineers’ Council for Professional Development (ECPD), engineering is:&lt;/p&gt;

&lt;p&gt;“The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.”&lt;/p&gt;

&lt;p&gt;Engineering and science, while closely related, are two distinct fields: science is concerned with understanding the natural world through observation and experimentation, developing theories to explain phenomena, while engineering (with the exception of engineering science, which applies the full methodology of science to create real-world solutions) is the application of those scientific principles and theories to design, build and maintain systems, structures, machinery and tools. So we now see that software engineering is not engineering merely because it follows a design approach. Traditional engineering must be validated by science; if you do not apply science in a structured way, it is not considered engineering. Engineering is principally about using scientific principles and methods to create or improve things. Without the application of science, it would be more akin to craftsmanship or trial-and-error problem solving (which is essentially what programming and software development entail). This is why most applied sciences, like medicine and engineering, are backed by mandatory certifications and licensing, issued by governing boards that ensure strict compliance with these principles.&lt;br&gt;
It is imperative I mention that most abstract disciplines, like mathematics, logic, philosophy and theoretical computer science, are regarded as formal sciences, but their status is still disputed and they have not been wholly accepted as scientific disciplines, because of the lack of empirical evidence to support them. Unlike science, which is as old as man, tracing far back to ancient civilizations, formal science is quite modern, tracing back to the mid-20th century. It was introduced by the growing number of abstract studies and the pressing need to group or classify them according to specific interests. This will be explained in detail in question 3.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If True, What Should The Discipline Be Called? If you read any computer science book, the first chapter is likely to be about the history of computers: probably from the abacus to the ENIAC, then the first-generation mainframes and minicomputers through to microcomputers. But is our discipline called computer science (which isn’t science, by the way) because we use science methodologies to study a man-made machine?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Fellows and Parberry said, “Computer Science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes.”&lt;/p&gt;

&lt;p&gt;It goes without a doubt that computers are tools. The history of computing (not of the tools) predates the machines themselves and is as old as man. Man constantly came up with solutions, through certain procedures or functions, to solve problems: hunting, shelter making, even recipes. Solutions were achieved by procedures or functions (usually with well-defined parameters) designed to tackle his problems. Indeed, in the early 17th century, “computer” was a name given to workers tasked with computing problems to arrive at solutions. There is a modern name ascribed to this discipline, algorithmics, and those who study and practice it are called algorithmists. An algorithm is a procedure or function, with well-defined parameters, to solve a problem. The first generally agreed algorithm was written by Euclid around 300 BCE. His method for computing the greatest common divisor (GCD) of any two positive integers is generally considered the first documented algorithm, as seen below:&lt;/p&gt;

&lt;p&gt;E1. [Find remainder.] Divide m by n and let r be the remainder. (We will have 0 ≤ r &amp;lt; n.)&lt;/p&gt;

&lt;p&gt;E2. [Is it zero?] If r = 0, the algorithm terminates; n is the answer.&lt;/p&gt;

&lt;p&gt;E3. [Interchange.] Set m ← n, n ← r, and go back to step E1.&lt;/p&gt;
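
&lt;p&gt;Euclid’s steps E1–E3 translate almost line for line into code. As a minimal, illustrative Python sketch (the function name gcd and the sample numbers are mine, not part of the original description):&lt;/p&gt;

```python
def gcd(m, n):
    """Euclid's algorithm: greatest common divisor of two positive integers."""
    while True:
        r = m % n      # E1. Find remainder: divide m by n, let r be the remainder
        if r == 0:     # E2. Is it zero? If so, n is the answer
            return n
        m, n = n, r    # E3. Interchange: set m to n, n to r, and go back to E1

print(gcd(48, 18))  # prints 6
```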

&lt;p&gt;Alan Turing is considered the father of modern computing and the inventor of the Turing machine, a fully feasible procedure that works the same whether it is realized abstractly or physically, as shown below:&lt;/p&gt;

&lt;p&gt;IF state = 3 and symbol = 0&lt;br&gt;
THEN write 1, set state to 0, move right&lt;/p&gt;
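
&lt;p&gt;A rule like the one above is all a Turing machine needs. The following is a minimal, illustrative sketch in Python; the rule table mirrors the single IF/THEN rule shown, while the tape representation, halting convention and sample run are assumptions made for the example:&lt;/p&gt;

```python
# Rules map (state, symbol) to (write, new_state, move); move 1 is right, -1 is left
rules = {
    (3, 0): (1, 0, 1),  # IF state 3 and symbol 0 THEN write 1, set state 0, move right
}

def run(rules, tape, state, pos, halt_state):
    tape = dict(enumerate(tape))  # unbounded tape as a dict; blank cells default to 0
    while state != halt_state:
        symbol = tape.get(pos, 0)
        write, state, move = rules[(state, symbol)]
        tape[pos] = write
        pos = pos + move
    return tape

tape = run(rules, [0], state=3, pos=0, halt_state=0)
print(tape[0])  # prints 1
```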

&lt;p&gt;Logically, it consists of an infinitely long tape with a read/write head, where each cell of the tape holds a symbol such as 1 or 0, and the machine carries a current state. This model underlies all digital computers and electronics.&lt;br&gt;
As simple, short and concise as the Turing machine is, it is the very essence and basis of all modern computers, both presently (while you read this article) and at any time in the future. Even the upcoming paradigm of quantum computing, which uses qubits in superposition of 1s and 0s alongside the usual 1s and 0s, does not fade out the Turing model but enhances it, making it much faster (parallel computation) and far more efficient.&lt;br&gt;
According to Kurt D. Krebsbach,&lt;/p&gt;

&lt;p&gt;“The most advanced program in the world can be simulated by a simple TM - a machine that exists only in the abstract realm...no computer required! This is an important reason why CS is not fundamentally about physical computers, but is rather based on the abstract algorithms and abstract machines (or humans) to execute them”   — Kurt D. Krebsbach [Computer Science: Not About Computers, Not Science, Department of Mathematics and Computer Science, Lawrence University, Appleton, Wisconsin 54911 ].&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Why The Misconception? As time progresses and new disciplines and courses of study emerge, especially as a discipline grows (whether as a new discipline, as a branch of a bigger course, or through a growing body of students and enthusiasts), there is usually a tendency to group such disciplines under a broader group or a branch of a broader discipline, just for association’s sake. An example is bioinformatics, which might be grouped under biology (true science) and the so-called computer science (not science). Also, data science (which also isn’t science) may be linked to mathematics, statistics or the so-called computer science. This has led to relational errors that have persisted for a long time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On the other hand, some of these disciplines don’t grow as fast, being stunted. The need therefore arises to join or associate with a much broader umbrella to maintain relevance. An example is Library Science (really?!), which experiences slower growth due to factors such as niche appeal, slower technological advancement and limited resources, and is therefore associated with information technology or “data science”. The same goes for cognitive psychology’s association with AI, or behavioral economics with brain–computer interfaces.&lt;br&gt;
Some are just plain, old-fashioned errors of association, caused (sometimes) by a fine, thin line between, for example, engineering and science.&lt;br&gt;
Why these misconceptions haven’t been addressed comes down to the same reasons given above. It all boils down to relevance and preservation. Plain mistakes of association, as in linking “Computer Science” with science, may be almost impossible to correct due to prolonged usage; the name has etched too deep to be retraced.&lt;/p&gt;

&lt;p&gt;To wrap up, I’d like to close with these remarks by prominent icons of our discipline.&lt;/p&gt;

&lt;p&gt;“I’ve never liked the term ‘computer science’. It is a grab bag of tenuously related areas thrown together by an accident of history, like Yugoslavia” — Paul Graham [Arc Programming Language Inventor &amp;amp; Ycombinator  Founder]&lt;/p&gt;

&lt;p&gt;“Science is concerned with finding out about phenomena, and engineering is concerned with making useful artifacts. While science and engineering are closer together in computer science than in other fields, the distinction is important” — John McCarthy [Father of Artificial Intelligence &amp;amp; Inventor of LISP Programming Language].&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>datascience</category>
      <category>softwareengineering</category>
      <category>ai</category>
    </item>
    <item>
      <title>How To Hack?</title>
      <dc:creator>Kevin Mbanugo</dc:creator>
      <pubDate>Wed, 07 Aug 2024 22:07:14 +0000</pubDate>
      <link>https://dev.to/skysoft501/how-to-hack-3d2k</link>
      <guid>https://dev.to/skysoft501/how-to-hack-3d2k</guid>
      <description>&lt;p&gt;You must have come across such questions on forums and message boards. You must have also noticed the reactions of irritated (mostly experienced K-Rad Elite Hackers) users who curse, spite and make such people feel worthless, as scums of the internet. Imagine asking this shitty question on Stackoverflow? You could count yourself lucky to get zero answers, than the usual barrage of word wrenching, agonizing comments with all sort of ill names. It was much worse in the 90’s and early 2000’s. Random kids log into a serious IRC channel and begs someone to teach them how to hack. They get to be called all sort of derogatory names before being kicked out. The frustrations of the elite K-Rads is 100% inversely proportional to the absolute  sheer ignorance of these kiddies with infinitesimal knowledge of what hacking is or what it is about. But don’t worry, I will explain that frustration in this article. Keep reading. I promise you, if you follow all that is written in the next few paragraphs, you will have a great compendium of information of what hacking really is. Let’s go….&lt;/p&gt;

&lt;p&gt;First things first, elite K-Rads usually get upset when someone asks “how to hack” because frankly THERE’S NO SUCH THING AS “HOW TO HACK”. Think about it: if there really were such a thing as “how to hack”, it would be easy for system administrators to keep you out permanently. There are thousands of professional cybersecurity experts whose sole job is to keep people like you out of networks and secure systems, and they have been doing that long, long, LONG before you were born, long before the term “cybersecurity” was even a “thing”. White-bearded, real K-Rad hackers are usually hired to safeguard networks; they are at the top of their game and highly experienced. These are the kind of people you would want to be like before having even a little chance at your dream.&lt;br&gt;
Now let’s get the obvious stuff out of the way:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You are lame and possibly a waste of time. Please realize this. You know nothing of what you think you want to know or how to go about it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You probably want to “know how to hack” after you saw a Hollywood movie where a guy in a black hoodie hacked the NSA by typing fast on a keyboard for 30 seconds. Or maybe you saw your friend use a ready-made script off the internet to hijack a random computer, which so happens to be a Windows XP box used by a grandma in Australia. You are not worthy of anything resembling a hacker or cracker, so don’t walk around calling yourself that just because you saw the Mr. Robot TV show. The more you do, the less likely you are to meet anyone willing to help you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Again, you are wasting your time. Real K-Rad hackers (not you or your shitty script-kiddie buddy) spend all their insomniac hours reading and tinkering. In the first stages, most hacking is done nowhere near a target computer. Some hacking could be tinkering with intricate parts of your operating system, like building your operating system from scratch (Linux From Scratch), or configuring and building the Linux kernel. How can you even begin to hack without an understanding of the host operating system?!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, with the obvious stuff out of the way, let’s begin:  How do I start?&lt;/p&gt;

&lt;p&gt;Have you tried READING? I will assume you know how to read. Stay away from your computer, because you sure as shit don’t know what it is. Read everything and anything you can on computer security, networks and operating systems. I don’t care if it’s out of date; the foundation is pretty much the same and it’s still very relevant.&lt;/p&gt;

&lt;p&gt;STEP 1  : RECOMMENDED BOOKS&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modern Operating Systems by Andrew S. Tanenbaum - pay particular attention to file systems, kernels (Unix, Linux, NT) and shell scripting.&lt;/li&gt;
&lt;li&gt;Unix For Dummies by John Levine - necessary to understand the operating system that powers most servers.&lt;/li&gt;
&lt;li&gt;The Unix Programming Environment by Brian Kernighan &amp;amp; Rob Pike&lt;/li&gt;
&lt;li&gt;Go to linux.org and read everything there &lt;/li&gt;
&lt;li&gt;The Linux Programming Interface by Michael Kerrisk&lt;/li&gt;
&lt;li&gt;Read up on: [IP addresses] [public &amp;amp; private IP addresses] [IPv4] [IPv6] [static &amp;amp; dynamic IP addresses] [MAC addresses] [MAC address spoofing] [DNS] [DHCP] [ARP - Address Resolution Protocol - maps IP to MAC] [NAT - Network Address Translation - facilitates connections between a public IP address and private (local network) IP addresses] [the OSI model &amp;amp; examples of each layer’s processes] [the TCP/IP model - a condensed, 4-layer counterpart of the OSI model; learn its layers] [TCP, UDP - the train stations of the internet] [how VPNs work via encapsulation] [firewalls - you will meet a lot of them] [how routers work]&lt;/li&gt;
&lt;li&gt;Quickly breeze through HTTP Requests - learn how to use your shell to form HTTP Request and modify headers to get back HTTP Response. This is a little practical play on layer 7 of the OSI model, to get an idea of how HTTP works.&lt;/li&gt;
&lt;li&gt;Now, chew on the RFCs. RFC means Request for Comments; RFCs specify the standards (protocols) of the internet.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;RFC 791 - Internet Protocol&lt;br&gt;
RFC 792 - ICMP (read this carefully)&lt;br&gt;
RFC 1034 &amp;amp; 1035 - DNS; they form the basis of the modern internet&lt;br&gt;
RFC 5322 - Internet Message Format, the standard for electronic mail&lt;br&gt;
RFC 5321 - specification for the Simple Mail Transfer Protocol (SMTP)&lt;/p&gt;
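
&lt;p&gt;As a small taste of the HTTP exercise in the reading list above, here is a minimal Python sketch that assembles a raw HTTP/1.1 request by hand, the same bytes you would form in a shell tool; the host name and header values are made-up examples:&lt;/p&gt;

```python
def build_request(method, path, host, extra_headers=None):
    """Assemble a raw HTTP/1.1 request string, header by header."""
    headers = {"Host": host, "User-Agent": "demo/0.1", "Connection": "close"}
    if extra_headers:
        headers.update(extra_headers)     # modify or add headers before sending
    lines = [f"{method} {path} HTTP/1.1"]
    for name, value in headers.items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"  # a blank line ends the header block

req = build_request("GET", "/", "example.com", {"Accept": "text/html"})
print(req.splitlines()[0])  # prints: GET / HTTP/1.1
```

&lt;p&gt;Sending those bytes over a TCP socket to port 80 and reading the reply is the whole of HTTP at layer 7.&lt;/p&gt;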

&lt;p&gt;STEP 2 - WELCOME TO STEP 2. This step is extremely crucial. What is step 2? REPEAT STEP 1 ALL OVER AGAIN!!! Read everything again - this time, read much more slowly. You need a firm grasp of step 1 in order to assimilate it. When you finish reading step 1 for the second time, head over to step 3.&lt;/p&gt;

&lt;p&gt;STEP 3 - INSTALL LINUX&lt;/p&gt;

&lt;p&gt;DO NOT install Kali Linux. Because:&lt;/p&gt;

&lt;p&gt;A. Kali is filled with hundreds of tools and scripts that you know nothing about.&lt;br&gt;
B. Contrary to a lot of myopic opinion, Kali Linux is for penetration testing. Even though it ships several attack tools, you’d be stupid to use them now, when you know nothing about them or how they work, and you cannot attack a specific target other than some random vulnerable box. Don’t be a script kiddie.&lt;/p&gt;

&lt;p&gt;Install Debian Linux or Slackware Linux (pretty much any distro will do, but these two are super flexible, like bubble gum). Play around with the Linux directories. Learn Bash scripting. It’s fun, yea…!&lt;/p&gt;

&lt;p&gt;STEP 4 - LEARN PROGRAMMING&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start with C++ (not Python… NOT PYTHON!)&lt;/li&gt;
&lt;li&gt;Read “Teach Yourself C++ in 21 Days”. The only way to escape learning C++ is to learn C, which is like jumping from the frying pan into the fire. And if you must escape C, then you must learn Assembly (which is jumping from the fire into volcanic molten lava). So stay safe with C++. You just have to learn from the C family; it’s the native language of all operating systems and much hardware.&lt;/li&gt;
&lt;li&gt;Learn Perl. Yes, Perl programming. There’s a reason why many classic security tools are written in Perl. Perl is excellent at text parsing and shines at text processing, often benchmarking well ahead of Python. Python may have a far wider reach, but you will want Perl one-liners to extend your Bash scripts; Perl is wonderful for shell scripting. Read [Programming Perl] &amp;amp; [Perl Cookbook].&lt;/li&gt;
&lt;li&gt;Read [Head First Java, 2nd Edition] - get clearer perspectives on objects &amp;amp; classes.&lt;/li&gt;
&lt;li&gt;Read [Head First Python, 3rd Edition] - we can’t ignore Python because of its vast libraries (that’s just it… its libraries). They are good libraries: e.g. most malware is written in C &amp;amp; C++, but Python has the ctypes library, so you can call C functions, and you can even call the Windows API with Pywin32. Python has vast libraries to do lots of shit, and for that we must learn it, BUT PYTHON COMES LAST ON THE CHAIN.
At this juncture, after coming this far, you have successfully earned my respect.&lt;/li&gt;
&lt;/ol&gt;
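
&lt;p&gt;To make the ctypes point concrete, here is a minimal sketch of calling a C function from Python. It assumes a Unix-like host where the C standard library can be located by name:&lt;/p&gt;

```python
import ctypes
import ctypes.util

# Locate and load the C standard library (assumes a Unix-like host)
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Call the C function abs(int) directly from Python
print(libc.abs(-42))  # prints 42
```

&lt;p&gt;The same mechanism scales up to any exported C function, provided you declare argument and return types carefully.&lt;/p&gt;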

&lt;p&gt;STEP 5 - OK, run Kali Linux (preferably from a flash drive) in persistent mode. Time to play with those tools and scripts. But unlike before, you now have a clearer perspective and a sharp mental index. You wouldn’t want to use those tools for kid stuff; now you want to see HOW those tools work - the modus operandi, the techniques. Of what use is a dictionary attack or a rainbow-table technique when your target machine’s password is&lt;br&gt;
15 characters mixed with special characters, numbers, symbols etc., with one chance of success in 22,000 years? You must then find another way in, by programmatically exploiting vulnerabilities in a running service.&lt;br&gt;
The issue with hacking a box or infiltrating a network with ready-made scripts and tools is that these tools aren’t specific. They run random searches across the internet like clueless bots, looking for whichever box has no firewall, weak passwords, or a particular vulnerability from a particular year (which will likely have been patched long ago). This is when you build your own tools and malware, which won’t be a problem for you, because you made it all the way through step 4.&lt;br&gt;
Learn penetration-testing tools and mess with them. Learn to use pen-testing tools to scan for vulnerabilities, then exploit those vulnerabilities programmatically. For example, to gain superuser privileges, you might exploit a buffer overflow in an insecurely written privileged program, overwriting memory so that your injected code executes with superuser rights on the target machine. But this is just one in a billion possibilities; it all depends on the use case, the target machine and the vulnerabilities involved. Writing your own malware and viruses to gain an advantage will depend on what you already know about the host operating system, the network architecture, the vulnerabilities to be exploited and your purpose (payload).&lt;/p&gt;

&lt;p&gt;That is it. You now notice this wasn’t “how to hack”, because that doesn’t exist. This is simply a GUIDE on LEARNING how to hack: the ability to keep up with the latest vulnerabilities, tricks and ways to programmatically exploit those vulnerabilities in software and hardware. Subscribe to security news and mailing lists. Read everything about security and networks. A hack run (the time to accomplish a successful hack) can be as long as two years or as short as a month. You have to be patient and learn to think. DON’T BE A BLACK HAT. Chances are you will be caught, and there’s nothing cool about being a convict. Be a WHITE HAT. Help secure the internet. Break in to help; leave a note on the victim’s machine telling them how you got in and how to protect their machine more adequately. Get a job as a cybersecurity expert and share your arcane knowledge ONLY with those worthy of it. Peace!&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>hacktoberfest</category>
      <category>programming</category>
      <category>linux</category>
    </item>
    <item>
      <title>The Linux Audacity</title>
      <dc:creator>Kevin Mbanugo</dc:creator>
      <pubDate>Sun, 04 Aug 2024 18:44:34 +0000</pubDate>
      <link>https://dev.to/skysoft501/the-linux-audacity-2o0o</link>
      <guid>https://dev.to/skysoft501/the-linux-audacity-2o0o</guid>
      <description>&lt;p&gt;GNU/Linux is perceived as a secured, stable and mostly light-weight Operating System and a worthy replacement to Microsoft Windows. But how safe and secured is Linux really? And how immune to malware is Linux? Let’s see from recent times:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Many didn’t realize that CrowdStrike’s Falcon security platform, which crashed about 8.5 million Microsoft Windows workstations due to a faulty update to its security software, also brought Linux to its knees, despite Linux’s eBPF subsystem for running sandboxed programs in the kernel.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://news.ycombinator.com/item?id=41005936" rel="noopener noreferrer"&gt;https://news.ycombinator.com/item?id=41005936&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theregister.com/AMP/2024/07/21/crowdstrike_linux_crashes_restoration_tools/" rel="noopener noreferrer"&gt;https://www.theregister.com/AMP/2024/07/21/crowdstrike_linux_crashes_restoration_tools/&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This year (2024), Linux was exploited for the very reason it is open source and transparent - the very essence of its fortress. A malware implant in the compression library liblzma was used to create a backdoor into the OpenSSH server. The attacker hid two malicious “test files” in the project’s public GitHub repository, used when building (compiling) the program from source. Read the full, comprehensive tactics documented by Andres Freund who, like everyone else, wouldn’t have noticed (but for the unusual resource usage of the SSH server), given how stealthy and deeply entrenched the malware was, escalating to root privileges.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://openwall.com/lists/oss-security/2024/03/29/4" rel="noopener noreferrer"&gt;https://openwall.com/lists/oss-security/2024/03/29/4&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sysrv malware (2024), a ferocious cryptojacking program.&lt;/li&gt;
&lt;li&gt;RansomEXX (2022), ransomware targeting Linux servers.&lt;/li&gt;
&lt;li&gt;Drovorub (2020), A rootkit for targeting kernel modules&lt;/li&gt;
&lt;li&gt;Evil Gnome (2019), From the Gnome Desktop Environment, this malware poses as an extension for the DE.&lt;/li&gt;
&lt;li&gt;Linux.Backdoor.Fgt (2017), used to create backdoor access into Linux desktops and servers &lt;/li&gt;
&lt;li&gt;Rex.1 (2015), A malicious rootkit&lt;/li&gt;
&lt;li&gt;HiddenWasp (2019), A rootkit and backdoor access to Linux machines&lt;/li&gt;
&lt;li&gt;QNAPCrypt (2019), encrypts Linux file systems and demands a ransom to decrypt.&lt;/li&gt;
&lt;li&gt;Hand of Thief (2013), a banking Trojan targeting Linux workstations; sniffs and steals sensitive banking details&lt;/li&gt;
&lt;li&gt;EburySSHBackdoor (2011) A backdoor access malware on SSH server &lt;/li&gt;
&lt;li&gt;Dirty COW (2016), a kernel vulnerability exploited to escalate user privileges to root&lt;/li&gt;
&lt;li&gt;Exim Vulnerabilities (2019), A remote code execution for privilege escalation&lt;/li&gt;
&lt;li&gt;Xordos (2016), Rootkit/Backdoor&lt;/li&gt;
&lt;li&gt;LinuxRex1 (2018), Server malware &lt;/li&gt;
&lt;li&gt;LokiBot (2021), steals sensitive information &lt;/li&gt;
&lt;li&gt;Mirai (2016), botnet malware targeting Linux-based IoT devices and servers&lt;/li&gt;
&lt;li&gt;Lippol (2015) botnet creator &lt;/li&gt;
&lt;li&gt;Linux.Darlloz (2013), A worm targeting servers and PCs including routers, initiating remote code execution.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The list is far from exhaustive. What worries more is the fact that more threats and vulnerabilities keep emerging with each new version of the Linux kernel and the GNU “extras”.&lt;br&gt;
Contrary to the side opinion that blames the Linux kernel’s problems on its monolithic architecture, neither Linux nor any other *nix or Windows operating system is more or less superior because of its architecture (monolithic or microkernel). Both architectures have their advantages, disadvantages and use cases; neither can be credited with whatever success an operating system may have achieved (at least, not directly).&lt;br&gt;
Rather, the problem with the Linux kernel affecting its security and stability is, well, philosophical. Linux is gradually drifting away from the UNIX PHILOSOPHY, which, in a nutshell, is to keep things SIMPLE and SMALL, to BUILD SINGLE-PURPOSE PROGRAMS THAT DO ONE THING AND DO IT WELL, and to WRITE PROGRAMS THAT WORK TOGETHER (stdin &amp;amp; stdout, or pipes “|”, to easily exchange data). The following “reinventions”, critiqued as deviating from the Unix philosophy, include:&lt;/p&gt;
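
&lt;p&gt;The “programs that work together” principle is easy to demonstrate. The Python sketch below wires two small standard tools together exactly as a shell pipeline would (assuming echo and tr are available, as on any Unix-like system):&lt;/p&gt;

```python
import subprocess

# Equivalent of the shell pipeline: echo 'do one thing well' | tr 'a-z' 'A-Z'
p1 = subprocess.Popen(["echo", "do one thing well"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()            # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out.decode().strip())  # prints: DO ONE THING WELL
```

&lt;p&gt;Each tool stays single-purpose; composition through plain text streams is what makes them powerful together.&lt;/p&gt;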

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Systemd - You can run a kernel alone, but when you do, you discover it just sits there waiting to be initialized. The kernel begins useful work when a service such as a tty session starts, but in reality the “init” process is the first process started by the kernel during boot: it has a process ID (PID) of 1 and remains running until the system shuts down. That’s where systemd comes in. Before systemd we had SysV init, a service and system initializer for Unix and Unix-like systems, adopted by Linux. SysV comprises shell scripts for starting, stopping and managing services, kept in the /etc/init.d directory or sometimes /etc/rc.d/init.d. Alongside the kept shell scripts are symlinks that point to the scripts in init.d, prefixed with S (for start) or K (for kill) followed by a number dictating the order in which the scripts should be executed. SysV was extremely flexible and could be modified as deemed fit.&lt;br&gt;
Around 2010, SysV began to be replaced by systemd, developed by Lennart Poettering (while still an employee of Red Hat). Systemd made initialization easier and better: it initializes faster than SysV because it starts services in parallel, whereas SysV executes its scripts sequentially. Systemd is built to be modular, consisting of several separate but integrated components for managing services (units, in systemd’s parlance), and is managed with integral binary components like systemctl, journald, logind, networkd etc. The problem with “easier and better” is the propensity to introduce unwarranted vulnerabilities. Critics have argued - and rightfully so - that systemd is one centralized program that controls the entire operating system, incorporating many functions that were originally handled by separate, simpler tools. It’s no wonder that Patrick Volkerding, the developer and guardian of Slackware Linux (a very powerful distro and among the most flexible of the thousands out there), has thought it wise not to incorporate systemd into its ecosystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Binary Logging - The drift towards binary log formats (e.g. the systemd journal), instead of plain-text logs that can be easily parsed by standard Unix tools and piped from one program’s output into another program’s input, is a major cause for worry and is deemed unnecessary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Desktop Environments - Developers race to ship a fantastic UI/UX desktop experience, and users constantly thirst to customize their desktops with downloadable extensions. Together these create an unwarranted breeding ground for common malware, as seen in the GNOME DE (see above).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Oversaturated Network Managers - There are complex, oversaturated, unnecessary network managers doing what the standard Unix tools have been doing efficiently for decades. True, new user needs arise over time, along with new hardware. But the underlying concepts stay the same, and old programs can be improved to keep abreast of the times (evolution) rather than throwing new things into Linux (revolution). LINUX NEEDS TO EVOLVE, NOT TO BE REVOLUTIONIZED.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Addition of Non-Essential Features - Many distributions come laden with unnecessary, non-essential features, leading to bloated systems by default. Linux boasts of its ability to revive old hardware with less than 1GB of RAM and still run efficiently and optimally, yet I have seen many distributions that, by default, sit on approximately 4GB of RAM. Some are even worse than Microsoft Windows. More bloat means more affinity for vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Linux Kernel Size &amp;amp; Complexity - Over time, the Linux kernel has grown into a huge mess, in both size and complexity, with more and more features and subsystems. The result is a huge monolithic binary sprouting ever more vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
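&lt;p&gt;For comparison, a systemd service is declared rather than scripted: a small INI-style unit file that systemd itself parses, orders and supervises. This sketch is illustrative only; the binary path and names are assumptions.&lt;/p&gt;

```ini
# /etc/systemd/system/mydaemon.service  (hypothetical unit)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;The declarative form is what lets systemd compute dependencies and start units in parallel, the speed advantage noted above.&lt;/p&gt;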
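&lt;p&gt;To make the SysV model concrete, here is a minimal sketch of the kind of script that lives in /etc/init.d. The daemon name is hypothetical; a real script would launch and signal an actual process instead of echoing.&lt;/p&gt;

```shell
#!/bin/sh
# Minimal SysV-style init script sketch ("mydaemon" is illustrative).
# rc invokes S-prefixed symlinks with "start" and K-prefixed ones with "stop".

start() { echo "Starting mydaemon"; }  # a real script would exec the daemon here
stop()  { echo "Stopping mydaemon"; }  # ...and send it a termination signal here

handle() {
  case "$1" in
    start)   start ;;
    stop)    stop ;;
    restart) stop; start ;;
    *)       echo "Usage: $0 {start|stop|restart}" ;;
  esac
}

handle start   # prints "Starting mydaemon"
```

&lt;p&gt;Because each such script is an ordinary shell program, administrators could edit any step directly, which is the flexibility the article describes.&lt;/p&gt;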
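&lt;p&gt;The complaint about binary logging is easy to demonstrate. A plain-text log composes directly with grep, awk and friends, while the journal must first pass through journalctl. A small sketch, using a made-up sample log:&lt;/p&gt;

```shell
# Plain-text logs parse with the standard toolbox:
printf 'sshd: ok\nsshd: error: auth failure\ncron: ok\n' > /tmp/sample.log
grep -c 'error' /tmp/sample.log   # prints 1 (one matching line)

# The systemd journal is binary and needs its own reader first, e.g.:
#   journalctl -u sshd --since today | grep error
```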

&lt;p&gt;In my 300 Level of Computer Science, I took a course (PRINCIPLES OF OPERATING SYSTEMS) whose textbook was MODERN OPERATING SYSTEMS by Andrew S. Tanenbaum, who built the MINIX operating system, a Unix-like OS, for educational purposes. In chapter 9 (SECURITY), under the subheading TRUSTED SYSTEMS, Tanenbaum rightfully answers the question of whether it is possible to build a secure OS and, if so, why it hasn’t been built. He wrote:&lt;/p&gt;

&lt;p&gt;“1. Is it possible to build a secure computer system? 2. If so, why is it not done?&lt;/p&gt;

&lt;p&gt;The answer to the first one is basically yes. How to build a secure system has been known for decades. MULTICS, designed in the 1960s, for example, had security as one of its main goals and achieved that fairly well. Why secure systems are not being built is more complicated, but it comes down to two fundamental reasons. First, current systems are not secure but users are unwilling to throw them out. If Microsoft were to announce that in addition to Windows it had a new product, SecureOS, that was guaranteed to be immune to viruses but did not run Windows applications, it is far from certain that every person and company would drop Windows like a hot potato and buy the new system immediately. The second issue is more subtle. The only way to build a secure system is to keep it simple. Features are the enemy of security. System designers believe (rightly or wrongly) that what users want is more features. More features mean more complexity, more code, more bugs, and more security errors”.&lt;/p&gt;

&lt;p&gt;This confirms that “easier and better” through ease-of-use features is not always a real solution. Something easier may not always be better, and something better may not always be easier. Unix was extensively criticized in the ’70s &amp;amp; ’80s for its lack of a GUI and its technical learning curve, which wasn’t user friendly at the time, but it was rock solid, secure, and powered enterprises and institutions. A balance between easier and better has to be struck. We have to accept this fact in software and systems development.&lt;/p&gt;

&lt;p&gt;A review of the inclusions into the Linux kernel and the corresponding vulnerabilities they introduced sheds more light:&lt;/p&gt;

&lt;p&gt;1. Drivers - The Linux kernel includes a large number of drivers to support a wide range of hardware devices, which in turn results in a massive, complex code base.&lt;/p&gt;

&lt;p&gt;Resultant Vulnerabilities:&lt;br&gt;
CVE-2019-14615: A vulnerability in the Intel graphics drivers that could allow local users to escalate privileges to root.&lt;/p&gt;

&lt;p&gt;CVE-2020-12888: A Bluetooth driver vulnerability capable of escalating privileges to root.&lt;/p&gt;

&lt;p&gt;2. Virtualization - The Linux kernel can act as a hypervisor through the Kernel-based Virtual Machine (KVM) feature, which allows the running of virtual machines.&lt;/p&gt;

&lt;p&gt;Resultant Vulnerabilities:&lt;br&gt;
CVE-2020-2732: A KVM vulnerability on Intel processors that can allow a VM user to crash the host OS or escalate privileges.&lt;br&gt;
CVE-2021-22543: A nested-virtualization flaw that could potentially allow a guest user of a VM to crash the host operating system.&lt;/p&gt;

&lt;p&gt;3. File Systems - The Linux kernel supports an array of file systems, including FAT32, ext4, Btrfs, NTFS, XFS and a host of others. Typically, throwing in new file system implementations instead of improving existing ones has proven to be a problem with Linux.&lt;/p&gt;

&lt;p&gt;Resultant Vulnerabilities:&lt;br&gt;
CVE-2019-19816: A race condition in the ext4 file system that can cause denial of service and allow an attacker to execute arbitrary code.&lt;br&gt;
CVE-2020-8992: A Btrfs vulnerability that can enable a local attacker to crash the operating system and escalate privileges to root.&lt;/p&gt;

&lt;p&gt;4. Containerization - The kernel provides sandboxed, isolated, resource-controlled environments as a feature. Many sandboxed, containerized user applications rely on it.&lt;/p&gt;

&lt;p&gt;Resultant Vulnerabilities:&lt;br&gt;
CVE-2019-5736: A runC container runtime vulnerability that abuses container primitives such as Linux namespaces and cgroups, allowing escalation from container privileges to arbitrary code execution on the host.&lt;br&gt;
CVE-2020-14386: A cgroup vulnerability that allows local users to escalate privileges to root.&lt;/p&gt;
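&lt;p&gt;Those container primitives are ordinary kernel features visible on any Linux system: every process already belongs to a set of namespaces and cgroups, which can be inspected from /proc (a quick sketch, assuming a Linux machine):&lt;/p&gt;

```shell
# Containers are assembled from kernel primitives, not one "container" feature.
ls /proc/self/ns        # this shell's namespace handles (mnt, pid, net, uts, ...)
cat /proc/self/cgroup   # the cgroup(s) this shell belongs to
```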

&lt;p&gt;5. Tracing &amp;amp; Debugging Tools - The Linux kernel is laden with more than a dozen debugging and tracing facilities like eBPF, dmesg, SystemTap and kgdb. While these tools have different use cases, they attest to the complexity of the Linux kernel.&lt;/p&gt;

&lt;p&gt;Resultant Vulnerabilities:&lt;br&gt;
CVE-2020-14331: A perf vulnerability that enables a local user to escalate privileges to root.&lt;/p&gt;

&lt;p&gt;6. Memory Management - Advanced memory management techniques, including Non-Uniform Memory Access (NUMA), huge pages and many other allocation schemes, add to the complexity of the Linux kernel.&lt;/p&gt;

&lt;p&gt;Resultant Vulnerabilities:&lt;br&gt;
CVE-2019-19319: A memory management vulnerability that could enable escalation to superuser privileges if exploited.&lt;br&gt;
CVE-2020-29374: A use-after-free vulnerability enabling local privilege escalation.&lt;/p&gt;

&lt;p&gt;7. Power Management - This feature set includes CPU frequency scaling, hibernation and other facilities necessary for modern computing, but it adds to the already messy code base.&lt;/p&gt;

&lt;p&gt;Resultant Vulnerabilities:&lt;br&gt;
CVE-2019-9506: A Bluetooth protocol weakness that can allow an attacker to sniff data packets in transit.&lt;/p&gt;

&lt;p&gt;Though many of these vulnerabilities have been patched, it goes to show how "features" can be the enemy of a secure and rock-solid operating system. BSD (Berkeley Software Distribution), a Unix-like operating system family with a conservative approach to maintenance and a considered balance between “easier and better”, is seen as much more attuned (to a certain degree) to the Unix philosophy. FreeBSD always places more emphasis on improving its already existing programs rather than just throwing new additions into the barn.&lt;br&gt;
Early, traditional Unix, the Valhalla of security and rock-tight stability, was developed at AT&amp;amp;T’s Bell Labs in the early ’70s and became a commercial success in research institutes, academia and other high-end institutions, after Multics (a mid-’60s project Bell Labs had participated in) failed due to its complexity and inability to gain traction with the hardware of those times. Unix was a much simpler and more versatile enterprise solution, so it didn’t take long for the University of California, Berkeley (mid ’70s) to get hold of a copy, modify it and develop the first BSD. BSD was the progenitor of innovative tools like the C shell (csh), the vi editor, and networking work such as the TCP/IP stack that became fundamental to the growth of the internet. Impressed by Unix, Andrew S. Tanenbaum built MINIX in the ’80s, officially released in 1987. It was Unix-like, but with a microkernel architecture as opposed to Unix’s monolithic kernel. MINIX was developed with security and performance at the forefront while staying minimal, solely for educational purposes.&lt;br&gt;
A barrier with Unix and these early Unix-like operating systems was the licensing of those times. The licenses were unapologetically expensive, and these operating systems required high-end hardware for enterprise and academic use. This led Richard Stallman to start the GNU project, which includes the Hurd kernel, intended to be the kernel of the GNU operating system. Stallman was deeply concerned by the increasing prevalence of proprietary software, which restricted users’ freedom to modify or share it. A similar frustration met Linus Torvalds, who, unhappy with MINIX’s licensing, ended up writing the Linux kernel in 1991.&lt;br&gt;
The mid ’70s to the ’80s were critical times due to the booming market for personal computers aimed at direct end users. Several personal computers had gained prominence from the mid ’60s to the ’80s, but only a handful of people could boast of owning one. IBM, seizing the opportunity, began scouting for an operating system to run on its IBM 5150, also called the IBM PC. The IBM PC was built around the Intel 8088, a relatively simple, low-cost CPU. Of course, IBM couldn’t approach Unix or BSD, which ran on the more powerful processors of minicomputers and workstations; they were enterprise operating systems for high-end commercial hardware, with licensing costs end users could not have afforded had IBM passed them on to consumers. MINIX was an educational tool, GNU/Hurd never gained traction due to the complexity of microkernel development and the hardware compatibility of the time, and Linux was technically not “born” yet. The dominant operating system for the earlier PCs was CP/M (Control Program for Microcomputers), developed in 1974 and dominant on 8-bit microcomputers during the late 1970s and early 1980s, so it wasn’t a hard choice for IBM to approach Gary Kildall, founder of Digital Research, Inc. and developer of CP/M. Kildall’s unavailability when IBM’s representatives came visiting, along with other factors, led IBM to approach Microsoft (then Micro-Soft, led by Bill Gates and Paul Allen), whose only product had been the BASIC interpreter for the Altair 8800 in 1975. Without a real operating system of their own, Gates and Allen acquired QDOS (Quick and Dirty Operating System) from Seattle Computer Products and adapted it into PC DOS (IBM’s version). Thus MS-DOS (and later Windows) became widely accepted as the de facto operating system for low-cost, affordable end-user PCs.&lt;br&gt;
We cannot ignore that corporate takeovers and centralized ownership can totally derail the purpose and ambitions of an operating system and its community of users. The acquisition of CentOS, a stable Linux release with millions of users and servers, by Red Hat in 2014 came as a rude shock to the GNU/Linux community. In 2019, Red Hat decided to “convert” CentOS to CentOS Stream, a rolling release that serves as an upstream to Red Hat Enterprise Linux (RHEL). This move forced the end of life of CentOS, ended its support services, and pushed thousands of server migrations toward the subscription-based RHEL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.redhat.com/en/blog/centos-linux-going-end-life-what-does-mean-me" rel="noopener noreferrer"&gt;https://www.redhat.com/en/blog/centos-linux-going-end-life-what-does-mean-me&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fact that developers and community managers can decide one day to fold a life-long, community-driven project into a profit-making corporate institution can be quite scary. Linux has one guardian: his name is Linus Torvalds, and he dictates what additions go into the kernel, which is fine by many of us. The only worry is that he hasn’t named a successor, which leaves the community very anxious about the future.&lt;/p&gt;

&lt;p&gt;In conclusion, and in addition to all that has been written, community managers need to stick it through any difficulty, technical or managerial, faced by a community’s distribution or tool. Internal disagreements are common and normal, but they shouldn’t be a reason to throw talent overboard. Blockings and kickouts are sadly frequent in the BSD community. OpenBSD is an offshoot of NetBSD that emerged in 1995 after disagreements and fallouts: Theo de Raadt, a co-founder of NetBSD, was locked out of the repository over development disagreements, which led him to fork NetBSD and build upon it to create OpenBSD. BSD work has long been stalled by such fracas over the years. Old BSD developers also tend to migrate and abandon projects when the going gets tough, aligning with projects that have a larger community or, perhaps, where the whistle blows louder for more end-user features.&lt;br&gt;
Developers and managers of open source communities should improve existing tools and modules instead of just throwing new things in. Doug McIlroy, inventor of the Unix pipe and a former department head at Bell Labs, once said:&lt;/p&gt;

&lt;p&gt;“adoring admirers have fed Linux goodies to a disheartening state of obesity.”&lt;/p&gt;

&lt;p&gt;Also,&lt;/p&gt;

&lt;p&gt;“Everything was small... and my heart sinks for Linux when I see the size of it… The manual pages, which really used to be a manual page, is now a small volume, with a thousand options... We used to sit around in the Unix Room saying, 'What can we throw out? Why is there this option?' It's often because there is some deficiency in the basic design — you didn't really hit the right design point. Instead of adding an option, think about what was forcing you to add that option” - Doug McIlroy [cited from Wikipedia]&lt;/p&gt;
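&lt;p&gt;The style McIlroy is describing, small tools composed through pipes, still works exactly as it did in the Unix Room. A toy pipeline that finds the most frequent line in its input:&lt;/p&gt;

```shell
# Each stage does one job: sort groups duplicates, uniq -c counts them,
# sort -rn ranks by count, awk picks the winner.
top=$(printf 'b\na\nb\nc\na\nb\n' | sort | uniq -c | sort -rn | awk 'NR==1{print $2}')
echo "$top"   # prints b
```

&lt;p&gt;No single stage knows about the others; plain text is the interface, which is precisely what binary formats and monolithic tools give up.&lt;/p&gt;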

&lt;p&gt;System design should be kept simple, small and modular. “Easier and better” is not always ideal for a secure, performance-driven system.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>kernel</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
