DEV Community

B U C H M A N💻
The Fundamentals of Computer Science

Introduction:

Computer science is a vast and multifaceted field but at its core, it's the study of computation, information, and automation. It encompasses a wide range of topics, from the theoretical foundations of algorithms to the practical applications of software and hardware.

Computer science entails:

  • Hardware: This refers to the physical components of a computer system, including processors, memory, storage devices, and input/output devices.

  • Software: This encompasses the programs and instructions that tell the hardware what to do. It includes programming languages, algorithms, data structures, operating systems, and software development methodologies.

  • Theory: This explores the underlying principles and limitations of computation, including topics like Boolean algebra, computability theory, and computational complexity.

  • Networking: This focuses on the communication between computers and devices over networks, covering protocols, internet technologies, and network security.

  • Security: This crucial aspect involves protecting computer systems and data from cyber threats, malware, unauthorized access, and other vulnerabilities.

In this article, I'll explain these key areas at the most fundamental level and show how they interact with each other to create technologies and solutions to the diverse problems we encounter around the world.

Prerequisites:

Learning the fundamentals of computer science requires no special knowledge, rather it requires a zeal to learn and a smile on your face 😁.

Hardware:

This refers to the physical component of a computer system, providing the tangible foundation on which software and data reside. It comprises various elements that work together to execute instructions, store information, and interact with the external world. You can refer to Hardware as “The Building Blocks of Computing”.

Let's explore some key hardware components:

Central processing unit (CPU):

Considered the brain of the computer, the CPU performs calculations and executes instructions: it fetches data from memory, performs operations, and stores the results back in memory. Also referred to as the "central" or "main" processor, the CPU is a complex set of electronic circuitry that runs the machine's operating system and applications, acting as the computer's control center.

Imagine your body as a complex computer system. Just like any computer, your body needs a central processing unit (CPU) to function. This CPU, located in your brain, acts as the command center, controlling all your thoughts, actions, and reactions.

Figure 1. A central processing unit (CPU)

The brain acts like the CPU of your body in:

  • processing information from your senses,
  • making decisions,
  • sending instructions to your muscles and organs.

It's constantly

  • multitasking,
  • prioritizing tasks,
  • adapting to your new environment.

Just like a computer CPU, your body's CPU (brain) is a powerful and complex system governing all your thoughts and actions.

Memory (RAM):

This temporary storage holds data and instructions actively used by the CPU. It allows for fast access to frequently used information, enabling smooth program execution.

RAM (random access memory) is the hardware in a computing device where the operating system (OS), application programs, and data in current use are kept so they can be quickly reached by the device's processor.

In humans, short-term memory is like the body's RAM, temporarily storing information for immediate use. It allows us to hold onto current information, perform calculations, and follow instructions. However, its capacity is limited, and information fades quickly if not actively used or consolidated into long-term memory.

Figure 2. RAM (random access memory)

Storage devices (HDD/SSD):

These permanent storage units hold data and programs even when the computer is turned off.

Hard disk drives (HDDs) offer large storage capacity at a low cost but slower access times, while solid-state drives (SSDs) provide much faster access, typically at a higher cost per gigabyte.

Human long-term memory, similar to HDD and SSD storage devices, serves as the permanent repository of life experiences and knowledge. It allows us to recall memories, learn new things, and maintain our identity.

Input/Output (I/O) devices:

These devices allow users to interact with the computer and provide input or output. Examples include keyboards, mice, printers, scanners, and cameras.

Just like computer I/O devices, your body’s sensory and motor system allows you to interact with the world around you.

Here's how the body's I/O system works:

  1. Information gathering: Your senses (eyes, ears, skin, tongue, and nose) constantly collect information from the external environment.

  2. Transmission: Sensory nerves transmit this information in the form of electrical signals to the brain.

  3. Processing and interpretation: The brain interprets the received information, understands its meaning, and makes decisions.

  4. Action and Response: The brain sends instructions to your muscles and other organs, allowing you to react and interact with your surroundings.

Motherboard:

A motherboard is the main printed circuit board (PCB) in a computer. The motherboard acts as the central hub connecting all components, allowing them to communicate and share data. It houses the CPU, RAM, and other essential components.

Think of your central nervous system (CNS) as the motherboard of your body:

  • Connectivity: It serves as the central hub, connecting all parts of your body, including your brain, spinal cord, and nerves.
  • Information exchange: It facilitates the transmission of information between various organs and systems, allowing them to work together seamlessly.
  • Coordination: It coordinates and regulates the activities of different body parts, ensuring smooth functioning and response to internal and external stimuli.

Figure 3. Motherboard

Network interface card (NIC):

A network interface card (NIC) is a hardware component, typically a circuit board or chip, installed on a computer so it can connect to a network. This card allows computers to connect to networks, enabling communication and resource sharing with other devices on the network.

Think of your immune system and lymphatic system as the body's NIC:

  • Connection and communication: They act as the network, connecting various parts of the body and facilitating communication between different cells and tissues.
  • Resource sharing: They transport vital resources like nutrients, hormones, and immune cells throughout the body, ensuring proper functioning and defense.
  • External interaction: They act as the gateway to the outside world, filtering out harmful substances and protecting the body from pathogens and infections.

Similar to how a NIC connects to a physical network, the body’s system connects through various components:

  • Blood vessels: These act as network cables, carrying blood, lymph fluid, and other essential substances throughout the body.
  • Lymphatic organs: These function like network hubs, filtering and processing lymph fluid, filtering out harmful substances, and producing immune cells.
  • Immune cells: These act as network security, identifying and eliminating pathogens, infections, and foreign bodies.

Graphics processing unit (GPU):

A graphics processing unit is a specialized electronic circuit initially designed to accelerate computer graphics and image processing. This specialized processor handles graphics processing tasks, significantly improving the performance of graphics-intensive applications like gaming, video editing, and 3D rendering.

Imagine your body as a complex computer system capable of processing visual information and producing artistic expressions. Just like a computer has a graphics processing unit (GPU) to handle graphics-intensive tasks, your body relies on specific brain regions and neural pathways to perform these functions.

Think of these dedicated areas as the body’s GPU:

  • Visual processing: The occipital lobe acts like a dedicated graphics card, receiving and processing visual information from the eyes.
  • Creativity and imagination: The prefrontal cortex and temporal lobes function like creative software, generating ideas, manipulating images, and composing music or stories.
  • Fine motor control: The cerebellum and motor cortex act like precision drivers, coordinating muscle movements for drawing, playing musical instruments, or crafting artistic works.

Power supply unit (PSU):

A power supply unit (PSU) converts the alternating current (AC) from the wall outlet into the low-voltage, regulated direct current (DC) used by a computer's internal components.

Imagine your body as a complex computer system requiring a constant flow of energy to function. Just like a computer needs a power supply unit (PSU) to convert AC power to DC power, your body relies on its digestive system and circulatory system to convert food into usable energy and deliver it throughout your body.

Peripherals:

These are additional devices that enhance the functionality of the computer system. Examples include external hard drives, flash drives, webcams, and microphones.

Imagine your body as a complex system with a basic set of functionalities. Just like a computer benefits from additional hardware like external peripherals, your body can utilize various tools and extensions to enhance its capabilities and interact with the world in new ways.

Impacts of computer hardware:

Computer hardware engineering maximizes efficiency and productivity by enabling faster and more reliable automated processes. Hardware advancements, such as faster processors and high-performance sensors, enable real-time data processing, analysis, and decision-making.

Hardware plays a crucial role in shaping the capabilities and limitations of computing. It determines the speed and performance of applications, the amount of data that can be stored, and the types of tasks that can be accomplished. As hardware continues to evolve, it will undoubtedly continue to push the boundaries of what is possible in the digital world.

Software:

Software is a generic term used to refer to applications, scripts, and programs that run on a device. Software, the intangible counterpart to hardware, is the lifeblood of any computer system. It is the collection of instructions and data that tells the hardware what to do and how to do it.

Imagine software as the brain and nervous system of a computer, directing its every action and response.

Let's explore the types of software:

System software:

This is the basis upon which all other software operates. It includes the operating system (OS) that manages the hardware and provides a platform for applications to operate. Examples include Windows, macOS, and Linux.

System software acts as an intermediary between the user and the computer, managing and maintaining the computer's hardware.

Think of a well-furnished house with running water and electricity. System software is like the running water, electricity, and furniture: without them, the house would be very difficult to live in. Likewise, without system software, application software cannot be executed or run successfully on a computer.

Application software:

These are programs designed to perform specific tasks for users, such as web browsers, word processors, video games, and recording software.

Now think of that same house, well furnished with running water and electricity but without home appliances such as a television, home theatre, or washing machine. Application software is like the home appliances that bring liveliness and usefulness to the house. Imagine your laptop or phone without a web browser, WhatsApp, Slack, or X (formerly Twitter). How boring and limited would they be?

Do you get the point now? Okay, nice!

Middleware:

Middleware is software that lies between an operating system and the applications running on it. This software is a bridge between applications and the underlying hardware and operating system. It facilitates communication and data exchange between different software components.

Functions of software:

  • Data manipulation: Software can create, edit, store, and retrieve data in various forms.
  • Logic control: It can make decisions based on predetermined rules and conditions, automating complex tasks and workflows.
  • User interface: It provides a visual and interactive environment for users to interact with the computer and applications.
  • Communication: Software facilitates communication between computers and users, as well as between different software applications.

Development of software:

  • Programming languages: These are the languages used to write instructions for the computer to follow. Examples include Python, JavaScript, Java, C++, etc.

  • Software development tools: Various tools and technologies assist programmers in writing, testing, and debugging software. Examples include code editors/IDEs, version control systems (VCS), build automation tools, testing tools, debugging tools, project management tools, communication tools, etc.

Software development life cycle (SDLC):

Software development life cycle (SDLC) is a process used by the software industry to design, develop, and test high-quality software. This is a structured approach to the software development process, involving planning, designing, developing, testing, and deploying the software.

Now, let’s explore different SDLC models:

  • Waterfall model: A sequential model where each stage is completed before moving to the next.

Figure 4. Waterfall model

  • Agile model: An iterative and incremental model where the software is developed in small increments and continuously tested and improved.

Figure 5. Agile model

  • Spiral model: A risk-driven model that combines the iterative development process with elements of the waterfall model. The spiral model is favored for large, expensive, and complicated projects.

Figure 6. Spiral model

  • DevOps: A culture and practice that integrates development, operations, and security throughout the SDLC. Under a DevOps model, the development and operations teams work together across the entire software application life cycle, from development and testing through deployment to operations.

Choosing the right SDLC model depends on various factors, such as the project size, complexity, and team structure.

By understanding and applying the SDLC principles, software developers can create high-quality software that meets the needs of users and businesses.

Programming Fundamentals in Software Engineering

Data and information: Data are a collection of raw values that, on their own, have no specific meaning. These values may be symbols, numbers, letters, facts, etc. An example of data is a list of dates. There are two types of data: analog and digital.

Digital data is data that represents information using machine-level encodings, ultimately ones and zeros, that can be interpreted by various technologies. Devices that use digital data include computers, laptops, iPads, and smartwatches.

Analog data is data represented by a continuous physical quantity, such as sound. Analog devices include loudspeakers, some thermometers, and amplifiers.

Information refers to processed data, that is, data with meaningful use. When data is interpreted, it provides context with which we can make informed decisions. For example, collecting rainfall and temperature data over a period can help scientists predict the weather conditions in a particular location.

To understand the significance of different breakthroughs in the computer industry's history, you must have a basic understanding of how a computer operates. So let us explore how a computer represents information.

Bits and bytes:
Do you remember when we talked about computer language consisting of ones and zeros? Computers work by manipulating these ones and zeros, which are called binary digits, or bits for short. A single bit is too small to be useful on its own, so bits are grouped into units of 8. This 8-bit unit is called a byte (note the spelling: a byte, not a "bite").

A byte is the basic unit of data in a computer, made up of a group of 8 bits. This is why the numbers 8, 16, 32, and 64 are so common in computing: they are all multiples of 8. When you encounter these numbers in various computing contexts, it is usually because the 8-bit byte is the basic building block.

Despite their apparent limitations, the two digits 1 and 0 can represent almost anything in a system.

For example, the byte 10000000 can represent an instruction or a piece of information; it might encode an instruction meaning "start a program", telling the computer that this is the beginning of that particular program.

You will also hear people speak of kilobytes, megabytes, and gigabytes, or often just "K", "meg", and "gig", as in "This computer has 64 gigs of RAM" or "This file is 45 KB".

A kilobyte is 1024 bytes, a megabyte is 1024 kilobytes, and a gigabyte is 1024 megabytes. However, in everyday usage, it is common to use 1000 instead of 1024.

It is important to note that a bit is denoted with a lowercase ‘b’ while a byte is denoted with an uppercase ‘B’.

Bytes, as well as KB, MB, and other such measurements, are commonly used to describe the size of data on a computer. On the other hand, bit measurements are more often used to describe network speed.

For instance, if your network speed is 200 Mbps (megabits per second), it would take approximately 8 seconds to download a 200-megabyte file: since 1 byte is 8 bits, 200 MB is 1,600 Mb, and 1,600 ÷ 200 = 8 seconds.
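That arithmetic can be sketched in a few lines of Python (the function name here is just illustrative):

```python
# Convert a file size in megabytes (MB) and a network speed in
# megabits per second (Mbps) into an estimated download time.
def download_time_seconds(file_size_mb, speed_mbps):
    file_size_megabits = file_size_mb * 8  # 1 byte = 8 bits
    return file_size_megabits / speed_mbps

# A 200 MB file over a 200 Mbps connection:
print(download_time_seconds(200, 200))  # 8.0
```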

Variables and datatypes:
In programming, a variable is a named value that can change based on conditions or on information passed to the program. A computer program typically includes instructions that direct the computer on what to do, along with data that the program uses while running. The data is made up of constants, or fixed values that never change, and variable values, which are usually initialized to 0 or some default because the actual values will be provided by the program's user. Usually, both constants and variables are declared with specific data types. We will discuss data types later; for now, let us focus on variables.

Variables are a fundamental concept in programming used to store data. They allow for the storage and manipulation of data, making programs more efficient and readable.

Variables provide a way of labeling data in a program, which makes it easier for the reader and programmer to understand what the data represents. For example, when a program requires users to input their first name or age, a programmer will use variables such as "firstName" or "age" because this data is not fixed.

It's important to use descriptive words when naming variables, especially in large programs, as it can become difficult to manage unknown variable names. Think of a variable as a container that holds data and can be called upon when needed.

In programming, we use the equals sign/assignment operator, ‘=’, to assign a value to a variable. To do this, we write the variable name on the left side of the operator and the value on the right. For example, in Python:

name = "Daniel Okafor"

Some programming languages like JavaScript have special keywords that come before declaring a variable. For instance, in JavaScript, we use const before the variable name:

const name = "Daniel Okafor";

Once we have assigned a value to a variable, we can easily use the variable name to access it in our application. We can manipulate the variable based on the data type of the value assigned to it.

In computer programming, data type is a way to classify data that informs the compiler or interpreter how the programmer intends to use the data. Data type also defines a set of values and a set of operations that can be applied to those values. Simply put, a data type specifies the type of value that a variable has.

Consider two values: 4 and 'Daniel Okafor'. While it is possible to calculate the square of 4, the same operation cannot be performed on 'Daniel Okafor'. This is because certain operations can only be applied to values of a specific data type, and attempting to use them on values of a different data type will produce an error. These errors can occur during either the compilation or execution of our program.

Most programming languages have built-in data types to represent different kinds of values. Here are some of the most common ones:

  • Integers: These represent whole numbers, including negative ones.
  • Floating point numbers: These represent numbers with decimal parts.
  • Booleans: These represent logical values, either true or false.
  • Strings: These represent text values, made up of characters like letters, digits, symbols, and signs.
  • Null: This represents a value that is unknown or unspecified.
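As a quick illustration, here is how literals of these common types look in Python (which spells the null value None):

```python
age = 27                 # integer: a whole number
temperature = -3.5       # floating point: a number with a decimal part
is_raining = False       # boolean: either True or False
greeting = "Hello!"      # string: a sequence of characters
middle_name = None       # null/None: an unknown or unspecified value

# Each value carries its type, which you can inspect:
print(type(age).__name__)       # int
print(type(greeting).__name__)  # str
```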

Type checking is the process of validating the operations performed on values of different data types in a computer program. Whenever an invalid operation is attempted, a type error is thrown. For instance, if we try to multiply a number by a string, such as

x = 4 * "Daniel Okafor"

it will result in a type error in most languages, because the multiplication operator expects numbers on both sides. (A few dynamic languages define special cases; Python, for example, treats multiplying a string by a whole number as repetition. The general principle still holds.)

There are two primary methods of type checking: Static and Dynamic.

Static type checking is performed at compile time, when a compiler translates the source code. If a type error is detected, it is thrown immediately and the code does not run. This is beneficial because it allows early detection of type errors during the development phase. Static type checking is common in programming languages such as C++, Java, C, Go, and TypeScript.

On the other hand, Dynamically typed programming languages perform type-checking during runtime. The program runs, but if a mismatched type code block is executed, an error is thrown. The use of dynamic type checking can make it difficult to detect type errors during development, especially if the program is not adequately tested. This approach is typically used by programming languages such as JavaScript, Python, and Ruby.
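A minimal Python sketch of dynamic type checking: the invalid operation only fails at runtime, and we can catch the resulting TypeError. (Python raises the error for adding, not multiplying, a number and a string, since multiplying a string by an integer is defined as repetition.)

```python
def square(value):
    # Works only when value supports multiplication with itself.
    return value * value

print(square(4))  # 16: both operands are numbers, so this is fine

# The type error only surfaces at runtime, when this line executes:
try:
    result = 4 + "Daniel Okafor"  # adding a number to a string
except TypeError as err:
    print("Type error caught at runtime:", err)
```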

Conditionals, loops, and recursion:
Conditional statements, also known as Decision Control, allow a program to execute different actions based on whether a condition is true or false. For example, when making coffee, you can add milk if you like it, or leave it if you don't. Different programming languages provide different types of conditional statements, some of which include:

  • if: continues as normal after the block. Let's look at a simple analogy using the if statement

const isRaining = prompt("Is it raining? (yes/no)");

if (isRaining === "yes") {
  console.log("It's raining! Grab your raincoat and boots.");
}

console.log("Have a great day!"); // This line will execute regardless of whether it's raining or not.

  • if/else: the alternative action goes in an else block, after which the program continues as normal. Let's make a simple analogy with an if/else statement

const age = Number(prompt("Enter your age:"));

if (age >= 18) {
  console.log("You are eligible to vote.");
} else {
  console.log("You are not yet eligible to vote.");
}

  • nested if: an if inside an if. For example,

if (water) {
   if (boilingWater) {
     // todo
   }
 }

  • switch: a type of conditional statement, similar to if/else. It is present in programming languages like JavaScript, Java, C, C++, C#, etc. Switch statements use keywords like switch, case, break, etc.

A typical switch statement looks like this:


switch (expression) {
  case value-1:
    block-1;
    break;
  case value-2:
    block-2;
    break;
  case value-n:
    block-n;
    break;
  default:
    default-block;
    break;
}


Let's make a coffee with a switch-style statement. Python (3.10+) spells this construct match/case:


coffee_type = input("What kind of coffee would you like? (Black, Latte, Cappuccino): ")

match coffee_type:
    case "Black":
        print("Making black coffee... Drip drip drip...")
    case "Latte":
        print("Making a latte... Espresso, milk, foam, voila!")
    case "Cappuccino":
        print("Whipping up a cappuccino... Espresso, milk, foam, art...")
    case _:
        print("Sorry, we don't have that type of coffee. Choose black, latte, or cappuccino.")

print("Enjoy your coffee!")

Loop: Do you remember the code we wrote earlier for making coffee? Now imagine we need to make cups of coffee for an entire class of 90 students. Writing the same code 90 times would be hopelessly inefficient, and computers are built to automate exactly this kind of repetition. Instead, we can use something called a loop.

A loop is a programming feature that allows us to repeat a set of instructions until a specified condition is met. This condition is known as the break condition. For example, in a class of 90 students, the break condition could be set to 90. If a break condition is not specified, the loop will continue running indefinitely, causing an infinite loop.

Most programming languages have two types of loops, namely FOR loops and WHILE loops. These loops are identified by their respective keywords. Although both types can be used interchangeably, FOR loops are generally preferred when we know the precise number of times the loop needs to run. For instance, in our coffee example, we know that the loop needs to run 90 times, and hence FOR loops would be a better fit. The structure of a FOR loop generally follows the format:

for (initialize; condition; increment/decrement) {
  doStuff()
}


A FOR loop is a type of loop that consists of two main parts: the header and the body. The header typically consists of three parts, while the body contains the code to be executed while the condition remains true.

Breakdown of a FOR loop:
In the first part of the header, we initialize our loop variable. This variable is used in the loop and is usually set to a starting value.

The second part of the header is a condition that is checked against the loop variable. If the condition is met, the code in the body of our loop runs.

The third part of the header is where we specify how the loop variable is modified after each iteration. This is either an increment or a decrement of the loop variable. The loop condition is then checked again, and if it’s met, the loop runs again.
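The three header parts map onto Python's range() function; here is a sketch of the 90-cup example, where make_coffee is a hypothetical stand-in for the coffee-making code above:

```python
def make_coffee(student_number):
    # Stand-in for the real coffee-making steps.
    return f"Coffee #{student_number} is ready"

cups = []
# range(1, 91) folds the initialize/condition/increment header into one
# call: start at 1, stop before 91, step by 1, i.e. run exactly 90 times.
for student in range(1, 91):
    cups.append(make_coffee(student))

print(len(cups))  # 90
print(cups[-1])   # Coffee #90 is ready
```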

A WHILE loop:
A while loop is a programming structure that runs as long as a specified condition is true. It is often used when the number of times a loop will run is not known in advance.

For instance, imagine we are making coffee for a variable number of people and we do not know how much water it will take to make a single cup of coffee or how much water will be wasted in between. In such a scenario, a while loop would be helpful.

while (boiledWater) {
  makeCoffee()
}


Each time we make coffee, the boiledWater variable is updated; when we run out of boiled water it becomes falsy, and the while loop stops.

Recursion: Imagine browsing through a Wikipedia page about a historical event. As you read, you come across another interesting event, so you click on that, find another, and keep clicking until you no longer find anything that interests you. Then you stop.

Recursive programs are those that call themselves until they reach a base condition. A recursive function can either call itself directly or indirectly.

A direct recursive function calls itself within itself, while an indirect recursive function calls other functions that eventually call the original function.

It is crucial to have a base condition, otherwise, your function will continue to call itself infinitely.

In general, a recursion problem can be solved using loops and vice versa.
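For instance, summing the integers from 1 to n, which we are about to solve recursively, can just as well be written with a loop:

```python
def sum_range(n):
    # Iterative version: accumulate 1 + 2 + ... + n with a loop.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_range(5))   # 15
print(sum_range(90))  # 4095
```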

Let's create a recursive function in JavaScript that adds integers from 1 to a given number.


function sumRange(range) {
 return range + sumRange(range - 1)
}


The code above will run infinitely or throw an error as there is no base case. A base case should be added to the code.


function sumRange(range) {
  if (range === 1) {
    return 1
  }
  return range + sumRange(range - 1)
}


This works, but only for positive numbers. The same error thrown by the first version will occur if 0 or a negative number is passed in.


function sumRange(range) {
  if (range <= 0) {
    return -1
  }
  if (range === 1) {
    return 1
  }
  return range + sumRange(range - 1)
}


Now we have a working recursive function that finds the sum of numbers up to a given range, with a base case and input validation. Recursive functions are commonly used in sorting algorithms.

Big O Notation:
Time complexity refers to the amount of time it takes for a program to complete an operation. It is commonly represented by Big O notation. There are several types of time complexity, including:

  • O(1) - Constant time complexity: For example, accessing a value by its index in an array. The time it takes does not increase as the input size grows.
  • O(n) - Linear time complexity: For example, a simple linear search through an unsorted list. In real life, this would be like reading a book with n pages: the time it takes grows in direct proportion to the input size.
  • O(log n) - Logarithmic time complexity: Seen in binary search on a sorted array. As the input size grows, the running time increases only slowly (logarithmically), because each step halves the remaining input.
  • O(n^2) - Quadratic time complexity: A poor complexity, typically seen with nested loops over the same input. The time it takes grows with the square of the input size.
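To see the logarithmic case concretely, here is a standard binary search over a sorted Python list; each comparison halves the remaining search range:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2        # check the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1              # discard the lower half
        else:
            high = mid - 1             # discard the upper half
    return -1

pages = list(range(0, 1000, 2))    # a sorted list of 500 even numbers
print(binary_search(pages, 500))   # 250
print(binary_search(pages, 501))   # -1 (odd numbers are not in the list)
```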

Data structures and algorithms:
When dealing with large amounts of related data, it's important to have an efficient way of organizing and structuring them. For instance, data about students in a class or employees in an organization.

A data structure is essentially a collection of related data that helps you organize and use them effectively and efficiently. There are various types of data structures available such as arrays, stacks, queues, linked lists, heap, trees, etc.

Array: An array is a collection of data or items stored in sequential order, with all elements of the same data type. Each element in an array is indexed starting from zero. You might have heard the joke that programmers start counting from zero, not one like other people.

Think of an array as a container that holds similar items together, ordered by their position within the container. For example, a bookshelf is a container of books ordered by their position on the shelf: the first book is at index 0, the next at index 1, and so on.

Arrays are widely used as structures for building other complex data structures. They are also used for sorting algorithms.
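In Python, the built-in list plays this role; the bookshelf analogy maps directly onto zero-based indexing:

```python
bookshelf = ["Clean Code", "The Pragmatic Programmer", "Code Complete"]

print(bookshelf[0])    # the first book sits at index 0
print(bookshelf[2])    # the third book sits at index 2
print(len(bookshelf))  # 3 books in total
```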

Stacks: A stack is a linear data collection that works similar to stacking items in a tall container. It allows only the addition and removal of items in a Last in First Out (LIFO) order.

To illustrate, think of a stack of plates where the last plate added is the first to be removed. Stacks are often used for evaluating mathematical expressions and for implementing function calls in recursive programming.
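A Python list already behaves like a stack: append pushes onto the top and pop removes the most recent item (LIFO). A minimal sketch of the plates analogy:

```python
plates = []                 # an empty stack of plates
plates.append("plate 1")    # push
plates.append("plate 2")
plates.append("plate 3")

top = plates.pop()          # pop: the last plate added comes off first
```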

Queues: A Queue is a data structure that operates similarly to stacks, but with a first-in-first-out (FIFO) order.

Imagine a line of people waiting to enter a building. The first person in line will enter the building first, and the last person in line will enter last. This is how a queue works.

Queues are useful in the following scenarios:

  • Scheduling jobs for tasks that may take a long time.
  • Handling congestion in network requests, which can be implemented as priority queuing systems.
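In Python, collections.deque from the standard library makes an efficient queue: append joins the back of the line and popleft serves the front (FIFO). The names below are made up for illustration:

```python
from collections import deque

line = deque()
line.append("Ada")      # Ada arrives first
line.append("Grace")
line.append("Alan")

first_served = line.popleft()  # the first person in line enters first
```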

Linked lists: A linked list is a type of linear data structure that consists of a group of connected nodes. To put it simply, it is a sequence of items that are arranged in a specific order while being linked to each other. Due to this arrangement, accessing data randomly is not possible.

Every element or item in a linked list is called a node, and each node comprises a key and a pointer, commonly called "next," that directs you to the following node. A special reference called the head leads you to the first item in the list.

Thus, the first node in a linked list is known as the head, and the last node points to NULL. Linked lists are often utilized for symbol table management and switching between programs using Alt + Tab (On a PC).
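A minimal linked-list sketch in Python: each node stores a key and a next pointer, the head references the first node, and the last node's pointer is None (Python's stand-in for NULL):

```python
class Node:
    def __init__(self, key, next_node=None):
        self.key = key          # the value stored in this node
        self.next = next_node   # pointer to the next node, or None

def traverse(head):
    """Collect keys by following next pointers until we reach None."""
    keys = []
    node = head
    while node is not None:
        keys.append(node.key)
        node = node.next
    return keys

# head -> 1 -> 2 -> 3 -> None
head = Node(1, Node(2, Node(3)))
```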

What is an algorithm? To cook a new recipe, one must read the instructions and execute each step in the given sequence. The result obtained is a perfectly cooked new dish. An algorithm is a set of instructions for solving a problem or accomplishing a task.

As humans, we can easily solve everyday problems without putting much thought into them, due to our experience and memorization. However, it is important to break down our thought processes into individual steps and translate them into what computers can understand.

Breaking down a problem into individual parts can help you become a better programmer and problem-solver.

Let's write an algorithm to find the average of two numbers and print the result:

Step 1: Declare three variables: a, b, and average.
Step 2: Get input values for a and b from the user.
Step 3: Add a and b, divide the result by 2, and assign it to the average variable.
Step 4: Print the value of the average variable.

This algorithm is easy to understand and can be written in any programming language of your choice.
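The steps above translate directly into Python. The inputs are hard-coded here (instead of read with input()) so the sketch runs as-is, and the result variable is named average for clarity:

```python
a = 7                   # in a real program: a = int(input())
b = 3                   # in a real program: b = int(input())
average = (a + b) / 2   # compute the average of the two numbers
print(average)          # prints 5.0
```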

Popular Algorithm - Binary Search

Imagine you are searching for your friend Sam, who is 5’5’’, in a long queue of people arranged in order of height from shortest to tallest. You don't have enough time to compare everyone's height with Sam's height one by one. What can you do?

The binary search algorithm can help you in this situation. First, you select the person in the middle of the queue and measure their height. Let's say this person is 5’7’’. You can immediately conclude that this person and everyone to their right is not Sam.

You then focus on the remaining queue and select the middle person again. Let's say their height is 5’4’’. You can now rule out this person and everyone to their left, reducing the problem by half. You repeat this process until you find Sam.

Following this method, you can quickly locate Sam in just a few steps. This is how the binary search algorithm works.
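Here is the same idea in Python, searching a sorted list of heights (in inches) for Sam's 65 inches; the numbers are made up for illustration:

```python
def binary_search(heights, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    low, high = 0, len(heights) - 1
    while low <= high:
        mid = (low + high) // 2
        if heights[mid] == target:
            return mid           # found Sam
        elif heights[mid] > target:
            high = mid - 1       # mid and everyone to the right is too tall
        else:
            low = mid + 1        # mid and everyone to the left is too short
    return -1

queue = [60, 62, 64, 65, 67, 70, 72]  # sorted shortest to tallest
position = binary_search(queue, 65)
```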

Theory

The theory of computation is a fundamental aspect of computer science. Let's explore some key areas where theory plays a crucial role.

Algorithmic Theory:
Algorithmic theory is a branch of theoretical computer science that focuses on the connection between computation and the manipulation of information. It studies how computers handle data structures like strings and other objects that are computationally generated.

To help you visualize this, think of a computer as a chef and algorithmic theory as the study of their cooking techniques. Just as a chef transforms ingredients into dishes, a computer transforms data into computable objects. Algorithmic theory is like understanding the rules and limitations of the language that computers use to perform these transformations.

Complexity Theory:
Complexity theory is a useful tool to determine the speed and efficiency of an algorithm in solving a problem, especially when the problem becomes larger.

It's similar to calculating the amount of time and storage space required for a recipe, based on the number of cookies you plan to bake.

It's like a competition among algorithms to determine which one can solve the same problem faster and with less memory.

Think of algorithms as cars that solve problems. Complexity theory tells you how fast and fuel-efficient each car is, so you can choose the one that gets you there quickly and without running out of gas.

Computability theory:
Computability theory is a branch of computer science that helps to determine the limits of what computers can and cannot do. It is like a manual for magic tricks that reveals which illusions are possible and which are not. It is like understanding the rules of a game that computers play, defining what problems they can solve and what remains beyond their reach.

To help understand this, we can imagine computers as tireless workers, but there are some tasks that are like building a perpetual motion machine - they are impossible to achieve even for the most powerful computers. Computability theory explains these limitations and defines the realm of the impossible for computers.

Formal languages and automata:
Formal languages and automata can be thought of as blueprints and robots for computations. They assist us in designing effective systems and proving that our algorithms are correct, much like constructing a bridge with a strong foundation and testing it before opening it to the public.

Think of formal languages and automata as the “language of logic” for computers. We employ this language to establish the rules of computation, much like we use words to explain concepts. This aids us in comprehending how computers function and in creating programs that make sense.

Information theory:
Information theory, which is also known as the mathematical theory of communication, is an approach that studies how data is processed and measured during the transmission of information.

Think of information theory as a codebook for the universe. It helps us comprehend how information is stored, transmitted, and manipulated. Essentially, it is like the rules of the language that data speaks.

It is like a conversation between two people, but instead of words, they use bits and bytes. Information theory informs us how much information can be conveyed with each bit and how to ensure that the message arrives correctly.

At its core, information theory is about understanding the essence of information itself. It is a powerful tool that enables us to build better technologies, communicate more effectively, and even unlock the secrets of the universe.

Data compression:
Data compression is a process that encodes information using fewer bits than the original representation, so information can be stored and sent using less space. Compression can be either lossless or lossy. Lossless compression reduces bits by identifying and eliminating statistical redundancy, so the original can be restored exactly; lossy compression discards less important information, trading some fidelity for a smaller size.

In simpler terms, data compression is like a magic trick where you shrink files. It helps you organize your digital life by squeezing out all the extra bits and making files smaller and easier to manage. Think of it as packing your clothes neatly into a suitcase. Compression makes everything smaller and easier to carry around, just like a suitcase makes it easier to travel.
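Python's standard zlib module demonstrates lossless compression: repetitive data shrinks dramatically, and decompression restores the original bytes exactly. A quick sketch:

```python
import zlib

original = b"pack your clothes neatly " * 100  # highly repetitive data
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed), "bytes")
```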

Error correction:
Error correction refers to the process of detecting and repairing errors in data, restoring it to its original, error-free state. This can be likened to having a doctor for your digital world, who scans for errors, diagnoses the problem, and fixes it before any data is corrupted. By doing so, error correction ensures that your information remains healthy and secure.
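A toy example of error detection (the first half of error correction) is a parity bit: add one bit so the number of 1s is even, and any single flipped bit becomes detectable. This is a teaching sketch only; real systems use stronger codes such as Hamming codes or CRCs:

```python
def add_parity(bits):
    """Append a parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def is_valid(bits):
    """True if no single-bit error is detected."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
corrupted = word.copy()
corrupted[0] ^= 1                 # simulate a single-bit transmission error
```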

Information entropy:
In the field of information theory, entropy is the measure of the average amount of information that is present in each message that is received. The concept of information entropy is used to quantify the amount of "surprise" or unpredictability present in a message. It's similar to the feeling of excitement that one experiences while opening a gift; the more unexpected the gift, the higher the entropy of the message.
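Shannon entropy can be computed in a few lines of Python: a message of one repeated symbol carries no surprise (0 bits per symbol), while an even mix of two symbols carries 1 bit per symbol:

```python
import math
from collections import Counter

def entropy(message):
    """Average bits of information per symbol in the message."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```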

Programming Language Theory (PLT):
Programming language theory encompasses the design, implementation, analysis, characteristics, and classification of formal languages used in programming.

Type systems: In computer programming, a type system is a logical set of rules which assigns a property called a type to every term. These types can be integer, floating point, string, or any other data type. Think of type systems as the traffic rules for your code. They ensure that different data types interact with each other in a safe way, preventing crashes and ensuring smooth operation.

Formal semantics: Formal semantics are used to define the exact meaning of programming language constructs. This allows people to reason about how programs behave and verify their correctness. Think of formal semantics as an "instruction manual" for programming language constructs. It provides a detailed explanation of what each piece of code does, similar to a remote control's buttons.

Concurrency theory: Concurrency theory is a concept in computer science that deals with the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the outcome. It is like a traffic cop for programs, helping tasks to run at the same time without crashing into each other. This makes programs more efficient and smooth running.

Computational Logic:
Computational logic refers to the use of logic for computation or reasoning. It is similar to the relationship between mathematical logic and mathematics, and philosophical logic and philosophy, with regards to computer science and engineering. Essentially, it is known as “logic in computer science”.

Computational logic can be seen as a bridge that connects reasoning and computing. By utilizing the power of logic, it allows computers to think and solve problems. It is like a secret code that unlocks the hidden potential of logic within machines.

Logic programming: Logic programming expresses a program as a set of logical facts and rules; computation proceeds by drawing logical inferences from them to produce quantifiable results. Examples include Prolog and SQL.

Formal verification: Formal verification is a process of using mathematical reasoning to check whether a system's behavior, described through a formal model, meets a certain property, also described through a formal model. It can be thought of as building a bridge between math and software. Formal verification helps analyze computer programs and prove their correctness, which is crucial for ensuring the reliability and safety of important systems such as medical devices or airplane software.

Model checking: Model checking is a process used to verify the accuracy of a system's model. It automatically confirms the correctness of systems against formal specifications. This technique is commonly used in hardware and software design to detect and prevent errors before deployment.

Think of it as a super-powered test drive for your software or hardware before releasing it to the world. Model checking puts it through its paces, simulating every possible situation to ensure it behaves exactly as planned. This prevents crashes and ensures smooth operation.

Networking

Computer networking is the interconnection of computing devices that exchange data and share resources using communication protocols over wired or wireless technologies.

Networking is an essential component of the digital world. It is crucial for computer scientists to understand its principles. Networking is not just about linking devices; it also controls how information flows, how systems communicate, and how the internet operates. Let's delve into some of the critical terms in computer networking.

Network Basics:

  • Node: A network node is a connection point between network devices such as routers, printers, or switches that can send and receive data from one endpoint to another. Any device connected to the network, such as computers, printers, servers, etc., can be considered a node.

  • Network interface card (NIC): A hardware component that enables a device to connect to a network.

  • Protocol: Network protocols are a set of rules that define how devices communicate across a network to exchange information safely and efficiently.

  • Internet Protocol (IP) address: An IP address is a unique identifier for devices or networks connecting to the internet.

  • Subnet: A subnet is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting.

  • Router: A router is a device that enables the connection of two or more networks or subnetworks that use packet-switching. It has two primary functions: managing traffic between networks by directing data packets to their intended IP addresses and allowing multiple devices to access the same internet connection.

Network Communication:

  • Transmission Control Protocol/Internet Protocol (TCP/IP): TCP/IP is a standardized set of protocols that enables computers to communicate on a network, such as the internet.

  • Open system interconnection (OSI) model: The OSI model is a standard model for network communications, consisting of seven layers. It was adopted by major computer and telecommunication companies in the early 1980s.

Figure 7. The seven layers of the OSI model

  • Packet: In telecommunication and computer networking, a network packet is a formatted unit of data carried by a packet-switched network; large messages are split into packets that are transmitted independently and reassembled at the destination.

  • Ports: Ports are virtual locations on an operating system where network connections start and end, acting as virtual communication channels on a device.

  • Bandwidth: Bandwidth refers to the amount of data that can be transmitted over a network in a unit of time, usually measured in bits per second (bps).

  • Latency: The total time required for a data packet to travel from its source to its destination.

Network Security:

  • Firewall: In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.
  • Encryption: Encryption is a security technique used to protect sensitive information by converting it into an unreadable format, known as ciphertext, which can only be understood by authorized parties. This process involves encoding the original text, called plaintext, into an alternative form using a complex algorithm. The goal is to ensure that only authorized individuals can decipher the ciphertext and gain access to the original information.
  • Authentication: In authentication, users or computers must prove their identity to servers or clients. This involves verifying the user's identity.
  • Authorization: Authorization is the process by which a server verifies if a client has the permission to access a resource or file. It is the process of allowing a user access to specific resources.
  • Malware: Malware is any file or code that infects, explores, steals or performs any action an attacker desires, often delivered via a network.
  • DDoS attack: A denial-of-service (DoS) attack is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting the services of a host connected to a network. A distributed denial-of-service (DDoS) attack mounts this from many compromised sources at once, flooding the target with a massive amount of traffic.
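To illustrate the encryption idea from the list above, here is a toy XOR cipher: applying the same key twice recovers the plaintext. This is for illustration only and is not secure; real systems use vetted algorithms such as AES:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key; the operation is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, b"secret")   # unreadable without the key
recovered = xor_cipher(ciphertext, b"secret")   # the same key decrypts
```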

Additional Terms

  • VLAN: A virtual local area network is a partitioned and isolated broadcast domain in a computer network at the data link layer, creating separate domains.
  • VPN: VPN is short for virtual private network. It encrypts your internet traffic in unsecured networks to ensure your digital privacy. It creates a secure tunnel over a public network.
  • DNS: The Domain Name System (DNS) is a distributed hierarchical naming system used to translate domain names into IP addresses for computers, services, and other resources on the internet or other IP networks.
  • Wi-Fi: Wi-Fi is a wireless networking technology that allows devices to connect to a network without the need for cables and uses radio waves to provide high-speed internet access.
  • Cloud computing: Cloud computing refers to the delivery of computing services over the internet with on-demand availability of computer system resources, such as data storage and computing power, without requiring direct active management by the user.

Security

Computer security refers to a set of measures and controls designed to ensure the confidentiality, integrity, and availability of information that is processed and stored by a computer. It encompasses various aspects including physical information asset protection, data security, and computer safety practices. To have a better understanding of computer security, let's explore some important terms in this field.

Basic concepts:

  • Vulnerability: A security vulnerability is a weakness or gap in an information system that can be exploited by cybercriminals to gain unauthorized access. Such vulnerabilities weaken the system and make it susceptible to malicious attacks. Therefore, it is essential to identify and fix any weaknesses in the system to prevent any potential security breaches.
  • Threats: In the field of computer security, a threat refers to a possible negative action or event that can take advantage of a vulnerability in a computer system or application, ultimately resulting in an undesirable impact.
  • Risk: A computer security risk is any threat that could compromise the confidentiality, integrity, or availability of your data. It is the likelihood of a vulnerability being exploited to cause harm.

Cryptographic Terms:

  • Encryption and Decryption: Encryption is the process of converting a readable message, called plaintext, into an unreadable form, called ciphertext, using an algorithm and a key. Decryption is the reverse process: with the correct key, the ciphertext is converted back into the original plaintext.
  • Hashing: Hashing is the process of transforming an input (often a key) into a fixed-size output that represents the original input in a unique way. This output is often called a hash, message digest, or checksum.
  • Digital signature: A digital signature is a type of electronic signature that is used to authenticate the identity of the signer and ensure the integrity of the signed information.
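Python's standard hashlib module shows the key properties of a cryptographic hash: the digest has a fixed size, the same input always yields the same hash, and a one-character change produces a completely different one:

```python
import hashlib

digest = hashlib.sha256(b"hello world").hexdigest()
tweaked = hashlib.sha256(b"hello worlD").hexdigest()  # one character changed
```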

Network security:

  • Intrusion detection system (IDS): An intrusion detection system functions as a tool or software application that observes a network or systems, identifying potential malicious activity or violations of established policies by monitoring network traffic for any suspicious behavior.
  • Intrusion prevention system (IPS): An intrusion prevention system (IPS) is a security tool that keeps a constant eye on a network, spotting and blocking malicious activity automatically, ensuring a secure environment by preventing harmful network traffic.
  • DDoS attack: A denial-of-service attack aims to disrupt a machine or network resource, making it unavailable to users. In a distributed denial-of-service (DDoS) attack, a massive traffic flood renders the network inaccessible.
  • VPN: VPN, or virtual private network, encrypts your internet traffic for digital privacy in unsecured networks by creating a secure tunnel that outsiders cannot read.
  • Firewall: In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.

Software security:

  • Buffer overflow: A buffer overflow happens when a storage area is filled with more data than it can handle. The excess spills into nearby memory, messing up or replacing existing data. Attackers take advantage of this by overwriting an application's memory, altering its course and causing harm like file damage or exposing private info. They might inject malicious code into a program as a common way to exploit this vulnerability.
  • SQL injection: SQL injection is a sneaky move in computing where attackers slip harmful SQL statements into entry fields of data-driven applications. This lets them mess with the data by injecting malicious SQL code into the application.
  • Cross-Site scripting (XSS): XSS attacks let attackers sneak in client-side scripts on web pages seen by others, potentially bypassing access controls like the same-origin policy.
  • Malware: Malware is software made on purpose to mess with computers or networks – it might disrupt, leak private information, sneak into systems, block access to data, or mess with your computer's security and privacy without you knowing.

Conclusion

In conclusion, computer science offers a wide range of study options with ample opportunities for aspiring computer scientists to specialize in various areas. While we haven't covered all computer science terms, the ones discussed provide a solid foundation to begin and enhance the understanding of the field's fundamentals.
