
Chris White

What Does The Best Programming Language Really Mean: The Old Ways

What programming language is "the best" is an often heated topic across multiple social media sites. Various reasons are given, including performance, low learning curve, domain-specific features, etc. In essence, it very much depends. To understand the current state of programming languages, it helps to first understand a bit of how things were.

The Hardware

It's important to understand the hardware, as code is pretty meaningless if there's nothing to run it on. The CPU (Central Processing Unit) is the main point of execution. Given how much CPUs interact with memory, they often have a hardware cache available. This reduces the time needed to load data from memory (which the CPU is constantly doing). While it may not seem like much now, memory access was a real potential speed bottleneck back then.

CPU Registers

Data from memory is also pulled into CPU registers. Each register has a specific purpose for executing code (the x86 registers are a good example), and registers follow specific naming conventions. The EAX register on x86 has a very interesting history and is often used as a way to select system calls. Data registers can be used to hold data for functions, such as a string to print to the terminal.
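As a rough sketch (not from the original article; it assumes 32-bit Linux and NASM syntax), here is how EAX selects a system call while other registers hold its arguments, including a pointer to a string to print:

```
; Minimal sketch (assumptions: 32-bit Linux, NASM syntax).
section .data
msg     db  "hello", 10        ; the string to print, ending in a newline
len     equ $ - msg            ; length calculated by the assembler

section .text
global _start
_start:
    mov eax, 4                 ; EAX selects the system call (4 = sys_write)
    mov ebx, 1                 ; file descriptor 1 = stdout
    mov ecx, msg               ; pointer to the string
    mov edx, len               ; number of bytes to write
    int 80h                    ; software interrupt into the kernel

    mov eax, 1                 ; 1 = sys_exit
    xor ebx, ebx               ; exit code 0
    int 80h
```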

CPU Features

How many registers there are and what registers are for has changed over the years. Intel introduced MMX technology to speed up mathematical operations (particularly important in the game/media industry before GPUs were popular). 64-bit computing further increased the size of some registers. All of this meant that making programs portable in those days was difficult, as whether or not the code would run depended on the CPU and its features.
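As an illustration of that growth (an assumed x86-64/NASM sketch, not from the original), the same physical register can be addressed at each of the widths the architecture accumulated over time:

```
; Sketch (assumptions: x86-64, NASM syntax).
mov rax, 0x1122334455667788   ; full 64-bit register (RAX)
mov eax, 0x55667788           ; low 32 bits (EAX); writing it clears the upper half
mov ax,  0x7788               ; low 16 bits (AX), the original 8086 width
mov al,  0x88                 ; lowest 8 bits (AL)
```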

Multi Task Execution

There is a physical limitation to what a single CPU can accomplish. Modern systems instead utilize multi-core processors: essentially a single chip that holds multiple CPUs inside of it. This allows the system to execute more tasks in parallel, or to offload tasks from one CPU if another is busy with processing. GPUs (Graphics Processing Units) also assist by offloading many graphics-related tasks such as video playback and game rendering.

Interrupts

Interrupts are another component of how the CPU operates. As the name suggests, an interrupt tells the CPU to pause whatever it is doing to handle a time-sensitive event. As I type on my keyboard to write this article, hardware interrupts are being sent out, because when I type I expect a fairly immediate response. Faulty hardware can cause interrupt handling to fail, which has the potential to bring a CPU to a halted state where it can't do anything else.

Interrupts can also be triggered from the software side. A fairly popular example was using interrupts to call the DOS (Disk Operating System, a fairly old OS by Microsoft) API via INT 21h. DOS would then look at the AH register (the high byte of AX) to tell which specific part of the DOS API to call (09h to print a string, for example). As the 21h suggests, this was also a time when hexadecimal notation was commonly used for declaring values.
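A minimal sketch of that print-string call (assumptions: NASM syntax, assembled as a DOS .COM program) might look like this:

```
; Sketch (assumptions: NASM syntax, built as a DOS .COM program).
org 100h                 ; .COM programs are loaded at offset 100h
    mov ah, 09h          ; DOS function 09h: print a $-terminated string
    mov dx, msg          ; DS:DX points at the string
    int 21h              ; software interrupt into the DOS API
    mov ax, 4C00h        ; function 4Ch: terminate with exit code 0
    int 21h
msg db 'Hello from DOS$' ; strings for function 09h end with '$'
```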

The Lowest Level Code

The first real kind of "programming" was direct machine code input to the CPU. Switches and dials on front panels were one way to accomplish this. Needless to say, machine code is, as the name implies, not very human friendly. Pen and paper were used to write code and figure out what should be input to the machine. This made it more error prone and complicated debugging.

Assembly

Then came assembly language. While more readable than machine code thanks to features such as labels, it wasn't comparable to the readable programs that exist today. An assembler is used to take assembly code and translate it to machine code. Assembly instructions primarily revolved around pulling items from memory into CPU registers, where the CPU would work with the data via a specific set of instructions.
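As a rough illustration (assumed 16-bit x86 encodings, not from the original article), this is the kind of translation an assembler performs, turning each mnemonic into raw opcode bytes:

```
; Sketch: each mnemonic alongside the machine code bytes an assembler emits (16-bit x86).
mov ah, 09h     ; assembles to bytes B4 09
mov dx, 0100h   ; assembles to bytes BA 00 01 (the immediate is stored little-endian)
int 21h         ; assembles to bytes CD 21
```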

Instruction Sets

There were some fairly friendly naming conventions for instructions, such as "DEC" (decrement by 1), "INC" (increment by 1), "DIV" (unsigned division), etc. Many of the easy-to-decipher instructions had to do with mathematical operations. On the other hand, instructions such as "INT" (call to interrupt), "Jcc" (jump if condition is met), "LEA" (load effective address), etc. required you to have knowledge of low-level computing concepts.
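To make those concrete, here is a small sketch (assumptions: 32-bit x86, NASM syntax) exercising the instructions named above:

```
; Sketch (assumptions: 32-bit x86, NASM syntax).
    mov eax, 10
    inc eax                 ; INC: EAX = 11
    dec eax                 ; DEC: EAX = 10
    mov ebx, 3
    xor edx, edx            ; DIV divides EDX:EAX, so clear EDX first
    div ebx                 ; DIV: EAX = 3 (quotient), EDX = 1 (remainder)
    lea esi, [ebx + eax*4]  ; LEA: compute an address without touching memory
    cmp eax, 0
    jne done                ; Jcc: jump only if the comparison says "not equal"
    int 3                   ; INT: trigger the debugger breakpoint interrupt
done:
```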

Compatibility

Compatibility of assembly language programs was dependent on a number of factors including:

  • The architecture of the CPU (x86, ARM, AMD64, etc.)
  • Available instruction set (there were different versions of the x86 architecture which defined what instructions were available)
  • Operating system being run on (Linux or DOS)

This meant that more consideration had to be given to which hardware the code would run on and how the operating system was configured. Compare this to modern times, where Python, for example, can easily be installed across multiple operating systems.

Conclusion

This concludes our look at the old ways of doing things, and I hope it gives perspective on how much is abstracted away in modern times. In the next part of the series we'll be looking at a high-level language that was a turning point for development: the C programming language. For those interested in how things used to work under the hood, Visual2 provides a graphical ARM emulator that gives valuable information on the state of the system as assembly code is executed.
