A question of tens or twos
When it comes to programming retro computers, four universal truths are practically immutable by contemporary standards: they’re slow and have very little memory … and uhhhh … they’re slow and have very little memory. Okay, so technically that’s only two, but their sheer veracity certainly begs a second mention; if you’ve never programmed a forty-year-old home computer before, then it’s almost impossible to overstate the pedestrian nature of these binary sauropods. Forget C++, PHP, JavaScript, and Python; you’ll have to get down with some processor-parseltongue if you intend to converse with these relics on any meaningful level at all. Whilst early third-generation languages are often a good fit for some of the more powerful 16-bit systems (C and Pascal are particularly popular for non-critical applications), assembly language is almost certainly the preferred parlance of serious conversation.
As a somewhat more palatable variant of the CPU’s native tongue (machine code), assembly language foregoes the convenience of familiar paradigms in an attempt to converse in the most efficient manner possible. Of all its inherent eccentricities, the CPU’s binary implementation is arguably the most esoteric of all, and this is intrinsically reflected within its language of operation. For reasons we won’t go into here, digital CPUs employ base-2 numbers in lieu of our more familiar base-10 system, and this is frequently observed as a source of confusion amongst the uninitiated. Nonetheless, when considered alongside our ubiquitous decimal number system, these binary interlopers need not feel quite so alien as they may at first appear.
When working with numbers in our day-to-day lives, we usually do so in an intuitively casual manner. Indeed, so practised are the methods by which we become numerate, it’s often easy to overlook what it is we’re actually doing. For example, when expressing the number “nine thousand two hundred eighty-one”, we pay little mind to the fact that we’re implicitly constructing a weighted tally of incremental powers of ten (i.e. one lot of one [1 x 10⁰], plus eight lots of ten [8 x 10¹], plus two lots of a hundred [2 x 10²], plus nine lots of a thousand [9 x 10³]) – see figure 1. Whilst we inscribe neither addition, multiplication, nor exponent, this is of course precisely what we’re doing when we write the decimal number “9281” (i.e. for brevity, we inscribe only the weights).
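That weighted tally is easy to make concrete. Here’s a minimal Python sketch (the function name `expand_decimal` is my own, purely for illustration) that unpacks a decimal string into its digit/weight pairs and adds them back up:

```python
def expand_decimal(s: str) -> list[tuple[int, int]]:
    """Return (digit, power-of-ten weight) pairs for a decimal string.

    Positions are counted from the right, so the last character of the
    string carries weight 10**0, the next 10**1, and so on.
    """
    return [(int(d), 10 ** p) for p, d in enumerate(reversed(s))]

terms = expand_decimal("9281")
# terms == [(1, 1), (8, 10), (2, 100), (9, 1000)]
total = sum(digit * weight for digit, weight in terms)
# total == 9281 -- the weighted tally recovers the original value
```

Writing “9281” really is shorthand for exactly this sum; the positional notation simply elides the multiplications and additions.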
Furthermore, enchanted by our collective love of the value ten (most of us have ten fingers and toes), the decimal system often presents itself as the natural, de facto, or singular representation of numeric value. This was not lost on our Hindu and Arabic forebears, who invented the version of the decimal system we use today – Indian scholars were using advanced arithmetical techniques when we were still deliberating the best strategy for dividing “XV” into “CCCLXV” – and is likely the reason they too settled upon powers of ten. Nonetheless, rather than a system specifically designed to appease our predilection for the number ten, the Hindu approach was a generic positional system that worked equally well for powers of any base (b) with digit weights (dₙ), where (b > 1) and (0 ≤ dₙ < b) – see figure 2.
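That generality is worth seeing in action. The following Python sketch (helper names are my own) expands a value into positional digits for any base b > 1, and tallies them back – the same machinery serves base 10, base 2, or base 16 without modification:

```python
def to_digits(n: int, b: int) -> list[int]:
    """Digits of n in base b, least significant first (n >= 0, b > 1)."""
    if b <= 1:
        raise ValueError("base must be greater than 1")
    digits = []
    while True:
        n, d = divmod(n, b)
        digits.append(d)  # every digit satisfies 0 <= d < b
        if n == 0:
            return digits

def from_digits(digits: list[int], b: int) -> int:
    """The weighted tally: sum of digit * b**position."""
    return sum(d * b ** p for p, d in enumerate(digits))
```

For example, `to_digits(9281, 10)` yields `[1, 8, 2, 9]` (least significant first), and `from_digits(to_digits(n, b), b)` round-trips any value in any base.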
Not only did this prove to be the most efficient way of representing value via a small incremental alphabet (0 through b-1), but it also embodied intrinsic arithmetic symmetries that we often take for granted today – see figure 3. So intrinsic, in fact, that whilst pitifully patent in their utterance, they underpin our entire per-digit divide and conquer approach to contemporary arithmetic problems: rinse and repeat per-digit strategies are possible precisely because each digit remains proportionally symmetrical to its neighbour (i.e. digits, or groups of digits, always exhibit the same relationship irrespective of magnitude).
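To sketch that per-digit symmetry (the function name below is my own invention), consider the familiar schoolbook addition loop: because adjacent columns always relate by the same factor of the base, the identical rinse-and-repeat procedure works in any base – only the carry threshold changes:

```python
def add_digits(a: list[int], b: list[int], base: int) -> list[int]:
    """Schoolbook column addition, digits least significant first.

    Each column is summed with the incoming carry; divmod by the base
    splits the result into a new carry and the digit left in place.
    """
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, d = divmod(da + db + carry, base)
        result.append(d)
    if carry:
        result.append(carry)
    return result
```

The same loop adds 9281 + 719 in base 10 and 3 + 1 in base 2; no per-base special cases are required, which is precisely the symmetry the positional system buys us.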
A message to the future
Coming from a generation that was spoon-fed decimal numbers from an early age, it’s almost impossible to fathom the monumental influence of Hindu numerals within the annals of human endeavour. Without exception, they represent humanity’s greatest gift to itself, and serve to highlight the infinite possibilities when cultures choose to embrace rather than shun one another. Thanks to poverty, hunger, and a perpetual imbalance of wealth, the potential of our species diminishes every day. How many cancer cures and Nobel laureates have already fallen to malaise and indifference? Only once we’ve learned to supply the need, rather than the greed, will we be truly free to explore a shared destiny.
We all stand upon the same ground; we all drink the same water; we all breathe the same air: their fate merely telegraphs the beginning of our own.
As humanity finally turned its attention towards the high-speed general-purpose computer as a means of complex problem-solving, it began to explore ways of representing, storing, and manipulating information via electronic simulation. Due to our brain’s predisposition for discrete data (we understand the many relative only to the one), almost every human-interpreted problem becomes fundamentally enumerable in its deliberation. When considering the explicit value of an integer, for example, we can do so only relative to our implicit concept of the unit-value “one”. The same holds for sub-integer values, where we’re compelled to employ integer ratios whose numerators and denominators also represent multiples of the unit-value “one”. It should also be noted that our biological need to unitise, quantise, and enumerate information is the primary reason we’re unable to represent values such as “Pi”, “root 2”, and “e”, within our preferred number systems: as with all irrational numbers, these values harbour no implicit relationship with “one”.
Nonetheless, despite its inherent flaws, enumeration became the de facto mechanism throughout our cerebral evolution and is therefore the paradigm that computing apparatus has endeavoured to emulate. In theory, an enumerating machine should be capable of modelling any problem bound for human interpretation. Unfortunately, as the fledgling semiconductor slowly matured into the behemoth we see today, it soon became apparent that decimal enumeration wasn't a particularly convenient problem to solve. Whilst certainly possible, large-scale circuits capable of routing ten discrete electrical values had consistently proven difficult to manufacture and expensive to scale. Conversely, however, the electrical relay, thermionic valve, and silicon transistor had all demonstrated a particular aptitude for implementing cost-effective two-state (on or off) electrically homogeneous circuits, where complex 'switching' designs could be realised through recursive duplication of their simpler counterparts.
Thus, early engineers found themselves standing at an intellectual crossroads that would forever shape the face of contemporary computing as we know it today: should they continue down the path of decimal experimentation, or should they redefine their expectations in favour of a two-state technology that already satisfied much of their criteria? Having ultimately settled upon the latter, base-2 Hindu numerals quickly became the most convenient way of implementing our established concept of number within the context of a two-state circuit: rather than attempting to represent a collection of decimal digits (0 through 9), base-2 digits (0 through 1) could be directly mapped onto the off/on states of each relay, valve, or transistor. Whilst these numbers would invariably necessitate more digits per value than their decimal counterparts (i.e. [0 ≤ dₙ < 2], rather than [0 ≤ dₙ < 10]), their symmetries and mechanics would remain fundamentally commensurate with the Hindu approach as a whole – see figures 1 & 4 for a direct comparison.
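As a small illustrative sketch (mine, not the article's), that direct mapping from base-2 digits to switch states is almost a one-liner:

```python
def to_switch_states(n: int) -> list[str]:
    """Map each base-2 digit of n onto an 'off'/'on' switch state,
    least significant digit first (n >= 0)."""
    states = []
    while True:
        n, bit = divmod(n, 2)  # peel off one binary digit at a time
        states.append("on" if bit else "off")
        if n == 0:
            return states

# e.g. the value 5 (binary 101) becomes ['on', 'off', 'on']:
# one relay on, one off, one on.
```

Each entry corresponds to a single relay, valve, or transistor, which is exactly why the base-2 digit alphabet and the two-state circuit fit together so neatly.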
And so it was, the base-2 'binary' computer was born.
In part #2, we'll explore how digital CPUs group, store, and manipulate binary numerals.