You've written thousands of lines of code. But have you ever stopped to think about what happens to a string like "Hello" at the lowest level of your computer?
It doesn't stay as "Hello". It becomes this:
```
01001000 01100101 01101100 01101100 01101111
```
Understanding why — and how — is one of those foundational concepts that makes everything else in computing click. This post covers it properly.
What binary actually is (and why computers use it)
Binary is a base-2 number system: only two digits, 0 and 1. The reason computers use it isn't philosophical — it's physical.
Every processor is built on transistors. A transistor is a switch. It's either off (0) or on (1). Billions of these switches, working in combination, can represent any number, any character, any instruction.
There's no "kind of on" state. Binary maps perfectly to the hardware reality.
The jump from bits to characters: ASCII
A single 0 or 1 is called a bit. Eight bits make a byte. One byte can represent 256 different values (2⁸ = 256).
But a byte is just a number. How does the number 72 become the letter H?
That's what ASCII does. It's a lookup table — a standard agreement that number 72 means H, number 65 means A, number 32 means a space. Published in 1963, it became the foundation for virtually all text encoding that followed.
| Character | ASCII decimal | 8-bit binary |
|---|---|---|
| H | 72 | 01001000 |
| e | 101 | 01100101 |
| l | 108 | 01101100 |
| l | 108 | 01101100 |
| o | 111 | 01101111 |
Every time your code handles "Hello", the machine is actually working with those five bytes.
Why this matters in practice
String comparison bugs
"A" === "a" // false
// ASCII 65 vs ASCII 97 — completely different bytes
Case sensitivity isn't a language quirk. It's a direct consequence of ASCII values being different numbers.
Character encoding errors
Ever seen â€™ where an apostrophe should be? That's a UTF-8 encoded right single quote being read as Windows-1252 (often loosely called Latin-1). Different encoding tables, same bytes, different characters. Understanding binary encoding is what lets you debug these without guessing.
Off-by-one in binary flags
When you're working with bitwise operations — permissions, flags, bitmasks — you're working with binary directly:
```js
const READ = 0b001    // 1
const WRITE = 0b010   // 2
const EXECUTE = 0b100 // 4

const permissions = READ | WRITE // 0b011 = 3

const canRead = permissions & READ       // truthy
const canExecute = permissions & EXECUTE // 0 = falsy
```
If you don't understand binary, this looks like magic. Once you do, it's obvious.
The uppercase/lowercase relationship
One elegant consequence of ASCII's design: uppercase and lowercase letters differ by exactly one bit.
```
A = 01000001 (65)
a = 01100001 (97)
      ^
      This bit flips
```
The difference is 32 — which is exactly `0b00100000`. This is why some low-level toUpperCase implementations are literally just a bitwise operation: `char & ~0x20` for uppercase, `char | 0x20` for lowercase.
Quick reference: common ASCII values worth memorizing
| Range | Characters |
|---|---|
| 48–57 | Digits 0–9 |
| 65–90 | Uppercase A–Z |
| 97–122 | Lowercase a–z |
| 32 | Space |
| 48 | '0' (the character, not the number zero) |
That last one trips people up constantly. The character '0' has ASCII value 48, not 0. When you parse a digit character to an integer, you subtract 48 (or '0') — now you know why.
ASCII vs Unicode vs UTF-8
ASCII handles 128 characters. The world has a few more than that.
Unicode is the modern standard — it defines 1,114,112 possible code points (roughly 150,000 assigned so far), covering every language, emoji, and symbol system. UTF-8 is the most common encoding of Unicode: it uses 1 byte for ASCII-compatible characters and 2–4 bytes for everything else.
The clever part: UTF-8 is fully backward compatible with ASCII. Every valid ASCII document is valid UTF-8. For English text, they produce identical bytes.
Try it yourself
If you want to see any of this in action without writing code, this binary code translator converts text to binary and back instantly — useful for verifying examples or decoding mystery strings.
For code, here's a one-liner in JavaScript:
```js
// Text to binary
"Hello".split("").map(c => c.charCodeAt(0).toString(2).padStart(8, "0")).join(" ")
// "01001000 01100101 01101100 01101100 01101111"

// Binary to text
"01001000 01100101".split(" ").map(b => String.fromCharCode(parseInt(b, 2))).join("")
// "He"
```
The takeaway
Binary isn't abstract theory. It's sitting underneath every string, every boolean, every bitwise flag in your code. The developers who understand it write cleaner low-level logic, debug encoding issues faster, and have a mental model that makes new concepts stick more easily.
Found this useful? I write about web fundamentals and developer tools at FindBest Tools.