Imagine you've been programming since the 1980s. Maybe you were a bedroom coder on a ZX Spectrum, then had a career writing BASIC and then Visual Basic before finally transitioning to C#. Or if you were in the USA, maybe you started on an Apple II and went on to assembly, Pascal, C, C++ and everything that came after. Four decades of programming, three for a living.
If you are such a person you will likely have never heard the word "microcode".
Far from being a confession of ignorance, it turns out that surprise at never having heard of it is widespread. At the very least, many who've heard the term have never looked any further.
Most professional programmers – even very senior ones – do not know about microcode in any concrete sense, for the simple reason that most never needed to.
Microcode sits below the abstraction of assembler instructions, which is usually the limit of what programmers care about. If your good old ANSI C compiled to reasonable-looking assembler, you stopped looking any further. Microcode was designed to be invisible and, for decades, it succeeded.
That is now changing. Security vulnerabilities, performance limits and the sheer complexity of modern processors have forced microcode into view. If you write software in 2025, you probably still don't need to understand microcode in detail; but you should know it exists, what it does and why it suddenly matters.
What microcode actually is
Every CPU implements an Instruction Set Architecture (ISA) - the set of instructions that software can use. x86, ARM, RISC-V: these are ISAs. When you write assembly language, or when a compiler generates machine code, the result is a sequence of ISA instructions.
Microcode is the layer below.
Inside the CPU, some instructions are simple enough to execute directly in hardware. An integer addition, for instance, can be wired straight through: operands in, result out - done in a single cycle.
Other instructions, however, are more complex. They involve multiple internal steps, conditional behaviour, memory accesses, flag updates and corner cases. Implementing all of that in pure hardware would be expensive and inflexible.
Microcode provides an alternative. Instead of hardwiring every instruction, the CPU contains a small internal control program that orchestrates the hardware for complex operations. When the CPU encounters a microcoded instruction, it fetches a sequence of micro-operations from an internal store and executes them in order.
Think of it as firmware for the instruction decoder. The ISA defines what the CPU promises to do. Microcode defines how it actually does it.
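To make that concrete without pretending to know what vendors actually ship, here is a toy sketch in C of the shape of the mechanism: a complex instruction selects a stored sequence of simpler steps, and a sequencer walks through them in order. Every name in it (micro_op, control_store, run_microcoded_instruction) is invented for illustration; real microcode is proprietary, undocumented and looks nothing like this.

/* Toy model of a control store and sequencer. Illustration only. */
#include <stdio.h>

typedef enum {
    UOP_READ_MEM,       /* fetch an operand from memory */
    UOP_ALU_OP,         /* do some arithmetic on internal registers */
    UOP_WRITE_MEM,      /* write a result back to memory */
    UOP_UPDATE_FLAGS,   /* set condition flags */
    UOP_END             /* end of this micro-op sequence */
} micro_op;

/* Hypothetical control-store entry behind one complex ISA instruction. */
static const micro_op control_store[] = {
    UOP_READ_MEM, UOP_ALU_OP, UOP_WRITE_MEM, UOP_UPDATE_FLAGS, UOP_END
};

/* The "sequencer": step through the stored micro-ops in order. */
static void run_microcoded_instruction(void)
{
    for (int i = 0; control_store[i] != UOP_END; i++)
        printf("executing micro-op %d\n", (int)control_store[i]);
}

int main(void)
{
    run_microcoded_instruction();
    return 0;
}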
The extent of microcode use varies by architecture:
- x86 relies heavily on microcode because its instruction set is large, irregular and burdened with decades of backward compatibility.
- ARM cores vary widely – many are predominantly hardwired, while high-performance implementations such as Apple's M-series or ARM's Cortex-X designs use microcode-like structures to varying degrees.
- RISC-V implementations tend toward hardwired control, though complex extensions may introduce microcode.
Why microcode exists
Microcode originated in the 1950s as a way to simplify CPU design. Rather than creating custom hardware for every instruction, engineers could write microcode sequences that reused a common datapath. This made CPUs cheaper to design, easier to debug and simpler (and therefore cheaper) to modify.
By the 1960s, microcode had become central to computer architecture. IBM's System/360, launched in 1964, used microcode extensively. This allowed IBM to sell machines with different performance characteristics – different hardware implementations – while maintaining a single ISA across the product line. Software written for one System/360 model would run on another. Microcode made that possible. It was a big deal.
The pattern persists. x86 has survived for over forty years partly because microcode allows Intel and AMD to implement the same ancient instructions on radically different internal architectures. The 8086 of 1978 and a modern Zen 5 core both execute REP MOVSB. The microcode behind that instruction has been rewritten many times.
Modern microcode also serves as a post-silicon patching mechanism. Once a chip is fabricated, the silicon cannot be changed; but microcode can be updated. Operating systems and firmware routinely load microcode patches at boot time to fix bugs, close security holes and adjust behaviour. The physical chip stays the same; the control logic changes.
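You can see a trace of this on an x86 Linux machine, where the kernel reports the currently loaded microcode revision in /proc/cpuinfo. A minimal sketch, assuming that field is present:

/* Print the first "microcode" line from /proc/cpuinfo (x86 Linux).
   On other platforms, or if the field is absent, this prints nothing. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("/proc/cpuinfo"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "microcode", 9) == 0) {  /* e.g. "microcode : 0x12345678" */
            fputs(line, stdout);
            break;
        }
    }
    fclose(f);
    return 0;
}

Comparing that value before and after a BIOS or operating system update is one of the few places where microcode becomes directly observable from userspace.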
A concrete example
Consider the x86 instruction REP MOVSB. In assembly, it looks like a single operation:
REP MOVSB
The architectural specification says: copy RCX bytes from the address in RSI to the address in RDI, incrementing both pointers and decrementing RCX with each byte, until RCX reaches zero.
That is a lot of work for "one instruction." Internally, it involves:
- Loading a byte from memory
- Storing it to a different memory location
- Incrementing RSI
- Incrementing RDI
- Decrementing RCX
- Checking whether RCX is zero
- Branching back if not
None of this is visible at the ISA level. The programmer sees one instruction. The CPU sees a microcode sequence, something like:
loop:
load byte from [RSI]
store byte to [RDI]
RSI++
RDI++
RCX--
if RCX != 0, jump loop
Modern implementations are more sophisticated – they may copy multiple bytes per iteration, use vector registers, or special-case aligned transfers – but the principle holds. Microcode makes the architectural fiction of "one instruction" hold together.
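For comparison, here is the architectural behaviour spelled out in plain C. The function name and signature are made up for illustration; this is what the contract of REP MOVSB amounts to, not how any CPU implements it.

#include <stdio.h>
#include <stddef.h>

/* Architectural behaviour of REP MOVSB, written out: copy `count` bytes
   from src to dst, one byte at a time, stopping when the count hits zero. */
void rep_movsb_in_c(unsigned char *dst, const unsigned char *src, size_t count)
{
    while (count != 0) {   /* "if RCX != 0, jump loop" */
        *dst = *src;       /* load byte from [RSI], store byte to [RDI] */
        src++;             /* RSI++ */
        dst++;             /* RDI++ */
        count--;           /* RCX-- */
    }
}

int main(void)
{
    const unsigned char src[] = "hello";
    unsigned char dst[sizeof src];
    rep_movsb_in_c(dst, src, sizeof src);
    printf("%s\n", dst);
    return 0;
}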
Why most programmers never encountered it
If microcode has existed since the 1950s, why have most programmers never heard of it?
Three reasons.
First, microcode's place in the abstraction stack is awkward. Programming education typically covers high-level languages, then perhaps assembly, then maybe pipelines, caches and branch prediction. Microcode sits below the ISA but above transistors – a layer that courses tend to mention briefly, if at all, then move past.
Second, microcode is intentionally invisible. CPU vendors treat it as proprietary. Intel and AMD do not publish microcode documentation. You cannot call microcode from software. You cannot observe it in a debugger. You cannot disassemble it (legally, at least). If something is undocumented, inaccessible and unobservable, it tends to disappear from working knowledge. Its obscurity is a sign of its success.
Third, for most of computing history, microcode simply did not matter for application programming. Performance problems were algorithmic and bugs were logical. Portability issues lived in languages and operating systems. The hardware was a black box that honoured its documented interface and that was sufficient.
Microcode only intrudes when:
- Instructions misbehave in ways the ISA does not explain
- Timing side-channels reveal internal implementation details
- A "hardware bug" gets fixed by a "software update"
For most programmers, those situations never arose.
Historical irony
Here is an odd fact: microcode was more widely discussed in the 1960s and 1970s than in the 1990s and 2000s.
IBM's System/360 made microcode famous. DEC used it heavily in the PDP-11 and VAX lines. Some machines – Xerox Alto, certain Burroughs systems – even exposed writable microcode, allowing users to define new instructions. Dangerous, but fascinating. Malware authors can only dream.
Then the RISC (Reduced Instruction Set Computing) revolution arrived, promising that simpler instructions, executed faster, would outperform complex microcoded ones. The slogan was "hardwire everything." Microcode was derided as a relic of the CISC (Complex Instruction Set Computing) past.
The derision rested on genuine engineering reality, though. Early RISC machines – MIPS, SPARC, early ARM – were indeed largely hardwired, and performance improved. The argument seemed vindicated.
But x86 survived. Intel and AMD responded not by abandoning microcode but by hiding it more effectively. Modern x86 chips translate complex ISA instructions into internal micro-operations, execute those out of order across multiple pipelines and present the illusion of sequential execution. The microcode is still there. It is just buried under so many layers of complexity that even CPU architects sometimes struggle to explain exactly what is happening.
Meanwhile, the 1980s home computer generation – people who learned on the ZX Spectrum, Commodore 64, BBC Micro, Apple II – grew up with machines whose CPUs gave them no reason to think about any of this. The 6502 famously had no microcode at all; its control logic was hand-drawn. The Z80's control logic was likewise essentially hardwired, sequencing each instruction over internal machine cycles that were entirely invisible to programmers and irrelevant to how you wrote software. Either way, there was nothing to notice, so nothing to know about.
A whole generation of programmers came up without ever needing to know.
Why microcode matters again
In January 2018, the Spectre and Meltdown vulnerabilities became public. These were not software bugs at all, but flaws in how modern CPUs speculatively execute instructions – flaws that allowed attackers to read memory they should not have been able to access.
The response involved operating system patches, compiler changes and – famously – microcode updates.
Intel, AMD and ARM shipped new microcode that:
- Modified branch prediction behaviour
- Inserted serialisation barriers
- Changed how speculative execution interacts with memory protection
Without changing the silicon of chips already in computers around the world, vendors updated the microcode and the behaviour changed.
This made microcode visible in a way it had not been for decades. "We fixed the CPU with a software update" is a sentence that only makes sense if you understand that CPU behaviour is partly defined by mutable control logic.
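To make "serialisation barrier" a little less abstract, here is the software-side cousin of the idea: the well-known Spectre v1 pattern of placing a speculation barrier after a bounds check, using the _mm_lfence() intrinsic for the x86 LFENCE instruction. This is an illustrative sketch of a compiler-visible mitigation, not what the microcode updates themselves do; it requires an x86 compiler.

#include <emmintrin.h>   /* _mm_lfence(), x86 only */
#include <stddef.h>
#include <stdint.h>

/* Classic Spectre v1 shape: an attacker-influenced index guarded by a bounds
   check. The barrier stops the CPU from speculatively executing the load
   before the outcome of the check has actually been resolved. */
uint8_t read_checked(const uint8_t *array, size_t len, size_t index)
{
    if (index < len) {
        _mm_lfence();        /* speculation barrier after the bounds check */
        return array[index];
    }
    return 0;
}

int main(void)
{
    static const uint8_t data[4] = {1, 2, 3, 4};
    return read_checked(data, 4, 2);   /* returns 3 */
}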
In the years after Spectre and Meltdown there were many more such incidents:
- Foreshadow (L1 Terminal Fault)
- MDS (Microarchitectural Data Sampling)
- TAA (TSX Asynchronous Abort)
- Retbleed
- Downfall
- Inception
Each required microcode mitigations; and each exposed ever more about the gap between architectural promises and microarchitectural reality.
What this says about modern computing
The traditional conception is that hardware is fixed and software is mutable. You design a chip, fabricate it and its behaviour is set. Software is written to run on top and can be changed at will.
But the underlying reality is that microcode means a CPU is not a fixed hardware object. Its behaviour is determined at three levels:
- Architectural: defined by the ISA specification
- Microarchitectural: determined by the physical implementation
- Policy-driven: controlled by microcode that can be updated
This continues the mainframe model of the 1960s – but the security implications are new: mutable microcode becomes an attack surface, and when microcode defines security boundaries, microcode bugs become security vulnerabilities.
CPU vendors now publish microcode updates regularly. Linux distributions ship them and Windows Update delivers them. Your BIOS may load them before the operating system even starts. The CPU you are using now is not quite the CPU you bought.
This follows naturally from complexity. Modern CPUs are so complex – billions of transistors, speculative execution, out-of-order pipelines, multiple cache levels, simultaneous multithreading – that getting everything right in silicon is perhaps now impossible. Microcode provides a route for fixes without unpopular hardware upgrades: a way to correct mistakes after the fact, to adjust trade-offs and to respond to threats that were not anticipated during design.
Reframing the original surprise
So if you have been programming for decades and only recently learned about microcode, that does not indicate a gap in your education or a failure of curiosity. It means you worked above an abstraction boundary, and that abstraction mostly held.
This is how successful design manifests. Abstraction exists so that programmers can ignore lower layers. For most of computing history, ignoring microcode was the correct choice: it let you focus on problems that actually mattered for your work.
We are, however, now in a transition where hardware is no longer fixed but patchable. Not fully – most programmers still do not need to understand microcode in detail – but enough that awareness matters.
Closing thoughts
Microcode was always there. For most of us, we did not need to know. Now, sometimes, we need to understand where "software" ends and "hardware" begins. That boundary was always more porous than programmers came to believe, but for practical purposes it held. Security research, performance engineering and the sheer complexity of modern processors have since eroded it.
If you write software that cares about security, performance or correctness at the edges, you should know that:
- The CPU is not a fixed machine; it runs updateable control code
- Microcode updates can change behaviour in ways that affect your software
- The ISA is a contract, but the implementation beneath it is mutable
The illusion of a fixed ISA is still useful. But there is a lot going on beneath it that you occasionally need to know about.
Further reading
- Agner Fog's microarchitecture manuals: https://www.agner.org/optimize/microarchitecture.pdf
- Intel's optimisation reference: https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html
- Academic literature on Spectre-class vulnerabilities, e.g. https://css.csail.mit.edu/6.858/2023/readings/spectre-meltdown.pdf

There's a big rabbit-hole to go down after those.