What and How of Random Access Memory

When a computer is running code, it needs to keep track of variables (numbers, strings, arrays, etc.).

Variables are stored in random access memory (RAM).

Difference between RAM and storage

RAM is not where mp3s and apps get stored. In addition to "memory," your computer has storage (sometimes called "persistent storage" or "disk"). While memory is where we keep the variables our functions allocate as they crunch data for us, storage is where we keep files like mp3s, videos, Word documents, and even executable programs or apps.

Memory (or RAM) is faster but has less space, while storage (or "disk") is slower but has more space. A modern laptop might have ~500GB of storage but only ~16GB of RAM.

What does RAM look like?

I can't tell you exactly what it looks like physically, but imagine it as a very long bookcase made up of many shelves, where every shelf is numbered. That number is the shelf's address.

Each shelf holds 8 bits. A bit is a tiny electrical switch that can be turned "on" or "off." But instead of calling it "on" or "off," we call it 1 or 0. Since 1 byte = 8 bits, each shelf holds exactly 1 byte.
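To make the bits-in-a-byte idea concrete, here's a tiny Python sketch that shows the 8-bit pattern stored in one "shelf" (the number 42 is just an arbitrary example value):

```python
# One shelf = one byte = 8 bits.
# format(value, "08b") renders a number as a zero-padded 8-bit binary string.
value = 42
bits = format(value, "08b")
print(bits)  # 00101010  <- the eight on/off switches for this shelf
```

Each `1` is a switch turned on, each `0` a switch turned off; 42 happens to need three of its eight switches on.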

Now a question comes: how does all of this actually happen?

The processor is the god here. It does all the real work. It is connected to a memory controller, and this memory controller does the actual reading and writing from and to RAM. IT HAS A DIRECT CONNECTION WITH EVERY SHELF.

This direct connection is what makes the access "random": because the controller is wired to every shelf from 0 to n, reading address 9 takes the same time as reading address 9 million; there is no walking down the bookcase shelf by shelf.
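A toy model of this in Python: a `bytearray` is a numbered row of one-byte shelves, and any shelf can be read or written directly by its address, with no scanning:

```python
# A toy model of RAM: 16 numbered shelves, each holding one byte (0-255).
ram = bytearray(16)   # all shelves start at 0

ram[9] = 255          # write straight to address 9
print(ram[9])         # read straight from address 9 -> 255
print(ram[0])         # untouched shelf is still 0
```

Indexing by address is a constant-time jump, which is exactly the "random access" property.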

Even though the memory controller can jump between far-apart memory addresses quickly, programs tend to access memory that's nearby. So computers are tuned to get an extra speed boost when reading memory addresses that are close to each other.

Now the cache comes into the picture. The processor has a cache where it stores a copy of stuff it's recently read from RAM.

This cache is much faster to read from than RAM, so the processor saves time whenever it can read something from cache instead of going out to RAM.

When the processor asks for the contents of a given memory address, the memory controller also sends the contents of a handful of nearby memory addresses. And the processor puts all of it in the cache.

But if the processor asks to read address 951, then address 362, then address 419...then the cache won't help, and it'll have to go all the way out to RAM for each read.

So reading from sequential memory addresses is faster than jumping around.
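You can even see a hint of this from Python, though the effect is much smaller than in a low-level language because Python adds a lot of its own overhead. This sketch sums the same list twice, once in order and once in a shuffled order, and times both (the exact numbers will vary by machine):

```python
import random
import time

N = 1_000_000
data = list(range(N))

indices_seq = list(range(N))
indices_rand = indices_seq.copy()
random.shuffle(indices_rand)

def total(order):
    s = 0
    for i in order:
        s += data[i]
    return s

start = time.perf_counter()
sum_seq = total(indices_seq)
t_seq = time.perf_counter() - start

start = time.perf_counter()
sum_rand = total(indices_rand)
t_rand = time.perf_counter() - start

print(f"sequential: {t_seq:.3f}s, shuffled: {t_rand:.3f}s")
```

Both passes compute the same sum; on most machines the in-order pass comes out faster, because it keeps reading addresses the cache has already pulled in.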

Thanks for reading this long long blog <3
Will try to keep next ones shorter :)
