The idea dates back to 1965, when Maurice Wilkes, a British computer scientist, described a small, fast memory that would hold copies of data from a larger, slower main memory so that subsequent accesses could be served more quickly.
Let's start with the modern official definition:
Computing component that transparently stores data so that future requests for that data can be served faster
Yes, so it seems that the cache is like a little memory system in your browser that remembers stuff you've done before so it can do it faster next time... right?
Imagine that you needed to do a research paper for school or work and that you were only allowed to use books from a library (like the olden days 📚)
You could go back to the library every time you needed a new piece of information, but the most efficient thing to do would be to take some books home with you and put them on your desk while you work.
Here, your desk is your cache. Instead of going back and forth to the library, you're storing the information you need so you can be more efficient and access it quicker if you need it again.
Let's continue with the library analogy, because it works here. Just like your desk, the cache is smaller and cannot hold as much information as the whole library.
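The desk-and-library idea maps neatly onto code. Here's a minimal sketch in Python: `fetch_from_library` is a hypothetical stand-in for the slow trip to the shelves, and a plain dictionary plays the role of the desk. Check the cache first; only do the slow work on a miss.

```python
def fetch_from_library(title):
    """Hypothetical slow lookup - imagine this takes a long walk to the shelves."""
    return f"Contents of {title}"

desk = {}  # our small, fast cache

def read_book(title):
    if title in desk:                      # cache hit: it's already on the desk
        return desk[title]
    book = fetch_from_library(title)       # cache miss: slow fetch
    desk[title] = book                     # keep it on the desk for next time
    return book
```

The first call to `read_book("Moby-Dick")` takes the slow path; every call after that is answered straight from the desk.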
The first time you visit any website, say
dev.to, it takes a lot longer than it will on subsequent visits, because the first time you visit it, your computer has to download everything - the logos, the images, the scripts that make the site run, everything. After that, your computer will only need to download new information that it hasn't seen before.
Yep! Clearing your browser cache means that you are deleting all of the information previously stored in the browser. Your computer might be a little slower to load larger websites again, as it has to start from scratch, but remember that developers often update the scripts or images on their site, and therefore old 'versions' of it will no longer work. This is why clearing your cache can often fix problems. You're forcing your computer to download the whole website again, not just the 'new' bits.
At the top of the hierarchy are the processor's registers and CPU caches, which are super fast but very small. At the bottom are the SSDs and hard drives, which have huge capacities but are really slow (compared to the top).
Let's go from the top of the hierarchy: at your desk, you can access a limited amount of information very quickly; the books stacked in the library must be searched through, but there's so much more there; and at the bottom of the hierarchy would be the old books that hardly anyone uses, which have been moved to off-site storage. This bottom tier holds the most information, but would be the slowest to access.
I'm glad you asked!
Consider what would happen if your cache (your desk) got full. How would you know which information (books) to get rid of first? You need more room!
No really...that is what it's called. We'll shorten it to CES.
🤓Let's get nerdy....
Least Recently Used
This is a pretty logical (in my opinion!) and easy to implement strategy. It just means that when the cache is full, you get rid of the item that was accessed least recently - the one that has gone longest without being used.
The issue with this strategy is that it requires you to keep track of when items in your cache were last accessed, which does slow it down a bit.
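Here's one way the bookkeeping can look - a minimal LRU sketch using Python's `collections.OrderedDict`, which remembers insertion order so we can treat one end as "most recently used" and the other as "least recently used". The class name and capacity are just for illustration.

```python
from collections import OrderedDict

class LRUCache:
    """A tiny Least Recently Used cache - a sketch, not production code."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # keys kept in order of last access

    def get(self, key):
        if key not in self.items:
            return None                  # cache miss: caller fetches elsewhere
        self.items.move_to_end(key)      # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used
```

Notice the tracking cost the paragraph above mentions: every single `get` has to reorder the bookkeeping (`move_to_end`), not just the writes.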
Grab logic firmly by the scruff of the neck and throw it outside.
As you probably guessed, this strategy involves removing a random item when the cache is full 🤷🏼♀️
This strategy is the easiest to implement and in practice doesn't end up being too different from Least Recently Used. It's used in small ARM processors to keep operations light and designs simple.
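And here's how simple that can be - a sketch of a random-replacement cache. Compare it with an LRU implementation: no ordering to maintain, so reads are just dictionary lookups; the only extra work is picking a victim on a full write.

```python
import random

class RandomCache:
    """A cache that evicts a random item when full - a sketch for illustration."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}

    def get(self, key):
        return self.items.get(key)   # no bookkeeping needed on reads

    def put(self, key, value):
        if key not in self.items and len(self.items) >= self.capacity:
            victim = random.choice(list(self.items))  # pick any key at random
            del self.items[victim]
        self.items[key] = value
```

The trade-off: sometimes you'll unluckily evict something you were about to use again, but you never pay the per-access tracking cost that LRU requires.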
Thanks to Emmie for suggesting the topic!