When I first learned about background jobs and scheduling workers, I was only in my first six months of learning to code. At first, I didn’t question how any of it worked; I just tried to get it working.
A few months in, I started wondering how it all fit together. But I was feeling so overwhelmed by everything new being thrown at me that I never tried to answer the question of how it all works that was lurking in the back of my mind, and instead focused on simply getting things to work in the first place.
Then, a year later, in a job interview of all places, someone asked me about jobs and scheduling and how I thought it all worked. Off the top of my head, I guessed that jobs were a bunch of objects in an array that you iterated through and completed, and then removed when they were done. It turns out, I wasn’t that far off from the truth.
My favorite thing about abstractions is that once you start finding a pattern of abstraction in one system, language, or codebase, it becomes so much easier to recognize that pattern elsewhere. And some patterns are used in so much of software that we don’t really have to think about them, even though we use them multiple times a day. One of those abstractions is used to implement jobs, workers, and even the simplest of network requests: the queue. And queues are all over the place!
Queues and stacks are often taught together in computer science courses because they are really similar, even though their uses and implementations are starkly different. We already know quite a bit about how stacks work, and how we can only add elements to and remove them from one end.
This is where queues and stacks differ: they’re structured and built up in totally different ways.
A queue is a linear abstract data type that can contain a long list of elements. But what’s important to remember about them is how they can grow and shrink in size.
There are some terms we’ve probably heard before that are actually derived from the nomenclature of adding an element to or removing one from a queue structure. The process of adding an element to a queue is known as enqueuing, while removing an element from a queue is known as dequeuing. The interesting thing about how queues work, however, is where the enqueuing and dequeuing occur.
If we think about queues in a deli, or an appetizing shop, things usually shake out like this: you take a number, and wait your turn to be called. The person who arrived first is the one who is served first. That’s just how most lines tend to work, right?
The same goes for queue data structures: the first element at the front of the queue is the first one that’ll be served — or, in most cases, processed, handled, or run, depending on what the element is.
Queues abide by and operate, for the most part, according to the first-in, first-out principle. The first element in the queue, or the element at the front of the queue, is always the first to be processed and removed from the queue.
If we end up forgetting everything else there is to know about queues and stacks, this is probably the most important difference to remember: stacks are last-in, first-out (LIFO) structures, while queues are first-in, first-out (FIFO) structures.
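To make that contrast concrete, here’s a minimal sketch in Python, using the standard library’s collections.deque for the queue and a plain list for the stack (the element names are just for illustration):

```python
from collections import deque

# A queue: enqueue at the back, dequeue from the front (FIFO).
queue = deque()
queue.append("first")   # enqueue
queue.append("second")
queue.append("third")
print(queue.popleft())  # dequeue -> "first" (the oldest element)

# A stack: push and pop from the same end (LIFO).
stack = []
stack.append("first")   # push
stack.append("second")
stack.append("third")
print(stack.pop())      # pop -> "third" (the newest element)
```

The same three elements go in, in the same order, but the queue hands back the oldest one first while the stack hands back the newest.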
One reason that queues and stacks tend to always come up together as parallel topics is that we end up writing similar functions in order to construct both of these data types. For example, even though their functions have different names, a lot of the functionality is the same.
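That parallel might be sketched like this in Python (the class and method names here are illustrative, not from any particular library):

```python
class Stack:
    """A LIFO structure: push and pop happen at the same end."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)      # add to the top

    def pop(self):
        return self._items.pop()      # remove from the top

    def peek(self):
        return self._items[-1]        # look at the top without removing

    def is_empty(self):
        return not self._items


class Queue:
    """A FIFO structure: enqueue at the back, dequeue from the front."""

    def __init__(self):
        self._items = []

    def enqueue(self, item):
        self._items.append(item)      # add to the back

    def dequeue(self):
        return self._items.pop(0)     # remove from the front

    def peek(self):
        return self._items[0]         # look at the front without removing

    def is_empty(self):
        return not self._items
```

Both types expose the same basic trio of operations (add, remove, peek); the only real difference is which end each operation works on.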
Yet there’s another reason why they come up together and seem to be so intertwined: both of them can be implemented with either an array or a linked list.
As we already know from last week’s deep dive, implementing a stack as an array can result in a really messy stack overflow if the elements pushed onto the stack end up being more than the array’s allocated size can handle. So when it comes to queues, what’s the deal?
Well, it turns out that arrays can end up being even worse implementation tools for queues, depending on the circumstances.
Arrays can be very powerful when we know the size of our data structure ahead of time. But there are many times (particularly with queues) when we don’t really know how big our queue is going to get. So what happens when we have to enqueue an element? If we don’t know the queue size beforehand and we’re using an array, it’s probable that sooner rather than later we’ll run out of allocated memory. So we’ll need to allocate more memory, copy over the contents of our array, and only then enqueue the new element onto the end of the queue.
But there’s yet another added complexity with array-based queue implementations: we need to enqueue at the back of the array and dequeue from the front, unlike with stacks, where adding and removing elements all happens at one end of the structure. Accessing either end of an array is fast, but removing an element from the front means shifting every remaining element forward one slot, so as our queue grows, each dequeue takes linear rather than constant time.
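A quick sketch of that cost, using a plain Python list as the array (illustrative only): dequeuing from the front forces every remaining element to shift forward.

```python
# Using a plain list as a queue: enqueue at the back is cheap,
# but dequeue from the front is not.
queue = [0, 1, 2, 3, 4]

# Remove the front element; every remaining element shifts left one
# slot, which costs time proportional to the queue's current length.
front = queue.pop(0)
print(front)   # 0
print(queue)   # [1, 2, 3, 4]
```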
However, with a linked list implementation of a queue, things get a whole lot simpler. We don’t need to worry about the queue size ahead of time, since memory can be distributed and the linked list implementation of our queue can grow dynamically (just as long as we don’t use all of our computer’s memory). Enqueuing and dequeuing get to be a lot easier, because we can simply find an empty space in memory and add a node with a reference to its next neighbor. There’s no need to copy and recreate our queue like we had to with an array implementation. And if we keep pointer references to the beginning and end of our list, we don’t need to traverse the entire structure to enqueue or dequeue an element, either!
In fact, once you remove the need to traverse the queue, the time complexity of the enqueue and dequeue operations becomes constant, or O(1); that is to say, no matter how big or small our queue gets, the amount of time it takes to add or remove an element remains the same.
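Here’s a minimal sketch of a linked-list-backed queue with head and tail pointers (all names are illustrative). Both operations touch only the ends of the list, so each runs in O(1):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None


class LinkedQueue:
    def __init__(self):
        self.head = None   # front of the queue: dequeue happens here
        self.tail = None   # back of the queue: enqueue happens here

    def enqueue(self, value):
        node = Node(value)
        if self.tail:              # link the old back to the new node
            self.tail.next = node
        else:                      # empty queue: node is also the front
            self.head = node
        self.tail = node

    def dequeue(self):
        if self.head is None:
            raise IndexError("dequeue from an empty queue")
        node = self.head
        self.head = node.next
        if self.head is None:      # queue is now empty
            self.tail = None
        return node.value
```

Because the tail pointer always marks the back and the head pointer always marks the front, neither operation ever has to walk the list.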
Of course, it’s totally possible and valid to implement both a stack and queue in either array or linked list form. But it’s important to recognize the differences between these two implementations, and when one might be more useful to us versus the other.
Now that we know how queues work, what makes them different from stacks, and when and how we might want to implement them, let’s answer the bigger question here: why does any of this matter?
Well, it actually matters a lot. The reason being: queues are everywhere. They’re all around us! We just need to take a deeper look at how the things we use every single day actually work underneath all of their abstractions.
If you’re a backend developer, there’s a chance that you’ve had to schedule background jobs, or workers, or tasks that have to run on a separate thread or as a different process. The method of running a job, or of spinning up a task that runs multiple workers in the background, is usually executed through a job scheduler, which runs a queue in a separate thread and executes jobs in a FIFO manner: the first job added to the queue is run and then removed, over and over, until the queue is empty and all of the background jobs have been processed.
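A toy version of that loop might look like this (a sketch, not any real job library’s API; the jobs here are made up):

```python
from collections import deque

job_queue = deque()

def schedule(job):
    job_queue.append(job)           # enqueue the job at the back

def run_worker():
    while job_queue:                # process in FIFO order until empty
        job = job_queue.popleft()   # dequeue the oldest job first
        job()

schedule(lambda: print("send welcome email"))
schedule(lambda: print("resize uploaded image"))
run_worker()
```

The jobs run in exactly the order they were scheduled, and each one is removed from the queue as it completes.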
If you’re a frontend developer, you’ve probably worked with jQuery, and maybe even had to create animations using its API. The jQuery API uses queues to allow a series of functions to be asynchronously executed on elements in the DOM, and builds up a queue of “steps” that transition from one CSS value to another in order to create a smooth animation. In fact, jQuery’s API actually has a dequeue() function that you can call to move from one element in the queue to the next!
But it’s not just in our lives as developers or in the worlds of our codebases that queues matter. They also exist in the context of our individual machines!
The software that runs your operating system has its own queues of processes that have to run at any given time. But, generally, it’s not as simple as running through a list of processes all at once; instead, your machine has to run different processes of different importance at any given time. Sometimes this is implemented as a multilevel priority queue, in which processes are divided up into queues based on priority, and those queues are then executed based on their level of importance.
For example, a system process queue would generally take precedence over a background process queue; the background process queue would have to wait its turn until the system process queue had been fully executed before it could begin to run.
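A simplified sketch of that idea (real OS schedulers are far more sophisticated; the two priority levels here are invented for illustration):

```python
from collections import deque

# Lower number = higher priority; each level is its own FIFO queue.
levels = {0: deque(), 1: deque()}   # 0: system work, 1: background work

def schedule(priority, process):
    levels[priority].append(process)

def run_next():
    # Always drain the highest-priority non-empty queue first.
    for priority in sorted(levels):
        if levels[priority]:
            return levels[priority].popleft()
    return None

schedule(1, "back up files")
schedule(0, "handle keyboard interrupt")
print(run_next())   # "handle keyboard interrupt" runs before the backup
```

Even though the backup was scheduled first, the system-level queue is emptied before the background queue gets a turn.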
But let’s zoom out even further. What do queues look like on the macroscopic level? Well, it’s not just our individual machines that run concurrent queues all of the time. Servers all around the world (running tons of different applications built all over the world!) also process things using — you guessed it — queues!
When multiple machines (think hundreds, or hundreds of thousands) request data from or send data to an application’s server, the server can be inundated with requests. Have you ever wondered how it handles all of those requests…without exploding?
Well, there’s a method to the madness, and as it turns out, the method is a queue. Request queuing is the procedure of handling requests before they even enter an application to be processed.
For example, if a hundred people all try to access Amazon’s website on Black Friday for a huge sale, a bunch of their requests will probably hit the single server that’s closest to all of them. That server will handle all of those incoming requests by enqueuing them, one by one, into an incoming request queue. One by one, those requests will enter the application and be processed. And when it comes time to respond back to those one hundred machines with Amazon’s homepage, the system will enqueue responses to be sent out in the order that they were processed — in an outgoing request queue.
Pretty amazing, when you think about it, right? One tiny little queue structure is what dominates the web, makes all the software go, and what helps us queue up and keep the internet running as the (almost) well-oiled machine that it is today.
Do you love to queue as much as I do? Well, in that case, you might want to check out these terrific resources!