Jake Z.

Posted on • Originally published at algodaily.com

Does Studying For Whiteboard Interviews Make You A Better Engineer?

There is perhaps no aspect of the field of software engineering that is more disliked than algorithmic whiteboard interviews. You know what I'm talking about-- sometimes they're called "algorithms and data structures", other times "leetcode-style" questions.

People dislike, and I mean really dislike, these interviews. Here's a sampling of quotes I randomly pulled from Hacker News. First, there are concerns about their effectiveness on a per-interview basis.

Most interviewers don't ask enough technical questions to have any idea what a candidate knows or doesn't know. If their one or two questions happen to be something the candidate knows well, they'll call them a genius. If they happen to not know, they'll label them an idiot.

We've already covered why companies use such interview questions. They may be hit-or-miss for any individual candidate, but they're incredibly useful when hiring at scale. Other comments lament the interviewers themselves:

I find most interviewers unprepared to evaluate the person for the role, and instead exercise their own biases, stroke their egos, etc. It's largely a voodoo practice that we'll look back and laugh at as a civilization at some point.

Completely Unrelated to Work?

This next one is what we'll be discussing. I've heard over and over again that studying algorithms and data structures isn't really applicable to day-to-day work.

There are companies that will pay near FANG levels (maybe 10-15% haircut) without subjecting yourself to onerous completion of hundreds of LeetCode problems. There are other options, and we should be aware of them. You'll still need to prove you can code, but the questions we ask have practical application to our codebases: we use recursion and trees, for example, so we ask questions about those.

The above commenter seems to be arguing that the top companies focus on algorithms and data structures that don't appear frequently in real life. He points to recursion and trees as examples of "more applicable" concepts in day-to-day software engineering. This is true to an extent.

No, I have never had to implement a linked list at work. (For some reason, people always point to this as the classic example.)

No, I've never encountered a red-black tree in my career thus far, and I hope I never will.

Yes, I've tended to lean on some pre-built abstraction (database, library, framework, or other) to get my work done, without implementing fundamental algorithms.

However, has studying for them, and knowing these problem-solving patterns, made me a better engineer? I would wager a resounding yes. Here are some ways:

Better Naming Of Things

With these kinds of interviews, you're often nervous and coding in front of a stranger. When this happens, it's really easy to lose track of where you are in a problem.

The thing that helped the most with keeping the problem's "state" in my head, whether on game day or practicing these problems at home, is improved variable and function naming. There are two important constraints in this unique situation:

  1. The names can't be too long, or you'll take a while to write them out on a whiteboard (or you'll run out of space)
  2. The naming needs to be extremely specific to the problem at hand.

The reason for #2 is that you'll often have multiple pointers, nodes, or counters to keep track of.

Names like pointer1 and pointer2 don't really work when they do completely different things, or are conceptually moving in different directions. You soon realize that startingPtr and endingPtr help you remember their purpose, and eventually graduate to start and end-- or parent and child.
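As a hypothetical illustration, here's that idea applied to a classic two-pointer problem, reversing an array in place. The names carry their own meaning, which is far easier to track mid-interview than p1 and p2:

```python
def reverse_in_place(items):
    # "start" and "end" tell you exactly what each pointer does
    # and which direction it moves -- no mental bookkeeping needed.
    start, end = 0, len(items) - 1
    while start < end:
        items[start], items[end] = items[end], items[start]
        start += 1
        end -= 1
    return items

print(reverse_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]
```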

Management of Complexity

The pragmatic David Heinemeier Hansson once gave a keynote about Conceptual Compression. It's the idea of embracing abstractions-- increasingly, a developer doesn't need to know the often messy or complex details of a particular technology to be effective with it. Studying algorithms and data structures for interviews actually taught me to see the light on this.

New developers tend to think that solving algorithmic challenges is all about holding all the details in your head. In the beginning, when you haven't learned any patterns, it is-- you haven't yet learned the tools of the trade.

These patterns and "tricks" are abstractions that, once understood, let you concentrate on solving the problem. When you know how sliding window, BFS and DFS, or backtracking works, you suddenly have a filter for sifting through candidate solutions. You're a level up, that is-- your focus can now be on how to apply BFS to this problem, rather than on figuring out the exact steps to reach the shortest path.
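For instance, once you've internalized the BFS pattern, a shortest-path question mostly reduces to filling in a template. A minimal sketch, assuming an unweighted graph represented as an adjacency dict:

```python
from collections import deque

def shortest_path_length(graph, source, target):
    # The standard BFS template: a queue of (node, distance) pairs
    # plus a visited set. The skeleton never changes between
    # problems; only the neighbor expansion and goal test do.
    queue = deque([(source, 0)])
    visited = {source}
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return -1  # no path exists

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path_length(graph, "a", "d"))  # 2
```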

It is true that, without knowledge of these patterns, these kinds of challenges are harder to solve. However, the majority of companies that employ this style of interview are upfront about the expectation that you study them. People deride this notion, because outside of interviews we never use these abstractions, right?

Using the Knowledge At Work

Nope, these patterns actually come up all the time! Don't believe me? Here are a few examples.

Sets to Dedupe

I was working with a massive dataset of payments for a company spanning decades. My team had written an ETL process that would pull records in chunks of 100,000 rows and transform them based on attributes of the payment. When the ETL was shipped to production, I noticed the process was taking days instead of the hours we had expected.

Upon inspection, one of the methods in the ETL was doing some de-duping. The way it was de-duping was by performing this routine:

  1. Take the chunk of 100k rows as an array
  2. Iterate through each row:
     a. On each row, do a lookup in the original 100k to see if there was another one.
     b. If there was, delete it from the original array.

Besides the issues with mutating the original array, the lookup time was just killing our efficiency. The great thing about tech is that most engineering cultures are blameless, so we as a team quickly changed it to this:

  1. Throw the chunk of 100k rows in a set

There was a literal 10x speed increase from this change.
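For the curious, here's a minimal sketch of the two approaches with hypothetical data (the real rows were payment records; this assumes the rows are hashable):

```python
def dedupe_slow(rows):
    # The original routine: for every row, scan the rest of the
    # array for duplicates and delete them in place -- O(n^2).
    result = list(rows)
    i = 0
    while i < len(result):
        # Scan backwards so deletions don't shift unvisited indexes.
        for j in range(len(result) - 1, i, -1):
            if result[j] == result[i]:
                del result[j]
        i += 1
    return result

def dedupe_fast(rows):
    # The fix: set membership checks are O(1), so a single pass
    # does the whole job -- O(n).
    seen = set()
    result = []
    for row in rows:
        if row not in seen:
            seen.add(row)
            result.append(row)
    return result

payments = [("acct1", 100), ("acct2", 250), ("acct1", 100)]
assert dedupe_slow(payments) == dedupe_fast(payments)
```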

Some readers might laugh and say that code like that would never make its way to production. Those people have never been under pressure to ship a feature, while having to work with a team of varying experience levels and managing PRs that regularly span hundreds or thousands of lines.

Knowing to use sets is certainly something that could be picked up just by reading code and gaining more experience. But a concerted effort to learn more of these "tricks" can save you when there's a pickle in production.

Hash the Lookups

I've certainly used hash maps professionally quite a bit, even on this site. On AlgoDaily, we have the concept of Challenges. Each time a user finishes a problem, we add a Completion record referencing that Challenge's ID. Note also that Challenges are tagged with multiple Categories through a Taggings join table.

When I was building out the dashboard, I wanted to display how many Completions each user had per Category. This was surprisingly difficult to make efficient with the built-in ORM. Even with eager loading, ActiveRecord would issue several N+1 queries to properly group each Completion by Challenge, Category, and user.

The fastest solution ended up outside of any framework. I simply made a single database call to get all Completions for a given user by joining them to Challenges. Then, server-side, I used a good old hash map to group the Completions by Category and used it to look up counts when rendering the UI.

In this case, knowing that multiple network calls would be significantly slower than performing in-memory operations reduced the load time of the dashboard by over 70%.
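The real code was Ruby on Rails; here's a language-neutral sketch of the pattern in Python, with hypothetical field names standing in for the joined query's result:

```python
from collections import defaultdict

# Hypothetical shape of the single joined query's result: each row
# pairs a completion with its challenge's category name.
completion_rows = [
    {"completion_id": 1, "category": "Arrays"},
    {"completion_id": 2, "category": "Graphs"},
    {"completion_id": 3, "category": "Arrays"},
]

# One in-memory pass to group -- no extra query per category.
counts_by_category = defaultdict(int)
for row in completion_rows:
    counts_by_category[row["category"]] += 1

# Rendering the UI is now an O(1) lookup per category.
print(counts_by_category["Arrays"])  # 2
```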

Graphs and Trees in CRUD

People always say that 90% of software engineering nowadays is just data-in, data-out-- in other words, CRUD apps. That may be the start, but what happens when you want to act on and analyze the data? Or debug it?

Modeling things is one aspect, but operating on what you've modeled is usually the next step. As an example, a friend of mine was recently working on a directory of company employees. Nothing fancy, most companies have something like this.

The difference with this one was that it would display an employee profile and let you click through to see their manager's page. Then from the manager's profile, you could navigate to their manager-- and on, and on, up the chain. Nearly every company has an org chart/directory like that.

Of course, for those who've studied data structures, this is clearly a tree, or graph, in the making. And it was implemented as such. Each employee record in the database had a manager_id column that pointed to another employee.

This "shape" was incredibly helpful when doing renderings/visualizations of org charts, and when shaping the payload to populate the HTML (which itself is a tree!). Knowledge of these data structures was especially handy when the team needed to audit the departments, their headcounts, and the reporting structure.

Wait, But!

You may read these examples and assume that all of these are things that come with experience. Yet that is precisely my point. Those who've heavily studied Big O notation, Data Structures, Algorithms, and Systems Design will always have efficiency in the back of their minds.

These are fundamentals that were painstakingly uncovered over decades, and they're incredibly useful even in day-to-day work!

A friendly reminder that we're running our 50% off summer sale. If you like the AlgoDaily emails and tutorials, you'll love lifetime access to a growing course for half off. We plan to add several new challenges, lessons, and video tutorials each week for the next few months, so don't miss out!

Top comments (3)

Mike Bybee

Algos and data structures are applicable to work, but contrived, timed, on-the-spot exercises are not (if your job does look like that, RUN), and tell you absolutely nothing about a developer's capabilities. Furthermore, senior devs in a position to hire should know better, and it's a disturbing tell about a company's culture if they don't.

pentacular

The insight here should be that none of the above is about studying for whiteboard interviews.

It is all about demonstrating and effectively communicating an understanding of the theoretical and practical aspects of a software engineering problem.

I would recommend never practicing for whiteboard interviews - it is a complete waste of time.

Instead, invest in writing design documents and presenting them for critical review.

Those skills are what the interview is trying to evaluate.

Likewise competitive coding practices poor engineering and should be avoided.

Michael Fasani

Yes, I honestly think all engineers should have access to something like LeetCode and should actively work through the challenges. I think it helps immensely in becoming a better coder. At the same time, I think testing people in interviews on those kinds of questions is a bit ridiculous. There are 1,600+ exercises, and knowing one or the other is pointless. I mean, half the time when I revisit my previous code I don't remember it. I've done quite a few in JS, and now that I am learning Rust I plan to do them again in Rust. I think it just makes great practice. If it lands me a job, all the better, but I think being able to build a solution and then examine how other people solved the same issue is what makes you better. The same is true for working on open source projects: to get better at software we need to practice, study, share, learn, and be challenged, and that makes us grow.