Recently I got a take-home challenge that was fun. It started with some multiple-choice pattern matching questions and then went into algorithms. The first two algorithms were pretty easy and I finished them quickly. The last one looked quite simple, but I failed the time/space complexity tests, and after maybe 30 minutes on this seemingly simple problem I still could not find a way to solve it.
In this blog I will briefly talk about the problem that gave me trouble and my attempts to solve it. Then I will talk more specifically about time complexity, because while I had looked at it before in a very shallow way, I didn't really understand the concept, so I decided to do a bit more research.
So the problem that gave me trouble was very simple in reality. It had three inputs, all numbers. The goal was to raise input one to the power of input two, then modulo the result by input three. Pretty easy, right? Now here's where I got confused, and hence why I researched time complexity. This was my first attempt at solving the problem.
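It boiled down to a single Math.pow() call, roughly like this (the function and variable names here are just placeholders):

```javascript
// Roughly my first attempt: the names powerMod, a, b and c are placeholders.
function powerMod(a, b, c) {
  // raise the first input to the power of the second, then modulo by the third
  return Math.pow(a, b) % c;
}

console.log(powerMod(2, 10, 7)); // 1024 % 7 === 2
```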
The crazy thing is that this failed their time complexity tests! I tried multiple other approaches that, I found out later, were obviously more complex. I tried a loop where I would multiply input one by itself input two times, and a couple of other things that I am pretty sure now were less efficient. My original method was of constant time complexity (I am pretty sure), which is as fast as we can do. I will explain constant time soon.
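For reference, the loop version I tried looked roughly like this (again, the names are placeholders). The number of multiplications grows with input two, so it is not constant time:

```javascript
// Roughly the loop attempt: multiply a by itself b times, then take the modulo.
function powerModLoop(a, b, c) {
  let result = 1;
  for (let i = 0; i < b; i++) {
    result *= a; // one multiplication per unit of b, so the work grows with b
  }
  return result % c;
}

console.log(powerModLoop(2, 10, 7)); // 2
```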
While researching time complexity, I found out some things that I think will be useful when I create algorithms in the future and in how I think about constructing functions. Time complexity describes how the work an algorithm does grows with its input, in other words, how many times something has to be executed. For example, if we make an array with the numbers 1-10 and we want to get the number at index 0, we can write a very simple constant time algorithm.
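A minimal sketch of that lookup (the variable and function names are my own):

```javascript
// Build an array of the numbers 1-10 and grab whatever sits at index 0.
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

function getFirst(arr) {
  // one lookup, no matter how long the array is: constant time
  return arr[0];
}

console.log(getFirst(numbers)); // 1
```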
This is constant time: the code only has to execute one thing, look up whatever is at index 0 of the array. Done. No matter how much larger the array gets, it does not have to do anything else. This is why I was so confused about my original function not passing their time complexity tests. I looked on Stack Overflow and elsewhere to see whether Math.pow() is of constant time complexity, and most people seem to agree that it is. If constant time complexity is the most efficient an algorithm can be, then how did I fail the test?
Anyways, I still needed to read up more on time complexity regardless of the question I was posed with. It turns out time complexity is really not that confusing. Constant time complexity, as mentioned above, is quite simple, and in reality it most likely doesn't come up that much.
The time complexity that I think comes up most often is linear. This is what you get when you loop through something ONCE. If we were to use a nested loop it would be quadratic, because we would have to compare, say, each element of an array against all of the other elements, and so on. Basic for loops and the like are linear: you get an input like an array, similar to what we saw earlier, and you might have a target value that you must find or not find. A linear algorithm has to keep looking through the array until it finds the target, so the number of executions grows in direct proportion to the size of the input. Of course, if the thing we happen to be looking for is the first thing, we might get lucky, but in general this is linear time complexity.
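As a quick illustration of the nested-loop case (this example is mine, it wasn't part of the challenge), here is a quadratic duplicate check: for every element, the inner loop walks through the rest of the array.

```javascript
// Quadratic: compare every element against every other element.
function hasDuplicate(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      // the inner loop runs for (roughly) every element, for every outer step
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

console.log(hasDuplicate([1, 2, 3, 2])); // true
```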
To show the linear case, I made a helper function that builds an alphabet, then searched through the alphabet for a target letter. I console logged each step of the execution so we can see how it is linear: we check each letter one by one until we hit, in this case, "l".
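A sketch of that function, assuming the alphabet helper simply builds an array of the 26 lowercase letters (the exact names are mine):

```javascript
// Helper that builds ['a', 'b', ..., 'z'].
function makeAlphabet() {
  return Array.from({ length: 26 }, (_, i) => String.fromCharCode(97 + i));
}

// Linear search: walk the array one letter at a time until we hit the target.
function linearSearch(target) {
  const letters = makeAlphabet();
  for (let i = 0; i < letters.length; i++) {
    console.log(`checking ${letters[i]}`); // logs every step so the growth is visible
    if (letters[i] === target) {
      return `found ${target} at index ${i}`;
    }
  }
  return `${target} is not in the alphabet`;
}

console.log(linearSearch('l'));
```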
Now, in order to make this a more efficient algorithm, we could make it logarithmic. These are super fast. If we think about a log(x) graph, the curve climbs quickly at first and then flattens out, so the output barely grows even as the input gets huge.
For an algorithm, that means the number of steps grows very slowly as the input size gets bigger, so it stays cheap even on large inputs. This can be done many ways. My example is quite simple, but it relies on the array being sorted (this approach is better known as binary search). First I check the midpoint element of my alphabet, the letter "m", and compare it with the target. If the midpoint element is the target, we are done. If the target, the letter "l", is greater than the midpoint, we remove all the elements from index 0 up to the midpoint. If "l" is less than the midpoint, we remove all the elements from the midpoint to the end of the array. The array gets smaller and smaller, discarding the parts we don't need, until we are left with the element we want.
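A sketch of that search (again the names are mine); like my original version it mutates the array with splice, which I come back to below:

```javascript
// Logarithmic search: check the midpoint, throw away the half that can't
// contain the target, and repeat. The splice calls mutate the array in place.
function logSearch(target) {
  const letters = Array.from({ length: 26 }, (_, i) => String.fromCharCode(97 + i));
  let steps = 0;

  while (letters.length > 0) {
    steps++;
    console.log(`step ${steps}:`, letters.join(''));
    const mid = Math.floor((letters.length - 1) / 2);

    if (letters[mid] === target) {
      return `found ${target} in ${steps} steps`;
    } else if (target > letters[mid]) {
      letters.splice(0, mid + 1); // target is in the upper half, drop the lower half
    } else {
      letters.splice(mid); // target is in the lower half, drop the upper half
    }
  }
  return `${target} is not in the alphabet`;
}

console.log(logSearch('l')); // lands on "l" on the fifth step
```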
So the great thing is that we can see how our alphabet shrinks smaller and smaller, and on the fifth step we land on our target letter, less than half the steps the linear version needed. Logarithmic algorithms are super efficient, and I think it is very valuable to know how to write them, especially for interview questions. The one problem that I see with my code is that I am mutating the array. That might not be best practice, but I could easily work on a copy of the array instead.