Quick, off the top of your head, and without using a calculator or scratch paper: what is thirteen multiplied by two hundred and thirty-eight?
You could try to solve this problem using mental math, wasting your valuable time and effort, or you could do what the ancient Egyptians did and use an algorithm that lays the answer out in a simple-to-read data structure. The solution, according to one of the first recorded algorithms in human history, is three thousand and ninety-four.
The methodology itself is deceptively simple, but incredibly functional. You start with two columns, the first beginning with one and the second with thirteen, and for each new row you double both numbers. Then it becomes a matter of finding the rows in the first column that add up to two hundred and thirty-eight (2 + 4 + 8 + 32 + 64 + 128) and summing their corresponding answers on the right.
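To see that ancient table in modern clothes, here's a minimal sketch in JavaScript, the language we'll use for the rest of this article. The function name egyptianMultiply is my own invention, and the bitwise check is just a compact way of asking whether a row's left-column value belongs in the sum:

//left column: 1, 2, 4, 8... right column: 13, 26, 52, 104...
function egyptianMultiply(a, b){
  //running total of the right-column rows we keep
  let result = 0;
  let power = 1;
  let double = a;
  while(power <= b){
    //keep this row if its left-column value belongs in b's breakdown
    if(b & power){
      result += double;
    }
    //double both columns to create the next row
    power *= 2;
    double *= 2;
  }
  return result;
}
egyptianMultiply(13, 238) ---> 3094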
Millennia after the ancient Egyptians used these kinds of basic designs to build the pyramids and construct elaborate effigies to their cats, an entrepreneurial woman by the name of Ada Lovelace would become what many scholars now consider to be the first computer programmer, studying and iterating on complex mechanical systems that would later allow her to envision a world where machines perform much more than basic arithmetic for the general public. (Check out a modern implementation of her Bernoulli Number Function here.)
A hundred years after her, Alan Turing would use the same kind of problem solving to crack top-secret Nazi encryption (pro tip: don't end all of your secret transmissions with "Heil Hitler" if you want them to stay secret) and help end the Second World War.
All three of these examples share a unifying commonality, and that’s a basic adherence to algorithmic design. Or, more simply put: they all approached their problems in the same way.
When you first go searching for information on algorithmic design in computer science, you're likely to find an overabundance of resources. Many well-written articles and academic papers reference methods like divide and conquer, backtracking, and dynamic programming. Any one of these paradigms is worth exploring in depth and applying to your own process, but I'd rather focus on the similarities you'll find between nearly all of them. In an effort to create a more approachable and generalized overview of practical algorithmic design, I've broken these similarities into three linear stages of problem solving:
Definition, Execution, and Reiteration.
First, the definition stage, or what I think of as the 'understanding' phase. This is where most people tell you to start, and it can often be where you spend the most time. It's important at this phase to clearly lay out your IOCEs: Input, Output, Constraints, and Edge Cases. You want to focus on what the problem demands in relation to these predetermined parameters, and start thinking about what kind of steps it will take to get you to the solution. I lump the process of pseudo-coding into this stage as well, as it's a practical way of getting your understanding of the problem into writing. By adding comments and beginning to plan out the direction of our program, we make the next two stages much easier.
Which brings us to execution, or where we get to actually start typing out our code and putting our design into practice. Simply put, you want to follow the arc of your design from the definition stage, being sure to test and retest along the way until you have your first working prototype.
Finally, reiteration. This is where we "redo and review" over and over again until we are sick of it. Reiteration often comes either when basic functionality has been mastered, or when your logic from stage one has proven less than sound. That's okay, though! This is perfectly normal, healthy to the process at large, and even encouraged! This is how innovation is cultivated, and where our algorithms become more streamlined. It's the perfect time to start seriously thinking about time complexity in relation to space complexity, the efficiency of your algorithm, and what you'd still like to accomplish before it's "done".
Let's use a practical example to walk us through the process from beginning to end: Creating a simple factorial algorithm using JavaScript.
Definition:
The factorial of n, our original number, is the product of all positive integers less than or equal to n. The factorial of 5 is 120 because:
1 * 2 * 3 * 4 * 5 = 120
So, now that we have a grasp of what the problem is asking, let's define our IOCEs.
//Input: Positive integer
//Output: Factorial of original integer
//Constraint:
//Edge Case:
For now we will keep constraints and edge cases empty, as none have been imposed on us yet. Next, we need to pseudo-code out a plan. It occurred to me that if we loop from n, our given number, down to 1, multiplying each value into a running product, we eventually get our factorial. So, now I put that plan into words.
//IOCE: Positive integer, Factorial of original integer, n/a, n/a
//declare function
//create a place to store our product
//loop from n down to 1
//on each loop, multiply our running product by the current number
//return our final number
Execution:
The fun part! So, let's put our plan into action:
//IOCE: Positive integer, Factorial of original integer, n/a, n/a
//declare function
const factorial = function(n){
  //create a place to store our product
  let product = 1;
  //loop from n down to 1
  for(let i = n; i > 0; i--){
    //on each loop, multiply our running product by the current number
    product *= i;
  }
  //return our final number
  return product;
}
factorial(5) ---> 120
And it works! But why stop now? There is still so much to improve!
Reiteration:
So, what could we do better? First, I noticed that stopping at 1 is inefficient: multiplying an integer by 1 always results in that same integer, so we can cut out at least one iteration.
for(let i = n; i > 1; i--){
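For completeness, here's the full function with that one-character tweak in place, otherwise identical to our first prototype:

const factorial = function(n){
  let product = 1;
  //the loop now stops at 2, skipping the pointless multiply-by-1
  for(let i = n; i > 1; i--){
    product *= i;
  }
  return product;
}
factorial(5) ---> 120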
It's also worth noting that our time complexity is currently O(n), or linear, meaning that the larger n is, the more iterations the loop has to run through before reaching the end result.
Somewhere during this stage you might consider refactoring the entire thing away from a for loop and opting for recursion instead, like so:
//IOCE: Positive integer, Factorial of original integer, use recursion, n/a
function recursiveFactorial(n){
  //if n is strictly equal to zero, stop recursing
  if(n === 0){
    return 1;
  //otherwise, keep multiplying n by the next call of
  //recursiveFactorial, decrementing n on each call
  } else {
    return n * recursiveFactorial(n - 1);
  }
}
recursiveFactorial(5) ---> 120
Notice I changed the "C" in our IOCE to reflect the self-imposed constraint of using recursion.
This version initially seems a lot sleeker. It takes up slightly less space on the page than our initial approach, and requires fewer characters to type. You might think this would translate into a better time complexity, but it doesn't. It's still O(n), because the running time is still fully dependent on how large n is initially. It actually costs more memory, too, since every pending call sits on the call stack until the base case is reached. Not only that, but there is some evidence to suggest that loops are faster than recursion in general, although that's a multi-faceted topic for another time.
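If you'd like to get a rough feel for that difference yourself, here's a quick timing sketch using console.time. Treat the numbers as illustrative only; they vary wildly between engines and runs, and this is nowhere near a rigorous benchmark:

//time 100,000 runs of each approach on the same input
console.time("loop");
for(let run = 0; run < 100000; run++){
  factorial(15);
}
console.timeEnd("loop");

console.time("recursion");
for(let run = 0; run < 100000; run++){
  recursiveFactorial(15);
}
console.timeEnd("recursion");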
At this point, I feel satisfied with my altered first approach. Although, if I really wanted an absolutely constant, or O(1), time complexity for any factorial from 1 through 9, I could just take a note from the ancient data structure the Egyptians used to solve their problem, but in a modern context: a JavaScript object.
let factorialObj = {
  1: 1,
  2: 2,
  3: 6,
  4: 24,
  5: 120,
  6: 720,
  7: 5040,
  8: 40320,
  9: 362880
}
factorialObj[5] ---> 120
console.log("Are you proud of me, ancient Egyptians?")
This is, of course, an imperfect solution, but it shouldn't necessarily be beyond consideration. I mean, factorialObj[9] is gonna get you back 362,880 a lot faster than either of the other approaches we implemented, I guarantee that.
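If we wanted to hedge our bets, one option (purely a sketch of my own, with a made-up name) is to combine the last two ideas: check the lookup table first, and fall back to our loop for anything it doesn't cover:

//fastFactorial is a hypothetical name for this hybrid sketch
const fastFactorial = function(n){
  //O(1) lookup for 1 through 9
  if(factorialObj[n] !== undefined){
    return factorialObj[n];
  }
  //O(n) loop fallback for everything else
  return factorial(n);
}
fastFactorial(5) ---> 120
fastFactorial(12) ---> 479001600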
Here, at the final stage of our implementation, we have crucial choices to make. Choices, mind you, that are provided and informed by all the work we have done up to this point.
This is what algorithmic design is all about, though: discovery through understanding, implementation, and reinvention. Lady Lovelace knew that. So did Turing and the builders of the ancient world. This process, when adhered to, is the perfect place to start thinking about the big-picture design of whatever problem you're trying to solve.
By taking time to understand the fundamental steps between the inception and application stages of our algorithmic logic, we not only improve our own code, but we also get the opportunity to iterate on generations of knowledge that came before us. In summary, modern computer science owes a lot to the gays, women, and cats.
That’s just history.