One of my favorite parts about studying and coming up with algorithms is seeing the different approaches that programmers take when solving a problem. In this article, I will discuss some popular techniques you can use to solve problems, such as...
- Divide and Conquer Algorithms
- Dynamic Programming
- Greedy Algorithms
- Backtracking Algorithms
Divide and Conquer
In my article on sorting algorithms, we looked at the merge and quick sort algorithms. What both have in common is that they are divide and conquer algorithms. Divide and conquer is a common approach to algorithm design and involves breaking a problem down into smaller sub-problems that are similar to the original problem. It often solves the sub-problems recursively and combines the solutions of the sub-problems to solve the original problem.
The logic for the divide and conquer approach can be broken down into three steps, sketched in code after the list:
- Divide the original problem into smaller sub-problems.
- Conquer the sub-problems by solving them with recursive algorithms that return the solution for the sub-problems.
- Combine the solutions of the sub-problems into the solution for the original problem.
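To see those three steps in code, here is a minimal merge sort sketch (merge sort is covered in the sorting article mentioned above, but this particular code and its merge helper are just my own illustration):
function mergeSort(array) {
  // Base case: an array of 0 or 1 elements is already sorted.
  if (array.length <= 1) {
    return array;
  }
  // Divide: split the array into two halves.
  const mid = Math.floor(array.length / 2);
  // Conquer: recursively sort each half.
  const left = mergeSort(array.slice(0, mid));
  const right = mergeSort(array.slice(mid));
  // Combine: merge the two sorted halves into one sorted array.
  return merge(left, right);
}
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}
console.log(mergeSort([8, 3, 5, 1])); // [1, 3, 5, 8]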
Divide and Conquer Example: Binary Search
In my last post about searching algorithms, we implemented the binary search using an iterative approach. Here we will use the divide and conquer approach to implement the binary search.
function binarySearchRecursive(array, value, low, high) {
  if (low <= high) {
    // Divide: find the middle of the current search range.
    const mid = Math.floor((low + high) / 2);
    const element = array[mid];
    // Conquer: recursively search the half that could contain the value.
    if (element < value) {
      return binarySearchRecursive(array, value, mid + 1, high);
    } else if (element > value) {
      return binarySearchRecursive(array, value, low, mid - 1);
    } else {
      return mid;
    }
  }
  return null;
}
export function binarySearch(array, value) {
  // quickSort comes from the earlier sorting algorithms article.
  const sortedArray = quickSort(array);
  const low = 0;
  const high = sortedArray.length - 1;
  return binarySearchRecursive(sortedArray, value, low, high);
}
Note that the binarySearch function above is the one a developer calls to perform the search, while binarySearchRecursive is where we use the divide and conquer approach.
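As a quick sanity check (assuming quickSort from the sorting article is available in scope), usage might look like this; note that the returned index refers to the sorted array:
console.log(binarySearch([8, 3, 5, 1], 5)); // 2 (index of 5 in the sorted array [1, 3, 5, 8])
console.log(binarySearch([8, 3, 5, 1], 7)); // null (7 is not in the array)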
Dynamic Programming
Dynamic programming is an optimization technique used to solve complex problems by breaking them down into smaller sub-problems. This may sound a lot like the divide and conquer approach, but instead of breaking the problem into independent sub-problems and then combining their solutions, dynamic programming breaks the problem into dependent, overlapping sub-problems (a small warm-up sketch follows the steps below).
The logic can be broken down into three steps:
- Define the sub-problems.
- Implement the recurrence that solves the sub-problems.
- Recognize and solve the base cases.
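Before getting to the main example, here is how those three steps might look on a classic warm-up problem, computing Fibonacci numbers with memoization (this warm-up sketch is my own addition, separate from the coin change code below):
function fibonacci(n) {
  const cache = [];
  const fib = (value) => {
    // Base cases: fib(0) = 0 and fib(1) = 1.
    if (value <= 1) {
      return value;
    }
    // Reuse a previously solved sub-problem if we have it.
    if (cache[value]) {
      return cache[value];
    }
    // Recurrence: fib(n) = fib(n - 1) + fib(n - 2), cached before returning.
    return (cache[value] = fib(value - 1) + fib(value - 2));
  };
  return fib(n);
}
console.log(fibonacci(10)); // 55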
Dynamic Programming Example: Minimum Coin Change Problem
This problem is a variation of a commonly used interview question known as the coin change problem. The coin change problem consists of finding out how many ways you can make change for a particular amount of cents using a given set of denominations. The minimum coin change problem instead finds the minimum number of coins needed to make a particular amount of cents using a given set of denominations. For example, if you need to make change for 39 cents, you can use 1 quarter, 1 dime, and 4 pennies.
function minCoinChange(coins, amount) {
  const cache = [];
  const makeChange = (value) => {
    // Base case: an amount of 0 needs no coins.
    if (!value) {
      return [];
    }
    // Reuse the cached result if this amount was already solved.
    if (cache[value]) {
      return cache[value];
    }
    let min = [];
    let newMin;
    let newAmount;
    // Try each coin and keep whichever combination uses the fewest coins.
    for (let i = 0; i < coins.length; i++) {
      const coin = coins[i];
      newAmount = value - coin;
      if (newAmount >= 0) {
        newMin = makeChange(newAmount);
      }
      if (newAmount >= 0 &&
        (newMin.length < min.length - 1 || !min.length) &&
        (newMin.length || !newAmount)) {
        min = [coin].concat(newMin);
      }
    }
    // Cache the best combination for this amount before returning it.
    return (cache[value] = min);
  };
  return makeChange(amount);
}
Some notes about the implementation above: the coins parameter represents the denominations (in the US coin system, it would be [1, 5, 10, 25]). To avoid recalculating values, we use a cache (this technique is called memoization). The makeChange function is recursive and responsible for solving the problem, and because it is an inner function, it has access to the cache.
console.log(minCoinChange([1, 5, 10, 25], 37)); // [1, 1, 10, 25]
console.log(minCoinChange([1, 3, 4], 6)); // [3, 3]
Greedy Algorithms
Greedy algorithms make whatever choice looks best at the moment, with the hope of finding a global optimum. Unlike dynamic programming, they do not take the bigger picture into consideration. Greedy algorithms tend to be simple and intuitive, but they may not produce the best overall solution.
Greedy Algorithm Example: Minimum Coin Change Problem
The coin problem that we solved dynamically above can also be solved with a greedy algorithm. How optimal this solution will be depends on the denominations passed.
function minCoinChange(coins, amount) {
  const change = [];
  let total = 0;
  // Assuming coins is sorted ascending, start with the largest denomination.
  for (let i = coins.length - 1; i >= 0; i--) {
    const coin = coins[i];
    // Greedily take as many of this coin as possible without going over.
    while (total + coin <= amount) {
      change.push(coin);
      total += coin;
    }
  }
  return change;
}
As you can see, this solution is much simpler than the dynamic programming one. However, let's look at the same example inputs to see the difference in optimality:
console.log(minCoinChange([1, 5, 10, 25], 37)); // [25, 10, 1, 1]
console.log(minCoinChange([1, 3, 4], 6)) // [4, 1, 1]
The greedy solution gave the optimal result for the first example but not the second (it should be [3, 3], like we got from the dynamic programming solution).
Greedy algorithms are simpler and faster than dynamic programming algorithms but may not give the optimal solution all of the time.
Backtracking Algorithms
Backtracking algorithms are good for incrementally finding and building a solution:
- Try to solve the problem one way.
- If it doesn't work, backtrack by undoing that choice and repeat step 1 with a different option until you reach an appropriate solution (see the small sketch after this list).
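Here is a tiny sketch of that pattern (findCombination is just my own made-up illustration), using backtracking to find a combination of numbers that adds up to a target:
function findCombination(numbers, target) {
  const current = [];
  const backtrack = (start, remaining) => {
    // Solved: the chosen numbers add up exactly to the target.
    if (remaining === 0) {
      return true;
    }
    for (let i = start; i < numbers.length; i++) {
      if (numbers[i] > remaining) {
        continue; // This choice overshoots the target, so try the next one.
      }
      current.push(numbers[i]); // Try solving the problem this way...
      if (backtrack(i + 1, remaining - numbers[i])) {
        return true;
      }
      current.pop(); // ...it didn't work, so backtrack and try another option.
    }
    return false;
  };
  return backtrack(0, target) ? current : null;
}
console.log(findCombination([5, 2, 8, 4], 9)); // [5, 4]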
For a more substantial example of backtracking, I will be writing a separate post going over a more complex algorithm. I haven't decided yet, but I may try writing a sudoku solver, so stay tuned if that interests you!
Conclusion
The possibilities with programming are endless, and the same goes for algorithm design, but I hope this article helps you understand some of these common approaches.