Tim Downey

Using the Master Theorem to Solve Recurrences

Note: This will be my first of (hopefully) many posts related to Georgia Tech's CS6515 Graduate Algorithms course. My goals here are two-fold:

  1. Explaining the material in my own words helps reinforce my own understanding of the subject (helps me study for the exams 😬).
  2. Providing yet another resource on the subject. Having it explained differently might help future students understand it better!

So, with that said, here is my less formal explanation of how the Master Theorem can be used for the asymptotic analysis of divide-and-conquer recurrence relations.

That sentence itself was a mouthful, so let's break it down a bit more and define some of those terms.

  • Asymptotic analysis - The analysis of how a function behaves as its input grows toward infinity. For the sake of this post we will be using Big O Notation to describe the upper bound of our recurrences.
  • Divide-and-Conquer algorithm - A class of algorithms that work by recursively breaking a problem into smaller subproblems and combining the results. Merge sort is a good example of a divide-and-conquer algorithm.
  • Recurrence relation - An equation that expresses a sequence recursively in terms of itself. For example, the recurrence for the Fibonacci Sequence is F(n) = F(n-1) + F(n-2) and the recurrence for merge sort is T(n) = 2T(n/2) + n.

So in other words, if we've got a recurrence relation such as T(n) = 2T(n/2) + n for a divide-and-conquer algorithm like merge sort, we can use the Master Theorem to figure out its Big O complexity!

Master Theorem Basics

The Master Theorem lets us solve recurrences of the following form, where a ≥ 1 and b > 1:

T(n) = aT(n/b) + f(n)

Let's define some of those variables and use the recurrence for Merge Sort as an example: T(n) = 2T(n/2) + n.

  • n - The size of the problem. For Merge Sort for example, n would be the length of the list being sorted.
  • a - The number of subproblems in each recursive step. So in our Merge Sort example, since we are dividing the array into two halves and recursing down each half, a = 2.
  • b - The factor by which the problem size shrinks in each subproblem. For Merge Sort b = 2 because we pass half of the array (length n) to each subproblem; n/b is the size of each subproblem.
  • f(n) - The work to be done on n outside of the recursive steps. For Merge Sort this represents the merging step for the results of the recursion. (See the annotated sketch after this list.)
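
To make those variables concrete, here is a minimal merge sort sketch (my own Python, not code from the course) with comments marking where a, b, and f(n) show up:

```python
def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr

    # b = 2: each subproblem receives n/2 of the input.
    mid = len(arr) // 2

    # a = 2: two recursive calls, one per half.
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # f(n) = O(n): merging two sorted halves takes linear time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```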

Master Theorem Cases

Recurrence relations that can be solved by the Master Theorem fall into three cases describing where the bulk of the time complexity cost lies for the recurrence. These cases are:

  1. Work performed in the subproblems (aT(n/b) portion) has the greatest impact on overall time complexity.
  2. Work performed in the subproblems has about the same level of impact as work performed in the dividing/combining steps (f(n) portion).
  3. Work performed in the subproblems has a lower impact on overall time complexity than the work performed in the dividing/combining steps.

To figure out which case applies, you'll need to compare n^(log_b(a)), which captures the cost of the recursive subproblems (the aT(n/b) portion), with the cost of the work performed outside (f(n)). This leaves us with the following comparison:

n^(log_b(a)) <=> f(n)

So let's rethink our three cases in those terms:

  1. n^(log_b(a) - ε) > f(n); then T(n) = O(n^(log_b(a)))
  2. n^(log_b(a)) == f(n); then T(n) = O(n^(log_b(a)) * log(n))
  3. n^(log_b(a) + ε) < f(n); then T(n) = O(f(n))

Note: ε (epsilon) is just some constant greater than zero; Cases 1 and 3 only apply if the bound holds for some such ε, i.e. f(n) must be polynomially smaller or larger than n^(log_b(a)).
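
For example (my own illustration, not one from the course): take T(n) = 2T(n/2) + n^2. Here n^(log_2(2)) = n^1, and f(n) = n^2 = Ω(n^(1 + ε)) if we pick ε = 1, so Case 3 applies and T(n) = O(n^2). Strictly speaking, Case 3 also requires a "regularity condition" (a*f(n/b) ≤ c*f(n) for some constant c < 1), but an ordinary polynomial f(n) that is polynomially larger than n^(log_b(a)) satisfies it; here 2*(n/2)^2 = n^2/2 ≤ (1/2)*n^2.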

Now, I used comparison symbols for the cases above because they made things easier for me to reason about, but it is more accurate to think of these cases in terms of upper and lower bounds on the complexity of f(n). Let's rewrite them:

  1. f(n) = O(n^(log_b(a) - ε)); then T(n) = O(n^(log_b(a)))
  2. f(n) = Θ(n^(log_b(a))); then T(n) = O(n^(log_b(a)) * log(n))
  3. f(n) = Ω(n^(log_b(a) + ε)); then T(n) = O(f(n))

Big O denotes an upper bound, Big Ω (omega) denotes a lower bound, and Big Θ (theta) denotes a tight bound (both an upper AND a lower bound) on the complexity of f(n).

In other words, if f(n) is upper-bounded by the cost of the work for the subproblems (it grows polynomially slower), then the algorithm is dominated by time spent working on the subproblems (Case 1).

If f(n) is lower-bounded by the cost of the work for the subproblems (it grows polynomially faster), then the algorithm is dominated by time spent on the division/combination steps outside of the recursive subproblems (Case 3).

When the bound is tight in both directions (Θ), the cost of the work for f(n) and the subproblems is about equal (Case 2).
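
When f(n) is a plain polynomial, f(n) = Θ(n^d), the three cases boil down to comparing log_b(a) against d. Here's a small helper sketching that special case (my own Python; the function name and interface are hypothetical, not part of any standard library):

```python
import math

def solve_recurrence(a: int, b: int, d: float) -> str:
    """Solve T(n) = a*T(n/b) + Theta(n^d) via the Master Theorem.

    Only handles the polynomial special case f(n) = Theta(n^d),
    where the regularity condition for Case 3 always holds.
    """
    critical = math.log(a, b)  # the exponent log_b(a)

    if math.isclose(critical, d):
        # Case 2: subproblem and combine work are balanced.
        return f"O(n^{d:g} * log n)"
    elif critical > d:
        # Case 1: the subproblem work dominates.
        return f"O(n^{critical:.3g})"
    else:
        # Case 3: the combine step f(n) dominates.
        return f"O(n^{d:g})"

print(solve_recurrence(2, 2, 1))  # merge sort:    O(n^1 * log n)
print(solve_recurrence(1, 2, 0))  # binary search: O(n^0 * log n) = O(log n)
```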

Now let's work through a concrete example using the Merge Sort recurrence.

Master Theorem Merge Sort Example

Recurrence relation:
T(n) = 2T(n/2) + O(n)

Variables:
a = 2
b = 2
f(n) = O(n)

Comparison:
n^(log_2(2)) <=> O(n)
n^1 == O(n)

Here we see that the cost of f(n) and the cost of the subproblem work are the same, so this is Case 2:
T(n) = O(n^1 * log(n)) = O(n log n)
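
As a sanity check (my own experiment, not part of the original post), you can evaluate the merge sort recurrence directly and watch T(n) / (n log n) settle toward a constant, which is exactly what T(n) = O(n log n) predicts:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    """Evaluate the merge sort recurrence T(n) = 2T(n/2) + n, with T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for k in range(4, 21, 4):
    n = 2 ** k
    print(f"n = 2^{k:2d}: T(n) / (n log n) = {T(n) / (n * math.log2(n)):.3f}")
```

For powers of two this recurrence solves exactly to T(n) = n log2(n) + n, so the printed ratio drifts down toward 1 as n grows.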

Additional Resources

If things are still a bit fuzzy or you'd like to see additional examples, I found the following resources helpful:
