
Joscelyn Stancek

max subarray problem and Kadane's algorithm

The max subarray problem and its history

In the late 1970s, Swedish mathematician Ulf Grenander was working on a problem: how can you analyze a 2D array of image data more efficiently than brute force? Computers at the time were slow, and images were large relative to available RAM. To make matters worse, brute force took O(n^6) time in the worst case (sextic time complexity).
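To see where that O(n^6) comes from (assuming a square n x n grid): a subrectangle is pinned down by a pair of rows and a pair of columns, which is O(n^4) choices, and summing each candidate rectangle element by element costs another O(n^2). Here's a rough sketch of that brute force (my own illustration, not Grenander's actual code):

def max_subrectangle_brute_force(grid):
    n = len(grid)  # assumes a non-empty n x n grid
    best = grid[0][0]
    # choose every pair of rows and every pair of columns: O(n^4) rectangles
    for top in range(n):
        for bottom in range(top, n):
            for left in range(n):
                for right in range(left, n):
                    # summing the rectangle element by element adds O(n^2) more
                    total = 0
                    for r in range(top, bottom + 1):
                        for c in range(left, right + 1):
                            total += grid[r][c]
                    best = max(best, total)
    return best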

First, Grenander simplified the question: given just a one-dimensional array of numbers, how would you most efficiently find the contiguous subarray with the largest sum? For example, in [-2, -3, 4, -1, -2, 1, 5, -3], the answer is 7, which comes from the subarray [4, -1, -2, 1, 5].


Brute Force: A Naive Approach with Cubic Time Complexity

With brute force, a 1D array needs only half the exponent of the 2D case: examining every possible subarray and summing each one from scratch takes O(n^3) time (cubic time complexity).

def max_subarray_brute_force(arr):
    max_sum = arr[0] # assumes arr is non-empty

    # iterate over all possible subarrays
    for i in range(len(arr)):
        for j in range(i, len(arr)):
            current_sum = 0
            # sum the elements of the subarray arr[i:j+1]
            for k in range(i, j + 1):
                current_sum += arr[k]
            # update max_sum if the current sum is greater
            max_sum = max(max_sum, current_sum)

    return max_sum

print(max_subarray_brute_force([-2, -3, 4, -1, -2, 1, 5, -3]), "== 7")


Grenander’s O(n²) Optimization: A Step Forward

Grenander improved this to an O(n^2) solution. I couldn't find his code in my research, but my guess is he simply got rid of the innermost loop that adds up all of the numbers between the two indices. Instead, we can keep a running sum while iterating over the subarray, thus reducing the number of loops from three to two.

def max_subarray_optimized(arr):
    max_sum = arr[0]  # assumes arr is non-empty

    # iterate over all possible starting points of the subarray
    for i in range(len(arr)):
        current_sum = 0
        # sum the elements of the subarray starting from arr[i]
        for j in range(i, len(arr)):
            current_sum += arr[j]
            # update max_sum if the current sum is greater
            max_sum = max(max_sum, current_sum)

    return max_sum
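
# quick sanity check on the same example array
print(max_subarray_optimized([-2, -3, 4, -1, -2, 1, 5, -3]), "== 7")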

Shamos's Divide and Conquer: Splitting the Problem for O(n log n)

Grenander showed the problem to computer scientist Michael Shamos. Shamos thought about it for one night and came up with a divide and conquer method which is O(n log n).

It's quite clever. The idea is to divide the array into two halves, recursively find the maximum subarray sum for each half, and also compute the maximum subarray that crosses the midpoint; the answer is the largest of those three.


def max_crossing_sum(arr, left, mid, right):
    # left of mid
    left_sum = float('-inf')
    current_sum = 0
    for i in range(mid, left - 1, -1):
        current_sum += arr[i]
        left_sum = max(left_sum, current_sum)

    # right of mid
    right_sum = float('-inf')
    current_sum = 0
    for i in range(mid + 1, right + 1):
        current_sum += arr[i]
        right_sum = max(right_sum, current_sum)

    # sum of elements on the left and right of mid, which is the maximum sum that crosses the midpoint
    return left_sum + right_sum

def max_subarray_divide_and_conquer(arr, left, right):
    # base case: only one element
    if left == right:
        return arr[left]

    # find the midpoint
    mid = (left + right) // 2

    # recursively find the maximum subarray sum for the left and right halves
    left_sum = max_subarray_divide_and_conquer(arr, left, mid)
    right_sum = max_subarray_divide_and_conquer(arr, mid + 1, right)
    cross_sum = max_crossing_sum(arr, left, mid, right)

    # return the maximum of the three possible cases
    return max(left_sum, right_sum, cross_sum)

def max_subarray(arr):
    return max_subarray_divide_and_conquer(arr, 0, len(arr) - 1)


print(max_subarray([-2, -3, 4, -1, -2, 1, 5, -3]), "== 7")



This reduces the time complexity to O(n log n): the array is split in half at each level of recursion, giving O(log n) levels, and at each level finding the max crossing subarray takes O(n) work. Put another way, the recurrence T(n) = 2T(n/2) + O(n) solves to O(n log n).

Kadane’s Algorithm: The Elegant O(n) Solution

Statistician Jay Kadane looked at the code and immediately identified that Shamos's solution failed to use the contiguity constraint as part of the solution.

Here's what he realized:

- If an array has only negative numbers, then the answer will always be the single largest number in the array, assuming we're not allowing empty subarrays.

- If an array only has positive numbers, the answer will always be to add up the entire array.

- If you have an array of both positive and negative numbers, then you can traverse the array step by step while keeping a running sum. If at any point the number you're looking at is bigger than it plus the running sum that came before it (in other words, the running sum has dropped below zero), the solution cannot include any of the previous numbers. Thus, you start a new sum from the current number, while keeping track of the maximum sum encountered so far.



def maxSubArray(nums):
    # guard against None or empty input
    if nums is None or len(nums) == 0:
        return 0


    max_sum = nums[0]  # max sum can't be smaller than any given element
    curr_sum = 0

    # Kadane's algorithm
    for num in nums:
        curr_sum = max(num, curr_sum + num)
        max_sum = max(curr_sum, max_sum)
    return max_sum
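
# quick check with the same example array
print(maxSubArray([-2, -3, 4, -1, -2, 1, 5, -3]), "== 7")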



LeetCode Problems to Practice

What I love about this algorithm is it can be applied to lots of other problems. Try adapting it to solve these LeetCode problems:

Easy

Maximum Ascending Subarray Sum

Medium

Longest Turbulent Subarray

Maximum Sum Circular Subarray

Maximum Product Subarray

Continuous Subarray Sum

Maximum Alternating Sum Subarray (premium)

Hard

Max Sum of Rectangle No Larger Than K
