Om Shree

🧩 Beginner-Friendly Guide 'Maximum Matrix Sum' – LeetCode 1975 (C++, Python, JavaScript)

At first glance, this problem looks like a complex puzzle where you have to flip signs across a grid. However, once you realize how negative signs can "move" across the matrix, it transforms into a clever observation about parity and absolute values.


Problem Summary

You're given:

  • An integer matrix.
  • An operation that allows you to multiply any two adjacent elements by -1.

Your goal:

  • Maximize the total sum of all elements in the matrix after performing the operation any number of times.

Example

Input: matrix = [[1,-1],[-1,1]]
Output: 4
Explanation: We can reach a sum of 4 with the following steps:

  • Multiply the 2 elements in the first row by -1.
  • Multiply the 2 elements in the first column by -1.
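
To make the operation concrete, here is a small Python sketch (an illustration only, not the final solution) that replays the two steps above with a hypothetical flip helper:

def flip(matrix, r1, c1, r2, c2):
    # The only allowed move: multiply two adjacent cells by -1.
    matrix[r1][c1] *= -1
    matrix[r2][c2] *= -1

matrix = [[1, -1], [-1, 1]]
flip(matrix, 0, 0, 0, 1)  # flip the first row    -> [[-1, 1], [-1, 1]]
flip(matrix, 0, 0, 1, 0)  # flip the first column -> [[1, 1], [1, 1]]
print(sum(sum(row) for row in matrix))  # 4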

Intuition

The key to solving this problem lies in one powerful realization: you can move a negative sign to any position in the matrix.

If you have two negative numbers anywhere in the grid, you can perform a sequence of adjacent flips to "bring them together" and flip them both to positive. This means:

  1. If the total count of negative numbers is even, you can eventually make every single number in the matrix positive.
  2. If the total count of negative numbers is odd, you will always be left with exactly one negative number.

Since our goal is to maximize the sum, if we are forced to keep one negative sign, we should place it on the element with the smallest absolute value, which minimizes the "loss" to our total. So the plan is: add up the absolute values of all elements, and if the count of negatives is odd, subtract twice the smallest absolute value (once to cancel the positive contribution we already added for it, and once more because it actually counts as negative).
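
If the idea of a sign "traveling" feels abstract, the following Python sketch (illustration only) walks a single negative sign along a row using overlapping adjacent flips; every intermediate cell is flipped twice and ends up unchanged:

row = [-1, 2, 3, 4]
row[0] *= -1; row[1] *= -1  # flip cells 0 and 1 -> [1, -2, 3, 4]
row[1] *= -1; row[2] *= -1  # flip cells 1 and 2 -> [1, 2, -3, 4]
row[2] *= -1; row[3] *= -1  # flip cells 2 and 3 -> [1, 2, 3, -4]
print(row)  # [1, 2, 3, -4]

Each operation changes the sign of exactly two cells, so the parity of the negative count never changes; that is why an odd count always leaves exactly one negative behind.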


Code Blocks

C++

class Solution {
public:
    long long maxMatrixSum(vector<vector<int>>& matrix) {
        long long absSum = 0;
        int minAbs = INT_MAX;
        int oddNeg = 0;

        for (const auto& row : matrix) {
            for (int num : row) {
                absSum += abs(num);
                minAbs = min(minAbs, abs(num));
                if (num < 0) {
                    oddNeg ^= 1; // Flips between 0 and 1
                }
            }
        }

        // If oddNeg is 1, subtract twice the smallest element
        return absSum - (oddNeg * (long long)minAbs * 2);
    }
};


Python

class Solution:
    def maxMatrixSum(self, matrix: list[list[int]]) -> int:
        abs_sum = 0
        min_abs = float('inf')
        neg_count = 0

        for row in matrix:
            for num in row:
                abs_sum += abs(num)
                min_abs = min(min_abs, abs(num))
                if num < 0:
                    neg_count += 1

        # If the count of negatives is odd, we must keep one negative
        if neg_count % 2 == 1:
            return abs_sum - (2 * min_abs)

        return abs_sum

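As a quick sanity check (assuming the class above is in scope), running the Python solution on the example from this post and on a case with an odd number of negatives gives:

sol = Solution()
print(sol.maxMatrixSum([[1, -1], [-1, 1]]))                    # 4  (even negative count)
print(sol.maxMatrixSum([[1, 2, 3], [-1, -2, -3], [1, 2, 3]]))  # 16 (odd count: 18 - 2 * 1)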

JavaScript

/**
 * @param {number[][]} matrix
 * @return {number}
 */
var maxMatrixSum = function(matrix) {
    let absSum = 0;
    let minAbs = Infinity;
    let negCount = 0;

    for (let i = 0; i < matrix.length; i++) {
        for (let j = 0; j < matrix[i].length; j++) {
            const val = matrix[i][j];
            const absVal = Math.abs(val);

            absSum += absVal;
            minAbs = Math.min(minAbs, absVal);

            if (val < 0) {
                negCount++;
            }
        }
    }

    // If there's an odd number of negatives, subtract 2 * minAbs
    if (negCount % 2 !== 0) {
        return absSum - (2 * minAbs);
    }

    return absSum;
};


Key Takeaways

  • Parity Matters: In many matrix problems, the "adjacency" rule is a red herring. Focus on what stays constant (the parity of negative numbers) rather than the specific steps to move them.
  • Absolute Value Logic: When you have the freedom to transform signs, thinking in terms of absolute values helps simplify the goal of maximizing a sum.
  • Greedy Minimization: When a penalty is unavoidable (like being stuck with one negative number), applying that penalty to the smallest element is the optimal greedy strategy.

Final Thoughts

This problem is a fantastic example of why "thinking before coding" is essential. While you could try to simulate the flips, the mathematical properties of the operation reduce the problem to a single traversal. In real-world software engineering, this type of logic is vital for optimizing data transformation pipelines or image processing algorithms, where redundant operations can be reduced to a single state check.
