Maximum subarray problem
Problem Description
Formal Statement
The maximum subarray problem consists of identifying a contiguous portion of a one-dimensional array that yields the largest possible sum of its elements.[6] Formally, given an array A[1..n] of n integers, the task is to find indices i and j with 1 ≤ i ≤ j ≤ n such that the sum A[i] + A[i+1] + ... + A[j] is maximized over all possible contiguous subarrays.[6] The elements of the array may be positive, negative, or zero, and the subarray must be non-empty in the standard version of the problem. The primary output is the maximum sum, although some formulations also return the starting and ending indices i and j.[6] A variant of the problem permits the empty subarray, which has a sum of 0 and is selected when all elements are negative; the standard case, however, requires a non-empty subarray.[6]

Illustrative Example
To illustrate the maximum subarray problem, consider the array [-2, 1, -3, 4, -1, 2, 1, -5, 4], which contains a mix of positive and negative integers to highlight the need to identify contiguous segments where positives outweigh negatives.[7] This example demonstrates the task of finding the non-empty contiguous subarray with the largest possible sum by manually examining possibilities. A manual approach involves systematically computing sums for subarrays of increasing lengths to build intuition. For single-element subarrays (length 1), the sums are simply the elements: -2, 1, -3, 4, -1, 2, 1, -5, 4, with the largest being 4 (at indices 3 and 8). For length-2 subarrays, the sums are -1 (indices 0-1), -2 (1-2), 1 (2-3), 3 (3-4), 1 (4-5), 3 (5-6), -4 (6-7), -1 (7-8), where the largest is 3. Extending to length 3, the sums include -4 (0-2), 2 (1-3), 0 (2-4), 5 (3-5), 2 (4-6), -2 (5-7), 0 (6-8), with 5 now the leading candidate (indices 3-5). For length 4, sums such as 0 (0-3), 1 (1-4), 2 (2-5), 6 (3-6), -3 (4-7), 2 (5-8) appear, where 6 emerges as a new high (indices 3-6). Continuing this process for lengths 5 through 9 reveals no higher sums; for instance, the full array (length 9) sums to 1 (indices 0-8), and other long segments such as indices 3-8 sum to 5. The complete manual enumeration confirms the maximum sum of 6 from the subarray [4, -1, 2, 1] (indices 3 to 6, 0-based); a short verification script follows the table. The table below presents representative subarrays across lengths, including the optimal one (highlighted), all-negative cases, and others for comparison:

| Start Index | End Index | Subarray | Sum |
|---|---|---|---|
| 0 | 0 | [-2] | -2 |
| 2 | 2 | [-3] | -3 |
| 7 | 7 | [-5] | -5 |
| 3 | 3 | [4] | 4 |
| 3 | 4 | [4, -1] | 3 |
| 3 | 5 | [4, -1, 2] | 5 |
| **3** | **6** | **[4, -1, 2, 1]** | **6** |
| 5 | 6 | [2, 1] | 3 |
| 6 | 8 | [1, -5, 4] | 0 |
| 8 | 8 | [4] | 4 |
| 0 | 8 | [-2, 1, -3, 4, -1, 2, 1, -5, 4] | 1 |
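The manual enumeration can be double-checked mechanically. The following short Python sketch (an illustrative aid, not part of the original example) scans every start and end index pair and reports the best sum together with the corresponding 0-based slice:

A = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
best_sum, best_bounds = float("-inf"), None
for i in range(len(A)):                  # start index
    running = 0
    for j in range(i, len(A)):           # end index
        running += A[j]                  # sum of A[i..j]
        if running > best_sum:
            best_sum, best_bounds = running, (i, j)
print(best_sum, best_bounds, A[best_bounds[0]:best_bounds[1] + 1])
# Prints: 6 (3, 6) [4, -1, 2, 1]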
Historical Background
Origins in Computing
The maximum subarray problem emerged in the late 1970s within computer science, particularly in contexts involving signal processing and pattern recognition. In 1977, Swedish mathematician Ulf Grenander at Brown University proposed the problem as a simplified one-dimensional model for maximum likelihood estimation in digitized image analysis, where identifying contiguous regions with the highest aggregate values could reveal patterns in noisy data.[9] This formulation arose from Grenander's work on two-dimensional arrays, which proved computationally intensive, leading to the reduction to a linear sequence so that contiguous substructures could be studied efficiently. The problem's roots in these early computing applications reflect the motivation of processing sequential data in resource-constrained environments, such as early digital signal systems.

During this period, the problem gained traction in academic computing discussions, notably through a seminar course at Carnegie Mellon University led by Jon Bentley, Michael Shamos, and Joseph Kadane. Focused on stochastic analysis of algorithms, the seminar explored the problem's implications for worst-case and average-case performance in sequence processing tasks, bridging statistics and computer science.[9] Grenander had shared his ideas with Shamos earlier that year, prompting explorations of the problem's computational challenges in one-dimensional forms relevant to emerging data structures research. This seminar represented an early conceptualization of the problem within formal computing circles, emphasizing its utility in analyzing linear arrays derived from real-world signals.

A pivotal popularization occurred in 1984 through Jon Bentley's column in Communications of the ACM, which detailed the problem's design techniques and connected it to broader algorithm development in computing education. This publication introduced the maximum subarray problem to a wider audience of programmers and educators, framing it as a canonical example in early algorithm textbooks and data structures courses for teaching sequence processing concepts. Such inclusions underscored its role in foundational computing curricula, often illustrated through applications like evaluating consecutive performance metrics in linear data streams.[9]

Key Developments and Contributors
Efficient algorithms for the maximum subarray problem emerged in the late 1970s during a seminar at Carnegie Mellon University involving computer scientists Jon Bentley and Michael Shamos, alongside statistician Joseph (Jay) Kadane. Shamos developed an initial divide-and-conquer approach achieving O(n log n) time complexity, which Bentley later detailed in his 1984 publication on algorithm design techniques, marking a significant advance over brute-force methods by using recursive decomposition to account for subarrays crossing the midpoint.[10] Kadane contributed the seminal linear-time, O(n), dynamic programming solution by formalizing an incremental scan that tracks the maximum subarray sum ending at each position. This algorithm, now widely known as Kadane's algorithm, was first described in the same 1984 Communications of the ACM article by Bentley, though a 2023 paper by Kadane provides his firsthand account of its origins in the late-1970s seminar discussions.[10][11]

Subsequent refinements in the 1980s and 1990s focused on edge cases, such as arrays with all negative elements or variants allowing empty subarrays (with sum 0). Kadane's original formulation returns the largest single element in all-negative cases, while variants permitting empty subarrays reset the sum to 0 when beneficial, ensuring robustness across diverse inputs; these adaptations appeared in pedagogical discussions and implementations through the early 2000s.[11]

Practical Applications
In Algorithm Design and Education
The maximum subarray problem is a cornerstone of introductory algorithms courses at universities worldwide, serving to introduce students to key paradigms such as brute-force enumeration, divide-and-conquer recursion, and dynamic programming optimization. Instructors use it to highlight efficiency trade-offs: the quadratic brute-force method exhaustively checks all subarrays, the divide-and-conquer approach achieves O(n log n) time by recursively partitioning the array and combining crossing subarrays, and dynamic programming yields linear time by building on partial solutions. This progression helps learners grasp how problem decomposition and reuse of computations reduce the work from quadratic (or worse) to linear time.

A prominent example appears in the textbook Introduction to Algorithms (commonly known as CLRS) by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, whose Chapter 4 dedicates Section 4.1 to the problem as an entry point into divide-and-conquer techniques. The text includes exercises that extend the core problem, such as devising a non-recursive linear-time algorithm or writing pseudocode for the brute-force variant, encouraging students to explore edge cases like all-negative arrays and to implement optimizations themselves. These homework problems reinforce understanding of recursive design while preparing learners for real implementation challenges.

The problem's pedagogical strength also stems from its illustration of foundational concepts like prefix sums, which track cumulative totals to avoid redundant calculations, and the interplay between local optima (maximum sums ending at each position) and the global optimum (the overall maximum). By contrasting suboptimal local choices, such as retaining negative prefixes, with strategies that maintain the best sum ending at each position, it teaches the nuances of greedy-like decisions within a dynamic programming framework.

Beyond academia, the maximum subarray problem has been a standard coding interview question at major technology firms, evaluating candidates' skills in recognizing inefficiencies in naive solutions and deriving optimal algorithms under time constraints. Interviewers often start with the brute-force approach to assess basic understanding before probing for improvements like Kadane's algorithm, testing the ability to balance correctness with performance. This practice underscores its role in professional algorithm design training, where it exemplifies the transition from theoretical knowledge to practical optimization.

In Real-World Data Processing
In financial time series analysis, the maximum subarray problem facilitates the identification of contiguous periods in stock price data that maximize cumulative returns, representing the most profitable sequences for holding investments. By treating daily price changes as array elements, the problem models the selection of optimal buy and sell points, where the subarray sum corresponds to the net gain over that interval. This application is particularly useful for retrospective performance evaluation and strategy optimization in volatile markets.[12]

In bioinformatics, the maximum subarray problem extends to analyzing gene expression matrices, where it identifies subsets of genes exhibiting high expression levels across specific sample groups, aiding in biomarker discovery and subtype classification. For instance, in breast cancer datasets, algorithms solving the max-sum submatrix variant (building on one-dimensional subarray techniques) pinpoint non-contiguous but maximally expressive gene-sample combinations, outperforming traditional biclustering by handling outliers and heterogeneity more robustly. Experiments on real datasets with thousands of genes demonstrate that constraint programming approaches achieve near-optimal solutions in seconds, enabling efficient mining of biologically relevant patterns like enriched Gene Ontology terms.[13][14]

In signal processing, the maximum subarray problem detects bursts of maximum energy or activity in sequential data from audio, sensors, or RF signals, which is crucial for tasks like anomaly detection and event localization. Kadane's algorithm is applied to arrays of signal metrics, such as reconstruction errors in spoken term detection, to isolate contiguous segments with the highest summed intensity, thereby hypothesizing query occurrences efficiently. This linear-time method supports real-time processing in resource-constrained environments.[15]

For machine learning preprocessing, particularly of sequential data from IoT streams, the maximum subarray problem enables feature engineering by selecting contiguous segments that capture peak patterns or anomalies, enhancing model inputs for time-series prediction. In object detection pipelines, it accelerates subwindow searches over feature maps to identify discriminative regions, reducing computational overhead while maintaining accuracy in high-dimensional data.[16]

Solution Algorithms
Brute-Force Method
The brute-force method solves the maximum subarray problem by systematically evaluating the sum of every possible contiguous subarray in the input array of length n. This is accomplished through two nested loops: the outer loop iterates over each possible starting index i from 1 to n, and the inner loop iterates over each possible ending index j from i to n, accumulating the sum of A[i..j] and updating the global maximum if the current sum exceeds it. The approach ensures completeness by considering all subarrays, but its time complexity is O(n²) because of the quadratic number of sums examined.[17] The following pseudocode implements this method, returning the maximum subarray sum (adapted from the standard formulation in algorithmic literature):

BRUTE-FORCE-MAXIMUM-SUBARRAY-SUM(A)
    n ← length[A]
    max_sum ← -∞
    for i ← 1 to n
        current_sum ← 0
        for j ← i to n
            current_sum ← current_sum + A[j]
            if current_sum > max_sum
                max_sum ← current_sum
    return max_sum
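As a runnable counterpart, a minimal Python version of the same scan might look as follows; the function name and 0-based indexing are illustrative choices rather than part of the cited formulation:

def brute_force_max_subarray_sum(A):
    # Evaluate every contiguous subarray A[i..j] and keep the best sum.
    best = float("-inf")
    for i in range(len(A)):
        running = 0
        for j in range(i, len(A)):
            running += A[j]              # extends the sum of A[i..j-1] by A[j]
            if running > best:
                best = running
    return best

# Example: brute_force_max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5]) returns 6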
Applied to the eight-element array [-2, 1, -3, 4, -1, 2, 1, -5] with 1-based indexing, the nested loops produce the subarray sums below; the maximum of 6 occurs for i = 4, j = 7, corresponding to the subarray [4, -1, 2, 1].
| Starting Index (i) | Ending Index (j) | Subarray Sum |
|---|---|---|
| 1 | 1 | -2 |
| 1 | 2 | -1 |
| 1 | 3 | -4 |
| 1 | 4 | 0 |
| 1 | 5 | -1 |
| 1 | 6 | 1 |
| 1 | 7 | 2 |
| 1 | 8 | -3 |
| 2 | 2 | 1 |
| 2 | 3 | -2 |
| 2 | 4 | 2 |
| 2 | 5 | 1 |
| 2 | 6 | 3 |
| 2 | 7 | 4 |
| 2 | 8 | -1 |
| 3 | 3 | -3 |
| 3 | 4 | 1 |
| 3 | 5 | 0 |
| 3 | 6 | 2 |
| 3 | 7 | 3 |
| 3 | 8 | -2 |
| 4 | 4 | 4 |
| 4 | 5 | 3 |
| 4 | 6 | 5 |
| 4 | 7 | 6 |
| 4 | 8 | 1 |
| 5 | 5 | -1 |
| 5 | 6 | 1 |
| 5 | 7 | 2 |
| 5 | 8 | -3 |
| 6 | 6 | 2 |
| 6 | 7 | 3 |
| 6 | 8 | -2 |
| 7 | 7 | 1 |
| 7 | 8 | -4 |
| 8 | 8 | -5 |
Divide-and-Conquer Approach
The divide-and-conquer approach solves the maximum subarray problem by recursively dividing the array into two halves, solving the subproblems independently, and then combining the results by considering subarrays that cross the midpoint. This strategy achieves O(n log n) time complexity, improving over the O(n²) brute-force method by exploiting the divide-and-combine structure to avoid exhaustive enumeration.[6]

The algorithm proceeds in three main phases: divide, conquer, and combine. In the divide phase, the array from indices low to high is split at the midpoint mid = floor((low + high) / 2). The conquer phase recursively computes the maximum subarray sum in the left half (low to mid) and the right half (mid + 1 to high). For the combine phase, it calculates the maximum crossing subarray sum by finding the maximum suffix sum of the left half (starting from mid and moving leftward, accumulating sums and tracking the maximum) and the maximum prefix sum of the right half (starting from mid + 1 and moving rightward), then adding these two values to get the crossing sum. The overall maximum is the largest among the left maximum, right maximum, and crossing maximum. The base case occurs when low == high, returning the single array element.[6][17] Here is the pseudocode for the algorithm, adapted from the original formulation:

function maxSubarraySum(low, high):
    if low == high:
        return array[low]
    mid = floor((low + high) / 2)
    // Maximum subarray sum in left half
    leftMax = maxSubarraySum(low, mid)
    // Maximum subarray sum in right half
    rightMax = maxSubarraySum(mid + 1, high)
    // Maximum crossing sum
    leftSuffix = 0
    maxLeftSuffix = -∞
    for i from mid downto low:
        leftSuffix += array[i]
        maxLeftSuffix = max(maxLeftSuffix, leftSuffix)
    rightPrefix = 0
    maxRightPrefix = -∞
    for i from mid + 1 to high:
        rightPrefix += array[i]
        maxRightPrefix = max(maxRightPrefix, rightPrefix)
    crossingMax = maxLeftSuffix + maxRightPrefix
    return max(leftMax, rightMax, crossingMax)
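For comparison with the pseudocode, a runnable Python sketch of the same recursion is shown below; the function name, default arguments, and 0-based indexing are illustrative assumptions.

def max_subarray_dc(A, low=None, high=None):
    # Divide-and-conquer maximum subarray sum for a non-empty array.
    if low is None:
        low, high = 0, len(A) - 1
    if low == high:                       # base case: single element
        return A[low]
    mid = (low + high) // 2
    left_max = max_subarray_dc(A, low, mid)
    right_max = max_subarray_dc(A, mid + 1, high)
    # Best suffix of the left half ending at mid.
    best_left, running = float("-inf"), 0
    for i in range(mid, low - 1, -1):
        running += A[i]
        best_left = max(best_left, running)
    # Best prefix of the right half starting at mid + 1.
    best_right, running = float("-inf"), 0
    for i in range(mid + 1, high + 1):
        running += A[i]
        best_right = max(best_right, running)
    return max(left_max, right_max, best_left + best_right)

# Example: max_subarray_dc([3, -7, 4, 2, -1]) returns 6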
As a worked example, consider the five-element array [3, -7, 4, 2, -1] with 1-based indices; the top-level call covers indices 1-5 and splits at mid = 3:
- Recurse on left (1-3): mid = 2.
  - Left half (1-2): mid = 1.
    - Base (1-1): 3
    - Base (2-2): -7
    - Crossing (1-2): leftSuffix from 1: 3 (max=3); rightPrefix from 2: -7 (max=-7); crossing = 3 + (-7) = -4
    - Max for 1-2: max(3, -7, -4) = 3
  - Right half of the left recursion, base (3-3): 4
  - Crossing (1-3): leftSuffix from 2 to 1: -7 (max=-7), then -7 + 3 = -4 (max=-4); rightPrefix from 3: 4 (max=4); crossing = -4 + 4 = 0
  - Max for 1-3: max(3, 4, 0) = 4
- Recurse on right (4-5): mid = 4.
  - Base (4-4): 2
  - Base (5-5): -1
  - Crossing (4-5): leftSuffix from 4: 2 (max=2); rightPrefix from 5: -1 (max=-1); crossing = 2 + (-1) = 1
  - Max for 4-5: max(2, -1, 1) = 2
- Top crossing (1-5): leftSuffix from 3 to 1: 4 (max=4), then 4 + (-7)=-3 (max=4), -3 + 3=0 (max=4); rightPrefix from 4 to 5: 2 (max=2), 2 + (-1)=1 (max=2); crossing=4 + 2 = 6
- Overall max: max(4, 2, 6)=6 (subarray from index 3 to 4: [4, 2])
Kadane's Dynamic Programming Algorithm
Kadane's dynamic programming algorithm offers a linear-time solution to the maximum subarray problem by scanning the input array once and maintaining running maxima. Attributed to statistician Jay Kadane and first detailed in Jon Bentley's 1984 article on algorithm design techniques, it leverages dynamic programming principles to compute the optimal subarray sum without explicit table storage.[6] The core idea involves tracking the maximum subarray sum ending at the current position, the "best ending here", while simultaneously updating the overall maximum sum discovered. For each element, this current maximum is set to the larger of the element itself (starting a new subarray) or the sum of the previous current maximum plus the element (extending the subarray). If the previous sum is negative, extending it would only decrease the total, so resetting to the current element alone is preferable. This process ensures that at every step the algorithm implicitly considers all possible subarrays ending at the current index, capturing the global optimum through iterative updates.[6] The standard pseudocode for non-empty subarrays initializes both the current and global maxima to the first array element and proceeds as follows:

max_current = A[0]
max_global = A[0]
for i = 1 to n-1:
    max_current = max(A[i], max_current + A[i])
    max_global = max(max_global, max_current)
return max_global
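A direct Python translation of this scan (illustrative function name, 0-based indexing) could look like this:

def kadane_max_subarray_sum(A):
    # Linear-time scan for a non-empty input array.
    max_current = max_global = A[0]
    for x in A[1:]:
        # Either extend the running subarray or start fresh at x.
        max_current = max(x, max_current + x)
        max_global = max(max_global, max_current)
    return max_global

# Example: kadane_max_subarray_sum([3, -7, 4, 2, -1]) returns 6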
Tracing the algorithm on the array [3, -7, 4, 2, -1] (0-based indices) proceeds as follows:
- Initialize: max_current = 3, max_global = 3 (subarray [3]).
- At i=1 (-7): max_current = max(-7, 3 + (-7)) = max(-7, -4) = -4; max_global remains 3 (since -4 < 3).
- At i=2 (4): max_current = max(4, -4 + 4) = max(4, 0) = 4; max_global = 4 (4 > 3).
- At i=3 (2): max_current = max(2, 4 + 2) = max(2, 6) = 6; max_global = 6 (6 > 4).
- At i=4 (-1): max_current = max(-1, 6 + (-1)) = max(-1, 5) = 5; max_global remains 6 (5 < 6).
Algorithm Analysis
Time and Space Complexity
The brute-force method for the maximum subarray problem enumerates all possible contiguous subarrays by iterating over every pair of start and end indices, computing each subarray sum in constant time by extending a running sum (or, equivalently, from precomputed prefix sums), resulting in O(n²) time complexity from the two nested loops, each running O(n) iterations. This approach requires O(1) additional space, as it only stores a few variables for the current sum and maximum. The divide-and-conquer algorithm recursively splits the array into two halves, solves the subproblems, and computes the maximum crossing subarray in O(n) time by finding the maximum suffix sum of the left half and the maximum prefix sum of the right half. The recurrence relation is T(n) = 2T(n/2) + Θ(n), which, by the master theorem (case 2, where the combine work is spread evenly across the O(log n) levels of recursion), yields O(n log n) time complexity. Space usage is O(log n) from the recursion depth.[20] Kadane's algorithm processes the array in a single pass, tracking the maximum sum ending at the current position and updating the global maximum, achieving O(n) time complexity through linear traversal and O(1) space by using only constant variables without auxiliary storage.[9] Any correct algorithm for the maximum subarray problem has an asymptotic lower bound of Ω(n) time complexity, as it must read and consider all input elements to identify the optimal subarray.

| Algorithm | Time Complexity | Space Complexity | Operations for n=10³ | Operations for n=10⁴ | Operations for n=10⁵ | Operations for n=10⁶ |
|---|---|---|---|---|---|---|
| Brute-Force | O(n²) | O(1) | ~10⁶ | ~10⁸ | ~10¹⁰ | ~10¹² |
| Divide-and-Conquer | O(n log n) | O(log n) | ~10⁴ | ~1.3×10⁵ | ~1.7×10⁶ | ~2×10⁷ |
| Kadane's | O(n) | O(1) | ~10³ | ~10⁴ | ~10⁵ | ~10⁶ |
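For readers who want to see where the O(n log n) bound comes from, the recurrence can be unrolled level by level (a standard textbook derivation, with c an arbitrary constant bounding the linear combine step):

T(n) = 2 T(n/2) + c n
     = 4 T(n/4) + 2 c n
     = 8 T(n/8) + 3 c n
     = ...
     = 2^k T(n/2^k) + k c n

Taking k = log₂ n gives T(n) = n T(1) + c n log₂ n = Θ(n log n).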
Handling Edge Cases
The maximum subarray problem requires careful handling when all elements in the input array are negative, as the optimal solution is then the single largest (least negative) element rather than any multi-element subarray. In Kadane's dynamic programming algorithm, this is addressed by initializing both the current subarray sum and the global maximum sum to the first element of the array, ensuring that the algorithm selects the maximum individual element without erroneously resetting to zero during updates. This modification guarantees correctness for all-negative inputs, where the maximum sum equals the maximum element value.[9] If the problem variant allows empty subarrays with a sum of zero, the algorithm must further adapt to return 0 when all possible non-empty subarray sums are negative, prioritizing the empty subarray as the maximum. A common implementation achieves this by modifying the update rule for the current subarray sum to take the maximum of 0 and the accumulated sum plus the current element, effectively discarding negative prefixes and allowing the empty case implicitly. For example, in pseudocode:

max_so_far = 0
max_ending_here = 0
for each element x in the array:
    max_ending_here = max(0, max_ending_here + x)
    max_so_far = max(max_so_far, max_ending_here)
return max_so_far
A further refinement reports the starting and ending indices of the optimal subarray along with its sum. The following pseudocode for the non-empty variant tracks a tentative start index and commits it whenever a new global maximum is found:
if the array is empty:
    return 0, -1, -1  // or handle as per problem specification
max_so_far = array[0]
max_ending_here = array[0]
start = 0
end = 0
temp_start = 0
for i = 1 to array.length - 1:
    if max_ending_here + array[i] > array[i]:
        max_ending_here += array[i]
    else:
        max_ending_here = array[i]
        temp_start = i
    if max_ending_here > max_so_far:
        max_so_far = max_ending_here
        start = temp_start
        end = i
return max_so_far, start, end
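As a consolidated sketch (an assumed combination of the two pseudocode variants above, not taken from the cited sources), the following Python function supports both conventions via an allow_empty flag and also reports the indices:

def kadane_with_indices(A, allow_empty=False):
    # Returns (max_sum, start, end); the pair (-1, -1) marks the empty subarray.
    if not A:
        if allow_empty:
            return 0, -1, -1
        raise ValueError("a non-empty subarray requires a non-empty array")
    max_so_far, start, end = A[0], 0, 0
    max_ending_here, temp_start = A[0], 0
    for i in range(1, len(A)):
        if max_ending_here + A[i] > A[i]:
            max_ending_here += A[i]        # extend the current subarray
        else:
            max_ending_here = A[i]         # start a new subarray at i
            temp_start = i
        if max_ending_here > max_so_far:
            max_so_far, start, end = max_ending_here, temp_start, i
    if allow_empty and max_so_far < 0:
        return 0, -1, -1                   # all sums negative: prefer the empty subarray
    return max_so_far, start, end

# Example: kadane_with_indices([-2, 1, -3, 4, -1, 2, 1, -5, 4]) returns (6, 3, 6)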
Extensions and Variants
Circular Arrays
The circular maximum subarray problem extends the standard formulation to arrays where subarrays may wrap around from the end to the beginning, allowing contiguous segments that span the array's boundary. In this variant, the array is treated as circular, meaning the last element connects to the first, but each element can be included at most once in the subarray. This arises in scenarios where data is naturally cyclic, such as ring buffers or periodic signals.[21] The solution leverages Kadane's algorithm applied twice: once for the standard (non-wrapping) maximum subarray sum, and once to find the minimum subarray sum, which is used to compute the wrapping case as the total array sum minus the minimum subarray sum. The overall maximum is then the larger of these two values. To find the minimum subarray sum efficiently, Kadane's algorithm can be adapted by tracking the minimum cumulative sum during a single pass, or equivalently by negating all array elements and applying the standard Kadane's algorithm (which yields the negative of the minimum). An edge case occurs when all elements are non-positive: the maximum is then the largest single element, and when the minimum subarray sum equals the total sum the wrapping case is ignored, since it would correspond to an empty subarray. This approach maintains O(n) time complexity.[22] Here is pseudocode for the algorithm:

function maxCircularSubarray(arr):
    if arr is empty:
        return 0
    n = length(arr)
    totalSum = sum(arr)
    # Standard Kadane for the maximum (non-wrapping) subarray sum
    maxSum = kadane(arr)
    # Minimum subarray sum using a modified Kadane scan
    minSum = kadane_min(arr)  # or track the minimum in the same pass as the maximum
    # Wrapping maximum
    if minSum == totalSum:
        return maxSum
    wrapMax = totalSum - minSum
    return max(maxSum, wrapMax)

function kadane(arr):
    maxCurrent = maxGlobal = arr[0]
    for i = 1 to n-1:
        maxCurrent = max(arr[i], maxCurrent + arr[i])
        if maxCurrent > maxGlobal:
            maxGlobal = maxCurrent
    return maxGlobal

function kadane_min(arr):
    # Minimum-sum variant of Kadane's scan (equivalently, run kadane on the negated array)
    minCurrent = minGlobal = arr[0]
    for i = 1 to n-1:
        minCurrent = min(arr[i], minCurrent + arr[i])
        if minCurrent < minGlobal:
            minGlobal = minCurrent
    return minGlobal
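A compact runnable counterpart in Python, computing both scans in a single loop, might look like the following; the function name and single-pass min/max tracking are illustrative choices:

def max_circular_subarray_sum(A):
    # Larger of the best non-wrapping sum and the best wrapping sum.
    if not A:
        return 0
    max_sum = cur_max = A[0]          # standard Kadane
    min_sum = cur_min = A[0]          # minimum-subarray Kadane
    for x in A[1:]:
        cur_max = max(x, cur_max + x)
        max_sum = max(max_sum, cur_max)
        cur_min = min(x, cur_min + x)
        min_sum = min(min_sum, cur_min)
    total = sum(A)
    if min_sum == total:              # all elements non-positive: no valid wrap
        return max_sum
    return max(max_sum, total - min_sum)

# Example: max_circular_subarray_sum([5, -3, 5]) returns 10 (wrapping subarray [5, 5])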
Multidimensional Arrays
The multidimensional extension of the maximum subarray problem to two dimensions, known as the maximum subrectangle (or maximum-sum submatrix) problem, requires identifying a contiguous submatrix within an m × n matrix of real numbers that maximizes the sum of its elements. This generalization arises naturally in applications such as image processing, where it helps detect regions of maximum intensity or pattern significance in pixel arrays. The problem was first systematically explored as a two-dimensional counterpart to the one-dimensional case. An efficient algorithm for this problem runs in O(m·n²) time (or O(n³) for square matrices), extending Kadane's dynamic programming approach from one dimension. The method iterates over all pairs of left and right column indices (from 1 to n), maintains a temporary one-dimensional array of size m holding, for each row, the sum of the elements between the fixed columns, and then applies Kadane's algorithm to this temporary array to find the maximum contiguous sum, which corresponds to the optimal top and bottom rows for that column range. The global maximum across all column pairs yields the desired subrectangle sum. This nested structure leverages the linear-time efficiency of Kadane's algorithm while bounding the outer loops by O(n²) column pairs. The following pseudocode outlines the core procedure, assuming an m × n matrix A and 1-based indexing:

max_sum = -∞
for l = 1 to n do
    initialize temp[1..m] = 0
    for r = l to n do
        for i = 1 to m do
            temp[i] = temp[i] + A[i][r]
        end for
        current_max = Kadane(temp)  // Apply 1D Kadane's algorithm to the column-sum array temp
        if current_max > max_sum then
            max_sum = current_max
        end if
    end for
end for
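A runnable Python sketch of this procedure is given below; the function name and 0-based indexing are illustrative, and the 1D Kadane scan is inlined to keep the example self-contained:

def max_subrectangle_sum(A):
    # Maximum-sum submatrix via column-pair enumeration plus 1D Kadane.
    # A is a non-empty list of m rows, each containing n numbers.
    m, n = len(A), len(A[0])
    best = float("-inf")
    for left in range(n):
        temp = [0] * m                     # per-row sums over columns left..right
        for right in range(left, n):
            for i in range(m):
                temp[i] += A[i][right]
            # 1D Kadane over temp selects the best top/bottom rows for this column pair.
            cur = run = temp[0]
            for x in temp[1:]:
                run = max(x, run + x)
                cur = max(cur, run)
            best = max(best, cur)
    return best

# Example: max_subrectangle_sum([[1, -2], [-3, 4]]) returns 4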
References
- Bentley, Jon (1984). "Programming pearls: Algorithm design techniques". Communications of the ACM 27 (9): 865–873.