Maximum cut

In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between S and T is as large as possible. Finding such a cut is known as the max-cut problem.
The problem can be stated simply as follows. One wants a subset S of the vertex set such that the number of edges between S and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible.
There is a more general version of the problem called weighted max-cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between S and its complement rather than the number of edges. The weighted max-cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign of all weights.
Lower bounds
Edwards obtained the following two lower bounds for maximum cuts on a graph G with n vertices and m edges:[1]
- For arbitrary graphs, the maximum cut is at least $\frac{m}{2} + \frac{\sqrt{8m+1}-1}{8}$
- For connected graphs, it is at least $\frac{m}{2} + \frac{n-1}{4}$
The bound for connected graphs is often called the Edwards–Erdős bound,[2] as it was conjectured by Erdős. Edwards proved it using the probabilistic method; Crowston et al. proved the bound using linear algebra and an analysis of pseudo-Boolean functions.[3]
The Edwards–Erdős bound extends to the Balanced Subgraph Problem (BSP)[3] on signed graphs G = (V, E, s), i.e. graphs in which each edge is assigned + or –. For a partition of V into subsets U and W, an edge xy is balanced if either s(xy) = + and x and y are in the same subset, or s(xy) = – and x and y are in different subsets. BSP aims at finding a partition with the maximum number b(G) of balanced edges in G. The Edwards–Erdős bound gives a lower bound on b(G) for every connected signed graph G. Edwards's bound for arbitrary graphs was improved for special classes of graphs: triangle-free graphs, graphs of given maximum degree, H-free graphs, etc.[4]
Poljak and Turzík[5] extended the Edwards–Erdős bound to weighted maximum cuts: the weight of a maximum cut is at least $\frac{w(G)}{2} + \frac{w(T_{\min})}{4}$, where w(G) and w(T_min) are the weights of G and its minimum weight spanning tree T_min. Gutin and Yeo obtained a number of lower bounds for weighted Max-Cut extending the Poljak–Turzík bound for arbitrary weighted graphs and bounds for special classes of weighted graphs.[6]
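The Poljak–Turzík bound can be evaluated directly on a given weighted graph. The following Python sketch is illustrative only; it assumes the networkx library is available and uses a small example graph that is not taken from the article.

```python
import networkx as nx

def poljak_turzik_lower_bound(G, weight="weight"):
    """Lower bound w(G)/2 + w(T_min)/4 on the weight of a maximum cut
    of a connected weighted graph G (Poljak-Turzik)."""
    total_weight = G.size(weight=weight)                     # w(G)
    t_min = nx.minimum_spanning_tree(G, weight=weight)       # minimum-weight spanning tree
    return total_weight / 2 + t_min.size(weight=weight) / 4  # w(G)/2 + w(T_min)/4

# Small example graph (illustrative only).
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 2.0), (2, 3, 1.0), (3, 1, 1.0), (3, 4, 3.0)])
print(poljak_turzik_lower_bound(G))   # 4.75; every cut-maximizing partition weighs at least this much
```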
Computational complexity
The following decision problem related to maximum cuts has been studied widely in theoretical computer science:
Given a graph G and an integer k, determine whether there is a cut of size at least k in G.
This problem is known to be NP-complete. It is easy to see that the problem is in NP: a yes answer is easy to prove by presenting a large enough cut. The NP-completeness of the problem can be shown, for example, by a reduction from maximum 2-satisfiability (a restriction of the maximum satisfiability problem).[7] The weighted version of the decision problem was one of Karp's 21 NP-complete problems;[8] Karp showed the NP-completeness by a reduction from the partition problem.
The canonical optimization variant of the above decision problem is usually known as the Maximum-Cut Problem or Max-Cut and is defined as:
Given a graph G, find a maximum cut.
The optimization variant is known to be NP-hard. The opposite problem, that of finding a minimum cut, is known to be efficiently solvable via the Ford–Fulkerson algorithm.
Algorithms
Polynomial-time algorithms
As the maximum cut problem is NP-hard, no polynomial-time algorithms for Max-Cut in general graphs are known.
However, in planar graphs, the maximum cut problem is dual to the route inspection problem (the problem of finding a shortest tour that visits each edge of a graph at least once), in the sense that the edges that do not belong to a maximum cut-set of a graph G are the duals of the edges that are doubled in an optimal inspection tour of the dual graph of G. The optimal inspection tour forms a self-intersecting curve that separates the plane into two subsets, the subset of points for which the winding number of the curve is even and the subset for which the winding number is odd; these two subsets form a cut that includes all of the edges whose duals appear an odd number of times in the tour. The route inspection problem may be solved in polynomial time, and this duality allows the maximum cut problem to also be solved in polynomial time for planar graphs.[9] The Maximum-Bisection problem is known however to be NP-hard.[10]
More generally, whenever maximum cuts can be found in polynomial time for certain classes of graphs, the algorithms for this problem can be extended to the 2- and 3-clique-sums of graphs in these classes. This allows the planar graph algorithm to be extended to certain broader families of graphs closed under graph minors and having the structure of clique-sums of planar graphs and graphs of bounded size.[11] A minor-closed family of graphs has this clique-sum structure exactly when its forbidden minors include a graph with crossing number at most one.[12]
Approximation algorithms
The Max-Cut Problem is APX-hard,[13] meaning that, unless P = NP, there is no polynomial-time approximation scheme (PTAS) that comes arbitrarily close to the optimal solution. Thus, every known polynomial-time approximation algorithm achieves an approximation ratio strictly less than one.
There is a simple randomized 0.5-approximation algorithm: for each vertex flip a coin to decide to which half of the partition to assign it.[14] In expectation, half of the edges are cut edges. This algorithm can be derandomized with the method of conditional probabilities; therefore there is a simple deterministic polynomial-time 0.5-approximation algorithm as well.[15] One such algorithm starts with an arbitrary partition of the vertices of the given graph and repeatedly moves one vertex at a time from one side of the partition to the other, improving the solution at each step, until no more improvements of this type can be made. The number of iterations is at most $m$ because the algorithm improves the cut by at least one edge at each step. When the algorithm terminates, at least half of the edges incident to every vertex belong to the cut, for otherwise moving the vertex would improve the cut. Therefore, the cut includes at least $m/2$ edges.
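A minimal Python sketch of this local-improvement procedure is given below. It is an illustration rather than a reference implementation; the adjacency-list input format and the all-zeros starting partition are arbitrary choices not prescribed above.

```python
def local_search_max_cut(n, edges):
    """Greedy local improvement: move single vertices between sides
    until no move increases the number of cut edges."""
    side = [0] * n                      # arbitrary initial partition
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    improved = True
    while improved:                     # at most m improving flips in total
        improved = False
        for v in range(n):
            cut = sum(1 for u in adj[v] if side[u] != side[v])   # edges from v crossing the cut
            uncut = len(adj[v]) - cut
            if uncut > cut:             # flipping v strictly increases the cut
                side[v] = 1 - side[v]
                improved = True
    return {v for v in range(n) if side[v] == 1}

# Example: 5-cycle; the local optimum cuts at least half of the 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
S = local_search_max_cut(5, edges)
print(S, sum((u in S) != (v in S) for u, v in edges))
```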
The polynomial-time approximation algorithm for Max-Cut with the best known approximation ratio is a method by Goemans and Williamson using semidefinite programming and randomized rounding that achieves an approximation ratio of $\alpha \approx 0.878$, where $\alpha = \min_{0 < \theta \le \pi} \frac{2}{\pi}\,\frac{\theta}{1 - \cos\theta}$.[16]
If the unique games conjecture is true, this is the best possible approximation ratio for maximum cut.[17] Without such unproven assumptions, it has been proven to be NP-hard to approximate the max-cut value with an approximation ratio better than $\tfrac{16}{17} \approx 0.941$.[18]
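The Goemans–Williamson method can be prototyped in a few lines once a semidefinite programming solver is available. The sketch below is illustrative only: it assumes the numpy and cvxpy packages (with any SDP-capable backend) and an unweighted edge list, none of which is prescribed by the article.

```python
import numpy as np
import cvxpy as cp  # an installed SDP-capable solver is assumed

def goemans_williamson(n, edges, seed=0):
    """Sketch: solve the max-cut SDP relaxation, then round the vector
    solution with a random hyperplane (expected ratio about 0.878)."""
    W = np.zeros((n, n))
    for u, v in edges:
        W[u, v] = W[v, u] = 1.0

    # Relaxation: maximize (1/4) * sum_{ij} W_ij (1 - X_ij)  s.t.  X PSD, X_ii = 1.
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4),
                      [cp.diag(X) == 1])
    prob.solve()

    # Factor X = V V^T; row i of V is the unit vector assigned to vertex i.
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs * np.sqrt(np.clip(vals, 0, None))

    # Random hyperplane rounding: the sign of <v_i, r> picks vertex i's side.
    r = np.random.default_rng(seed).standard_normal(n)
    side = V @ r >= 0
    cut_size = sum(side[u] != side[v] for u, v in edges)
    return side, cut_size

# Example on a 5-cycle (its maximum cut is 4).
print(goemans_williamson(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])[1])
```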
Dunning et al. provide an extended analysis of 10 heuristics for this problem, including an open-source implementation.[19]
Parameterized algorithms and kernelization
While it is trivial to prove that the problem of finding a cut of size at least (the parameter) k is fixed-parameter tractable (FPT), it is much harder to show fixed-parameter tractability for the problem of deciding whether a graph G has a cut of size at least the Edwards–Erdős lower bound (see Lower bounds above) plus (the parameter) k. Crowston et al. proved that this problem is fixed-parameter tractable and admits a polynomial-size kernel. They also extended the fixed-parameter tractability result to the Balanced Subgraph Problem (BSP, see Lower bounds above) and improved the kernel size, with the improvement holding also for BSP.[20] Etscheid and Mnich improved the fixed-parameter tractability result for BSP to a linear-time algorithm (for fixed k) and the kernel-size result to a kernel with a linear number of vertices.[21]
Weighted maximum cuts can be found in polynomial time in graphs of bounded treewidth. That is, when parameterized by treewidth rather than by the cut size, the weighted maximum cut problem is fixed-parameter tractable. It remains fixed-parameter tractable for sm-width, another graph width parameter intermediate between treewidth and clique-width. However, under standard assumptions in parameterized complexity, it is not fixed-parameter tractable for clique-width.[22]
Applications
Machine learning
Treating its nodes as features and its edges as distances, the max cut algorithm divides a graph into two well-separated subsets. In other words, it can be naturally applied to perform binary classification. Compared to more common classification algorithms, it does not require a feature space, only the distances between the elements within it.[23]
Theoretical physics
In statistical physics and disordered systems, the Max Cut problem is equivalent to minimizing the Hamiltonian of a spin glass model, most simply the Ising model.[24] For the Ising model on a graph G and only nearest-neighbor interactions, the Hamiltonian is
$H[\sigma] = -\sum_{ij \in E(G)} J_{ij}\,\sigma_i \sigma_j.$
Here each vertex i of the graph is a spin site that can take a spin value $\sigma_i = \pm 1$. A spin configuration partitions $V(G)$ into two sets, those with spin up $V^+$ and those with spin down $V^-$. We denote with $\delta(V^+)$ the set of edges that connect the two sets. We can then rewrite the Hamiltonian as
$H[\sigma] = -\sum_{ij \in E(V^+)} J_{ij} - \sum_{ij \in E(V^-)} J_{ij} + \sum_{ij \in \delta(V^+)} J_{ij} = -\sum_{ij \in E(G)} J_{ij} + 2\sum_{ij \in \delta(V^+)} J_{ij}.$
Minimizing this energy is equivalent to the min-cut problem or, by setting the graph weights as $w_{ij} = -J_{ij}$, the max-cut problem.[24]
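The correspondence can be checked numerically. The short Python sketch below is illustrative, using an arbitrary small coupling matrix of my own choosing; it confirms by brute force that minimizing the Ising energy coincides with maximizing the weighted cut for weights $w_{ij} = -J_{ij}$.

```python
import itertools
import numpy as np

# Arbitrary small symmetric coupling matrix J on 4 spin sites (illustrative values).
J = np.array([[0, 1, -2, 0],
              [1, 0, 3, 1],
              [-2, 3, 0, -1],
              [0, 1, -1, 0]], dtype=float)
n = J.shape[0]
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if J[i, j] != 0]

def ising_energy(sigma):
    # H[sigma] = -sum_{ij in E} J_ij * sigma_i * sigma_j
    return -sum(J[i, j] * sigma[i] * sigma[j] for i, j in edges)

def cut_weight(sigma, w):
    # total weight of edges whose endpoints receive opposite spins
    return sum(w[i, j] for i, j in edges if sigma[i] != sigma[j])

best_energy = min(ising_energy(s) for s in itertools.product([-1, 1], repeat=n))
best_cut = max(cut_weight(s, -J) for s in itertools.product([-1, 1], repeat=n))

# H = -sum_E J_ij + 2*sum_cut J_ij, so minimizing H maximizes the cut with weights -J.
total = sum(J[i, j] for i, j in edges)
assert np.isclose(best_energy, -total - 2 * best_cut)
print(best_energy, best_cut)
```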
Circuit design
The max cut problem has applications in VLSI design.[24]
See also
- Minimum cut
- Minimum k-cut
- Odd cycle transversal, equivalent to asking for the largest bipartite induced subgraph
- Unfriendly partition, a related concept for infinite graphs
Notes
- ^ Edwards (1973, 1975).
- ^ Bylka, Idzik & Tuza (1999).
- ^ a b Crowston et al. (2014).
- ^ Alon, Krivelevich & Sudakov (2005); Scott (2005); Zeng & Hou (2017).
- ^ Poljak & Turzik (1986).
- ^ Gutin & Yeo (2021).
- ^ Garey & Johnson (1979).
- ^ Karp (1972).
- ^ Hadlock (1975).
- ^ Jansen et al. (2005).
- ^ Grötschel, Jünger & Reinelt (1987).
- ^ Robertson & Seymour (1993).
- ^ Papadimitriou & Yannakakis (1991) prove MaxSNP-completeness.
- ^ Mitzenmacher & Upfal (2005, Sect. 6.2); Motwani & Raghavan (1995, Sect. 5.1).
- ^ Mitzenmacher & Upfal (2005, Sect. 6.3); Khuller, Raghavachari & Young (2007).
- ^ Gaur & Krishnamurti (2007); Ausiello et al. (2003).
- ^ Khot et al. (2007).
- ^ Håstad (2001); Trevisan et al. (2000).
- ^ Dunning, Gupta & Silberholz (2018).
- ^ Crowston, Jones & Mnich (2015).
- ^ Etscheid & Mnich (2018).
- ^ Sæther & Telle (2016).
- ^ Boykov & Jolly (2001).
- ^ a b c Barahona et al. (1988).
References
- Alon, N.; Krivelevich, M.; Sudakov, B. (2005), "Maxcut in H-free graphs", Combin. Probab. Comput., 14: 629–647, doi:10.1017/S0963548305007017, S2CID 123485000.
- Ausiello, Giorgio; Crescenzi, Pierluigi; Gambosi, Giorgio; Kann, Viggo; Marchetti-Spaccamela, Alberto; Protasi, Marco (2003), Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties, Springer. Maximum cut (optimisation version) is problem ND14 in Appendix B (page 399).
- Barahona, Francisco; Grötschel, Martin; Jünger, Michael; Reinelt, Gerhard (1988), "An application of combinatorial optimization to statistical physics and circuit layout design", Operations Research, 36 (3): 493–513, doi:10.1287/opre.36.3.493, ISSN 0030-364X, JSTOR 170992.
- Boykov, Y.Y.; Jolly, M.-P. (2001), "Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images", Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, vol. 1, IEEE Comput. Soc, pp. 105–112, doi:10.1109/iccv.2001.937505, ISBN 0-7695-1143-0, S2CID 2245438.
- Bylka, S.; Idzik, A.; Tuza, I. (1999), "Maximum cuts: Improvements and local algorithmic analogues of the Edwards–Erdős inequality", Discrete Math., 194 (1–3): 39–58, doi:10.1016/S0012-365X(98)00115-0.
- Crowston, R.; Fellows, M.; Gutin, G.; Jones, M.; Kim, E. J.; Rosamond, F.; Ruzsa, I. Z.; Thomassé, S.; Yeo, A. (2014), "Satisfying more than half of a system of linear equations over GF(2): A multivariate approach", J. Comput. Syst. Sci., 80 (4): 687–696, doi:10.1016/j.jcss.2013.10.002.
- Crowston, R.; Gutin, G.; Jones, M.; Muciaccia, G. (2013), "Maximum balanced subgraph problem parameterized above lower bound", Theor. Comput. Sci., 513: 53–64, arXiv:1212.6848, doi:10.1016/j.tcs.2013.10.026.
- Crowston, R.; Jones, M.; Mnich, M. (2015), "Max-cut parameterized above the Edwards–Erdős bound", Algorithmica, 72 (3): 734–757, doi:10.1007/s00453-014-9870-z, S2CID 14973734.
- Dunning, Iain; Gupta, Swati; Silberholz, John (2018), "What works best when? A systematic evaluation of heuristics for Max-Cut and QUBO", INFORMS Journal on Computing, 30 (3): 608–624, doi:10.1287/ijoc.2017.0798, S2CID 485706.
- Edwards, C. S. (1973), "Some extremal properties of bipartite subgraphs", Can. J. Math., 25 (3): 475–485, doi:10.4153/CJM-1973-048-x, S2CID 121925638.
- Edwards, C. S. (1975), "An improved lower bound for the number of edges in a largest bipartite subgraph", Recent Advances in Graph Theory, pp. 167–181.
- Etscheid, M.; Mnich, M. (2018), "Linear Kernels and Linear-Time Algorithms for Finding Large Cuts", Algorithmica, 80 (9): 2574–2615, doi:10.1007/s00453-017-0388-z, hdl:11420/4693, S2CID 16301072.
- Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, ISBN 978-0-7167-1045-5. Maximum cut (decision version) is problem ND16 in Appendix A2.2. Maximum bipartite subgraph (decision version) is problem GT25 in Appendix A1.2.
- Gaur, Daya Ram; Krishnamurti, Ramesh (2007), "LP rounding and extensions", in Gonzalez, Teofilo F. (ed.), Handbook of Approximation Algorithms and Metaheuristics, Chapman & Hall/CRC.
- Goemans, Michel X.; Williamson, David P. (1995), "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming", Journal of the ACM, 42 (6): 1115–1145, doi:10.1145/227683.227684, S2CID 15794408.
- Grötschel, M.; Jünger, M.; Reinelt, G. (1987), "Calculating exact ground states of spin glasses: a polyhedral approach", Heidelberg colloquium on glassy dynamics (Heidelberg, 1986), Lecture Notes in Phys., vol. 275, Springer, Berlin, pp. 325–353, doi:10.1007/BFb0057526, ISBN 3-540-17777-9, MR 0916885.
- Gutin, G.; Yeo, A. (2021), "Lower Bounds for Maximum Weighted Cut", arXiv:2104.05536 [math.CO].
- Hadlock, F. (1975), "Finding a Maximum Cut of a Planar Graph in Polynomial Time", SIAM J. Comput., 4 (3): 221–225, doi:10.1137/0204019.
- Håstad, Johan (2001), "Some optimal inapproximability results", Journal of the ACM, 48 (4): 798–859, doi:10.1145/502090.502098, S2CID 5120748.
- Jansen, Klaus; Karpinski, Marek; Lingas, Andrzej; Seidel, Eike (2005), "Polynomial Time Approximation Schemes for MAX-BISECTION on Planar and Geometric Graphs", SIAM Journal on Computing, 35 (1): 110–119, CiteSeerX 10.1.1.62.5082, doi:10.1137/s009753970139567x.
- Karp, Richard M. (1972), "Reducibility among combinatorial problems", in Miller, R. E.; Thacher, J. W. (eds.), Complexity of Computer Computation, Plenum Press, pp. 85–103.
- Khot, Subhash; Kindler, Guy; Mossel, Elchanan; O'Donnell, Ryan (2007), "Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?", SIAM Journal on Computing, 37 (1): 319–357, doi:10.1137/S0097539705447372, S2CID 2090495.
- Khuller, Samir; Raghavachari, Balaji; Young, Neal E. (2007), "Greedy methods", in Gonzalez, Teofilo F. (ed.), Handbook of Approximation Algorithms and Metaheuristics, Chapman & Hall/CRC.
- Mitzenmacher, Michael; Upfal, Eli (2005), Probability and Computing: Randomized Algorithms and Probabilistic Analysis, Cambridge.
- Motwani, Rajeev; Raghavan, Prabhakar (1995), Randomized Algorithms, Cambridge.
- Newman, Alantha (2008), "Max cut", in Kao, Ming-Yang (ed.), Encyclopedia of Algorithms, Springer, pp. 489–492, doi:10.1007/978-0-387-30162-4_219, ISBN 978-0-387-30770-1.
- Papadimitriou, Christos H.; Yannakakis, Mihalis (1991), "Optimization, approximation, and complexity classes", Journal of Computer and System Sciences, 43 (3): 425–440, doi:10.1016/0022-0000(91)90023-X.
- Poljak, S.; Turzik, Z. (1986), "A polynomial time heuristic for certain subgraph optimization problems with guaranteed worst case bound", Discrete Math., 58 (1): 99–104, doi:10.1016/0012-365X(86)90192-5.
- Robertson, Neil; Seymour, Paul (1993), "Excluding a graph with one crossing", in Robertson, Neil; Seymour, Paul (eds.), Graph Structure Theory: Proc. AMS–IMS–SIAM Joint Summer Research Conference on Graph Minors, Contemporary Mathematics, vol. 147, American Mathematical Society, pp. 669–675.
- Sæther, Sigve Hortemo; Telle, Jan Arne (2016), "Between treewidth and clique-width", Algorithmica, 75 (1): 218–253, arXiv:1404.7758, doi:10.1007/s00453-015-0033-7, MR 3492064.
- Scott, A. (2005), "Judicious partitions and related problems", Surveys in Combinatorics, London Mathematical Society Lecture Note Series, 327: 95–117.
- Trevisan, Luca; Sorkin, Gregory; Sudan, Madhu; Williamson, David (2000), "Gadgets, Approximation, and Linear Programming", Proceedings of the 37th IEEE Symposium on Foundations of Computer Science: 617–626.
- Zeng, Q.; Hou, J. (2017), "Bipartite Subgraphs of H-free Graphs", Bull. Aust. Math. Soc., 30 (3): 1–13, doi:10.1017/S0004972716001295.
External links
- Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski, Gerhard Woeginger (2000), "Maximum Cut", in "A compendium of NP optimization problems".
- Andrea Casini, Nicola Rebagliati (2012), "A Python library for solving Max Cut"
Definition and Basics
Formal Definition
The maximum cut problem, a fundamental optimization problem in graph theory, involves finding a partition of the vertex set $V$ of an undirected graph $G = (V, E)$ into two disjoint subsets $S$ and $T = V \setminus S$ such that the number of edges crossing between them is maximized. The size of the cut is the number of edges with one endpoint in $S$ and the other in $T$, and the objective in the unweighted case is to maximize this quantity over all possible subsets $S \subseteq V$. Let $n$ denote the number of vertices and $m$ the number of edges; the maximum cut value of $G$ is denoted $\mathrm{mc}(G)$.[1] The decision version of the maximum cut problem asks, given an integer $k$, whether there exists a partition with at least $k$ crossing edges. For example, in the complete graph $K_n$ on $n$ vertices, $\mathrm{mc}(K_n) = \lfloor n/2 \rfloor \cdot \lceil n/2 \rceil$, achieved by partitioning the vertices into two sets of sizes as equal as possible. In contrast, the empty graph with no edges has maximum cut value 0.[1][7] This problem is related to the minimum cut, which seeks to minimize the number of crossing edges and is solvable in polynomial time via maximum flow algorithms.[1]
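The definition translates directly into a brute-force computation for small graphs. The Python sketch below is purely illustrative: it enumerates all vertex subsets and reports the maximum cut size, which is feasible only for a handful of vertices.

```python
from itertools import combinations

def max_cut_brute_force(n, edges):
    """Exact maximum cut of a small graph by enumerating all subsets S of the vertices."""
    best_size, best_S = 0, set()
    for r in range(n + 1):
        for S in combinations(range(n), r):
            S = set(S)
            size = sum((u in S) != (v in S) for u, v in edges)  # edges crossing (S, V \ S)
            if size > best_size:
                best_size, best_S = size, S
    return best_size, best_S

# Complete graph K_4: maximum cut is floor(4/2) * ceil(4/2) = 4.
K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(max_cut_brute_force(4, K4))
```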
Variants and Related Problems
The weighted maximum cut problem extends the unweighted formulation by assigning a non-negative weight $w_e$ to each edge $e \in E$, with the objective to maximize the total weight $\sum_{e \in \delta(S)} w_e$ of the crossing edges over all subsets $S \subseteq V$.[1] This variant arises naturally in applications where edge importance varies, such as network design. For instance, in a complete graph with uniform edge weight $w$, the maximum weighted cut achieves value $w \cdot \lfloor n/2 \rfloor \cdot \lceil n/2 \rceil$, obtained by partitioning the vertices as evenly as possible. The balanced maximum cut variant imposes the additional constraint that the partition sets $S$ and $T$ are of roughly equal size, specifically $||S| - |T|| \le 1$, to ensure equitable division. This modification alters the problem's structure, leading to distinct approximation guarantees; for example, while the standard max-cut admits a 0.878-approximation via semidefinite programming, the balanced version resists similar ratios and remains MAX-SNP-hard even on dense graphs. A further generalization is the $k$-max-cut problem, which partitions $V$ into $k$ subsets to maximize the total weight of edges with endpoints in different subsets.[8] This extends max-cut to multiway partitioning, relevant for clustering into more than two groups, and inherits NP-hardness from the $k = 2$ case. Max-cut also relates to correlation clustering, where the two-cluster case reduces to max-cut on a signed graph by treating positive edges as penalties for separation and negative edges as rewards.[9] The maximum cut problem was first studied by Edwards in 1973, motivated by applications in statistical mechanics.
Mathematical Properties
Bounds
The maximum cut value, denoted $\mathrm{mc}(G)$ for a graph $G$ with $n$ vertices and $m$ edges, admits a trivial upper bound of $m$, as the cut cannot exceed the total number of edges in the graph.[1] A fundamental lower bound, due to Edwards, applies to connected graphs and states that $\mathrm{mc}(G) \ge \frac{m}{2} + \frac{n-1}{4}$.[10] This result, originally established in 1975, provides a constructive guarantee on the excess over the random cut baseline. A proof sketch relies on a probabilistic partitioning: randomly assign vertices to two sets while adjusting for connectivity, yielding an expected cut size that meets the bound; derandomization via conditional expectations confirms its achievability. The bound is tight for certain extremal graphs; for instance, complete graphs on an odd number of vertices attain it with equality. The expectation of a random cut, obtained by assigning each vertex independently to one of two sets with equal probability, is $m/2$.[1] This serves as a baseline for evaluating approximation algorithms, as it matches half the total edges and is achievable in expectation without optimization. Spectral methods yield another upper bound using eigenvalues. For a $d$-regular graph, where $d$ is the degree, $\mathrm{mc}(G) \le \frac{n}{4}(d - \lambda_{\min})$, with $\lambda_{\min}$ the smallest eigenvalue of the adjacency matrix.[11] This bound tightens when $\lambda_{\min}$ is close to zero (indicating limited bipartiteness) and reaches $m$ when $\lambda_{\min} = -d$ (as in bipartite graphs). Equivalent formulations arise from the Laplacian matrix in weighted settings, linking to semidefinite relaxations for tighter estimates. These bounds extend naturally to weighted graphs, where edge weights replace unweighted edges, and the total weight $W$ substitutes for $m$. The trivial upper bound becomes $W$, the random expectation is $W/2$, and a generalization of Edwards' lower bound is $\frac{W}{2} + \frac{w(T_{\min})}{4}$, where $w(T_{\min})$ is the weight of a minimum spanning tree.[10] Spectral bounds similarly replace $m$ with $W$ and incorporate weighted adjacency or Laplacian matrices.[12]
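As a quick illustration (not from the article), the spectral bound for a regular graph can be checked numerically. The sketch below uses numpy on a 6-cycle, whose adjacency spectrum is known in closed form.

```python
import numpy as np

# 6-cycle: 2-regular and bipartite, so lambda_min = -2 and the bound equals m.
n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

m = int(A.sum() / 2)                       # number of edges
lam_min = np.linalg.eigvalsh(A)[0]         # smallest adjacency eigenvalue
spectral_bound = n * (d - lam_min) / 4     # mc(G) <= n (d - lambda_min) / 4
random_cut = m / 2                         # expected size of a uniformly random cut

print(m, lam_min, spectral_bound, random_cut)   # 6, -2.0, 6.0, 3.0
```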
Computational Complexity
The decision version of the maximum cut problem, which determines whether there exists a partition of the vertices into two sets such that at least $k$ edges cross the partition, is NP-complete. This result follows from a polynomial-time reduction from 3-SAT, as established in the seminal work on simplified NP-complete graph problems, where gadgets are constructed for variables and clauses to ensure that satisfying assignments correspond to large cuts.[13] The reduction preserves the structure such that a large cut in the resulting graph implies a satisfying assignment for the 3-SAT instance, and vice versa, with the transformation computable in polynomial time. The optimization version of maximum cut, seeking the largest possible cut, is therefore NP-hard. Moreover, it is inapproximable to within a factor of $16/17 + \varepsilon$ for any $\varepsilon > 0$ unless P = NP, as proven using the Probabilistically Checkable Proof (PCP) theorem and gap-producing reductions from 3-SAT.[14] This tight inapproximability threshold highlights the inherent difficulty of obtaining guarantees close to the optimal solution in polynomial time. Despite this general hardness, maximum cut is solvable in polynomial time for certain graph classes. In bipartite graphs, every edge crosses the bipartition, yielding a maximum cut of size equal to the number of edges, which can be verified trivially. On trees, dynamic programming can compute the maximum cut in linear time by recursing over subtrees and tracking the contribution of edges based on partition choices at each node. For planar graphs, the problem reduces to finding a maximum-weight matching in an auxiliary graph constructed via the dual, where the matching corresponds to selecting non-adjacent faces to determine the cut edges; this can be solved in polynomial time using known matching algorithms.[15] In parameterized complexity, maximum cut is fixed-parameter tractable when parameterized by the treewidth of the graph, via standard dynamic programming on a tree decomposition that enumerates partition states across bags. It is also FPT when parameterized by the solution size (the target cut size), admitting a kernel with a linear number of vertices through reduction rules that prune low-degree vertices and contract high-connectivity components. However, the problem is W[1]-hard when parameterized by the clique-width of the input graph, indicating intractability for graphs with bounded but non-tree-like structure.[16][17]
Algorithms
Exact Algorithms
Exact algorithms for the maximum cut problem exist for several special classes of graphs where the structural properties allow for efficient computation, often leveraging dynamic programming or reductions to other solvable problems. For planar graphs, the problem can be solved in polynomial time by reducing it to a maximum weighted matching problem in a related graph. Specifically, Hadlock showed that the maximum cut in a planar graph corresponds to a minimum odd-vertex pairing in the dual graph, which can be formulated as a T-join problem and solved using matching algorithms in O(n^3) time.[15] This approach exploits the planarity to construct an auxiliary graph where the matching directly yields the cut partition. Extensions to graphs of bounded genus or excluding fixed minors allow exact solutions in subexponential time using graph minor algorithms. In bipartite graphs, the maximum cut is trivial, as the bipartition of the vertices naturally separates all edges, achieving a cut size equal to the number of edges m. Recognizing whether a graph is bipartite and computing this cut can be done in O(n + m) time using breadth-first search, followed by an O(1) computation of the cut value.[1] For trees and outerplanar graphs, linear-time dynamic programming algorithms compute the maximum cut by processing the graph from the leaves inward. In a tree, root the graph at an arbitrary vertex and, for each subtree, compute the maximum cut size under two cases: the root vertex is in one side of the partition or the other. The contribution of each edge is added based on whether its endpoints are separated, yielding an overall O(n) time solution. Outerplanar graphs, being a subclass with treewidth at most 2, admit a similar dynamic programming approach along a canonical ordering or dual tree structure, also running in linear time.[18] Series-parallel graphs, which include trees and outerplanar graphs as subclasses, allow for an O(n) time exact algorithm via recursive decomposition. The graph is broken down into series and parallel compositions, and the maximum cut is computed bottom-up by considering the possible partitions at each composition point and maximizing the crossing edges across substructures. This method relies on the bounded treewidth and the recursive nature of the decomposition tree.[18] For general graphs, where the problem is NP-hard, exact algorithms rely on exponential-time techniques such as branch-and-bound or meet-in-the-middle approaches. A meet-in-the-middle strategy partitions the vertices into two roughly equal sets of size n/2, enumerates all 2^{n/2} subsets for one set to compute partial cut contributions from edges within and to the other set, and matches them efficiently using sorting or hashing to find the global maximum, achieving O(2^{n/2} n^2) time overall. More advanced branching algorithms can improve the base slightly, but the exponential dependence on n remains fundamental.Approximation Algorithms
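The tree case described above admits a compact dynamic program. The Python sketch below is illustrative: it roots an adjacency-list tree at vertex 0 and, as a deliberate extension, allows arbitrary (possibly negative) edge weights, since for an unweighted tree the answer is trivially n - 1 (trees are bipartite).

```python
import sys
sys.setrecursionlimit(10000)

def tree_max_cut(n, weighted_edges, root=0):
    """Dynamic programming on a tree: for every vertex keep the best cut weight
    of its subtree for each of the two sides the vertex itself can be on."""
    adj = [[] for _ in range(n)]
    for u, v, w in weighted_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def dfs(v, parent):
        best0 = best1 = 0.0        # best cut weight in v's subtree with v on side 0 / side 1
        for u, w in adj[v]:
            if u == parent:
                continue
            c0, c1 = dfs(u, v)
            best0 += max(c0, c1 + w)   # child joins v's side (edge uncut) or the other side (cut)
            best1 += max(c1, c0 + w)
        return best0, best1

    return max(dfs(root, -1))

# Illustrative weighted tree on 5 vertices.
edges = [(0, 1, 2.0), (1, 2, -1.0), (1, 3, 4.0), (0, 4, 3.0)]
print(tree_max_cut(5, edges))   # 9.0: cut edges (0,1), (1,3), (0,4); skip (1,2)
```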
Approximation Algorithms
A simple randomized approximation algorithm for the maximum cut problem partitions the vertices into two sets uniformly at random, yielding an expected cut size of $W/2$, where $W$ is the total edge weight, since each edge crosses the cut with probability $1/2$. This achieves a $1/2$-approximation in expectation. A deterministic greedy variant sequentially assigns each vertex to the partition that maximizes the incremental cut size, also guaranteeing a $1/2$-approximation ratio.[1][19] Local search methods improve upon these baselines by starting from an initial partition and performing vertex swaps or flips that increase the cut size until a local optimum is reached. Trevisan, building on eigenvalue bounds by Poljak and others, demonstrated a 0.531-approximation guarantee for an algorithm using the smallest eigenvalue of the graph Laplacian in conjunction with greedy partitioning. The most influential classical approximation algorithm employs a semidefinite programming (SDP) relaxation, introduced by Goemans and Williamson (1995). The SDP maximizes $\frac{1}{2}\sum_{(i,j) \in E} w_{ij}\,(1 - v_i \cdot v_j)$ over unit vectors $v_i$ for each vertex $i$, which relaxes the quadratic integer program for max-cut. The optimal solution is rounded by selecting a random hyperplane and assigning vertices based on the sign of their vector's projection onto a random unit vector, ensuring that the expected cut value satisfies the approximation guarantee. This randomized procedure achieves an approximation ratio of approximately $0.878$, derived from comparing the probability $\theta/\pi$ that an edge with angle $\theta$ between its endpoint vectors is cut against its SDP contribution $(1 - \cos\theta)/2$. The algorithm can be derandomized using conditional expectations while preserving the ratio.[1] Inapproximability results establish that it is NP-hard to approximate Max-Cut within a factor better than $16/17$. Håstad (2001) provided optimal inapproximability thresholds up to additive factors for max-cut via PCP reductions from 3-SAT. Subsequent refinements by Khot, Kindler, Mossel, and O'Donnell (2004) showed that, assuming the Unique Games Conjecture, no polynomial-time algorithm can exceed the Goemans-Williamson ratio of approximately 0.878.[14][20] Recent classical advances leverage graph structure for slight improvements beyond the Goemans-Williamson bound. For instance, when the input graph contains triangles, the standard max-cut SDP rounded via the hyperplane method achieves a strictly better approximation ratio, as shown by analyzing correlations induced by triangles in the vector representation (George and Louis, 2025).[21]
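The deterministic greedy assignment mentioned above is short enough to sketch directly. The Python below is an illustration; the vertex order and tie-breaking rule are arbitrary choices, not prescribed by the article.

```python
def greedy_max_cut(n, weighted_edges):
    """Greedy 1/2-approximation: process vertices in a fixed order and place each
    one on the side that currently gains more cut weight."""
    adj = [[] for _ in range(n)]
    for u, v, w in weighted_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    side = [None] * n
    for v in range(n):
        # weight of edges from v to already-placed vertices on each side
        to_side0 = sum(w for u, w in adj[v] if side[u] == 0)
        to_side1 = sum(w for u, w in adj[v] if side[u] == 1)
        side[v] = 0 if to_side1 >= to_side0 else 1   # join the side that cuts more weight

    cut = sum(w for u, v, w in weighted_edges if side[u] != side[v])
    return side, cut

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 1.0), (2, 3, 3.0)]
print(greedy_max_cut(4, edges))   # cut weight 6.0 of total weight 7.0
```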
Parameterized and Quantum Algorithms
The parameterized complexity of the maximum cut problem has been extensively studied, particularly with respect to structural parameters like treewidth and solution excess parameters. When parameterized by treewidth $t$, maximum cut admits a fixed-parameter tractable (FPT) algorithm via dynamic programming on a tree decomposition. The approach maintains states representing the partition of each bag into the two sides of the cut, leading to a running time of $2^{t} \cdot n^{O(1)}$, where $n$ is the number of vertices, by enumerating subsets of the bag and computing contributions from edges within and between bags.[17] This exploits the tree decomposition to localize computations, ensuring exponential dependence only on $t$. Kernelization techniques further enhance preprocessing for parameters related to the solution quality. For the parameter $k$ defined as the excess of the maximum cut size over the trivial lower bound of $m/2$ (where $m$ is the number of edges), reduction rules reduce the instance to a kernel of quadratic size, with $O(k^2)$ vertices. These rules, developed in the 2010s, include vertex folding for induced paths and handling high-degree vertices by merging or removing them while adjusting the parameter to preserve equivalence; for instance, folding a degree-2 vertex connects its neighbors with an edge of summed weights, reducing intra-component edges without altering the optimal cut value. Such kernels enable efficient solving on the reduced instance via branching or dynamic programming.[22] More recent refinements achieve linear kernels of $O(k)$ vertices above the Edwards–Erdős bound, using additional rules like clique reductions and block mergers.[23] Quantum algorithms offer promising avenues for maximum cut, leveraging superposition and entanglement for potentially superior approximations. The Quantum Approximate Optimization Algorithm (QAOA), introduced in 2014, applies alternating layers of mixing and cost Hamiltonians tailored to the max-cut objective, where the cost Hamiltonian encodes edge penalties for same-side vertices. For $p$-layer QAOA, variational optimization of parameters yields cuts outperforming the classical random 0.5-approximation on small instances (up to 20 qubits), with empirical ratios approaching 0.8 for $p = 1$ and higher for larger $p$, and a worst-case guarantee of approximately 0.692 for $p = 1$ on 3-regular graphs, improving for higher $p$, though no quantum algorithm provably exceeds the classical Goemans-Williamson ratio in the worst case. Recent advances explore specialized quantum circuits for efficiency on near-term devices. In 2024, low-depth Clifford circuits, which use only Clifford gates (implementable with low overhead), were shown to approximately solve max-cut with approximation ratios exceeding 0.89 on weighted complete graphs via adaptive construction, surpassing the classical Goemans-Williamson 0.878 ratio in specific cases while maintaining low circuit depth.[24] By 2025, edge-based QAOA variants encode the problem using edge states rather than vertex spins, using approximately $n$ qubits and improving performance on sparse graphs through localized mixing operators.
Despite these developments, no proven quantum speedup over classical approximations exists for max-cut, as QAOA's worst-case ratios match or lag behind semidefinite programming bounds; however, hybrid quantum-classical solvers, integrating QAOA with local search, demonstrate practical gains on NISQ hardware for instances up to 100 qubits.[25] In the streaming model, where edges arrive sequentially with limited memory, quantum algorithms provide space advantages for approximating directed maximum cut. A 2024 quantum streaming algorithm achieves a 0.4844-approximation using exponentially less space than comparable classical algorithms, by employing quantum fingerprinting to estimate cut values across passes. This highlights quantum benefits in data-intensive settings, though it applies to the directed variant.[26]
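To make the QAOA description concrete, the sketch below simulates a depth-1 QAOA state for a tiny instance using plain numpy statevectors and a grid search over the two angles. It is an illustration of the general scheme, not of any cited hardware experiment; the triangle instance and the angle grid are arbitrary choices.

```python
import numpy as np
from itertools import product

def qaoa_p1_expectation(n, edges, gamma, beta):
    """Expected cut value of the depth-1 QAOA state for max-cut,
    computed by exact statevector simulation (feasible only for small n)."""
    dim = 2 ** n
    # Diagonal cost function: C[z] = number of edges cut by bitstring z.
    bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
    C = np.zeros(dim)
    for u, v in edges:
        C += bits[:, u] != bits[:, v]

    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)             # uniform superposition
    psi = np.exp(-1j * gamma * C) * psi                             # phase separation e^{-i gamma C}
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])             # single-qubit mixer e^{-i beta X}
    mixer = np.array([[1.0 + 0j]])
    for _ in range(n):
        mixer = np.kron(mixer, rx)                                  # same mixer on every qubit
    psi = mixer @ psi
    return float(np.real(np.vdot(psi, C * psi)))                    # <psi| C |psi>

# Triangle graph: maximum cut is 2; a uniformly random cut gives 1.5 in expectation.
edges = [(0, 1), (1, 2), (2, 0)]
best = max(qaoa_p1_expectation(3, edges, g, b)
           for g, b in product(np.linspace(0, np.pi, 25), repeat=2))
print(best)   # exceeds the random-assignment baseline of 1.5
```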
Applications
Machine Learning
In machine learning, the maximum cut problem serves as a quadratic unconstrained binary optimization (QUBO) formulation for binary classification tasks, where data points are represented as vertices in a graph, and edges encode pairwise similarities or disagreements; the goal is to partition the vertices into two sets so that the weight of edges crossing the cut is maximized, thereby separating dissimilar points. This approach models unsupervised partitioning by assigning binary labels to maximize inter-class disagreements, with the QUBO objective $x^{T} Q x$ minimized over binary vectors $x \in \{0, 1\}^n$, where $Q$ is derived from a similarity matrix. Correlation clustering leverages maximum cut to minimize intra-cluster disagreements on similarity graphs, where positive edge weights indicate similarity (favoring same-cluster assignment) and negative weights indicate dissimilarity (favoring different clusters); for two clusters, this reduces to solving max-cut on a transformed graph to maximize the cut value corresponding to inter-cluster edges.[27] Approximations are achieved via a semidefinite programming (SDP) relaxation similar to Goemans-Williamson, yielding guarantees on cluster quality.[1] In community detection, maximum cut maximizes modularity in social networks by partitioning graphs to optimize the difference between intra-community edges and expected random connections, particularly effective in stochastic block models (SBMs) where ground-truth communities generate denser intra-block edges.[28] SDP-based max-cut solvers approximate optimal partitions in SBMs by relaxing the modularity objective to a vector program, recursively splitting communities via hyperplane rounding to recover dense blocks with high fidelity.[28] Post-2020 applications integrate max-cut SDP relaxations into graph neural networks (GNNs) for embedding learning, where low-rank SDP solvers optimize node embeddings on hyperspheres to capture cut-maximizing structures, enabling scalable inference in tasks like multi-class Markov random fields. These methods, such as the mixing algorithm for diagonally constrained SDPs, facilitate differentiable training of GNNs by providing convex relaxations that converge globally when the embedding rank is sufficiently large, outperforming spectral methods in embedding quality for community-structured graphs.[29] Empirically, the 0.878-approximation from Goemans-Williamson suffices for large-scale clustering in SBM datasets, achieving near-optimal recovery of ground-truth partitions when intra-block densities exceed inter-block densities by a constant factor, as demonstrated on benchmarks with up to millions of nodes.
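A common way to hand the problem to an off-the-shelf QUBO solver is to encode the cut objective as a quadratic form. The Python sketch below builds one such Q matrix for a small weighted graph and checks it against a direct cut computation; the sign convention (minimizing x^T Q x) is one of several equivalent choices and the example graph is arbitrary.

```python
import numpy as np
from itertools import product

def maxcut_qubo(n, weighted_edges):
    """Build a QUBO matrix Q such that x^T Q x = -cut(x) for binary x,
    so minimizing the quadratic form maximizes the cut."""
    Q = np.zeros((n, n))
    for u, v, w in weighted_edges:
        Q[u, u] -= w          # linear terms (x_i^2 = x_i for binary x)
        Q[v, v] -= w
        Q[u, v] += w          # off-diagonal pair contributes 2*w*x_u*x_v in x^T Q x
        Q[v, u] += w
    return Q

def cut_value(x, weighted_edges):
    return sum(w for u, v, w in weighted_edges if x[u] != x[v])

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 1.0), (2, 3, 3.0)]
Q = maxcut_qubo(4, edges)

# Sanity check by brute force: min_x x^T Q x equals -(maximum cut value).
best = min((np.array(x) @ Q @ np.array(x), x) for x in product([0, 1], repeat=4))
print(best[0], cut_value(best[1], edges))   # -6.0 and 6.0, e.g. for x = (0, 1, 0, 1)
```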
Theoretical Physics
The maximum cut problem finds a natural interpretation in statistical mechanics through its equivalence to the ground state of the classical Ising model. In this mapping, each vertex $i$ of the graph corresponds to a spin variable $s_i \in \{-1, +1\}$, and the antiferromagnetic Ising Hamiltonian is given by $H = J \sum_{(i,j) \in E} s_i s_j$ with coupling $J > 0$, where the sum is over the edges of the graph. Minimizing the energy is equivalent to maximizing the cut size, as the number of edges crossing the cut equals $\frac{1}{2}\sum_{(i,j) \in E} (1 - s_i s_j)$, so that $H = J\,(m - 2\,\mathrm{cut})$. This correspondence highlights how finding an optimal partition in a graph mirrors the alignment of spins to minimize interaction energy in a physical system. In spin glass models, which introduce frustration through random or antiferromagnetic interactions, the maximum cut problem plays a central role in analyzing disordered systems. The Sherrington-Kirkpatrick (SK) model, a mean-field spin glass with fully connected spins and Gaussian-distributed couplings, relates its ground state energy minimization to the maximum cut on a complete graph with random edge weights; the optimal cut fraction provides insights into the low-temperature phase structure. Phase transitions in these models, such as the spin glass transition, are probed through the distribution of cut sizes in random graphs, revealing critical behaviors like the onset of frustration where large cuts correspond to broken ergodicity. Replica symmetry breaking (RSB) in the SK model further connects to the inherent hardness of approximating maximum cut, as the rugged energy landscape with multiple metastable states mirrors the computational challenges in finding global optima. Quantum extensions of these models leverage maximum cut to study quantum phase transitions. Quantum annealing, as implemented on D-Wave systems, formulates maximum cut instances as quadratic unconstrained binary optimization (QUBO) problems equivalent to the Ising Hamiltonian, with experimental benchmarks on random graphs demonstrating scaling behaviors up to thousands of qubits, though limited by embedding overhead and noise. More recently, the quantum approximate optimization algorithm (QAOA) simulates the transverse-field Ising model for maximum cut, where the cost Hamiltonian encodes the classical cut and the mixer introduces quantum fluctuations; this approach ties into 2023–2025 advances in variational quantum algorithms, including symmetry-based expressive QAOA and edge-based variants offering improved approximation ratios on dense graphs and NISQ devices, providing a physics-inspired framework to explore quantum critical points and entanglement in frustrated systems.[30][31][32]
VLSI and Network Design
In very large scale integration (VLSI) design, the maximum cut problem aids in optimizing circuit layouts by partitioning components to reduce wire lengths and enhance routing efficiency. Specifically, it is applied in via minimization for two-layer channel routing, where the task of assigning net segments to metal layers is modeled as finding a maximum cut in a derived planar graph; edges in the cut represent vias needed to connect segments across layers, and maximizing the cut minimizes the total vias required. Since maximum cut on planar graphs is solvable in polynomial time using matching algorithms, this approach enables efficient exact solutions for practical routing instances. Additionally, the size of the maximum cut provides a theoretical lower bound on the minimum area of a VLSI layout for a given graph, as the layout area must accommodate at least the squared value of the maximum cut size divided by the number of I/O pins, establishing fundamental limits on chip density.[33] In telecommunications and network design, maximum cut facilitates graph partitioning to balance loads across subnetworks while maximizing the weight of edges crossing partitions, which corresponds to inter-switch traffic and improves overall throughput and resilience. This is particularly useful in large-scale communication networks, where approximation algorithms like the semidefinite programming relaxation achieve a 0.878 approximation ratio and are integrated into partitioning frameworks to handle real-world topologies. For instance, in multi-radio wireless mesh networks, maximum cut-based methods assign overlapping channels to interfaces, maximizing the cut to minimize interference and boost concurrent transmissions.[34] Weighted variants of maximum cut are employed in computer vision for image segmentation, where pixels are modeled as vertices in a graph with edge weights reflecting similarity (e.g., color differences); the maximum cut partitions pixels into foreground and background sets to maximize the total weight of dissimilar edges across the boundary, yielding smooth segmentations. This approach extends to more complex scenes by incorporating Gaussian mixture models for pixel affinities, enabling robust handling of occlusions and shadows. In wireless ad-hoc networks, maximum cut optimizes channel allocation by partitioning nodes into sets assigned to different channels, maximizing the cut to ensure the highest number of interference-free links across channels and thus improving spatial reuse and throughput. Algorithms solving this NP-hard problem via heuristics or quantum-inspired methods have demonstrated gains in link utilization over greedy allocations in dense topologies.[34] Beyond engineering, maximum cut analogies appear in business applications such as finance and logistics. In portfolio optimization, the problem models asset partitioning into inclusion/exclusion sets to maximize a quadratic objective akin to cut weight, capturing diversification benefits under transaction costs; quantum annealing solvers applied to these formulations have shown promise in scaling to hundreds of assets.[35] Similarly, in logistics, maximum cut supports facility location partitioning by dividing customer demand graphs to maximize serviced inter-region flows, aiding efficient supply allocation. Recent advancements leverage maximum cut in supply chain models, where graph partitions identify robust supplier clusters under disruptions.[36]