Algorithmic paradigm
from Wikipedia

An algorithmic paradigm or algorithm design paradigm is a generic model or framework which underlies the design of a class of algorithms. An algorithmic paradigm is an abstraction higher than the notion of an algorithm, just as an algorithm is an abstraction higher than a computer program.[1][2]

from Grokipedia
An algorithmic paradigm is a fundamental strategy or pattern for designing algorithms to solve computational problems, offering a reusable framework that addresses a broad class of similar challenges by emphasizing common techniques such as divide-and-conquer, dynamic programming, or greedy selection. These paradigms guide algorithm development by providing structured methods to break down problems, analyze efficiency, and ensure correctness, often drawing on mathematical principles such as recursion and induction.

Key algorithmic paradigms include divide-and-conquer, which recursively divides a problem into smaller subproblems, solves them independently, and combines the results, as in mergesort, which achieves O(n log n) time for sorting. Another prominent paradigm is dynamic programming, used for optimization problems with overlapping subproblems and optimal substructure, such as computing the longest common subsequence in O(mn) time by building a table of solutions to subproblems. Greedy algorithms make locally optimal choices at each step to approximate a global optimum, exemplified by Kruskal's algorithm for minimum spanning trees, running in O(E log V) time with a union-find data structure.

Additional paradigms encompass brute-force methods, which exhaustively enumerate all possibilities to guarantee an exact solution, though often at high computational cost, as in generating all permutations for the traveling salesman problem. Randomized algorithms incorporate randomness to achieve expected efficiency, such as randomized quicksort with average O(n log n) performance. Reduction transforms one problem into another known problem, facilitating solutions or complexity proofs, like reducing 3-SAT to the clique problem in NP-completeness studies.

These paradigms are foundational in computer science, enabling efficient algorithm design across domains like sorting, searching, and optimization, while their analysis often relies on asymptotic notations such as Big-O to evaluate time and space complexity.

Definition and Fundamentals

Definition

An algorithmic paradigm is a generic model or framework that underlies the design of a class of algorithms, focusing on high-level strategies for problem-solving rather than specific implementations. It represents the common concepts and ideas behind algorithm design, providing a structured approach to tackling computational problems. This distinguishes algorithmic paradigms from individual algorithms, which are concrete procedures for solving specific instances of problems; paradigms instead offer abstract templates applicable across multiple related problems. Unlike programming paradigms, which emphasize styles of code organization and execution (such as imperative or functional programming), algorithmic paradigms center on the logical strategies for devising solutions, independent of how they are coded. Algorithmic paradigms also play a key role in guiding efficiency analysis by enabling the evaluation of time and space complexity, often expressed in asymptotic (Big-O) notation, to compare the viability of solutions for large inputs.

Key Characteristics

Algorithmic paradigms serve as reusable blueprints for solving a wide array of computational problems, enabling designers to apply proven strategies across diverse scenarios rather than starting from scratch for each instance. This reusability stems from the paradigms' abstraction of common structural patterns, such as recursive decomposition or incremental decision-making, which can be adapted to problems sharing similar properties, thereby significantly reducing development time and effort in algorithm creation.

A core emphasis of algorithmic paradigms lies in optimizing efficiency, particularly through the formulation and resolution of recurrence relations that govern their time and space complexities. For instance, in the divide-and-conquer paradigm, the master theorem provides a systematic method to solve recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) represents the cost of dividing and combining subproblems, allowing the determination of asymptotic bounds without exhaustive case-by-case computation. This focus on efficiency ensures that paradigms guide the selection of approaches that minimize resource usage while maintaining correctness.

Algorithmic paradigms strike a balance between generality and specificity, offering broad frameworks that apply to multiple problem classes while permitting tailored adaptations to unique constraints or objectives. This duality allows paradigms to provide versatile templates, such as greedy selection for optimization problems, that can be specialized with domain-specific heuristics or data structures, enhancing both applicability and performance without sacrificing foundational rigor.

The choice of paradigm influences the criteria employed to assess algorithmic efficiency, with metrics like worst-case analysis (bounding the maximum runtime over all inputs), average-case analysis (averaging over probable input distributions), and amortized analysis (smoothing costs over a sequence of operations) tailored to the paradigm's operational characteristics. For example, amortized analysis is particularly useful for paradigms involving dynamic data structures, where occasional expensive operations are offset by frequent inexpensive ones, providing a more realistic view of overall efficiency than isolated worst-case bounds.
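To make the amortized-analysis point concrete, the following short Python sketch (an illustration under the standard table-doubling assumption, with names chosen here rather than taken from any source) counts element writes for a dynamic array that doubles its capacity when full; the total stays below 3n, so each insertion costs O(1) amortized:

def total_insert_cost(n):
    """Count element-write operations for n appends into a doubling array."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:          # array is full: copy all elements into a 2x array
            cost += size
            capacity *= 2
        cost += 1                     # write the new element
        size += 1
    return cost

for n in (10, 1000, 10**6):
    cost = total_insert_cost(n)
    print(n, cost, cost / n)          # the ratio stays below 3, i.e. O(1) amortized per insertion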

Historical Development

Early Concepts

The origins of algorithmic paradigms trace back to ancient mathematics, where systematic procedures for solving problems emerged without the aid of mechanical devices. One of the earliest examples is the Euclidean algorithm, documented by the Greek mathematician Euclid in his seminal work Elements around 300 BCE. This method computes the greatest common divisor (GCD) of two integers by iteratively replacing the larger number with the remainder of the division of the two numbers, effectively reducing the problem size until the remainder is zero; the last non-zero remainder is the GCD. As a precursor to divide-and-conquer strategies, it demonstrates a foundational approach to problem decomposition, where a complex task is broken into simpler subproblems through repeated application of a basic operation, highlighting the timeless efficiency of recursive reduction in computation.

In the 9th century, the Persian mathematician Muhammad ibn Musa al-Khwarizmi developed systematic procedures for solving linear and quadratic equations in his treatise Al-Kitab al-Mukhtasar fi Hisab al-Jabr wal-Muqabala, laying foundational algorithmic methods that influenced later computational thinking.

In the 19th century, the advent of mechanical computing devices marked a transition toward more structured algorithmic thinking, influenced by the need for automated calculation in scientific and engineering contexts. Charles Babbage's design for the Analytical Engine, conceived in the 1830s and detailed through the 1840s, introduced key concepts such as punched cards for programming instructions and a separation between memory (the "store") and the processing unit (the "mill"), enabling systematic decomposition of problems into programmable sequences. This architecture allowed for conditional branching and looping, facilitating the breakdown of computations into modular steps that could be executed non-sequentially based on intermediate results, thus laying early groundwork for general-purpose algorithmic frameworks beyond fixed-function machines like his earlier Difference Engine.

Augmenting Babbage's vision, Ada Lovelace's extensive notes on the Analytical Engine, published in 1843, emphasized its potential for universal computation. She articulated that the machine was not limited to numerical tabulation but could manipulate symbols and perform abstract operations, such as composing and resolving complex algebraic forms, likening it to a Jacquard loom weaving patterns from logical threads. Lovelace's insights, including her algorithm for computing Bernoulli numbers, underscored the recognition of general methods applicable to non-numerical domains, foreshadowing the versatility of algorithmic paradigms in handling diverse symbolic manipulations.

A pivotal formalization occurred concurrently with Boolean algebra, introduced by George Boole in his 1847 treatise The Mathematical Analysis of Logic. Boole transformed traditional Aristotelian logic into an algebraic system using symbols to represent propositions, where operations like addition denoted disjunction and multiplication conjunction, enabling the mechanical resolution of logical inferences through algebraic manipulation. This framework provided a rigorous basis for logical paradigms, shifting from enumerative syllogisms to algorithmic manipulation of classes and relations, which would later underpin binary decision-making in computational systems.

20th Century Advancements

The foundations of algorithmic paradigms in the 20th century were profoundly shaped by Alan Turing's 1936 introduction of the universal Turing machine, which formalized the concept of computation and provided a theoretical model for understanding the limits and structures of algorithmic processes. This machine, capable of simulating any other Turing machine, established the basis for evaluating the efficiency and applicability of various paradigm models, influencing subsequent developments in algorithm design by emphasizing the role of step-by-step computation in problem-solving.

In the 1950s and 1960s, key paradigms emerged through targeted innovations in optimization and sorting. Richard Bellman developed dynamic programming in 1953 as a method for solving complex multistage decision problems by breaking them into overlapping subproblems, with the approach formalized in his early reports. The divide-and-conquer paradigm had earlier gained prominence through algorithms like mergesort, first proposed by John von Neumann in 1945 and detailed in a 1948 report by Goldstine and von Neumann, which recursively divides data into halves before merging sorted subarrays to achieve efficient sorting.

Donald Knuth's multivolume series The Art of Computer Programming, beginning with Volume 1 in 1968 and extending to Volume 3 (Sorting and Searching) in 1973, offered a systematic classification of algorithmic paradigms, analyzing their structures, efficiencies, and implementations in depth. Knuth's work categorized techniques such as brute-force, divide-and-conquer, and greedy methods within concrete applications like searching and sorting, providing rigorous proofs and examples that standardized paradigm evaluation in computer science.

The 1970s and 1980s saw the introduction and scrutiny of paradigms like greedy algorithms and backtracking amid the NP-completeness era, initiated by Stephen Cook's 1971 paper demonstrating that the satisfiability problem is NP-complete. This era highlighted the limitations of these paradigms for intractable problems, as greedy approaches often yield suboptimal approximations for optimization tasks, while backtracking enables exhaustive searches but faces exponential time complexities on NP-hard instances, prompting shifts toward heuristic and approximation strategies.

Classification of Paradigms

By Design Strategy

Algorithmic paradigms can be classified based on their design strategies, which outline the fundamental approaches to solving computational problems. This classification emphasizes the methodology for developing algorithms, focusing on how problems are approached and resolved rather than the underlying computational resources or models. A prominent taxonomy groups these strategies into exhaustive, decomposition, and optimization categories, providing a structured way to understand and select appropriate techniques for different problem types.

Exhaustive strategies involve systematically exploring all possible solutions or a significant portion of the search space to guarantee finding an optimal or correct answer. Brute-force methods represent the simplest form, generating and checking every potential solution without pruning, making them complete but often inefficient for large inputs; for instance, they are used in small-scale problems like solving the traveling salesman problem by evaluating all routes. Backtracking extends this by building partial solutions incrementally and abandoning unproductive paths early through constraint checking, thus avoiding full enumeration while remaining exhaustive in principle; a classic example is the n-queens problem, where invalid board placements are discarded during recursive placement attempts. These strategies are particularly suited to combinatorial problems where completeness is essential, though their practicality diminishes with problem size due to combinatorial growth in the number of possibilities.

Decomposition strategies break down complex problems into simpler subproblems, solving them recursively or iteratively and then combining the results. Divide-and-conquer partitions the problem into multiple independent subproblems of roughly equal size, solves each separately, and merges the solutions, as seen in mergesort, where an array is recursively divided, sorted, and merged. Decrease-and-conquer, in contrast, reduces the problem to a single smaller instance (often by removing or simplifying one element), solves it, and extends the solution to the original; decrease-by-a-constant examples include insertion sort, which builds a sorted list by incrementally placing elements, while variable-size-decrease variants like Euclid's algorithm for the GCD repeatedly reduce the input pair. Both approaches leverage recursion to manage complexity, promoting efficiency through modular problem-solving. Transform-and-conquer, another decomposition variant, transforms the problem into a more convenient form before solving, such as balancing a binary search tree or using Gaussian elimination for linear systems.

Optimization strategies aim to find high-quality solutions to problems seeking minima, maxima, or optimal selections, often trading completeness for efficiency. Greedy algorithms make irrevocable locally optimal choices at each step, hoping they lead to a global optimum; for example, Kruskal's algorithm for minimum spanning trees selects the smallest edge that does not form a cycle until the tree is complete. Dynamic programming addresses global optima by solving overlapping subproblems systematically, storing intermediate results to avoid recomputation, either top-down with memoization or bottom-up via tabulation, as in the Fibonacci-number computation where subproblem values are cached. While greedy methods are faster but may yield suboptimal results (e.g., in some coin-change variants), dynamic programming ensures optimality for suitable problems like the knapsack problem but at higher space and time costs.
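As a minimal sketch of the backtracking strategy on the n-queens problem mentioned above (illustrative code, with function and variable names chosen here rather than taken from any source), partial solutions are extended row by row and abandoned as soon as a constraint is violated:

def solve_n_queens(n):
    """Return one valid placement of n queens as a list of column indices, or None."""
    placement = []                     # placement[r] = column of the queen in row r

    def safe(row, col):
        # A new queen conflicts if it shares a column or a diagonal with an earlier one.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(placement))

    def backtrack(row):
        if row == n:
            return True                # all rows filled: a complete solution
        for col in range(n):
            if safe(row, col):
                placement.append(col)  # extend the partial solution
                if backtrack(row + 1):
                    return True
                placement.pop()        # dead end: undo and try the next column
        return False                   # no column works: prune this branch

    return placement if backtrack(0) else None

print(solve_n_queens(8))               # e.g. [0, 4, 7, 5, 2, 6, 1, 3]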
A further grouping within design strategies classifies them by implementation style: recursive strategies like divide-and-conquer and backtracking rely on function calls to handle subproblems; iterative ones, such as certain greedy or brute-force implementations, use loops for sequential processing; and randomized strategies incorporate probability for approximation or efficiency, as in Monte Carlo methods for exhaustive-like searches. This layering highlights how design strategies intersect with computational models, such as sequential versus probabilistic execution.
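For the randomized style, a randomized quicksort sketch (a standard example written here for illustration, not drawn from the sources above) shows how chance pivot choices give expected O(n log n) behavior regardless of input order:

import random

def randomized_quicksort(a):
    """Sort a list with expected O(n log n) comparisons by picking random pivots."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                     # random pivot defeats adversarial inputs
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]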

By Computational Model

Algorithmic paradigms can be classified based on the underlying computational models they assume, which dictate the resources, execution behavior, and environmental interactions available to the algorithm. A primary distinction lies between deterministic and non-deterministic models. In deterministic models, such as the standard deterministic Turing machine (DTM), the computation follows a unique path for any given input, producing a fixed output without ambiguity. This model underpins paradigms where predictability is essential, ensuring that the same input always yields the same result through sequential state transitions governed by a finite set of rules. In contrast, non-deterministic models, exemplified by the non-deterministic Turing machine (NTM), allow multiple possible transitions from a given state, effectively branching the computation to explore parallel possibilities in a single step. This framework supports paradigms that handle uncertainty or search spaces efficiently in theory, as NTMs characterize the complexity class NP, where verification is polynomial-time but solving may require exploring exponentially many paths. For instance, paradigms like backtracking can be analyzed under NTM models to assess worst-case behaviors, though practical implementations often simulate nondeterminism deterministically at higher cost. Non-deterministic models are particularly useful for proving lower bounds and understanding inherent computational hardness, as they abstract away the need for explicit branching mechanisms.

Another key classification separates sequential and parallel computational models, reflecting differences in how computations exploit concurrency. Sequential models, based on single-processor Turing machines or random-access machines, assume one thread of computation, making them suitable for paradigms like dynamic programming where state dependencies enforce linear progression. Parallel models extend this by allowing simultaneous operations across multiple processors. The parallel random-access machine (PRAM) model, introduced as an idealized shared-memory architecture, formalizes this by enabling synchronous access to a common memory by p processors, with variants like Concurrent Read Concurrent Write (CRCW) or Exclusive Read Exclusive Write (EREW) to handle contention. PRAM-based paradigms adapt sequential strategies, such as divide-and-conquer, to parallel settings; for example, merging in mergesort can be parallelized to achieve logarithmic time with linear processors under the EREW PRAM model. This classification highlights how parallel paradigms can reduce running time from O(n log n) to O(log n) for certain problems, assuming idealized shared memory without communication overhead.

Resource-bounded models further classify paradigms by imposing constraints on computational resources beyond time and space, often to address intractability in practical settings. Fixed-parameter tractability (FPT) is a core concept in parameterized complexity, where algorithms run in time f(k) * n^c, with k a small parameter, c a constant, and f an arbitrary function independent of the input size n. This model supports paradigms that exploit structural parameters, like treewidth in graph algorithms, to achieve tractability for otherwise NP-hard problems. Downey and Fellows established the foundational theory, defining FPT as a generalization of polynomial-time solvability and introducing completeness under parameterized reductions for classes like W[1] and W[2].
For example, vertex cover admits an FPT algorithm with k as the solution size, solving it in 1.2738^k * n^{O(1)} time, demonstrating how resource bounds enable efficient computation when parameters are bounded. This approach contrasts with classical complexity by slicing problems into tractable kernels, emphasizing preprocessing to reduce instance size before applying exponential-time methods on the parameter.

Hybrid models integrate standard Turing machines with external aids, such as oracles, to explore paradigms that approximate solutions under computational limitations. An oracle Turing machine augments a base TM with a query tape to an oracle set A, allowing instantaneous decisions on membership in A, thus modeling relativized computation. In approximation paradigms, oracles facilitate the study of classes like APX, where polynomial-time approximation within a constant factor is achievable; for instance, an oracle for the traveling salesman problem can enable PTAS (polynomial-time approximation schemes) by providing near-optimal subproblem solutions. Papadimitriou and Yannakakis formalized such hierarchies for approximation classes, including MAX-SNP completeness under L-reductions. This hybrid framework reveals limitations of derandomization and approximation preservation, proving that P ≠ NP relative to oracles where approximation is hard. By combining deterministic computation with idealized oracles, these models underpin paradigms for optimization under uncertainty, such as local search with approximate oracles for set cover.
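To illustrate the FPT idea in runnable form, here is a minimal sketch (the function name and example graph are assumptions made for illustration) of the classic bounded-search-tree algorithm for vertex cover, which runs in O(2^k * m) time by branching on the endpoints of an uncovered edge; the 1.2738^k bound cited above comes from far more refined branching rules:

def vertex_cover_at_most_k(edges, k):
    """Return a vertex cover of size <= k for the edge list, or None if none exists."""
    if not edges:
        return set()                     # nothing left to cover
    if k == 0:
        return None                      # edges remain but the budget is exhausted
    u, v = edges[0]                      # any uncovered edge must gain one endpoint
    for endpoint in (u, v):              # branch: try covering via u, then via v
        remaining = [e for e in edges if endpoint not in e]
        cover = vertex_cover_at_most_k(remaining, k - 1)
        if cover is not None:
            return cover | {endpoint}
    return None

# A 4-cycle with a chord has a vertex cover of size 2, e.g. {0, 2}.
print(vertex_cover_at_most_k([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 2))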

Core Paradigms

Brute-Force

The brute-force paradigm, also known as exhaustive search, is an algorithmic strategy that solves problems by systematically generating and evaluating every possible candidate solution until the correct or optimal one is identified. This approach relies on direct enumeration based on the problem's definition, without employing any optimization or shortcuts, making it a straightforward baseline method applicable to any decidable problem.

A classic example is the traveling salesman problem (TSP), where the goal is to find the shortest route visiting each city exactly once and returning to the start. In the brute-force implementation, all possible permutations of the cities are generated, totaling n! for n cities, and each permutation's total distance is computed and compared to identify the minimum. This exhaustive checking ensures the exact optimal solution but highlights the paradigm's core mechanism of complete exploration.

The time complexity of brute-force algorithms for combinatorial problems like TSP is typically O(n!), reflecting the growth in the number of candidates to evaluate. Space complexity is generally O(1) if solutions are generated iteratively without storage, or O(n) if permutations or paths are held in memory during computation.

Brute-force methods are suitable for problems with small input sizes, such as n ≤ 10 in permutation-based tasks, where computation remains feasible on modern hardware. They also serve as a correctness benchmark to validate more efficient algorithms by comparing outputs on small instances. However, the paradigm's primary limitation is its exponential time complexity, which causes rapid degradation in performance as input size increases; for instance, TSP with n = 20 requires over 2.4 quintillion evaluations, rendering it impractical for large-scale applications.

For a concrete illustration outside combinatorial optimization, consider brute-force string matching, which searches for occurrences of a pattern string P of length m in a text string T of length n by sliding P across T and comparing characters position by position. The following outlines the process:

for i from 0 to n - m
    j = 0
    while j < m and T[i + j] == P[j]
        j = j + 1
    if j == m
        report match at position i


This naive approach has a worst-case time complexity of O(nm).
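Returning to the TSP example above, the brute-force enumeration can be sketched in a few lines of Python (the distance matrix and function name are illustrative, not taken from the text):

from itertools import permutations

def brute_force_tsp(dist):
    """Exhaustively evaluate every tour over cities 0..n-1 starting at city 0.
    dist[i][j] is the distance from city i to city j. Runs in O(n!) time."""
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):           # fix city 0 as the start
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
        cost += dist[tour[-1]][0]                     # return to the start
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Small symmetric example with four cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(brute_force_tsp(dist))                          # ((0, 1, 3, 2), 18)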

Divide-and-Conquer

The divide-and-conquer paradigm is a fundamental approach in algorithm design where a problem is recursively broken down into smaller, non-overlapping subproblems of the same form, solved independently, and then their solutions are combined to yield the overall solution. This strategy typically involves three key steps: divide, which partitions the problem into subproblems; conquer, which recursively solves the subproblems until they reach base cases that can be solved directly (often in constant time); and combine, which merges the subproblem solutions into a solution for the original problem. The paradigm is particularly effective for problems that exhibit a natural recursive structure, allowing for efficient decomposition without requiring subproblem overlap or local optimization choices.

A classic example of divide-and-conquer is mergesort, an efficient sorting algorithm that divides an array of n elements into two halves, recursively sorts each half, and then merges the sorted halves in linear time. The recurrence relation for mergesort's time complexity is T(n) = 2T(n/2) + O(n), which solves to O(n log n) in both the average and worst cases, providing stable and predictable performance superior to quadratic brute-force sorting methods. Another representative example is binary search, which locates a target element in a sorted array by repeatedly dividing the search interval in half and discarding the portion that cannot contain the target, reducing the problem size exponentially until the base case is reached. This yields a time complexity of O(log n), making it vastly more efficient than linear search for large datasets.

To analyze the time complexity of divide-and-conquer algorithms, the Master Theorem provides a straightforward method for solving recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1 is the number of subproblems, b > 1 is the factor by which the input size is divided, and f(n) represents the cost of the divide and combine steps. The theorem classifies solutions into three cases based on the growth rate of f(n) relative to n^(log_b a): if f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)); if f(n) = Θ(n^(log_b a) · log^k n) for some k ≥ 0, then T(n) = Θ(n^(log_b a) · log^(k+1) n); and if f(n) = Ω(n^(log_b a + ε)) with the regularity condition a·f(n/b) ≤ c·f(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n)). For mergesort, with a = 2, b = 2, and f(n) = O(n), the recurrence falls into the second case, confirming the O(n log n) bound.

One key advantage of the divide-and-conquer paradigm is its inherent parallelizability, as subproblems can often be solved independently across multiple processors, making it suitable for distributed computing environments. Additionally, the recursive decomposition simplifies complex problems into manageable parts, facilitating easier implementation and analysis. However, a notable disadvantage is the overhead from recursive calls, which can lead to increased stack space usage, up to O(n) in the worst case for unbalanced divisions, and potential performance degradation due to function call costs in deep recursion.
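The divide, conquer, and combine steps are easy to see in a short Python sketch of mergesort (illustrative code, not drawn from any particular source):

def merge_sort(a):
    """Sort a list in O(n log n) time by divide-and-conquer."""
    if len(a) <= 1:                      # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])           # divide, then conquer each half
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]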

Greedy Algorithms

Greedy algorithms constitute a design paradigm in which the solution is constructed by repeatedly selecting the locally optimal choice at each step, with the intention of achieving a global optimum. This approach hinges on two fundamental properties: the greedy choice property, asserting that a locally optimal choice can be part of a globally optimal solution, and the optimal substructure property, indicating that the optimal solution incorporates optimal solutions to subproblems. These properties ensure that the algorithm's decisions do not compromise the overall optimality, allowing for efficient construction without revisiting prior choices.

A prominent example is Kruskal's algorithm for computing the minimum spanning tree of an undirected, connected, weighted graph. The algorithm sorts all edges in non-decreasing order of weight and iteratively adds the smallest edge that connects two distinct components in the forest, avoiding cycles through union-find operations. This greedy selection guarantees the minimum total weight, as proven by the matroid structure of graphic forests.

Another illustrative application is Huffman coding, used for lossless data compression. The method constructs a prefix code by building a binary tree where each internal node represents a merge of the two lowest-probability symbols or subtrees, assigning shorter codes to more frequent symbols. This results in an optimal variable-length code that minimizes the expected code length, achieving the entropy bound for uniquely decodable codes.

To establish optimality, proofs for greedy algorithms often employ the exchange argument. This technique demonstrates that if an optimal solution differs from the greedy one, edges or choices can be swapped iteratively, replacing a non-greedy choice with the greedy alternative without increasing the total cost, eventually yielding the greedy solution. Such arguments are particularly effective for problems like minimum spanning trees, where they show equivalence between any optimal solution and the greedy outcome.

Despite their efficiency, greedy algorithms are limited to problems exhibiting the requisite properties; absent these, they may yield suboptimal results. For example, the fractional knapsack problem, where items can be divided, admits a greedy solution by sorting items by value-to-weight ratio and filling the knapsack sequentially, achieving optimality due to the continuous nature of the problem. In contrast, the 0-1 knapsack problem, which prohibits fractions, lacks the greedy choice property, as selecting the highest-ratio item first can miss the global optimum, necessitating dynamic programming instead.
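A compact Python sketch of Kruskal's algorithm with a simple union-find (illustrative; the edge list and names are assumptions made here, and a production version would also use union by rank) shows the greedy edge selection:

def kruskal(num_vertices, edges):
    """Minimum spanning tree by Kruskal's greedy algorithm.
    edges is a list of (weight, u, v) tuples; vertices are 0..num_vertices-1."""
    parent = list(range(num_vertices))

    def find(x):                          # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for weight, u, v in sorted(edges):    # consider edges in non-decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, weight))
            total += weight
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))                  # MST of total weight 1 + 2 + 3 = 6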

Dynamic Programming

Dynamic programming is an algorithmic paradigm that solves optimization problems by breaking them into overlapping subproblems, computing solutions to these subproblems once, and storing them to avoid redundant calculations, thereby achieving efficient polynomial-time solutions for otherwise exponential problems. The method was developed by Richard Bellman in the early 1950s at the RAND Corporation to address multistage decision processes in operations research. It applies to problems with two fundamental properties: optimal substructure, where the optimal solution to the overall problem incorporates optimal solutions to its subproblems, and overlapping subproblems, where the same subproblems recur multiple times in a recursive decomposition. These properties enable dynamic programming to extend divide-and-conquer techniques by caching results, transforming recursive calls into a directed acyclic graph of dependencies.

Bellman's principle of optimality underpins optimal substructure, stating that an optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. Overlapping subproblems ensure that without caching, naive recursion would recompute the same values exponentially many times, but with storage, the computation becomes linear in the number of unique subproblems.

Dynamic programming implementations follow two primary approaches: top-down memoization and bottom-up tabulation. Top-down memoization starts with the original problem and recursively solves subproblems, using a map or array to store and retrieve previously computed results, ensuring each subproblem is solved at most once. Bottom-up tabulation iteratively fills a table from base cases to the full problem, relying on the dependency order to build solutions progressively. Both achieve the same asymptotic efficiency but differ in implementation: memoization naturally handles sparse subproblems, while tabulation uses contiguous storage for dense cases.

A representative example is computing the nth Fibonacci number, defined by the recurrence F_n = F_{n-1} + F_{n-2} for n ≥ 2, with base cases F_0 = 0 and F_1 = 1. In bottom-up tabulation, initialize an array dp of size n + 1 with dp[0] = 0 and dp[1] = 1, then for i = 2 to n set dp[i] = dp[i-1] + dp[i-2]; the result is dp[n], computed in O(n) time and space. This avoids the O(2^n) time of naive recursion by resolving each overlapping subproblem F_k for k < n only once.

The longest common subsequence (LCS) problem, which finds the longest subsequence present in two given sequences, exemplifies dynamic programming for string alignment. Let X = x_1 x_2 ... x_m and Y = y_1 y_2 ... y_n, and define C[i, j] as the LCS length of the first i characters of X and the first j characters of Y. The recurrence is:

    C[i, j] = 0                              if i = 0 or j = 0
    C[i, j] = C[i-1, j-1] + 1                if i, j > 0 and x_i = y_j
    C[i, j] = max(C[i-1, j], C[i, j-1])      if i, j > 0 and x_i ≠ y_j
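Filled in bottom-up, this recurrence yields an O(mn)-time, O(mn)-space table; a short Python sketch (illustrative, with names chosen here rather than taken from the text):

def lcs_length(x, y):
    """Length of the longest common subsequence of strings x and y, in O(mn) time."""
    m, n = len(x), len(y)
    # c[i][j] = LCS length of x[:i] and y[:j]; row 0 and column 0 are the base cases.
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4, e.g. the subsequence "BCBA"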