Heap's algorithm

Heap's algorithm generates all possible permutations of n objects. It was first proposed by B. R. Heap in 1963.[1] The algorithm minimizes movement: it generates each permutation from the previous one by interchanging a single pair of elements; the other n−2 elements are not disturbed. In a 1977 review of permutation-generating algorithms, Robert Sedgewick concluded that it was at that time the most effective algorithm for generating permutations by computer.[2]
The sequence of permutations of n objects generated by Heap's algorithm is the beginning of the sequence of permutations of n+1 objects. So there is one infinite sequence of permutations generated by Heap's algorithm (sequence A280318 in the OEIS).
Details of the algorithm
For a collection containing n different elements, Heap found a systematic method for choosing at each step a pair of elements to switch in order to produce every possible permutation of these elements exactly once.
Described recursively as a decrease and conquer method, Heap's algorithm operates at each step on the k initial elements of the collection. Initially k = n and thereafter k < n. Each step generates the k! permutations that end with the same n − k final elements. It does this by calling itself once with the kth element unaltered and then k − 1 more times with the kth element exchanged for each of the initial k − 1 elements. The recursive calls modify the initial k − 1 elements, so a rule is needed at each iteration to select which of them will be exchanged with the last. Heap's method says that this choice can be made by the parity of k, the number of elements operated on at this step. If k is even, then the final element is iteratively exchanged with each element index i. If k is odd, the final element is always exchanged with the first.
// Output the k! permutations of A in which the first k elements are permuted in all ways.
// To get all permutations of A, use k := length of A.
//
// If k > length of A, this will try to access A out of bounds.
// If k <= 0 there is no output (an empty array has no permutations).
procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else
        // permutations with the last element fixed
        permutations(k - 1, A)
        // permutations with the last element swapped out
        for i := 0; i < k-1; i += 1 do
            if k is even then
                swap(A[i], A[k-1])
            else
                swap(A[0], A[k-1])
            end if
            permutations(k - 1, A)
        end for
    end if
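The pseudocode above translates directly into a runnable form. The following Python sketch (the helper name and the collect-into-a-list convention are illustrative choices, not part of the original) permutes the array in place and records each generated permutation:

```python
def heap_permutations(k, a, out):
    """Append to `out` every permutation of a[0:k]; `a` is modified in place."""
    if k == 1:
        out.append(list(a))
    else:
        # permutations with the last element fixed
        heap_permutations(k - 1, a, out)
        # permutations with the last element swapped out
        for i in range(k - 1):
            if k % 2 == 0:
                a[i], a[k - 1] = a[k - 1], a[i]   # even k: swap i-th with last
            else:
                a[0], a[k - 1] = a[k - 1], a[0]   # odd k: swap first with last
            heap_permutations(k - 1, a, out)

perms = []
heap_permutations(3, [1, 2, 3], perms)
print(perms)
# [[1, 2, 3], [2, 1, 3], [3, 1, 2], [1, 3, 2], [2, 3, 1], [3, 2, 1]]
```

Note that each permutation differs from its predecessor by a single transposition, as the algorithm intends.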
One can also write the algorithm in a non-recursive format.[3]
procedure permutations(n : integer, A : array of any):
    // c is an encoding of the stack state.
    // c[k] encodes the for-loop counter for when permutations(k + 1, A) is called
    c : array of int
    for i := 0; i < n; i += 1 do
        c[i] := 0
    end for
    output(A)
    // i acts similarly to a stack pointer
    i := 1
    while i < n do
        if c[i] < i then
            if i is even then
                swap(A[0], A[i])
            else
                swap(A[c[i]], A[i])
            end if
            output(A)
            // A swap has occurred, ending the simulated for-loop iteration.
            // Simulate the increment of the for-loop counter.
            c[i] += 1
            // Simulate the recursive call reaching the base case by bringing the
            // pointer to the base-case analog in the array.
            i := 1
        else
            // The simulated call permutations(i+1, A) has ended as its for-loop terminated.
            // Reset the state and simulate popping the stack by incrementing the pointer.
            c[i] := 0
            i += 1
        end if
    end while
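The non-recursive pseudocode maps naturally onto a Python generator. In this sketch (the generator convention is an adaptation; the control flow is unchanged), the level index i plays the role of k − 1, so the parity test is inverted relative to the recursive version:

```python
def heap_permutations_iter(a):
    """Yield every permutation of `a` (modified in place), non-recursively."""
    n = len(a)
    c = [0] * n              # c[k] encodes the for-loop counter of the simulated
                             # call permutations(k + 1, A)
    yield list(a)
    i = 1                    # acts similarly to a stack pointer
    while i < n:
        if c[i] < i:
            if i % 2 == 0:   # i even means the level operates on i + 1 (odd) elements
                a[0], a[i] = a[i], a[0]
            else:
                a[c[i]], a[i] = a[i], a[c[i]]
            yield list(a)
            c[i] += 1
            i = 1            # simulate descending back to the base case
        else:
            c[i] = 0         # reset this level and simulate popping the stack
            i += 1

for p in heap_permutations_iter([1, 2, 3]):
    print(p)                 # same six permutations, in the same order as the recursive version
```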
Proof
In this proof, we will use the implementation below as Heap's algorithm, as it makes the analysis easier and certain patterns can be easily illustrated. While it is not optimal (it does not minimize moves, which is described in the section below), the implementation is correct and will produce all permutations.
// Output the k! permutations of A in which the first k elements are permuted in all ways.
// To get all permutations of A, use k := length of A.
//
// If k > length of A, this will try to access A out of bounds.
// If k <= 0 there is no output (an empty array has no permutations).
procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else
        for i := 0; i < k; i += 1 do
            permutations(k - 1, A)
            if k is even then
                swap(A[i], A[k-1])
            else
                swap(A[0], A[k-1])
            end if
        end for
    end if
Claim: If array A has length n, then permutations(n, A) will result in either A being unchanged, if n is odd, or, if n is even, then A is rotated to the right by 1 (last element shifted in front of other elements).
Base: If array A has length 1, then permutations(1, A) will output A and stop, so A is unchanged. Since 1 is odd, this is what was claimed, so the claim is true for arrays of length 1.
Induction: If the claim is true for arrays of length l ≥ 1, then we show that the claim is true for arrays of length l+1 (together with the base case this proves that the claim is true for arrays of all lengths). Since the claim depends on whether l is odd or even, we prove each case separately.
If l is odd, then, by the induction hypothesis, for an array A of length l, permutations(l, A) will not change A, and for the claim to hold for arrays of length l+1 (which is even), we need to show that permutations(l+1, A) rotates A to the right by 1 position. Doing permutations(l+1, A) will first do permutations(l, A) (leaving A unchanged since l is odd) and then in each iteration i of the for-loop, swap the elements in positions i and l (the last position) in A. The first swap puts element l (the last element) in position 0, and element 0 in position l. The next swap puts the element in position l (where the previous iteration put original element 0) in position 1 and element 1 in position l. In the final iteration, the swap puts element l-1 in position l, and the element in position l (where the previous iteration put original element l-2) in position l-1. To illustrate the above, look below for the case n = 4.
1,2,3,4 ... original array
1,2,3,4 ... 1st iteration (permute subarray)
4,2,3,1 ... 1st iteration (swap 1st element into last position)
4,2,3,1 ... 2nd iteration (permute subarray)
4,1,3,2 ... 2nd iteration (swap 2nd element into last position)
4,1,3,2 ... 3rd iteration (permute subarray)
4,1,2,3 ... 3rd iteration (swap 3rd element into last position)
4,1,2,3 ... 4th iteration (permute subarray)
4,1,2,3 ... 4th iteration (swap 4th element into last position)
The altered array is a rotated version of the original.
If l is even, then, by the induction hypothesis, for an array A of length l, permutations(l, A) rotates A to the right by 1 position, and for the claim to hold for arrays of length l+1 (which is odd), we need to show that permutations(l+1, A) leaves A unchanged. Doing permutations(l+1, A) will in each iteration i of the for-loop, first do permutations(l, A) (rotating the first l elements of A by 1 position since l is even) and then, swap the elements in positions 0 and l (the last position) in A. Rotating the first l elements and then swapping the first and last elements is equivalent to rotating the entire array. Since there are as many iterations of the loop as there are elements in the array, the entire array is rotated until each element returns to where it started. To illustrate the above, look below for the case n = 5.
1,2,3,4,5 ... original array
4,1,2,3,5 ... 1st iteration (permute subarray, which rotates it)
5,1,2,3,4 ... 1st iteration (swap)
3,5,1,2,4 ... 2nd iteration (permute subarray, which rotates it)
4,5,1,2,3 ... 2nd iteration (swap)
2,4,5,1,3 ... 3rd iteration (permute subarray, which rotates it)
3,4,5,1,2 ... 3rd iteration (swap)
1,3,4,5,2 ... 4th iteration (permute subarray, which rotates it)
2,3,4,5,1 ... 4th iteration (swap)
5,2,3,4,1 ... 5th iteration (permute subarray, which rotates it)
1,2,3,4,5 ... 5th iteration (swap)
The final state of the array is in the same order as the original.
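Both cases of the claim can be checked mechanically. The Python sketch below (an illustration added here, not part of the original proof) runs the proof's variant of the algorithm, with k recursive calls per level, and compares the final array state against the claim for small n:

```python
def permute(k, a):
    """The proof's variant of Heap's algorithm: k iterations, swap after each recursion."""
    if k == 1:
        return
    for i in range(k):
        permute(k - 1, a)
        if k % 2 == 0:
            a[i], a[k - 1] = a[k - 1], a[i]
        else:
            a[0], a[k - 1] = a[k - 1], a[0]

# The claim: odd n leaves the array unchanged; even n rotates it right by one.
for n in range(1, 8):
    a = list(range(n))
    permute(n, a)
    if n % 2 == 1:
        assert a == list(range(n))                  # odd n: unchanged
    else:
        assert a == [n - 1] + list(range(n - 1))    # even n: last element moved to front

a = [1, 2, 3, 4]
permute(4, a)
print(a)  # [4, 1, 2, 3] -- the rotation shown in the n = 4 illustration above
```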
The induction proof for the claim is now complete, which will now lead to why Heap's Algorithm creates all permutations of array A. Once again we will prove by induction the correctness of Heap's Algorithm.
Basis: Heap's Algorithm trivially permutes an array A of size 1 as outputting A is the one and only permutation of A.
Induction: Assume Heap's Algorithm permutes an array of size i. Using the results from the previous proof, every element of A will be in the "buffer" once when the first i elements are permuted. Because permutations of an array can be made by removing an element x from the array and then tacking x onto each permutation of the altered array, it follows that Heap's Algorithm permutes an array of size i + 1, for the "buffer" in essence holds the removed element, tacked onto the permutations of the subarray of size i. Because each iteration of Heap's Algorithm has a different element of A occupying the buffer when the subarray is permuted, every permutation is generated, as each element of A has a chance to be tacked onto the permutations of the array A without the buffer element.
Frequent mis-implementations
It is tempting to simplify the recursive version given above by reducing the instances of recursive calls. For example, as:
procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else
        // Recursively call once for each k
        for i := 0; i < k; i += 1 do
            permutations(k - 1, A)
            // swap choice dependent on parity of k (even or odd)
            if k is even then
                // no-op when i == k-1
                swap(A[i], A[k-1])
            else
                // XXX incorrect additional swap when i == k-1
                swap(A[0], A[k-1])
            end if
        end for
    end if
This implementation will succeed in producing all permutations but does not minimize movement. As the recursive call-stacks unwind, it results in additional swaps at each level. Half of these are no-ops of A[i] and A[k-1] where i == k-1, but when k is odd, the result is an additional swap of the last element with the first.
| n | minimal swaps (n! − 1) | actual swaps | additional swaps |
|---|---|---|---|
| 1 | 0 | 0 | 0 |
| 2 | 1 | 1 | 0 |
| 3 | 5 | 6 | 1 |
| 4 | 23 | 27 | 4 |
| 5 | 119 | 140 | 21 |
| 6 | 719 | 845 | 126 |
| 7 | 5039 | 5922 | 883 |
| 8 | 40319 | 47383 | 7064 |
| 9 | 362879 | 426456 | 63577 |
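The swap counts in the table can be reproduced by instrumenting the flawed variant. In this Python sketch (the counter-threading convention is illustrative), only swaps that actually move elements are counted, since the i == k−1 swap with even k exchanges A[k-1] with itself:

```python
from math import factorial

def count_swaps(k, a, counter):
    """Simplified (non-minimal) variant: a swap follows every recursive call."""
    if k == 1:
        return
    for i in range(k):
        count_swaps(k - 1, a, counter)
        j = i if k % 2 == 0 else 0
        if j != k - 1:          # skip the no-op swap (i == k-1 with even k)
            counter[0] += 1
        a[j], a[k - 1] = a[k - 1], a[j]

for n in (3, 4, 5):
    counter = [0]
    count_swaps(n, list(range(n)), counter)
    minimal = factorial(n) - 1
    print(n, minimal, counter[0], counter[0] - minimal)
# 3 5 6 1
# 4 23 27 4
# 5 119 140 21
```

The printed rows match the table above: the variant still emits n! permutations, but at the cost of the extra swaps in the last column.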
These additional swaps significantly alter the order of the prefix elements.
The additional swaps can be avoided by either adding an additional recursive call before the loop and looping k − 1 times (as above), or by looping k times and checking that i is less than k − 1, as in:
procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else
        // Recursively call once for each k
        for i := 0; i < k; i += 1 do
            permutations(k - 1, A)
            // avoid swap when i == k-1
            if i < k - 1 then
                // swap choice dependent on parity of k
                if k is even then
                    swap(A[i], A[k-1])
                else
                    swap(A[0], A[k-1])
                end if
            end if
        end for
    end if
The choice is primarily aesthetic, but the latter results in checking the value of i twice as often.
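The two corrected forms are behaviorally identical, which can be seen by unrolling the loop of the second: recurse, swap, recurse, swap, …, recurse, with no trailing swap. A quick Python check (function names are illustrative):

```python
def variant_a(k, a, out):
    """Extra recursive call before a loop of k-1 iterations."""
    if k == 1:
        out.append(tuple(a))
        return
    variant_a(k - 1, a, out)
    for i in range(k - 1):
        j = i if k % 2 == 0 else 0
        a[j], a[k - 1] = a[k - 1], a[j]
        variant_a(k - 1, a, out)

def variant_b(k, a, out):
    """Loop of k iterations, guarding the final swap with i < k-1."""
    if k == 1:
        out.append(tuple(a))
        return
    for i in range(k):
        variant_b(k - 1, a, out)
        if i < k - 1:
            j = i if k % 2 == 0 else 0
            a[j], a[k - 1] = a[k - 1], a[j]

out_a, out_b = [], []
variant_a(4, list(range(4)), out_a)
variant_b(4, list(range(4)), out_b)
assert out_a == out_b          # identical sequence of 24 permutations
```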
References
[edit]- ^ Heap, B. R. (1963). "Permutations by Interchanges". The Computer Journal. 6 (3): 293–4. doi:10.1093/comjnl/6.3.293.
- ^ Sedgewick, R. (1977). "Permutation Generation Methods". ACM Computing Surveys. 9 (2): 137–164. doi:10.1145/356689.356692. S2CID 12139332.
- ^ Sedgewick, Robert (4 June 2020). "a talk on Permutation Generation Algorithms" (PDF).