Heap's algorithm
from Wikipedia
A map of the 24 permutations and the 23 swaps used in Heap's algorithm permuting the four letters A (amber), B (blue), C (cyan) and D (dark red)
Wheel diagram of all permutations of length n = 4 generated by Heap's algorithm, where each permutation is color-coded (1 = blue, 2 = green, 3 = yellow, 4 = red).

Heap's algorithm generates all possible permutations of n objects. It was first proposed by B. R. Heap in 1963.[1] The algorithm minimizes movement: it generates each permutation from the previous one by interchanging a single pair of elements; the other n−2 elements are not disturbed. In a 1977 review of permutation-generating algorithms, Robert Sedgewick concluded that it was at that time the most effective algorithm for generating permutations by computer.[2]

The sequence of permutations of n objects generated by Heap's algorithm is the beginning of the sequence of permutations of n+1 objects. So there is one infinite sequence of permutations generated by Heap's algorithm (sequence A280318 in the OEIS).

Details of the algorithm


For a collection containing n different elements, Heap found a systematic method for choosing at each step a pair of elements to switch in order to produce every possible permutation of these elements exactly once.

Described recursively as a decrease and conquer method, Heap's algorithm operates at each step on the k initial elements of the collection. Initially k = n and thereafter k < n. Each step generates the k! permutations that end with the same n − k final elements. It does this by calling itself once with the kth element unaltered and then k − 1 times with the kth element exchanged for each of the initial k − 1 elements. The recursive calls modify the initial k − 1 elements, so a rule is needed at each iteration to select which of them will be exchanged with the last. Heap's method makes this choice by the parity of the number of elements operated on at this step: if k is even, the final element is iteratively exchanged with each element index, and if k is odd, the final element is always exchanged with the first.

// Output the k! permutations of A in which the first k elements are permuted in all ways.
// To get all permutations of A, use k := length of A.
//
// If k > length of A, this will try to access A out of bounds.
// If k <= 0 there will be no output (empty array has no permutations)
procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else
        // permutations with last element fixed
        permutations(k - 1, A)
        // permutations with last element swapped out
        for i := 0; i < k-1; i += 1 do
            if k is even then
                swap(A[i], A[k-1])
            else
                swap(A[0], A[k-1])
            end if
            permutations(k - 1, A)
        end for
    end if
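
The pseudocode above translates almost line-for-line into Python. This is a sketch, not part of the original article; the name heap_permutations and the visit callback are our choices:

```python
def heap_permutations(k, a, visit):
    """Recursive Heap's algorithm: visit every permutation of a[0:k] in place."""
    if k == 1:
        visit(list(a))                      # snapshot the current permutation
        return
    # permutations with the last element (index k-1) fixed
    heap_permutations(k - 1, a, visit)
    # permutations with the last element swapped out, one swap per block
    for i in range(k - 1):
        if k % 2 == 0:                      # even k: pair index i with the last
            a[i], a[k - 1] = a[k - 1], a[i]
        else:                               # odd k: always pair index 0 with the last
            a[0], a[k - 1] = a[k - 1], a[0]
        heap_permutations(k - 1, a, visit)

results = []
heap_permutations(3, [1, 2, 3], results.append)
# results: [[1, 2, 3], [2, 1, 3], [3, 1, 2], [1, 3, 2], [2, 3, 1], [3, 2, 1]]
```

Each permutation after the first differs from its predecessor by exactly one swap, so n! − 1 swaps are performed in total.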

One can also write the algorithm in a non-recursive format.[3]

procedure permutations(n : integer, A : array of any):
    // c is an encoding of the stack state.
    // c[k] encodes the for-loop counter for when permutations(k + 1, A) is called
    c : array of int

    for i := 0; i < n; i += 1 do
        c[i] := 0
    end for

    output(A)
    
    // i acts similarly to a stack pointer
    i := 1
    while i < n do
        if  c[i] < i then
            if i is even then
                swap(A[0], A[i])
            else
                swap(A[c[i]], A[i])
            end if
            output(A)
            // Swap has occurred ending the while-loop. Simulate the increment of the while-loop counter
            c[i] += 1
            // Simulate recursive call reaching the base case by bringing the pointer to the base case analog in the array
            i := 1
        else
            // Calling permutations(i+1, A) has ended as the while-loop terminated. Reset the state and simulate popping the stack by incrementing the pointer.
            c[i] := 0
            i += 1
        end if
    end while
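
As a sketch (ours, not from the article), the non-recursive pseudocode maps naturally onto a Python generator, with c[i] playing the role of the for-loop counter at level i:

```python
def heap_permutations_iter(a):
    """Iterative Heap's algorithm: yield a snapshot of every permutation of a."""
    n = len(a)
    c = [0] * n                      # c[i] encodes the loop counter for level i
    yield list(a)
    i = 1                            # acts similarly to a stack pointer
    while i < n:
        if c[i] < i:
            if i % 2 == 0:           # even level: swap first and i-th elements
                a[0], a[i] = a[i], a[0]
            else:                    # odd level: swap c[i]-th and i-th elements
                a[c[i]], a[i] = a[i], a[c[i]]
            yield list(a)
            c[i] += 1                # simulate advancing the loop counter
            i = 1                    # simulate recursing down to the base case
        else:
            c[i] = 0                 # loop at this level exhausted: "pop" the frame
            i += 1

print(list(heap_permutations_iter([1, 2, 3])))   # same order as the recursive version
```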

Proof


In this proof, we'll use the implementation below of Heap's algorithm, as it makes the analysis easier and certain patterns are easier to illustrate. While it is not optimal (it does not minimize moves, as described in the section below), the implementation is correct and will produce all permutations.

// Output the k! permutations of A in which the first k elements are permuted in all ways.
// To get all permutations of A, use k := length of A.
//
// If k > length of A, this will try to access A out of bounds.
// If k <= 0 there will be no output (empty array has no permutations)
procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else
        for i := 0; i < k; i += 1 do
            permutations(k - 1, A)
            if k is even then
                swap(A[i], A[k-1])
            else
                swap(A[0], A[k-1])
            end if

        end for
    end if

Claim: If array A has length n, then permutations(n, A) will result in either A being unchanged, if n is odd, or, if n is even, then A is rotated to the right by 1 (last element shifted in front of other elements).

Base: If array A has length 1, then permutations(1, A) will output A and stop, so A is unchanged. Since 1 is odd, this is what was claimed, so the claim is true for arrays of length 1.

Induction: If the claim is true for arrays of length l ≥ 1, then we show that the claim is true for arrays of length l+1 (together with the base case this proves that the claim is true for arrays of all lengths). Since the claim depends on whether l is odd or even, we prove each case separately.

If l is odd, then, by the induction hypothesis, for an array A of length l, permutations(l, A) will not change A, and for the claim to hold for arrays of length l+1 (which is even), we need to show that permutations(l+1, A) rotates A to the right by 1 position. Doing permutations(l+1, A) will first do permutations(l, A) (leaving A unchanged since l is odd) and then in each iteration i of the for-loop, swap the elements in positions i and l (the last position) in A. The first swap puts element l (the last element) in position 0, and element 0 in position l. The next swap puts the element in position l (where the previous iteration put original element 0) in position 1 and element 1 in position l. In the final iteration, the swap puts element l-1 in position l, and the element in position l (where the previous iteration put original element l-2) in position l-1. To illustrate the above, look below for the case n = 4.

1,2,3,4 ... original array
1,2,3,4 ... 1st iteration (permute subarray)
4,2,3,1 ... 1st iteration (swap 1st element into last position)
4,2,3,1 ... 2nd iteration (permute subarray)
4,1,3,2 ... 2nd iteration (swap 2nd element into last position)
4,1,3,2 ... 3rd iteration (permute subarray)
4,1,2,3 ... 3rd iteration (swap 3rd element into last position)
4,1,2,3 ... 4th iteration (permute subarray)
4,1,2,3 ... 4th iteration (swap 4th element into last position)
The altered array is a rotated version of the original

If l is even, then, by the induction hypothesis, for an array A of length l, permutations(l, A) rotates A to the right by 1 position, and for the claim to hold for arrays of length l+1 (which is odd), we need to show that permutations(l+1, A) leaves A unchanged. Doing permutations(l+1, A) will in each iteration i of the for-loop, first do permutations(l, A) (rotating the first l elements of A by 1 position since l is even) and then, swap the elements in positions 0 and l (the last position) in A. Rotating the first l elements and then swapping the first and last elements is equivalent to rotating the entire array. Since there are as many iterations of the loop as there are elements in the array, the entire array is rotated until each element returns to where it started. To illustrate the above, look below for the case n = 5.

1,2,3,4,5 ... original array
4,1,2,3,5 ... 1st iteration (permute subarray, which rotates it)
5,1,2,3,4 ... 1st iteration (swap)
3,5,1,2,4 ... 2nd iteration (permute subarray, which rotates it)
4,5,1,2,3 ... 2nd iteration (swap)
2,4,5,1,3 ... 3rd iteration (permute subarray, which rotates it)
3,4,5,1,2 ... 3rd iteration (swap)
1,3,4,5,2 ... 4th iteration (permute subarray, which rotates it)
2,3,4,5,1 ... 4th iteration (swap)
5,2,3,4,1 ... 5th iteration (permute subarray, which rotates it)
1,2,3,4,5 ... 5th iteration (swap)
The final state of the array is in the same order as the original
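
Both cases of the claim can also be checked mechanically. This sketch (ours) implements the proof's non-minimal variant with output suppressed and compares the final array state against the claimed result:

```python
def permutations_proof(k, a):
    """The proof's variant: k loop iterations, recursing before each swap."""
    if k == 1:
        return                           # base case would output a; omitted here
    for i in range(k):
        permutations_proof(k - 1, a)
        if k % 2 == 0:                   # even k: swap i-th and last
            a[i], a[k - 1] = a[k - 1], a[i]
        else:                            # odd k: swap first and last
            a[0], a[k - 1] = a[k - 1], a[0]

for n in range(1, 8):
    a = list(range(n))
    permutations_proof(n, a)
    if n % 2 == 1:
        assert a == list(range(n))                   # odd n: array unchanged
    else:
        assert a == [n - 1] + list(range(n - 1))     # even n: rotated right by 1
```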

The induction proof for the claim is now complete; it leads to why Heap's Algorithm creates all permutations of array A. Once again we will prove the correctness of Heap's Algorithm by induction.

Basis: Heap's Algorithm trivially permutes an array A of size 1 as outputting A is the one and only permutation of A.

Induction: Assume Heap's Algorithm permutes an array of size i. Using the results from the previous proof, every element of A will be in the "buffer" once when the first i elements are permuted. Because permutations of an array can be made by removing an element x from the array and then tacking x onto each permutation of the altered array, it follows that Heap's Algorithm permutes an array of size i + 1, for the "buffer" in essence holds the removed element, which is tacked onto the permutations of the subarray of size i. Because each iteration of Heap's Algorithm has a different element of A occupying the buffer when the subarray is permuted, every permutation is generated, as each element of A has a chance to be tacked onto the permutations of the array A without the buffer element.

Frequent mis-implementations


It is tempting to simplify the recursive version given above by reducing the instances of recursive calls. For example, as:

procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else

        // Recursively call once for each k
        for i := 0; i < k; i += 1 do
            permutations(k - 1, A)
            // swap choice dependent on parity of k (even or odd)
            if k is even then
                // no-op when i == k-1
                swap(A[i], A[k-1])
            else
                // XXX incorrect additional swap when i==k-1
                swap(A[0], A[k-1]) 
            end if

        end for
    end if

This implementation will succeed in producing all permutations but does not minimize movement. As the recursive call-stacks unwind, it results in additional swaps at each level. Half of these will be no-ops of A[i] and A[k−1] where i == k−1, but when k is odd, it results in additional swaps of the kth with the A[0] element.

n    minimal swaps (n! − 1)    swaps performed    additional swaps
1                 0                     0                  0
2                 1                     1                  0
3                 5                     6                  1
4                23                    27                  4
5               119                   140                 21
6               719                   845                126
7              5039                  5922                883
8             40319                 47383               7064
9            362879                426456              63577

These additional swaps significantly alter the order of the prefix elements.
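
The table can be reproduced by instrumenting both variants with a swap counter. A sketch (ours; self-swaps of an element with itself are treated as no-ops and not counted, matching the table):

```python
def count_minimal(k, a, counter):
    """Correct form: recurse once, then loop k-1 times with swap + recurse."""
    if k == 1:
        return
    count_minimal(k - 1, a, counter)
    for i in range(k - 1):
        j = i if k % 2 == 0 else 0       # parity rule for the swap partner
        a[j], a[k - 1] = a[k - 1], a[j]
        counter[0] += 1
        count_minimal(k - 1, a, counter)

def count_sloppy(k, a, counter):
    """Mis-implementation: loop k times, swapping unconditionally after recursing."""
    if k == 1:
        return
    for i in range(k):
        count_sloppy(k - 1, a, counter)
        j = i if k % 2 == 0 else 0
        a[j], a[k - 1] = a[k - 1], a[j]
        if j != k - 1:                   # self-swaps (j == k-1) are no-ops
            counter[0] += 1

for n in range(1, 7):
    c1, c2 = [0], [0]
    count_minimal(n, list(range(n)), c1)
    count_sloppy(n, list(range(n)), c2)
    print(n, c1[0], c2[0], c2[0] - c1[0])    # n, n!-1, actual, additional
```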

The additional swaps can be avoided either by adding an extra recursive call before the loop and looping k − 1 times (as above) or by looping k times and checking that i is less than k − 1, as in:

procedure permutations(k : integer, A : array of any):
    if k = 1 then
        output(A)
    else

        // Recursively call once for each k
        for i := 0; i < k; i += 1 do
            permutations(k - 1, A)
            // avoid swap when i==k-1
            if (i < k - 1)
                // swap choice dependent on parity of k
                if k is even then
                    swap(A[i], A[k-1])
                else
                    swap(A[0], A[k-1])
                end if
            end if
        end for
    end if

The choice is primarily aesthetic, but the latter results in checking the value of i twice as often.

from Grokipedia
Heap's algorithm is a procedure for generating all possible permutations of a set of n distinct objects through a series of pairwise interchanges, ensuring that each successive permutation differs from the previous one by swapping exactly two elements while leaving the remaining n − 2 elements in place. Developed by B. R. Heap and first published in 1963, the algorithm employs an inductive approach that builds upon permutations of smaller subsets to systematically enumerate the full n! permutations without repetition. The algorithm operates by fixing the nth object in position and generating all (n − 1)! permutations of the first n − 1 objects, then interchanging the nth object with each of the first n − 1 objects in turn to produce distinct permutations for the nth position. This process is controlled using a stack or array of counters that track the number of permutations generated at each level, with the base case handling permutations of 1 or 2 objects directly. An iterative variant exists that avoids deep recursion by using a loop and index array to simulate the recursive calls, making it suitable for implementation on computers with limited stack space. One of the key advantages of Heap's algorithm is its efficiency in terms of element movements, requiring only n! − 1 swaps to produce all permutations, which is optimal for algorithms that generate permutations via single transpositions. In practice, optimized implementations achieve a running time of approximately O(n · n!) due to the processing of each permutation, with practical limits around n ≤ 12 on mid-1960s hardware, though modern systems can handle larger n. Compared to earlier methods like those of Wells or Boothroyd, it reduces overhead by minimizing storage needs and interchange operations, making it particularly useful for applications such as enumerating graph labelings or combinatorial testing where sequential permutation generation is required.

Background

Permutations

A permutation of a set of n distinct objects is a rearrangement of those objects into a linear order, equivalent to a bijective mapping from the set to itself. The collection of all permutations of n objects constitutes the symmetric group S_n, a mathematical structure with exactly n! elements, where n! is the factorial of n, defined as the product of all positive integers from 1 to n. For example, consider the set {A, B, C} with n = 3: the six permutations are ABC, ACB, BAC, BCA, CAB, and CBA. The factorial n! exhibits rapid growth—for instance, 5! = 120 and 10! = 3,628,800—resulting in an exponentially increasing number of permutations that renders complete enumeration computationally intensive even for moderately large n, such as beyond 10 or 12. Permutations are a cornerstone of combinatorics and underpin various applications in computer science, including sorting and testing scenarios.

Permutation Generation

Generating permutations involves systematically producing all possible arrangements of a set of distinct elements, a fundamental task in combinatorics and computer science. Common approaches include generating permutations in lexicographic order, recursive backtracking, and swap-based methods. The lexicographic method, also known as the next-permutation algorithm, produces permutations in ascending dictionary-like sequence by identifying the next valid arrangement from the current one, typically by finding a pivot, swapping it with a suitable successor, and reversing the suffix. This approach, exemplified by algorithms such as Krause's, ensures ordered output but requires multiple exchanges per step, making it suitable for applications needing sequential, ordered enumeration. Recursive backtracking constructs permutations by incrementally building arrangements through depth-first exploration, fixing one element at a time and recursing on the remaining elements until the full permutation is formed. This method naturally decomposes the problem into a tree, where each level represents a position in the permutation and branches correspond to possible choices for that position, effectively forming a "permutation tree" that spans all n! leaves. However, standard implementations often incur redundant computations, such as repeatedly copying arrays during recursion, leading to up to O(n · n!) total operations despite generating exactly n! permutations. Swap-based methods generate permutations by exchanging elements in an existing arrangement, often iteratively or recursively, to transition between arrangements. A notable example is the Johnson-Trotter algorithm, which produces all permutations using exactly n! − 1 adjacent swaps, ensuring each consecutive pair differs by a single transposition of neighboring elements. Naive swap methods, however, can suffer from high swap counts and inefficient element movements, while the desire for minimal adjacent transpositions arises in scenarios requiring smooth transitions, such as in Gray codes for permutations.
These challenges—redundant computations, excessive swaps, and the need for minimal changes—motivate optimized variants like Heap's algorithm, which minimizes the number of swaps while generating all permutations.
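
For contrast with Heap's swap-based approach, the lexicographic next-permutation step described above (find a pivot, swap it with its successor, reverse the suffix) can be sketched as follows; this is a generic illustration of the technique, not code from any cited source:

```python
def next_permutation(a):
    """Advance a in place to its lexicographic successor; False if a was last."""
    # 1. Find the pivot: the rightmost index with a[i] < a[i+1].
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return False                    # a was the last (descending) permutation
    # 2. Swap the pivot with the rightmost element larger than it.
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    # 3. Reverse the suffix so it restarts in ascending order.
    a[i + 1:] = reversed(a[i + 1:])
    return True

a = [1, 2, 3]
order = [list(a)]
while next_permutation(a):
    order.append(list(a))
# order: [[1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1]]
```

Note that a single step may move many elements (the swap plus the suffix reversal), whereas Heap's algorithm moves exactly two per step.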

Historical Development

Heap's algorithm was first proposed by B. R. Heap in his 1963 paper "Permutations by Interchanges," published in The Computer Journal. The algorithm emerged in the context of early computational combinatorics, where there was a growing need for efficient methods to enumerate all permutations of a set of objects, particularly for simulations and combinatorial programs limited by the computational resources of the time, such as generating permutations for up to 12 objects while minimizing storage and processing demands. Heap's work advanced efficient permutation generation methods in the early 1960s, optimizing for interchanges that could involve non-adjacent elements to reduce overall movement, in contrast to adjacent transposition methods like the Steinhaus–Johnson–Trotter algorithm. In a comprehensive 1977 survey of permutation generation methods, Robert Sedgewick evaluated Heap's algorithm alongside numerous others and described it as the most efficient among recursive approaches, particularly praising its ability to minimize the number of swaps required—achieving as few as two stores per permutation in optimized implementations—making it suitable for both high-level procedural languages and low-level assembly. Sedgewick highlighted its simplicity and speed compared to methods like Wells' (1960) and Boothroyd's (1965), as well as its advantages over iterative alternatives such as the Johnson-Trotter algorithm in certain optimized scenarios. Since its introduction, Heap's algorithm has seen no major theoretical updates or revisions, remaining a foundational technique in permutation enumeration. It continues to be referenced in modern combinatorial sequences, such as OEIS A280318, which catalogs the infinite sequence of permutations produced by the algorithm when applied iteratively to increasing set sizes, represented by their reverse colexicographic indices.

Algorithm Description

Recursive Formulation

Heap's algorithm can be formulated recursively through a function that generates all permutations of the first k elements of the array (with the remaining elements fixed). The initial call is with k = n to permute the entire array. The base case is k = 1, where the single permutation of the first element is output. For k > 1, the function first recursively generates all (k − 1)! permutations of the first k − 1 elements. Then, in a loop over i = 0 to k − 2, it swaps two elements based on the parity of k (if k is even, swap positions i and k − 1; if odd, swap positions 0 and k − 1) and recurses again to generate all (k − 1)! permutations of the first k − 1 elements, ensuring each candidate for the kth position is explored systematically. This approach builds permutations by extending smaller prefixes, with swaps ensuring adjacent permutations differ by exactly one transposition. The mechanism guarantees all n! distinct permutations are generated without repetition or omission. To illustrate, consider n = 3 with initial array A = [1, 2, 3] (0-based indexing). The recursion starts at k = 3, and the sequence of permutations generated through recursive calls and swaps is:
  • [1, 2, 3] (initial)
  • [2, 1, 3] (swap positions 0 and 1 at level k = 2)
  • [3, 1, 2] (swap positions 0 and 2 at level k = 3, since k is odd)
  • [1, 3, 2] (swap positions 0 and 1 at level k = 2)
  • [2, 3, 1] (swap positions 0 and 2 at level k = 3)
  • [3, 2, 1] (swap positions 0 and 1 at level k = 2)
This trace demonstrates how the swaps at each depth produce the six distinct permutations in a specific order. The total number of swaps performed is n! − 1, which is the minimum for any method that derives each permutation from the previous one by a single transposition.

Iterative Formulation

The iterative formulation of Heap's algorithm simulates the recursion using an auxiliary counter array c of length n to manage the loop state at each level without a call stack. The algorithm outputs the initial permutation, initializes every entry of c to 0, and sets i = 1. It then loops while i < n: if c[i] < i, it performs a swap—swapping positions 0 and i if i is even, or positions c[i] and i if i is odd—and outputs the current permutation. It then increments c[i] and resets i to 1 to simulate returning to the lowest level. If c[i] ≥ i, level i is complete, so it resets c[i] = 0 and increments i to advance. The process terminates when i = n, having generated all n! permutations. The "turnaround" at c[i] ≥ i backtracks by resetting and incrementing i, ensuring exploration of all branches in the permutation tree with single-swap transitions between consecutive permutations. To illustrate for n = 4 with initial array A = [1, 2, 3, 4] and c = [0, 0, 0, 0], output the initial [1,2,3,4] and set i = 1. Since c[1] = 0 < 1 and i = 1 is odd, swap positions c[1] = 0 and 1 to get [2,1,3,4], output, set c[1] = 1, i = 1. Now c[1] = 1 ≥ 1, so reset c[1] = 0 and set i = 2. Then c[2] = 0 < 2 and i = 2 is even, so swap positions 0 and 2 on [2,1,3,4] to get [3,1,2,4], output, set c[2] = 1, i = 1. Continuing this way systematically generates all 24 permutations via 23 swaps, the next being [1,3,2,4] after further steps. This non-recursive approach avoids deep recursion for larger n, though practical limits remain around n = 10 to 12 due to the O(n · n!) time for processing all permutations.

Correctness and Complexity

Proof of Correctness

The correctness of Heap's algorithm is established by induction on the number of elements n, showing that it generates all n! distinct permutations of the input exactly once. Base cases. For n = 1, the algorithm terminates immediately after outputting the single element as the sole permutation, which is correct. For n = 2, the loop iterates over i = 1 to 2: first for i = 1, swap positions 1 and 2 (yielding the reversed order for output in the base case), then swap back; then for i = 2, no swap occurs (outputting the initial order); this produces both permutations without repetition. Inductive hypothesis. Assume the algorithm correctly generates all k! distinct permutations exactly once for every k < n. Inductive step. Consider n elements. The algorithm executes a loop over i = 1 to n: for each i, it swaps the elements at positions i and n, recursively invokes the procedure on the first n − 1 positions (which, by the inductive hypothesis, generates all (n − 1)! permutations of those positions), and then swaps back to restore the array for the next iteration. After each recursive call on n − 1 elements, the current full array state is output as a permutation of all n elements. For the case i = n, no swap occurs, fixing the original nth element in position n while permuting the first n − 1. For i < n, the swap places the original nth element into position i (and the original ith into position n), so the recursive calls permute the remaining elements across positions 1 to n − 1 with the ith element fixed at position n. Thus, the n blocks (one per i) each produce (n − 1)! permutations in which a distinct element occupies position n, covering all possible arrangements of the remaining elements in the other positions; the total is n × (n − 1)! = n!. Uniqueness. The blocks are disjoint because each fixes a different element in position n (the original elements from positions 1 to n, cycled via the swaps). Within each block, uniqueness follows from the inductive hypothesis.
The swaps are deterministic and reversible (swapping back after recursion), ensuring the array returns to its pre-block state, preventing carryover that could cause duplicates across blocks. In the iterative formulation, the parity-based choice of swap partner maintains this positioning without altering the coverage or introducing overlaps. For illustration with n = 4, the top-level loop executes four blocks, each invoking the n = 3 procedure (which generates 3! = 6 permutations by the inductive hypothesis); this yields 4 × 6 = 24 total permutations, verifiable as all distinct arrangements of four elements (e.g., starting from [0,1,2,3], the blocks fix 0, then 1, then 2, then 3 in the last position across their sub-permutations). The algorithm also achieves minimality in swaps: across all invocations, it performs exactly n! − 1 swaps in total, as each new permutation after the initial one is obtained via a single transposition from the previous, forming a Hamiltonian path on the Cayley graph of the symmetric group generated by transpositions.

Complexity Analysis

Heap's algorithm exhibits a time complexity of O(n · n!), where n is the number of elements, because it generates all n! permutations, and each permutation requires O(n) operations for processing or output, such as printing or swapping elements. The space complexity is O(n) for both recursive and iterative implementations, relying on a single array of size n for the elements and an auxiliary structure like a counter array or recursion stack of depth O(n), without storing the full set of permutations. In terms of swap efficiency, the algorithm performs exactly n! − 1 swaps in total across all permutations, which is minimal for any algorithm that generates each permutation from the previous one by a single transposition. This leads to an amortized count of O(1) swaps per permutation, since the total number of swaps divided by the n! permutations approaches 1 for large n. Due to the factorial growth in the number of permutations, Heap's algorithm is practical for n up to about 12 on modern hardware, as 12! ≈ 479 million permutations can be generated in under a minute when processed sequentially without storage, though larger n lead to rapid growth in running time.
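
The n! − 1 swap count is easy to verify empirically. A sketch (ours) instrumenting the iterative formulation:

```python
import math

def count_heap_swaps(n):
    """Run iterative Heap's algorithm on n elements and count the swaps."""
    a = list(range(n))
    c = [0] * n
    swaps = 0
    i = 1
    while i < n:
        if c[i] < i:
            j = 0 if i % 2 == 0 else c[i]    # parity-based choice of swap partner
            a[j], a[i] = a[i], a[j]
            swaps += 1                        # one swap per new permutation
            c[i] += 1
            i = 1
        else:
            c[i] = 0
            i += 1
    return swaps

for n in range(1, 8):
    assert count_heap_swaps(n) == math.factorial(n) - 1   # exactly n! - 1 swaps
```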

Implementations and Variants

Pseudocode Examples

Heap's algorithm admits both recursive and iterative implementations, each capable of generating all permutations of an array of n distinct elements through a series of targeted swaps. The recursive formulation leverages a divide-and-conquer approach, fixing the last element while permuting the prefix, and is directly derived from the original description by Heap. The iterative version employs a counter array to simulate the recursion stack, ensuring constant space beyond the input array and avoiding stack overflow for large n. Both versions assume 0-based array indexing for modern implementations, though the original presentation used 1-based indexing; the output mechanism can be adapted from printing the array to yielding permutations or invoking a callback for processing.

Recursive Pseudocode

The following pseudocode implements the recursive version, where the function generate is initially called as generate(a, n) for an array a of length n. Each recursive call processes the prefix of length k, and the base case outputs the full permutation when k = 1. The swaps between recursive calls ensure all unique permutations are generated without repetition.

function generate(a, k):
    if k == 1:
        output(a)            // e.g., print or yield the current permutation
        return
    for i = 0 to k-2:        // loop over the prefix indices
        generate(a, k-1)
        if k % 2 == 1:       // odd k: swap first and last of current prefix
            swap(a[0], a[k-1])
        else:                // even k: swap i-th and last of current prefix
            swap(a[i], a[k-1])
    generate(a, k-1)         // final recursive call after the loop

To illustrate execution for n=3 with initial array a = [1, 2, 3], the trace unfolds as follows (outputs shown in comments):
  • Call generate([1,2,3], 3): k=3 (odd), loop i=0: call generate([1,2,3], 2) → outputs [1,2,3], [2,1,3] (from the inner loop for k=2); then swap a[0]↔a[2] on [2,1,3] → [3,1,2]
  • Then i=1: call generate([3,1,2], 2) → outputs [3,1,2], [1,3,2]; swap a[0]↔a[2] on [1,3,2] → [2,3,1]
  • Final call generate([2,3,1], 2) → outputs [2,3,1], [3,2,1]
  • The full sequence of outputs: [1,2,3], [2,1,3], [3,1,2], [1,3,2], [2,3,1], [3,2,1].

Iterative Pseudocode

The iterative formulation, as refined in subsequent analyses, uses a counter array c of length n to track the number of times each position has been "used" in swaps, resetting and advancing the index i to mimic the recursive unfolding. This version generates permutations in the same order as the recursive one and is invoked once for the full array. The parity check (i even or odd) determines the swap partners, ensuring systematic coverage.

function generate(a, n):
    c = array of size n, initialized to 0
    i = 0
    output(a)                    // initial permutation
    while i < n:
        if c[i] < i:
            if i % 2 == 0:       // even i: swap first and i-th
                swap(a[0], a[i])
            else:                // odd i: swap c[i]-th and i-th
                swap(a[c[i]], a[i])
            output(a)
            c[i] += 1
            i = 0                // reset to the start
        else:
            c[i] = 0
            i += 1               // advance

For n=3 with a = [1, 2, 3], the loop trace proceeds as:
  • Initial output [1,2,3]; i=0: c[0] < 0 fails, so c[0] := 0, i := 1
  • i=1: c[1]=0 < 1, i odd, swap a[0]↔a[1] → [2,1,3]; output [2,1,3]; c[1] := 1; i := 0
  • i=0 fails, i := 1; i=1: 1 < 1 fails, c[1] := 0, i := 2
  • i=2: c[2]=0 < 2, i even, swap a[0]↔a[2] on [2,1,3] → [3,1,2]; output [3,1,2]; c[2] := 1; i := 0
  • i=0 fails, i := 1; i=1: 0 < 1, i odd, swap a[0]↔a[1] on [3,1,2] → [1,3,2]; output [1,3,2]; c[1] := 1; i := 0
  • i=0 fails, i := 1; i=1: 1 < 1 fails, c[1] := 0, i := 2
  • i=2: c[2]=1 < 2, i even, swap a[0]↔a[2] on [1,3,2] → [2,3,1]; output [2,3,1]; c[2] := 2; i := 0
  • i=0 fails, i := 1; i=1: 0 < 1, i odd, swap a[0]↔a[1] on [2,3,1] → [3,2,1]; output [3,2,1]; c[1] := 1; i := 0
  • i=0 fails, i := 1; i=1 fails, c[1] := 0, i := 2; i=2: 2 < 2 fails, c[2] := 0, i := 3: loop ends.
  • Full sequence: [1,2,3], [2,1,3], [3,1,2], [1,3,2], [2,3,1], [3,2,1].
In both formulations, the output step can be replaced with a yield statement in generator contexts or a user-defined function to handle each permutation, accommodating languages without built-in printing. The 0-based indexing aligns with common programming conventions, though adaptations to 1-based indexing (e.g., swapping a[1] and a[k]) are straightforward for legacy systems.

Common Implementation Errors

A common error in implementing the recursive formulation of Heap's algorithm arises from incorrect placement of swaps relative to recursive calls, such as performing swaps without proper conditioning on the loop index. This can lead to additional unnecessary swaps while still generating all , but failing to minimize movements as intended. For example, an improper variant might perform extra interchanges between the kth and 0th elements for odd k, resulting in more swaps overall. This manifests for n=9 as generating all 362,880 but with 63,577 additional swaps, totaling 426,456 swaps instead of the optimal 362,879 (). Another frequent mistake involves off-by-one errors in the parity checks for even or odd values of k (the current subproblem size). The algorithm relies on swapping the last element with the first position if k is odd or with the loop index i if k is even to ensure all unique s are produced without repetition; an off-by-one shift, such as checking k % 2 == 1 instead of the correct condition, can cause the swaps to overlap incorrectly, leading to duplicate permutations or missed ones. For instance, this error might produce the identity permutation multiple times while omitting others, violating the algorithm's guarantee of exactly n! distinct outputs. The fix is to use a precise modulo 2 operation for parity (e.g., if (k & 1) for odd) and verify against known small cases like n=3, which should yield exactly 6 s. In the iterative formulation, mishandling the direction or control (often denoted as c[0..n-1], which tracks the cycling position for swaps at each level) is a prevalent issue. This must be incremented correctly after each permutation output and reset to 0 when it reaches the level index +1; errors like off-by-one in the increment or reset conditions can trap the algorithm in infinite loops by failing to advance to deeper levels or by repeatedly cycling the same sub. 
Alternatively, incomplete resets might halt generation prematurely, producing fewer than n! permutations. To mitigate this, initialize the control array to all zeros, use exact bounds when incrementing (e.g., c[i] = (c[i] + 1) % (i + 1)), and test with n = 4, which should produce 24 permutations without looping. Misimplementations of Heap's algorithm often perform roughly double the expected number of swaps compared to the minimal total of n! − 1 across all permutations; this can be detected by instrumenting the code to count swaps and comparing the tally against that expected count. A correct implementation, as in the original formulation, performs exactly one swap per permutation transition, minimizing movement. In summary: place each swap correctly relative to the recursive calls, apply the parity rule strictly, and validate the iterative control-array updates on small inputs such as n = 3 or n = 4 to catch these errors early.
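The swap-counting check described above can be sketched as follows; `heap_permutations_counted` is an illustrative name, and the assertion that the count equals n! − 1 is exactly the instrumentation test the text recommends.

```python
import math

def heap_permutations_counted(seq):
    """Run Heap's algorithm and return (permutations, swap_count).

    A correct implementation performs exactly n! - 1 swaps in total;
    counting swaps is a cheap way to detect the doubled-swap bugs
    that arise from misplaced or unconditioned swaps.
    """
    a = list(seq)
    out = []
    swaps = 0

    def generate(k):
        nonlocal swaps
        if k <= 1:
            out.append(tuple(a))
            return
        for i in range(k - 1):
            generate(k - 1)
            j = i if k % 2 == 0 else 0   # parity rule
            a[j], a[k - 1] = a[k - 1], a[j]
            swaps += 1
        generate(k - 1)

    generate(len(a))
    return out, swaps

# Validate against a small case, as the text advises: n = 4 should give
# 24 distinct permutations produced by exactly 23 swaps.
perms, swaps = heap_permutations_counted(range(4))
assert len(set(perms)) == math.factorial(4) == 24
assert swaps == math.factorial(4) - 1
```

A buggy variant that swaps both before and after the recursive call would fail the final assertion with roughly twice the swap count.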

Comparisons to Other Algorithms

Heap's algorithm and the Steinhaus–Johnson–Trotter (SJT) algorithm both generate all permutations of n elements using exactly n! − 1 swaps, minimizing the change between successive permutations to a single interchange. However, SJT restricts swaps to adjacent elements, tracing a Hamiltonian path in the permutohedron graph, where edges represent adjacent transpositions; this imposes additional computational overhead for tracking directions and mobility. In contrast, Heap's algorithm permits non-adjacent swaps, which simplifies the implementation and often makes it faster in practice despite the lack of an adjacency constraint.

Compared to lexicographic generation methods, such as the algorithm underlying C++'s std::next_permutation, Heap's algorithm requires fewer element moves overall: exactly n! − 1 swaps, versus up to O(n) operations per permutation in lexicographic approaches, which may involve multiple exchanges or array copies to advance to the next ordered permutation. Lexicographic methods produce permutations in sorted order, facilitating applications that need sequential enumeration, but at the cost of higher per-step complexity and no guarantee of minimal changes. Heap's approach, while unordered, excels in environments where swap operations are expensive, such as hardware testing, because every transition is a single fixed swap.

Relative to naive backtracking algorithms, which build permutations by branching on each position and undoing choices, Heap's algorithm is more efficient because it avoids redundant branching and constructs each permutation directly from the prior one via a targeted swap. Backtracking remains straightforward for partial permutations or constrained problems, but it incurs higher cost from repeated partial reconstructions, whereas Heap's recursive structure ensures exhaustive coverage with minimal redundancy.
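The claim that lexicographic generation offers no guarantee of minimal change can be checked directly with Python's itertools, which emits permutations in lexicographic order relative to its input; the helper name `positions_changed` is illustrative.

```python
from itertools import permutations

def positions_changed(p, q):
    """Number of positions at which two permutations differ."""
    return sum(x != y for x, y in zip(p, q))

# itertools.permutations yields in lexicographic order (relative to the
# input), so a single step can change many positions at once:
lex = list(permutations(range(3)))
worst = max(positions_changed(p, q) for p, q in zip(lex, lex[1:]))
# e.g. the step (0, 2, 1) -> (1, 0, 2) changes all three positions,
# whereas Heap's algorithm changes exactly two positions per step.
```

Here `worst` is 3: at least one lexicographic transition rewrites the entire 3-element permutation, while any single-swap scheme changes exactly two positions.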
A key trade-off of Heap's algorithm is its lack of inherent ordering, unlike lexicographic or SJT methods, which may limit its use in applications requiring sorted output; library routines such as Python's itertools.permutations instead generate permutations iteratively in lexicographic order relative to the input, prioritizing speed and memory efficiency over swap minimization. And while Heap's algorithm minimizes total swaps for sequential generation, its state-dependent transitions, in which each permutation is derived from the previous one, hinder parallelization compared to schemes that can partition the output among independent workers.