Selection sort

from Wikipedia
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) comparisons, O(n) swaps
Best-case performance: O(n²) comparisons, O(1) swaps
Average performance: O(n²) comparisons, O(n) swaps
Worst-case space complexity: O(1) auxiliary
Optimal: No

In computer science, selection sort is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and it generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.

The algorithm divides the input list into two parts: a sorted sublist of items which is built up from left to right at the front (left) of the list and a sublist of the remaining unsorted items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right.

The time efficiency of selection sort is quadratic, so there are a number of sorting techniques which have better time complexity than selection sort.

Example


Here is an example of this sort algorithm sorting five elements:

Sorted sublist          Unsorted sublist          Least element in unsorted list
()                      (12, 25, 64, 11, 22)      11
(11)                    (25, 64, 12, 22)          12
(11, 12)                (64, 25, 22)              22
(11, 12, 22)            (25, 64)                  25
(11, 12, 22, 25)        (64)                      64
(11, 12, 22, 25, 64)    ()
(Nothing appears changed on these last two lines because the last two numbers were already in order.)

Selection sort animation: red is the current minimum, yellow the sorted sublist, blue the current item.

Selection sort can also be used on list structures that make add and remove efficient, such as a linked list. In this case it is more common to remove the minimum element from the remainder of the list, and then insert it at the end of the values sorted so far. For example:

arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4]
// and place it at beginning
<11> 25 12 22 64

// Find the minimum element in arr[1...4]
// and place it at beginning of arr[1...4]
11 <12> 25 22 64

// Find the minimum element in arr[2...4]
// and place it at beginning of arr[2...4]
11 12 <22> 25 64

// Find the minimum element in arr[3...4]
// and place it at beginning of arr[3...4]
11 12 22 <25> 64 

Implementations


Below is an implementation in C.

/* Sorts a[0] to a[aLength-1] in ascending order. */
void selection_sort(int a[], int aLength)
{
    int i, j;

    /* advance the position through the entire array */
    /*   (the loop stops before the last element because a single
         remaining element is already the minimum of its sublist) */
    for (i = 0; i < aLength - 1; i++)
    {
        /* find the min element in the unsorted a[i .. aLength-1] */

        /* assume the min is the first element */
        int jMin = i;
        /* test against elements after i to find the smallest */
        for (j = i + 1; j < aLength; j++)
        {
            /* if this element is less, then it is the new minimum */
            if (a[j] < a[jMin])
            {
                /* found new minimum; remember its index */
                jMin = j;
            }
        }

        /* swap the minimum into place, unless it is already there */
        if (jMin != i)
        {
            int temp = a[i];
            a[i] = a[jMin];
            a[jMin] = temp;
        }
    }
}

Complexity


Selection sort is not difficult to analyze compared to other sorting algorithms, since none of the loops depend on the data in the array. Selecting the minimum requires scanning n elements (taking n - 1 comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining n - 1 elements (taking n - 2 comparisons) and so on. Therefore, the total number of comparisons is

(n - 1) + (n - 2) + ... + 1

By arithmetic progression,

(n - 1) + (n - 2) + ... + 1 = n(n - 1)/2

which is of complexity O(n²) in terms of number of comparisons.

Comparison to other sorting algorithms


Among quadratic sorting algorithms (sorting algorithms with a simple average-case of Θ(n²)), selection sort almost always outperforms bubble sort and gnome sort. Insertion sort is very similar in that after the kth iteration, the first k + 1 elements in the array are in sorted order. Insertion sort's advantage is that it only scans as many elements as it needs in order to place the (k + 1)st element, while selection sort must scan all remaining elements to find the (k + 1)st element.

Simple calculation shows that insertion sort will therefore usually perform about half as many comparisons as selection sort, although it can perform just as many or far fewer depending on the order the array was in prior to sorting. It can be seen as an advantage for some real-time applications that selection sort will perform identically regardless of the order of the array, while insertion sort's running time can vary considerably. However, this is more often an advantage for insertion sort in that it runs much more efficiently if the array is already sorted or "close to sorted."

While selection sort is preferable to insertion sort in terms of number of writes (n - 1 swaps versus up to n(n - 1)/2 swaps, with each swap being two writes), this is roughly twice the theoretical minimum achieved by cycle sort, which performs at most n writes. This can be important if writes are significantly more expensive than reads, such as with EEPROM or Flash memory, where every write lessens the lifespan of the memory.

Selection sort can be implemented without unpredictable branches for the benefit of CPU branch predictors, by finding the location of the minimum with branch-free code and then performing the swap unconditionally.
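
A minimal sketch of this idea in C (an illustrative implementation, not a canonical one): the comparison result, 0 or 1, updates the index of the minimum arithmetically, and the swap at the end of each pass runs unconditionally.

void selection_sort_branchfree(int a[], int n)
{
    int i, j;
    for (i = 0; i < n - 1; i++) {
        int jMin = i;
        for (j = i + 1; j < n; j++) {
            /* 'less' is 0 or 1; the index update needs no branch */
            int less = a[j] < a[jMin];
            jMin += less * (j - jMin);   /* jMin = less ? j : jMin */
        }
        /* unconditional swap; harmless when jMin == i */
        int temp = a[i];
        a[i] = a[jMin];
        a[jMin] = temp;
    }
}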

Finally, selection sort is greatly outperformed on larger arrays by divide-and-conquer algorithms such as mergesort. However, insertion sort or selection sort are both typically faster for small arrays (i.e. fewer than 10–20 elements). A useful optimization in practice for the recursive algorithms is to switch to insertion sort or selection sort for "small enough" sublists.

Variants


Heapsort has been described as "nothing but an implementation of selection sort using the right data structure."[1] It greatly improves the basic algorithm by using an implicit heap data structure to find and remove each lowest element in O(log n) time, instead of normal selection sort's O(n) inner loop, reducing the total running time to O(n log n).

A bidirectional variant of selection sort (called double selection sort or sometimes cocktail sort due to its similarity to cocktail shaker sort) finds both the minimum and maximum values in the list in every pass. This requires three comparisons per two items (a pair of elements is compared, then the greater is compared to the maximum and the lesser is compared to the minimum) rather than regular selection sort's one comparison per item, but requires only half as many passes, a net 25% savings.

Selection sort can be implemented as a stable sort if, rather than swapping in step 2, the minimum value is inserted into the first position and the intervening values shifted up. However, this modification either requires a data structure that supports efficient insertions or deletions, such as a linked list, or it leads to performing O(n²) writes.
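
A minimal C sketch of this stable variant (the function name is assumed): the minimum is rotated into place by shifting the intervening elements, rather than swapped, so equal keys keep their relative order at the cost of O(n²) writes.

void stable_selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int jMin = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[jMin])
                jMin = j;

        /* shift a[i..jMin-1] right by one, then drop the minimum at i */
        int min = a[jMin];
        for (int k = jMin; k > i; k--)
            a[k] = a[k - 1];
        a[i] = min;
    }
}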

In the bingo sort variant, items are sorted by repeatedly looking through the remaining items to find the greatest value and moving all items with that value to their final location.[2] Like counting sort, this is an efficient variant if there are many duplicate values: selection sort does one pass through the remaining items for each item moved, while Bingo sort does one pass for each value. After an initial pass to find the greatest value, subsequent passes move every item with that value to its final location while finding the next value as in the following pseudocode (arrays are zero-based and the for-loop includes both the top and bottom limits, as in Pascal):

bingo(array A)

{ This procedure sorts in ascending order by
  repeatedly moving maximal items to the end. }
begin
    last := length(A) - 1;

    { The first iteration is written to look very similar to
      the subsequent ones, but without swaps. }
    nextMax := A[last];
    for i := last - 1 downto 0 do
        if A[i] > nextMax then
            nextMax := A[i];
    while (last > 0) and (A[last] = nextMax) do
        last := last - 1;

    while last > 0 do begin
        { Each main loop searches for the new nextMax while
          swapping items equal to prevMax into place. }
        prevMax := nextMax;
        nextMax := A[last];
        for i := last - 1 downto 0 do
             if A[i] > nextMax then
                 if A[i] <> prevMax then
                     nextMax := A[i]
                 else begin
                     swap(A[i], A[last]);
                     last := last - 1;
                 end;
        while (last > 0) and (A[last] = nextMax) do
            last := last - 1;
    end;
end;

Thus, if on average there are more than two items with the same value, bingo sort can be expected to be faster because it executes the inner loop fewer times than selection sort.

from Grokipedia
Selection sort is a simple, in-place comparison-based sorting algorithm that builds a sorted array (or list) one element at a time by repeatedly selecting the smallest (or largest, for descending order) element from the unsorted portion of the array and placing it at the beginning of the sorted portion. This process divides the input into two regions: a growing sorted subarray and a shrinking unsorted subarray, until the entire array is sorted. The algorithm operates through nested loops: an outer loop iterates over each position in the array to establish the boundary between sorted and unsorted regions, while an inner loop scans the unsorted region to find the index of the minimum element, which is then swapped with the first element of the unsorted region. For an array of length n, this results in exactly n-1 passes, with the i-th pass performing up to n-i comparisons. Here is a pseudocode representation:

procedure selectionSort(array A of size n)
    for i from 0 to n-2 do
        minIndex = i
        for j from i+1 to n-1 do
            if A[j] < A[minIndex] then
                minIndex = j
        swap A[i] with A[minIndex]


This implementation guarantees at most n-1 swaps, regardless of the input, which is advantageous when element swaps are costly (e.g., for large data structures). However, selection sort is not stable, meaning it does not preserve the relative order of elements with equal keys. In terms of performance, selection sort performs Θ(n²) comparisons and O(n) swaps in the worst, average, and best cases, resulting in an overall time complexity of Θ(n²), making it quadratic and unsuitable for large inputs compared to more efficient algorithms like quicksort or mergesort. Its auxiliary space complexity is O(1), as it requires no additional storage beyond the input array. Despite its inefficiency, selection sort is valued in educational contexts for illustrating fundamental sorting principles and is occasionally used in scenarios where simplicity and low swap counts outweigh speed.

Overview

Definition and Principle

A sorting algorithm is a procedure that takes a collection of elements and rearranges them into a specific order, such as non-decreasing, producing a permutation of the input that satisfies the desired ordering relation. These algorithms typically operate on data structures like arrays or lists and rely on comparisons or other operations to achieve the sorted result. Selection sort is an in-place comparison-based sorting algorithm that divides the input list into a sorted sublist, initially empty, and an unsorted sublist, initially the entire list. It builds the sorted sublist incrementally by repeatedly identifying the minimum element from the unsorted portion and placing it at the boundary between the sorted and unsorted parts. The algorithm assumes a total order on the elements, meaning a binary relation that is reflexive, antisymmetric, transitive, and total, allowing consistent comparisons between any pair of elements. The core principle of selection sort involves, for each position i from the start of the list up to the second-to-last element, scanning the unsorted sublist to locate the smallest element and swapping it with the element at position i to extend the sorted sublist. This process ensures that after each iteration, the prefix of the list up to i is sorted, and the remaining suffix contains the unsorted elements. Key properties of selection sort include its in-place nature, requiring only a constant amount of extra memory beyond the input array, and its reliance on comparisons without additional data structures. It is not stable, as swaps can alter the relative order of equal elements. While straightforward to implement and educational, selection sort exhibits quadratic time complexity in all cases, making it inefficient for large datasets compared to more advanced algorithms.

Historical Development

Selection sort traces its origins to the early days of electronic computing, where simple selection-based methods were developed to organize data efficiently on limited hardware. Although no single inventor is attributed, the algorithm's principles resemble manual sorting techniques employed in punch-card data processing systems of the 1920s and 1930s, which involved iteratively selecting and separating cards based on specific criteria. Early written treatments of the method appeared amid the rapid growth of computing in the mid-20th century. Donald E. Knuth provided a comprehensive analysis in The Art of Computer Programming, Volume 3: Sorting and Searching (1973), dedicating a section to "Sorting by Selection" and emphasizing its role as an accessible example for understanding comparison-based sorting. Knuth's treatment, grounded in historical context and efficiency analysis, solidified its place in algorithms textbooks, where it was presented alongside insertion and exchange sorts for instructional clarity. The core algorithm has remained largely static since its formalization, with evolution limited to pedagogical refinements rather than major redesigns. In the 1970s and 1980s, minor optimizations—such as tournament-based selection to reduce comparisons or bidirectional scanning—were explored in research papers, but these variants added complexity or overhead of their own and did not gain widespread adoption beyond academic interest. Instead, selection sort's enduring legacy lies in its simplicity, making it a foundational topic in introductory computer science education worldwide since the 1970s, where it illustrates key concepts like iterative selection without the complexities of more advanced algorithms.

Algorithm Mechanics

Step-by-Step Process

Selection sort operates by repeatedly selecting the minimum element from the unsorted portion of the array and placing it in its correct position within the growing sorted portion, building the sorted array incrementally from the beginning. The process begins with an array of n elements, where the initial sorted portion is empty (for n > 0), and the unsorted portion is the entire array; this sorted segment expands by one element per iteration until it encompasses the entire array. The core iteration proceeds as follows: for each index i ranging from 0 to n-2, the algorithm scans the subarray from position i to n-1, comparing elements to identify the index min_idx of the smallest element in that subarray. Upon locating the minimum, if min_idx differs from i, the algorithm swaps the element at position i with the element at min_idx, thereby fixing the next smallest element in its sorted position and extending the sorted portion by one. This iteration repeats n-1 times, after which the array is fully sorted in ascending order, as the last element is automatically in place once all preceding minima have been positioned. For edge cases, an empty array (n=0) requires no operations and is considered sorted by default, while a single-element array (n=1) is already sorted and triggers no iterations or swaps. Additionally, input in descending order demands no modifications to the algorithm, as it consistently selects the overall minimum each time, performing the same scans and swaps regardless of initial arrangement.

Pseudocode Representation

The standard pseudocode for the selection sort algorithm, as presented in foundational algorithms texts, is a representation that highlights its iterative process of identifying and placing the minimum element in the unsorted portion of the array.

procedure selection_sort(A: array of integers)
    n = length(A)
    for i = 0 to n-2 do
        min_idx = i
        for j = i+1 to n-1 do
            if A[j] < A[min_idx] then
                min_idx = j
            end if
        end for
        if min_idx != i then
            swap A[i] and A[min_idx]
        end if
    end for
end procedure


This pseudocode operates on an array A of length n. The outer loop, iterating from i = 0 to n-2, progressively builds a sorted prefix of the array by fixing the position of the next smallest element at index i. Within each iteration of the outer loop, the inner loop scans from j = i+1 to n-1 to locate the index min_idx of the minimum value in the unsorted subarray starting at i, updating min_idx whenever a smaller element is found. Finally, a conditional swap exchanges the elements at positions i and min_idx only if they differ, thereby avoiding redundant operations when the minimum is already in place. The pseudocode can be easily adapted for sorting in descending order by reversing the comparison in the inner loop from A[j] < A[min_idx] to A[j] > A[min_idx], which selects the maximum element instead of the minimum for placement at the beginning of the unsorted subarray.

Practical Example

Numerical Illustration

To illustrate the selection sort algorithm, consider the input array [64, 25, 12, 22, 11]. In the first pass, the minimum value 11 is found at index 4 and swapped with the element at index 0 (64), resulting in [11, 25, 12, 22, 64]. This fixes the smallest element in position, growing the sorted prefix to length 1 while shrinking the unsorted portion to the remaining 4 elements. In the second pass, over indices 1 to 4, the minimum value 12 is at index 2 and swapped with the element at index 1 (25), yielding [11, 12, 25, 22, 64]. The sorted prefix now extends to length 2. In the third pass, over indices 2 to 4, the minimum value 22 is at index 3 and swapped with the element at index 2 (25), producing [11, 12, 22, 25, 64]. The sorted prefix grows to length 3. In the fourth pass, over indices 3 to 4, the minimum value 25 is at index 3, requiring no swap, so the array remains [11, 12, 22, 25, 64]. With only one element left, the process completes, leaving the fully sorted array [11, 12, 22, 25, 64]. For this array of n = 5, the algorithm performs 4 passes (one fewer than n), involving a total of 10 comparisons (summing 4 + 3 + 2 + 1) and 3 swaps (in the first three passes). This numerical tracing demonstrates how selection sort progressively builds the sorted prefix from the beginning, selecting and placing the next smallest element each time.
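
As a concrete check of this trace, the following small C program (a hypothetical harness with the sort inlined) prints the array after each pass; its output matches the states listed above.

#include <stdio.h>

int main(void)
{
    int a[] = {64, 25, 12, 22, 11};
    int n = 5;

    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min_idx])
                min_idx = j;
        if (min_idx != i) {
            int temp = a[i];
            a[i] = a[min_idx];
            a[min_idx] = temp;
        }
        /* show the array state at the end of each pass */
        printf("after pass %d:", i + 1);
        for (int k = 0; k < n; k++)
            printf(" %d", a[k]);
        printf("\n");
    }
    return 0;
}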

Visual Step-by-Step

To visualize the selection sort process, consider an initial array represented as a horizontal sequence of rectangular bars or boxes, where each bar's height corresponds to the element's value (e.g., a taller bar for 64, a shorter one for 11), positioned from left to right by index. The example array [64, 25, 12, 22, 11] starts with all bars in blue to indicate the fully unsorted portion, with no sorted elements yet.

In the first pass (i=0), a pointer marks the current index i at the leftmost position, scanning rightward to find the minimum value (11 at index 4), highlighted in yellow or green. The unsorted portion remains blue, while an arrow or animated line illustrates the swap between the minimum (11) and the element at i (64), transitioning the bars' positions; post-swap, the leftmost bar (now 11) turns red to denote the growing sorted prefix, with the remaining bars [25, 12, 22, 64] staying blue. Subsequent passes repeat this: for i=1, scan [25, 12, 22, 64] to find 12 (yellow/green highlight), swap with 25, redden the second bar (12), and so on, progressively expanding the red sorted region rightward while shrinking the blue unsorted tail—e.g., after the second pass: [11, 12, 25, 22, 64] (first two red) and [25, 22, 64] blue; the third pass swaps 22 with 25, yielding [11, 12, 22, 25, 64] (first three red) and [25, 64] blue; the fourth pass requires no swap as the minimum (25) is already in position, reddening the fourth bar and leaving the last (64) to complete the sort trivially.

Key visual elements include color-coding (blue for unsorted, red for sorted, yellow/green for the current minimum, and temporary black during swaps for emphasis), pointers or labels for i (scanning the unsorted region) and min_idx (tracking the minimum's location), and directional arrows showing element exchanges to clarify the partitioning mechanism. In digital formats, this can be animated stepwise with delays (e.g., 500 ms per scan or swap) to reveal the iterative selection and placement, allowing viewers to pause and inspect intermediate states; for static diagrams, side-by-side panels depict before-and-after states for each pass, tracing the array's evolution from [64, 25, 12, 22, 11] (all blue) to [11, 12, 22, 25, 64] (all red). Such visualizations aid intuition by demonstrating how selection sort partitions the array into sorted and unsorted subarrays through repeated minimum extractions, making the algorithm's simplicity and quadratic behavior more tangible without relying on abstract code.

Implementation Details

In-Place Implementation

Selection sort operates as an in-place algorithm, utilizing only O(1) extra space by performing all operations directly on the original array through element swaps, thereby avoiding the need for additional auxiliary arrays or structures. The core code structure employs a double-loop mechanism to maintain efficiency in data movements. The outer loop runs from the first element (index 0) to the second-to-last element (index n-2), where n is the array length, designating each position i as the boundary between the sorted and unsorted portions. Within each outer iteration, the inner loop scans the unsorted subarray from index i+1 to n-1, identifying the index of the minimum value (min_idx) by comparing elements sequentially. This design ensures that at most one swap occurs per outer iteration, to place the smallest remaining element at position i, thereby limiting the total number of exchanges to at most n-1. The swap mechanics are implemented using a single temporary variable to exchange elements without auxiliary storage: the temporary variable temp is assigned the value at A[i], then A[i] receives the value from A[min_idx], and finally A[min_idx] is set to temp; this swap is executed conditionally only if min_idx ≠ i, further reducing operations when the minimum is already correctly positioned.

Language-Specific Adaptations

Selection sort implementations vary across programming languages due to differences in syntax, built-in functions, and data structures, but all maintain the core in-place swapping mechanism to minimize memory usage. These adaptations preserve the algorithm's semantics while leveraging language-specific features for concision and readability. In Python, the implementation uses tuple unpacking for swaps, emphasizing the language's concise syntax. The following function sorts an array in ascending order:

python

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]


This code directly mirrors the pseudocode structure, with Python's dynamic typing allowing seamless operation on lists of integers or other comparable elements. For C++, the algorithm is typically implemented with vectors and standard library utilities like std::swap from <algorithm>, which provides a safe and efficient exchange operation. The example below uses a std::vector<int> and traditional for-loops:

cpp

#include <vector>
#include <algorithm>

void selection_sort(std::vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; ++i) {
        int min_idx = i;
        for (int j = i + 1; j < n; ++j) {
            if (arr[j] < arr[min_idx]) {
                min_idx = j;
            }
        }
        std::swap(arr[i], arr[min_idx]);
    }
}


This approach highlights C++'s emphasis on explicit memory management and performance, where indices are zero-based and the vector is passed by reference to enable in-place modification. In Java, selection sort can use primitive arrays like int[] or collections such as ArrayList<Integer>, requiring manual swap logic since no built-in swap function exists for arrays. The example uses an int[] for simplicity:

java

public static void selectionSort(int[] arr) {
    int n = arr.length;
    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[min_idx]) {
                min_idx = j;
            }
        }
        // Manual swap
        int temp = arr[min_idx];
        arr[min_idx] = arr[i];
        arr[i] = temp;
    }
}


Java's strict typing and lack of a built-in swap for arrays necessitate this explicit temporary variable for swapping, distinguishing it from more fluid languages like Python. To handle generics or custom objects, both Java and C++ allow defining comparators for flexible sorting beyond primitive integers. In Java, a Comparator can be consulted in the inner loop for comparing objects, enabling sorting by specific attributes (e.g., Comparator.comparing(Student::getGPA) adapted to selection sort). Similarly, C++ can accept a comparison functor via templates, as in template<typename T, typename Cmp> void selection_sort(std::vector<T>& arr, Cmp cmp) { ... if (cmp(arr[j], arr[min_idx])) ... }. These adaptations support sorting heterogeneous data without altering the algorithm's O(n²) nature. Selection sort is inherently iterative, avoiding recursion to prevent stack overhead on large inputs and maintain constant auxiliary space, a choice preserved across these implementations.
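
For comparison in a language without built-in generics, the same idea can be sketched in C with a qsort-style function-pointer comparator; the names selection_sort_generic and swap_bytes below are illustrative, not from any library.

#include <stddef.h>

typedef int (*cmp_fn)(const void *, const void *);

/* swap 'size' bytes between two element addresses */
static void swap_bytes(char *a, char *b, size_t size)
{
    for (size_t k = 0; k < size; k++) {
        char t = a[k];
        a[k] = b[k];
        b[k] = t;
    }
}

/* selection sort over n elements of 'size' bytes each, ordered by a
   caller-supplied comparator (negative return value means "less") */
void selection_sort_generic(void *base, size_t n, size_t size, cmp_fn cmp)
{
    char *arr = base;
    for (size_t i = 0; i + 1 < n; i++) {
        size_t min_idx = i;
        for (size_t j = i + 1; j < n; j++)
            if (cmp(arr + j * size, arr + min_idx * size) < 0)
                min_idx = j;
        if (min_idx != i)
            swap_bytes(arr + i * size, arr + min_idx * size, size);
    }
}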

Complexity Analysis

Time Complexity

The time complexity of selection sort is determined primarily by the number of comparisons performed in its nested loop structure, where an outer loop iterates over the array from the first to the second-to-last element, and an inner loop scans the remaining unsorted portion to find the minimum value. This results in a worst-case and average-case time complexity of O(n²), as the algorithm performs approximately n(n-1)/2 comparisons regardless of the input arrangement. To derive this, consider an array of length n: the outer loop runs for n-1 iterations, and in the i-th iteration (starting from i = 1), the inner loop performs n-i comparisons to identify the minimum in the unsorted subarray. The total number of comparisons is thus the sum

(n-1) + (n-2) + ... + 1 = n(n-1)/2,

which simplifies asymptotically to O(n²). In the best case, when the input is already sorted, the algorithm still executes the full inner loops without early termination, yielding the same O(n²) cost, as no optimizations skip scans in the standard implementation. The number of swaps is at most n-1, contributing only O(n) to the overall time, which is dominated by the quadratic comparisons. Asymptotically, this quadratic growth renders selection sort inefficient for large inputs, typically becoming impractical for arrays exceeding 1000 elements due to the rapid growth in operations.
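
This count is easy to verify empirically; the following small C program (a hypothetical check) instruments the inner loop and prints 10 comparisons for the five-element example used earlier, matching n(n-1)/2.

#include <stdio.h>

int main(void)
{
    int a[] = {64, 25, 12, 22, 11};
    int n = 5, comparisons = 0;

    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++) {
            comparisons++;               /* one comparison per inner step */
            if (a[j] < a[min_idx])
                min_idx = j;
        }
        int temp = a[i]; a[i] = a[min_idx]; a[min_idx] = temp;
    }

    /* prints 10, i.e. n(n-1)/2 = 5*4/2 */
    printf("comparisons = %d\n", comparisons);
    return 0;
}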

Space Complexity and Stability

Selection sort operates as an in-place algorithm, utilizing only a constant amount of auxiliary space—specifically O(1)—for temporary variables during swaps, while the total space remains O(n) due to the input array itself. This minimal extra memory requirement makes it suitable for environments with limited resources, such as embedded systems, where avoiding additional data structures is advantageous. Despite its space efficiency, selection sort is not stable, meaning it does not preserve the relative order of elements with equal keys in the sorted output. Instability arises because the algorithm swaps the selected minimum (or maximum) element with the first position in the unsorted subarray, potentially moving one equal element past another through non-adjacent exchanges. For instance, consider an unsorted subarray starting at index k: [5_a, 5_b, 3], where 5_a and 5_b are equal values distinguished by their original positions (a before b). During the pass for position k, the minimum 3 at index k+2 is identified and swapped with 5_a at k, resulting in [3, 5_b, 5_a]. Here, the relative order of the two 5s is inverted, with 5_b now preceding 5_a. This behavior stems from the selection process choosing the first encountered minimum for swapping, without regard for the positions of subsequent equal elements. The proof of instability follows directly from this mechanism: in each iteration, the algorithm scans the unsorted portion to find the index of the minimum value, starting from the current position, and performs a direct swap regardless of distance, which can reorder equal elements encountered later in the scan. Consequently, applications requiring stability—such as sorting records by multiple keys where secondary order must be maintained—should opt for stable alternatives such as insertion sort or merge sort, despite selection sort's in-place advantage. Beyond these properties, selection sort is neither adaptive nor online. It is not adaptive, as it always executes n-1 full scans of the remaining unsorted subarray, irrespective of the input's partial order, leading to no performance gain on nearly sorted data. Similarly, it is not online, requiring access to the complete input upfront to perform its selection passes, unlike algorithms that can process elements incrementally as they arrive.
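
The instability argument can be reproduced directly; the small C program below (a hypothetical demonstration mirroring the [5_a, 5_b, 3] example) tags equal keys with their original order and shows the tags inverted after sorting.

#include <stdio.h>

struct rec { int key; char tag; };

int main(void)
{
    /* two equal keys tagged 'a' and 'b' in original order, then a smaller key */
    struct rec a[] = {{5, 'a'}, {5, 'b'}, {3, '-'}};
    int n = 3;

    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++)
            if (a[j].key < a[min_idx].key)
                min_idx = j;
        struct rec tmp = a[i];
        a[i] = a[min_idx];
        a[min_idx] = tmp;
    }

    for (int k = 0; k < n; k++)
        printf("%d%c ", a[k].key, a[k].tag);
    printf("\n");   /* prints "3- 5b 5a": the two 5s have been reordered */
    return 0;
}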

Comparisons

With Insertion Sort

Both selection sort and insertion sort are quadratic-time comparison-based sorting algorithms that perform in-place operations and are straightforward to implement, making them suitable for small arrays or educational purposes. They share the property of incrementally constructing a sorted prefix of the array, one element at a time, through iterative passes over the data. A key difference lies in their approaches to placing elements: selection sort identifies the minimum element in the unsorted subarray via a full linear scan and swaps it directly into its final position at the boundary of the sorted prefix, ensuring exactly one swap per pass. In contrast, insertion sort takes the next unsorted element and inserts it into the correct position within the sorted prefix by shifting larger elements rightward until the spot is found, which can require multiple shifts but no full scans of the unsorted portion. This makes insertion sort more efficient for partially sorted inputs, as it avoids unnecessary scans. Performance-wise, selection sort exhibits fixed behavior across all inputs, with approximately n²/2 comparisons and n swaps in the average and worst cases, providing consistent predictability. Insertion sort, however, adapts to the data: it achieves a best-case time complexity of O(n) with only n-1 comparisons and no shifts for sorted input, though its worst case matches selection sort's n²/2 comparisons and incurs up to n²/4 shifts on average. Empirical studies confirm insertion sort often outperforms selection sort on nearly sorted datasets due to fewer operations overall. Selection sort is typically favored for uniformly random data where its invariant comparison count minimizes variability, while insertion sort is better suited for nearly sorted arrays or online/streaming applications where elements are processed incrementally. Consider a reverse-sorted array such as [5, 4, 3, 2, 1]: selection sort conducts complete minimum searches across the shrinking unsorted subarray in each of the four passes, yielding n(n-1)/2 = 10 comparisons, whereas insertion sort performs extensive shifts for each of the four insertions—shifting 1, 2, 3, and 4 elements respectively—resulting in the maximum n(n-1)/2 = 10 shifts alongside comparable comparisons.
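
For contrast with selection sort's full scans, here is a standard insertion sort in C (a textbook formulation): each new element is placed by shifting larger elements of the sorted prefix to the right, so the work done depends on how ordered the input already is.

void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        /* shift elements of the sorted prefix that exceed key */
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}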

With Bubble Sort

Selection sort and bubble sort are both straightforward, comparison-based sorting algorithms with a time complexity of O(n²) in the worst and average cases, making them suitable for introductory courses, where they are frequently taught together to illustrate basic sorting principles. The primary difference lies in their mechanisms: bubble sort operates by making multiple passes through the array, repeatedly swapping adjacent elements if they are in the wrong order to gradually "bubble" the largest (or smallest) elements to the end, whereas selection sort identifies the global minimum (or maximum) element in the unsorted portion during each pass and swaps it directly into its final position without relying on adjacent exchanges. In terms of performance, bubble sort is adaptive and can terminate early if no swaps occur in a pass, indicating the array is sorted, which improves efficiency on nearly sorted data; selection sort lacks this adaptability and always completes all passes regardless of input order. On reverse-sorted inputs, bubble sort performs up to Θ(n²) swaps due to extensive adjacent exchanges, while selection sort requires only n-1 swaps, one per pass to place each minimum element. Bubble sort benefits from optimizations like the cocktail shaker variant, which alternates pass directions to move elements toward both ends more efficiently, whereas selection sort is more challenging to optimize due to its fixed pass structure and direct minimum selection. Bubble sort is often preferred in educational contexts for its intuitive, step-by-step adjacent swapping that visually demonstrates disorder resolution, while selection sort excels in scenarios with costly swaps, such as random data distributions, where its minimal n-1 swaps provide an advantage over bubble sort's potentially higher swap count.
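
For reference, a bubble sort in C with the early-exit optimization mentioned above (a textbook formulation): a pass that performs no swaps proves the array is already sorted.

#include <stdbool.h>

void bubble_sort(int a[], int n)
{
    for (int pass = 0; pass < n - 1; pass++) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - pass; j++) {
            if (a[j] > a[j + 1]) {       /* adjacent out-of-order pair */
                int temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped)                    /* adaptive early termination */
            break;
    }
}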

Variants

Bidirectional Selection Sort

Bidirectional selection sort is a variant of the standard selection sort that enhances efficiency by simultaneously identifying the minimum and maximum elements in the unsorted portion of the array during each iteration. This approach alternates between placing the minimum element at the left end of the unsorted region and the maximum element at the right end, thereby shrinking the unsorted middle section from both sides until the left and right pointers meet. The algorithm initializes two pointers: low at index 0 and high at index n-1, where n is the length of the array. While low < high, it scans the subarray from low to high to locate the indices of the minimum and maximum values. The minimum is swapped into position low, and the maximum is swapped into position high. The pointers are then updated by incrementing low and decrementing high. This process repeats, effectively sorting the array by progressively fixing elements at both extremities of the remaining unsorted portion. One key benefit of bidirectional selection sort is that it performs roughly half as many passes, since the unsorted region shrinks from both ends at once; when the minimum and maximum are found with paired comparisons, the total comparison count falls to about 3n²/8, compared with n²/2 for the standard selection sort, a net savings of roughly 25%, although the worst-case complexity remains O(n²). Despite these gains, the algorithm introduces drawbacks such as increased implementation complexity due to the dual tracking of extremes and the need to handle potential overlaps in minimum and maximum positions. It remains unstable, like its predecessor, potentially altering the relative order of equal elements during swaps. Bidirectional selection sort is particularly useful for medium-sized arrays seeking a modest performance boost over basic selection sort without introducing the space overhead or coding intricacy of more sophisticated algorithms.
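
A minimal C sketch of this scheme (the function name double_selection_sort is assumed); note the extra check for the case where the maximum initially sits at position low and is displaced by the first swap:

void double_selection_sort(int a[], int n)
{
    int low = 0, high = n - 1;
    while (low < high) {
        int min_idx = low, max_idx = low;
        for (int j = low; j <= high; j++) {
            if (a[j] < a[min_idx]) min_idx = j;
            if (a[j] > a[max_idx]) max_idx = j;
        }
        /* place the minimum at the left end */
        int tmp = a[low]; a[low] = a[min_idx]; a[min_idx] = tmp;
        /* if the maximum was at 'low', the swap above moved it to min_idx */
        if (max_idx == low) max_idx = min_idx;
        /* place the maximum at the right end */
        tmp = a[high]; a[high] = a[max_idx]; a[max_idx] = tmp;
        low++; high--;
    }
}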

Tournament Method Variant

The tournament method variant of selection sort utilizes a binary tournament tree, also known as a winner tree, to efficiently identify the minimum element in the unsorted portion of the array. In this structure, the leaves of the complete binary tree correspond to the array elements, and each internal node stores the smaller (for min-selection) of its two children, propagating the minima upward until the root holds the global minimum. This setup mimics a single-elimination tournament where elements "compete" pairwise, with winners advancing. The sorting process begins with constructing the initial tournament tree in O(n) time through bottom-up pairwise comparisons. For each of the n passes, the minimum is extracted from the root and placed into its correct position in the sorted prefix of the output. To maintain the tree for the remaining unsorted elements, the extracted leaf is replaced with a sentinel value (e.g., infinity for min-selection), and the "loser" from the final comparison along the path to the root is promoted to fill the vacancy, followed by O(log n) comparisons to rebuild the affected path. This selective update avoids full tree reconstruction, enabling repeated minimum selections. This variant achieves O(n log n) time complexity in the worst, average, and best cases for both comparisons and swaps, a significant improvement over the standard selection sort's O(n²) due to the logarithmic cost of each minimum extraction after the initial build. It retains the selection sort's flavor of iteratively placing the next minimum but enhances efficiency through the tree structure, making it particularly suitable for parallel environments where multiple comparisons at the same tree level can occur simultaneously using O(n) processors in O(log n) time per phase. The tournament method draws from heap sort concepts introduced by J. W. J. Williams in 1964 but emphasizes the tournament data structure, formalized in a 1986 technical report co-authored by Aaron Kershenbaum on using tournament trees for sorting, which generalizes earlier tournament ideas for optimal comparison-based sorting. While it offers better asymptotic performance, the variant requires O(n) additional space for the tree, increasing from the O(1) auxiliary space of standard selection sort, and the setup overhead can make it less advantageous for small input sizes where simpler linear scans suffice.
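
The following C sketch illustrates the winner-tree idea under simplifying assumptions (the leaf count is padded to a power of two, keys are ints with INT_MAX as the sentinel, and sorted output goes to a separate array); it is an illustrative reconstruction, not a canonical implementation.

#include <limits.h>
#include <stdlib.h>

void tournament_sort(const int a[], int n, int out[])
{
    int leaves = 1;
    while (leaves < n)
        leaves *= 2;                               /* pad to a power of two */

    int *key  = malloc(sizeof(int) * leaves);      /* competing keys        */
    int *tree = malloc(sizeof(int) * 2 * leaves);  /* winners, by leaf index */

    for (int p = 0; p < leaves; p++)
        key[p] = (p < n) ? a[p] : INT_MAX;         /* sentinel for padding  */
    for (int i = leaves; i < 2 * leaves; i++)
        tree[i] = i - leaves;                      /* leaves name themselves */
    for (int i = leaves - 1; i >= 1; i--)          /* O(n) bottom-up build  */
        tree[i] = key[tree[2*i]] <= key[tree[2*i + 1]] ? tree[2*i] : tree[2*i + 1];

    for (int k = 0; k < n; k++) {
        int w = tree[1];                           /* root holds the minimum */
        out[k] = key[w];
        key[w] = INT_MAX;                          /* retire the winner      */
        /* replay only the O(log n) matches on the winner's root path */
        for (int i = (leaves + w) / 2; i >= 1; i /= 2)
            tree[i] = key[tree[2*i]] <= key[tree[2*i + 1]] ? tree[2*i] : tree[2*i + 1];
    }

    free(key);
    free(tree);
}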
