Gray code

Four-bit reflected binary Gray code
Decimal  Gray (bits 4 3 2 1)
0        0 0 0 0
1        0 0 0 1
2        0 0 1 1
3        0 0 1 0
4        0 1 1 0
5        0 1 1 1
6        0 1 0 1
7        0 1 0 0
8        1 1 0 0
9        1 1 0 1
10       1 1 1 1
11       1 1 1 0
12       1 0 1 0
13       1 0 1 1
14       1 0 0 1
15       1 0 0 0

The reflected binary code (RBC), also known as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).

For example, the representation of the decimal value "1" in binary would normally be "001", and "2" would be "010". In Gray code, these values are represented as "001" and "011". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.

Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice.[1]

Function


Many devices indicate position by closing and opening switches. If that device uses natural binary codes, positions 3 and 4 are next to each other but all three bits of the binary representation differ:

Decimal Binary
... ...
3 011
4 100
... ...

The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce, the transition might look like 011 → 001 → 101 → 100. When the switches appear to be in position 001, the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic, then the sequential system may store a false value.

This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers, or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance,[2][3][4][5][6] single-distance, single-step, monostrophic[7][8][5][6] or syncopic codes,[7] in reference to the Hamming distance of 1 between adjacent codes.

Invention

Gray's patent introduces the term "reflected binary code"

In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code, or BRGC. Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943.[9][10][11] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name".[12] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".

Visualized as a traversal of vertices of a tesseract
Gray code along the number line

In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; the i-th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n-bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. The four-bit version of this is shown below:

Decimal Binary Gray
0 0000 0000
1 0001 0001
2 0010 0011
3 0011 0010
4 0100 0110
5 0101 0111
6 0110 0101
7 0111 0100
8 1000 1100
9 1001 1101
10 1010 1111
11 1011 1110
12 1100 1010
13 1101 1011
14 1110 1001
15 1111 1000

For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code.[13]

Despite the fact that Stibitz described this code[9][10][11] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code";[14][15] one of those also lists "minimum error code" and "cyclic permutation code" among the names.[15] A 1954 patent application refers to "the Bell Telephone Gray code".[16] Other names include "cyclic binary code",[10] "cyclic progression code",[17][10] "cyclic permuting binary"[18] or "cyclic permuted binary" (CPB).[19][20]

The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray.[11][21][22][23]

History and practical application


Mathematical puzzles


Reflected binary codes were applied to mathematical puzzles before they became known to engineers.

The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle, a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872.[24][11]

It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883.[25][26][27][28] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes.[29]

Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American.[30]

The code also forms a Hamiltonian cycle on a hypercube, where each bit is seen as one dimension.

Telegraphy codes


When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to a 5-unit code for his printing telegraph system, in 1875[31] or 1876,[32][33] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order,[34][35][36] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code.[11] This code became known as Baudot code[37] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932.[38][39][36]

Around the same time, in 1874, the German-Austrian Otto Schäffler[40] demonstrated another printing telegraph in Vienna, using a 5-bit reflected binary code for the same purpose.[41][11]

Analog-to-digital signal conversion


Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube-based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953,[12] and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code.[42]

Part of front page of Gray's patent, showing PCM tube (10) with reflected binary code in plate (15)

Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.

Position encoders

Rotary encoder for angle-measuring devices marked in 3-bit binary-reflected Gray code (BRGC)
A Gray code absolute rotary encoder with 13 tracks. Housing, interrupter disk, and light source are in the top; sensing element and support components are in the bottom.

Gray codes are used in linear and rotary position encoders (absolute encoders and quadrature encoders) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.

For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.

Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.

In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.

Genetic algorithms


Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms.[13] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.

Boolean circuit minimization


Gray codes have also been used to label the axes of Karnaugh maps since 1953,[43][44][45] as well as in Händler circle graphs since 1958,[46][47][48][49] both graphical methods for logic circuit minimization.

Error correction


In modern digital communications, 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise.

Communication between clock domains


Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered to be operating in different "clock domains". The technique is fundamental to the design of large chips that operate with many different clocking frequencies.

Cycling through states with minimal effort


If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.

A balanced Gray code can be constructed that flips every bit equally often.[50] Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.

Gray code counters and arithmetic


George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941.[9][10][11]

A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains.[51] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.

Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code,[nb 1] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous.[52]

To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code,[53] add one to it with a standard binary adder, and then convert the result back to Gray code.[54] Other methods of counting in Gray code are discussed in a report by Robert W. Doran, including taking the output from the first latches of the master-slave flip flops in a binary ripple counter.[55]
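
A minimal C sketch of this convert-add-convert increment (the function name is illustrative, and a production counter might instead count directly in Gray code, as noted above):

typedef unsigned int uint;

// Step a Gray-code counter: decode to ordinary binary, add one, re-encode.
uint GrayIncrement(uint gray)
{
    uint binary = gray;
    for (uint mask = gray >> 1; mask != 0; mask >>= 1)
        binary ^= mask;                // Gray-to-binary (prefix XOR of the bits)
    binary += 1;                       // ordinary binary increment
    return binary ^ (binary >> 1);     // binary-to-Gray
}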

Gray code addressing


As the execution of executable code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs.[56][57]
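
As an illustrative sketch of this effect (not from the original article; the loop bounds and names are assumptions), the following program counts the bit toggles produced by a run of consecutive addresses in plain binary and in Gray code:

#include <stdio.h>

static unsigned popcount(unsigned x)
{
    unsigned c = 0;
    while (x) { x &= x - 1; c++; }     // clear the lowest set bit each iteration
    return c;
}

int main(void)
{
    unsigned binary_toggles = 0, gray_toggles = 0;
    for (unsigned a = 0; a < 255; a++) {
        unsigned g0 = a ^ (a >> 1);              // Gray code of the current address
        unsigned g1 = (a + 1) ^ ((a + 1) >> 1);  // Gray code of the next address
        binary_toggles += popcount(a ^ (a + 1));
        gray_toggles   += popcount(g0 ^ g1);     // always exactly 1 per step
    }
    printf("binary addressing: %u toggles, Gray addressing: %u toggles\n",
           binary_toggles, gray_toggles);
    return 0;
}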

Constructing an n-bit Gray code

The first few steps of the reflect-and-prefix method.
4-bit Gray code permutation

The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0, prefixing the entries in the reflected list with a binary 1, and then concatenating the original list with the reversed list.[11] For example, generating the n = 3 list from the n = 2 list:

2-bit list: 00, 01, 11, 10  
Reflected:   10, 11, 01, 00
Prefix old entries with 0: 000, 001, 011, 010,  
Prefix new entries with 1:   110, 111, 101, 100
Concatenated: 000, 001, 011, 010, 110, 111, 101, 100
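
The same construction can be written directly in C. The following sketch (function and variable names are illustrative, not from the article) builds the 2^n codewords in an array by repeatedly reflecting the list built so far and setting the new leading bit in the reflected half:

// Build the n-bit binary-reflected Gray code list by reflect-and-prefix.
// codes must have room for 2^n entries; codes[i] receives the ith codeword.
void ReflectAndPrefix(unsigned n, unsigned codes[])
{
    unsigned len = 1;
    codes[0] = 0;                              // the zero-bit code: one empty word
    for (unsigned bit = 0; bit < n; bit++) {
        // Reflect the current list and prefix the copies with a 1 at the new
        // most significant position; the original half keeps its implicit 0.
        for (unsigned i = 0; i < len; i++)
            codes[2 * len - 1 - i] = codes[i] | (1u << bit);
        len *= 2;
    }
}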

The one-bit Gray code is G1 = (0,1). This can be thought of as built recursively as above from a zero-bit Gray code G0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating Gn+1 from Gn makes the following properties of the standard reflecting code clear:

  • Gn is a permutation of the numbers 0, ..., 2^n − 1. (Each number appears exactly once in the list.)
  • Gn is embedded as the first half of Gn+1.
  • Therefore, the coding is stable, in the sense that once a binary number appears in Gn it appears in the same position in all longer lists; so it makes sense to talk about the reflective Gray code value of a number: G(m) = the mth reflecting Gray code, counting from 0.
  • Each entry in Gn differs by only one bit from the previous entry. (The Hamming distance is 1.)
  • The last entry in Gn differs by only one bit from the first entry. (The code is cyclic.)

These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the nth Gray code is obtained by computing n ⊕ ⌊n/2⌋, that is, n XOR (n >> 1). Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i of the codewords are inverted, the order of neighbouring blocks of 2^i codewords is reversed. For example, if bit 0 is inverted in a 3-bit codeword sequence, the order of two neighbouring codewords is reversed:

000,001,010,011,100,101,110,111 → 001,000,011,010,101,100,111,110  (invert bit 0)

If bit 1 is inverted, blocks of 2 codewords change order:

000,001,010,011,100,101,110,111 → 010,011,000,001,110,111,100,101  (invert bit 1)

If bit 2 is inverted, blocks of 4 codewords reverse order:

000,001,010,011,100,101,110,111 → 100,101,110,111,000,001,010,011  (invert bit 2)

Thus, taking the exclusive or of the bit b_(i+1) at position i + 1 into the bit b_i at position i leaves the order of the codewords intact where b_(i+1) = 0, and reverses the order of each neighbouring pair of blocks of 2^i codewords where b_(i+1) = 1. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.

A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming g_i is the ith Gray-coded bit (g_0 being the most significant bit), and b_i is the ith binary-coded bit (b_0 being the most significant bit), the reverse translation can be given recursively: b_0 = g_0, and b_i = g_i ⊕ b_(i−1). Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.

To construct the binary-reflected Gray code iteratively, at step 0 start with the code 0, and at step i > 0 find the bit position of the least significant 1 in the binary representation of i and flip the bit at that position in the previous code to get the next code. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ...[nb 2] See find first set for efficient algorithms to compute these values.
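
An illustrative C sketch of this iterative construction (the function name is an assumption; __builtin_ctz is the GCC/Clang count-trailing-zeros builtin, used here as the find-first-set primitive):

// Generate the 2^n binary-reflected Gray codes by flipping, at step i,
// the bit whose position is that of the least significant 1 in i.
void BuildGrayIteratively(unsigned n, unsigned codes[])
{
    unsigned code = 0;
    codes[0] = code;
    for (unsigned i = 1; i < (1u << n); i++) {
        code ^= 1u << __builtin_ctz(i);   // position of the lowest set bit of i
        codes[i] = code;
    }
}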

Converting to and from Gray code


The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist.[58][53][nb 1]

typedef unsigned int uint;

// This function converts an unsigned binary number to reflected binary Gray code.
uint BinaryToGray(uint num)
{
    return num ^ (num >> 1); // The operator >> is shift right. The operator ^ is exclusive or.
}

// This function converts a reflected binary Gray code number to a binary number.
uint GrayToBinary(uint num)
{
    uint mask = num;
    while (mask) {           // Each Gray code bit is exclusive-ored with all more significant bits.
        mask >>= 1;
        num   ^= mask;
    }
    return num;
}

// A more efficient version for Gray codes 32 bits or fewer through the use of SWAR (SIMD within a register) techniques. 
// It implements a parallel prefix XOR function. The assignment statements can be in any order.
// 
// This function can be adapted for longer Gray codes by adding steps.

uint GrayToBinary32(uint num)
{
    num ^= num >> 16;
    num ^= num >>  8;
    num ^= num >>  4;
    num ^= num >>  2;
    num ^= num >>  1;
    return num;
}
// A four-bit-at-once variant changes a Gray-coded number (abcd) to (abcd) ^ (00ab), then to (abcd) ^ (00ab) ^ (0abc) ^ (000a), which is the corresponding binary number.

On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set. If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.

Special types of Gray codes


In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).

Gray codes with n bits and of length less than 2^n


It is possible to construct binary Gray codes with n bits and a length of less than 2^n, if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values, either at the beginning and the end or in the middle.[59] OEIS sequence A290772[60] gives the number of possible Gray sequences of length 2n that include zero and use the minimum number of bits.

n-ary Gray code

Ternary number → ternary Gray code

0 → 000
1 → 001
2 → 002
10 → 012
11 → 011
12 → 010
20 → 020
21 → 021
22 → 022
100 → 122
101 → 121
102 → 120
110 → 110
111 → 111
112 → 112
120 → 102
121 → 101
122 → 100
200 → 200
201 → 201
202 → 202
210 → 212
211 → 211
212 → 210
220 → 220
221 → 221
222 → 222

There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n-ary Gray code, also known as a non-Boolean Gray code. As the name implies, this type of Gray code uses non-Boolean values in its encodings.

For example, a 3-ary (ternary) Gray code would use the values 0, 1, 2.[29] The (n, k)-Gray code is the n-ary Gray code with k digits.[61] The sequence of elements in the (3, 2)-Gray code is: 00, 01, 02, 12, 11, 10, 20, 21, 22. The (n, k)-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively. An algorithm to iteratively generate the (n, k)-Gray code is presented (in C):

// inputs: base, digits, value
// output: Gray
// Convert a value to a Gray code with the given base and digits.
// Iterating through a sequence of values would result in a sequence
// of Gray codes in which only one digit changes at a time.
void toGray(unsigned base, unsigned digits, unsigned value, unsigned gray[digits])
{ 
	unsigned baseN[digits];	// Stores the ordinary base-N number, one digit per entry
	unsigned i;		// The loop variable
 
	// Put the normal baseN number into the baseN array. For base 10, 109 
	// would be stored as [9,0,1]
	for (i = 0; i < digits; i++) {
		baseN[i] = value % base;
		value    = value / base;
	}
 
	// Convert the normal baseN number into the Gray code equivalent. Note that
	// the loop starts at the most significant digit and goes down.
	unsigned shift = 0;
	while (i--) {
		// The Gray digit gets shifted down by the sum of the higher
		// digits.
		gray[i] = (baseN[i] + shift) % base;
		shift = shift + base - gray[i];	// Subtract from base so shift is positive
	}
}
// EXAMPLES
// input: value = 1899, base = 10, digits = 4
// output: baseN[] = [9,9,8,1], gray[] = [0,1,7,1]
// input: value = 1900, base = 10, digits = 4
// output: baseN[] = [0,0,9,1], gray[] = [0,1,8,1]
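
For completeness, a sketch of the inverse conversion is given below. It is not part of the original presentation; the name fromGray and the reuse of toGray's shift bookkeeping are assumptions. It recovers the ordinary base-N digits, and hence the value, from an (n, k)-Gray code produced by toGray:

// inputs: base, digits, gray[]
// output: the value whose toGray() encoding is gray[]
unsigned fromGray(unsigned base, unsigned digits, const unsigned gray[])
{
    unsigned baseN[digits];   // Reconstructed ordinary base-N digits
    unsigned shift = 0;
    unsigned i = digits;

    // Undo the Gray transformation from the most significant digit down,
    // maintaining the same running shift as in toGray above.
    while (i--) {
        baseN[i] = (gray[i] + base - shift % base) % base;
        shift = shift + base - gray[i];
    }

    // Reassemble the value from its base-N digits (least significant first).
    unsigned value = 0;
    for (i = digits; i-- > 0; )
        value = value * base + baseN[i];
    return value;
}

For the worked examples above, fromGray(10, 4, gray) returns 1899 and 1900 respectively.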

There are other Gray code algorithms for (n,k)-Gray codes. The (n,k)-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan,[61] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.

Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.

See also Skew binary number system, a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.

Balanced Gray code


Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity".[50] In balanced Gray codes, the numbers of changes in the different coordinate positions are as close to each other as possible. To make this more precise, let G be an R-ary complete Gray cycle having transition sequence (δ_k); the transition counts (spectrum) of G are the collection of integers defined by

λ_k = |{ j ∈ Z_(R^n) : δ_j = k }| ,  for k ∈ Z_n.

A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ_k = R^n / n for all k. Clearly, when R = 2, such codes exist only if n is a power of 2.[62] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2⌊2^n/(2n)⌋ or 2⌈2^n/(2n)⌉.[50] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two.[63]
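
To make the definition of the transition spectrum concrete, the following C sketch (illustrative names, not from the article) computes the transition counts of a cyclic n-bit Gray code supplied as an array of 2^n codewords:

// counts[k] receives the number of steps, taken cyclically, at which bit k
// changes; for a balanced Gray code these counts are as equal as possible.
void TransitionCounts(unsigned n, const unsigned codes[], unsigned counts[])
{
    unsigned len = 1u << n;
    for (unsigned k = 0; k < n; k++)
        counts[k] = 0;
    for (unsigned j = 0; j < len; j++) {
        unsigned diff = codes[j] ^ codes[(j + 1) % len];   // exactly one bit set
        for (unsigned k = 0; k < n; k++)
            if (diff & (1u << k))
                counts[k]++;
    }
}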

For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced:[50]

0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 0
0 0 1 1 1 1 0 0 1 1 1 1 0 0 0 0
0 0 0 0 1 1 1 1 1 0 0 1 1 1 0 0
0 0 0 1 1 0 0 0 0 0 1 1 1 1 1 1

whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight:[50]

1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 1 1 0 0 0
1 1 0 0 1 1 1 0 0 0 0 0 0 1 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 0 1 1
1 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 1
1 1 1 1 1 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 1

We will now show a construction[64] and implementation[65] for well-balanced binary Gray codes which allows us to generate an n-digit balanced Gray code for every n. The main principle is to inductively construct an (n + 2)-digit Gray code given an n-digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of into an even number L of non-empty blocks of the form

where , , and ). This partition induces an -digit Gray code given by

If we define the transition multiplicities

to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the (n + 2)-digit Gray code induced by this partition the transition spectrum is

The delicate part of this construction is to find an adequate partitioning of a balanced n-digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit transition and splitting another block at another digit transition produces a different Gray code with exactly the same transition spectrum , so one may for example[63] designate the first transitions at digit as those that fall between two blocks. Uniform codes can be found when and , and this construction can be extended to the R-ary case as well.[64]

Long run Gray codes


Long run (or maximum gap) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible.[66]

Monotonic Gray codes


Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors.[67] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.

We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q_n into levels of vertices that have equal weight, i.e.

V_i = { v ∈ Q_n : v has weight i }

for 0 ≤ i ≤ n. These levels satisfy |V_i| = (n choose i). Let Q_n(i) be the subgraph of Q_n induced by V_i ∪ V_(i+1), and let E_i be the set of edges in Q_n(i). A monotonic Gray code is then a Hamiltonian path in Q_n such that whenever an edge from E_i comes before an edge from E_j in the path, then i ≤ j.

An elegant construction of monotonic n-digit Gray codes for any n is based on the idea of recursively building subpaths P_(n,j) having edges in E_j.[67] We define P_(1,0) = (0, 1), P_(n,j) = ∅ whenever j < 0 or j ≥ n, and

P_(n+1,j) = 1 P_(n,j−1)^(π_n), 0 P_(n,j)

otherwise. Here, π_n is a suitably defined permutation and P^(π) refers to the path P with its coordinates permuted by π. These paths give rise to two monotonic n-digit Gray codes G_n^(1) and G_n^(2) given by

G_n^(1) = P_(n,0) P_(n,1)^R P_(n,2) P_(n,3)^R ...   and   G_n^(2) = P_(n,0)^R P_(n,1) P_(n,2)^R P_(n,3) ... ,

where P^R denotes the reversal of the path P. The permutations π_n must be chosen so that these sequences are indeed Gray codes; the first few subpaths P_(n,j) of the resulting Savage–Winkler construction are shown in the table below.

Subpaths in the Savage–Winkler algorithm
j = 0 j = 1 j = 2 j = 3
n = 1 0, 1
n = 2 00, 01 10, 11
n = 3 000, 001 100, 110, 010, 011 101, 111
n = 4 0000, 0001 1000, 1100, 0100, 0110, 0010, 0011 1010, 1011, 1001, 1101, 0101, 0111 1110, 1111

These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O(n) time. The algorithm is most easily described using coroutines.

Monotonic codes have an interesting connection to the Lovász conjecture, which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839N, where N is the number of vertices in the middle-level subgraph.[68]

Beckett–Gray code


Another type of Gray code, the Beckett–Gray code, is named for Irish playwright Samuel Beckett, who was interested in symmetry. His play "Quad" features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once.[69] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue, so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first.[69] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth's Art of Computer Programming.[11] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found.[70]

Snake-in-the-box codes

Maximum lengths of snakes (Ls) and coils (Lc) in the snakes-in-the-box problem for dimensions n from 1 to 4

Snake-in-the-box codes, or snakes, are the sequences of nodes of induced paths in an n-dimensional hypercube graph, and coil-in-the-box codes,[71] or coils, are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s;[3] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.

Single-track Gray code


Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding[72][73] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996).[74][75] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix, each column is a cyclic shift of the first column.[76]

Animated and color-coded version of the STGC rotor.

The name comes from their use with rotary encoders, where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1. To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.

If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.

For many years, Torsten Sillke[77] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.

Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible.[72] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions.[78] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.

An STGC for P = 30 and n = 5 is reproduced here:

Single-track Gray code for 30 positions
Angle Code Angle Code Angle Code Angle Code Angle Code
0° 10000 72° 01000 144° 00100 216° 00010 288° 00001
12° 10100 84° 01010 156° 00101 228° 10010 300° 01001
24° 11100 96° 01110 168° 00111 240° 10011 312° 11001
36° 11110 108° 01111 180° 10111 252° 11011 324° 11101
48° 11010 120° 01101 192° 10110 264° 01011 336° 10101
60° 11000 132° 01100 204° 00110 276° 00011 348° 10001

Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes.[79] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes, also called De Bruijn sequences), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving.[80]
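
As an illustrative sketch (not from the article; the packed-row representation and names are assumptions), the two defining properties just restated can be checked mechanically for a candidate code given as P rows of n bits:

// Returns 1 if rows[0..P-1] (each an n-bit codeword, bit 0 = rightmost track)
// is a single-track Gray code: cyclically consecutive rows differ in exactly
// one bit, and every bit column is a cyclic shift (in rows) of column 0.
int IsSingleTrackGrayCode(unsigned P, unsigned n, const unsigned rows[])
{
    for (unsigned r = 0; r < P; r++) {
        unsigned diff = rows[r] ^ rows[(r + 1) % P];
        if (diff == 0 || (diff & (diff - 1)) != 0)     // zero or more than one bit
            return 0;
    }
    for (unsigned c = 1; c < n; c++) {
        unsigned shift;
        for (shift = 0; shift < P; shift++) {          // try every cyclic offset
            unsigned ok = 1;
            for (unsigned r = 0; r < P; r++)
                if (((rows[r] >> c) & 1u) != ((rows[(r + shift) % P] >> 0) & 1u)) {
                    ok = 0;
                    break;
                }
            if (ok)
                break;
        }
        if (shift == P)                                // no offset reproduces column c
            return 0;
    }
    return 1;
}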

9-bit, single-track Gray code, displaying one degree angular resolution.

Since this 30-degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams,[81] based on previous work,[78] discovered a 9-bit single-track Gray code that gives a 1-degree resolution. This Gray code was used to design an actual device, published on the site Thingiverse, which was designed by etzenseep (Florian Bauer) in September 2022.[82]

An STGC for P = 360 and n = 9 is reproduced here:

Single-track Gray code for 360 positions
Angle Code Angle Code Angle Code Angle Code Angle Code Angle Code Angle Code Angle Code Angle Code
0° 100000001 40° 000000011 80° 000000110 120° 000001100 160° 000011000 200° 000110000 240° 001100000 280° 011000000 320° 110000000
1° 110000001 41° 100000011 81° 000000111 121° 000001110 161° 000011100 201° 000111000 241° 001110000 281° 011100000 321° 111000000
2° 111000001 42° 110000011 82° 100000111 122° 000001111 162° 000011110 202° 000111100 242° 001111000 282° 011110000 322° 111100000
3° 111000011 43° 110000111 83° 100001111 123° 000011111 163° 000111110 203° 001111100 243° 011111000 283° 111110000 323° 111100001
4° 111000111 44° 110001111 84° 100011111 124° 000111111 164° 001111110 204° 011111100 244° 111111000 284° 111110001 324° 111100011
5° 111001111 45° 110011111 85° 100111111 125° 001111111 165° 011111110 205° 111111100 245° 111111001 285° 111110011 325° 111100111
6° 111011111 46° 110111111 86° 101111111 126° 011111111 166° 111111110 206° 111111101 246° 111111011 286° 111110111 326° 111101111
7° 111011011 47° 110110111 87° 101101111 127° 011011111 167° 110111110 207° 101111101 247° 011111011 287° 111110110 327° 111101101
8° 101011011 48° 010110111 88° 101101110 128° 011011101 168° 110111010 208° 101110101 248° 011101011 288° 111010110 328° 110101101
9° 101011111 49° 010111111 89° 101111110 129° 011111101 169° 111111010 209° 111110101 249° 111101011 289° 111010111 329° 110101111
10° 101011101 50° 010111011 90° 101110110 130° 011101101 170° 111011010 210° 110110101 250° 101101011 290° 011010111 330° 110101110
11° 101010101 51° 010101011 91° 101010110 131° 010101101 171° 101011010 211° 010110101 251° 101101010 291° 011010101 331° 110101010
12° 101010111 52° 010101111 92° 101011110 132° 010111101 172° 101111010 212° 011110101 252° 111101010 292° 111010101 332° 110101011
13° 101110111 53° 011101111 93° 111011110 133° 110111101 173° 101111011 213° 011110111 253° 111101110 293° 111011101 333° 110111011
14° 001110111 54° 011101110 94° 111011100 134° 110111001 174° 101110011 214° 011100111 254° 111001110 294° 110011101 334° 100111011
15° 001010111 55° 010101110 95° 101011100 135° 010111001 175° 101110010 215° 011100101 255° 111001010 295° 110010101 335° 100101011
16° 001011111 56° 010111110 96° 101111100 136° 011111001 176° 111110010 216° 111100101 256° 111001011 296° 110010111 336° 100101111
17° 001011011 57° 010110110 97° 101101100 137° 011011001 177° 110110010 217° 101100101 257° 011001011 297° 110010110 337° 100101101
18° 001011001 58° 010110010 98° 101100100 138° 011001001 178° 110010010 218° 100100101 258° 001001011 298° 010010110 338° 100101100
19° 001111001 59° 011110010 99° 111100100 139° 111001001 179° 110010011 219° 100100111 259° 001001111 299° 010011110 339° 100111100
20° 001111101 60° 011111010 100° 111110100 140° 111101001 180° 111010011 220° 110100111 260° 101001111 300° 010011111 340° 100111110
21° 000111101 61° 001111010 101° 011110100 141° 111101000 181° 111010001 221° 110100011 261° 101000111 301° 010001111 341° 100011110
22° 000110101 62° 001101010 102° 011010100 142° 110101000 182° 101010001 222° 010100011 262° 101000110 302° 010001101 342° 100011010
23° 000100101 63° 001001010 103° 010010100 143° 100101000 183° 001010001 223° 010100010 263° 101000100 303° 010001001 343° 100010010
24° 000101101 64° 001011010 104° 010110100 144° 101101000 184° 011010001 224° 110100010 264° 101000101 304° 010001011 344° 100010110
25° 000101001 65° 001010010 105° 010100100 145° 101001000 185° 010010001 225° 100100010 265° 001000101 305° 010001010 345° 100010100
26° 000111001 66° 001110010 106° 011100100 146° 111001000 186° 110010001 226° 100100011 266° 001000111 306° 010001110 346° 100011100
27° 000110001 67° 001100010 107° 011000100 147° 110001000 187° 100010001 227° 000100011 267° 001000110 307° 010001100 347° 100011000
28° 000010001 68° 000100010 108° 001000100 148° 010001000 188° 100010000 228° 000100001 268° 001000010 308° 010000100 348° 100001000
29° 000011001 69° 000110010 109° 001100100 149° 011001000 189° 110010000 229° 100100001 269° 001000011 309° 010000110 349° 100001100
30° 000001001 70° 000010010 110° 000100100 150° 001001000 190° 010010000 230° 100100000 270° 001000001 310° 010000010 350° 100000100
31° 100001001 71° 000010011 111° 000100110 151° 001001100 191° 010011000 231° 100110000 271° 001100001 311° 011000010 351° 110000100
32° 100001101 72° 000011011 112° 000110110 152° 001101100 192° 011011000 232° 110110000 272° 101100001 312° 011000011 352° 110000110
33° 100000101 73° 000001011 113° 000010110 153° 000101100 193° 001011000 233° 010110000 273° 101100000 313° 011000001 353° 110000010
34° 110000101 74° 100001011 114° 000010111 154° 000101110 194° 001011100 234° 010111000 274° 101110000 314° 011100001 354° 111000010
35° 010000101 75° 100001010 115° 000010101 155° 000101010 195° 001010100 235° 010101000 275° 101010000 315° 010100001 355° 101000010
36° 010000111 76° 100001110 116° 000011101 156° 000111010 196° 001110100 236° 011101000 276° 111010000 316° 110100001 356° 101000011
37° 010000011 77° 100000110 117° 000001101 157° 000011010 197° 000110100 237° 001101000 277° 011010000 317° 110100000 357° 101000001
38° 010000001 78° 100000010 118° 000000101 158° 000001010 198° 000010100 238° 000101000 278° 001010000 318° 010100000 358° 101000000
39° 000000001 79° 000000010 119° 000000100 159° 000001000 199° 000010000 239° 000100000 279° 001000000 319° 010000000 359° 100000000
Starting and ending angles for the 20 tracks for a single-track Gray code with 9 sensors separated by 40°
Starting angle Ending angle Length
3 4 2
23 28 6
31 37 7
44 48 5
56 60 5
64 71 8
74 76 3
88 91 4
94 96 3
99 104 6
110 115 6
131 134 4
138 154 17
173 181 9
186 187 2
220 238 19
242 246 5
273 279 7
286 289 4
307 360 54

Two-dimensional Gray code

A Gray-coded constellation diagram for rectangular 16-QAM

Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) adjacent points in the constellation. In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits.[83]
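
A minimal sketch of such a mapping (illustrative only; the bit split, level set, and function name are assumptions rather than any standard's definition): each 2-bit half of a 4-bit symbol is Gray-decoded to an amplitude level, so horizontally or vertically adjacent constellation points differ in exactly one bit:

// Map a 4-bit symbol (b3 b2 = in-phase bits, b1 b0 = quadrature bits) to
// amplitude levels in {-3, -1, +1, +3}. Each half is a 2-bit Gray code, so
// moving to a horizontally or vertically adjacent point changes one bit.
void MapGray16QAM(unsigned symbol, int *i_out, int *q_out)
{
    static const int level[4] = { -3, -1, +1, +3 };
    unsigned gi = (symbol >> 2) & 3u;   // Gray-coded in-phase index
    unsigned gq = symbol & 3u;          // Gray-coded quadrature index
    *i_out = level[gi ^ (gi >> 1)];     // 2-bit Gray-to-binary, then to a level
    *q_out = level[gq ^ (gq >> 1)];
}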

Two-dimensional Gray codes are also used in location identification schemes, where the code is applied to area maps such as a Mercator projection of the Earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric is used to compute the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection.[84]

Excess Gray code


If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code is an "excess Gray code". Such a code counts backwards in the extracted bits when the original value is incremented further, because Gray-encoded values do not overflow the way classic binary encodings do when they are incremented past the "highest" value.

Example: the highest 3-bit Gray code value, 7, is encoded as (0)100. Adding 1 gives 8, encoded in Gray as 1100. The last 3 bits do not overflow; instead they count backwards as the original 4-bit code is incremented further.

When working with sensors that output multiple Gray-encoded values serially, one should therefore pay attention to whether the sensor produces those values as one single Gray code or as separate codes, as otherwise the values might appear to count backwards when an "overflow" is expected.
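
A small sketch (not from the article) makes this visible by printing the 4-bit Gray sequence alongside its low 3 bits, which run up through the 3-bit Gray sequence and then back down:

#include <stdio.h>

int main(void)
{
    for (unsigned i = 0; i < 16; i++) {
        unsigned gray = i ^ (i >> 1);      // 4-bit reflected binary Gray code
        printf("%2u: %u%u%u%u  low 3 bits: %u%u%u\n", i,
               (gray >> 3) & 1u, (gray >> 2) & 1u, (gray >> 1) & 1u, gray & 1u,
               (gray >> 2) & 1u, (gray >> 1) & 1u, gray & 1u);
    }
    return 0;                              // the low 3 bits count up, then back down
}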

Gray isometry


The bijective mapping { 0 ↔ 00, 1 ↔ 01, 2 ↔ 11, 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z_2^2 with the metric given by the Hamming distance and the metric space over the ring Z_4 (the usual modular arithmetic) with the metric given by the Lee distance. The mapping is suitably extended to an isometry of the Hamming spaces Z_2^(2m) and Z_4^m. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z_2^(2m) of ring-linear codes from Z_4^m.[85][86]
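
For the 2-bit case the claim can be verified exhaustively; the following sketch (illustrative names) checks that the Gray map turns the Lee distance on Z_4 into the Hamming distance on 2-bit strings:

#include <stdio.h>

static unsigned graymap(unsigned z)               // 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10
{
    static const unsigned g[4] = { 0, 1, 3, 2 };
    return g[z];
}

static unsigned lee(unsigned a, unsigned b)       // Lee distance on Z_4
{
    unsigned d = (a - b) & 3u;
    return d < 2 ? d : 4 - d;
}

static unsigned hamming2(unsigned a, unsigned b)  // Hamming distance on 2 bits
{
    unsigned x = a ^ b;
    return (x & 1u) + ((x >> 1) & 1u);
}

int main(void)
{
    for (unsigned a = 0; a < 4; a++)
        for (unsigned b = 0; b < 4; b++)
            if (lee(a, b) != hamming2(graymap(a), graymap(b)))
                printf("not an isometry at %u, %u\n", a, b);   // never reached
    return 0;
}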

Related codes

There are a number of binary codes similar to Gray codes, including:

The following binary-coded decimal (BCD) codes are Gray code variants as well:

4-bit unit-distance BCD codes[nb 6]
Name Bit 0 1 2 3 4 5 6 7 8 9 Weights[nb 7] Tracks Compl. Cyclic 5s Comment
Gray BCD 4 0 0 0 0 0 0 0 0 1 1 0–3 4 (3[nb 8]) No (2, 4, 8, 16) No [110][111]
3 0 0 0 0 1 1 1 1 1 1
2 0 0 1 1 1 1 0 0 0 0
1 0 1 1 0 0 1 1 0 0 1
Paul 4 1 0 0 0 0 0 0 0 1 1 1–3 4 (3[nb 8]) No 2, 10 No [125]
3 0 0 0 0 1 1 1 1 1 1
2 0 0 1 1 1 1 0 0 0 0
1 1 1 1 0 0 1 1 0 0 1
Glixon 4 0 0 0 0 0 0 0 0 1 1 0–3 4 No 2, 4, 8, 10 (shifted +1) [122][110][111][123][124][nb 5]
3 0 0 0 0 1 1 1 1 1 0
2 0 0 1 1 1 1 0 0 0 0
1 0 1 1 0 0 1 1 0 0 0
Tompkins I 4 0 0 0 0 0 1 1 1 1 1 0–4 2 No 2, 4, 10 Yes [2][110][111]
3 0 0 0 0 1 1 1 1 1 0
2 0 0 1 1 1 1 1 0 0 0
1 0 1 1 0 0 0 1 1 0 0
O'Brien I (Watts) 4 0 0 0 0 0 1 1 1 1 1 0–3 4 9[103][104][nb 9] 2, 4, 10 Yes [109][110][111][nb 5]
3 0 0 0 0 1 1 0 0 0 0
2 0 0 1 1 1 1 1 1 0 0
1 0 1 1 0 0 0 0 1 1 0
Petherick (RAE) 4 0 0 0 0 0 1 1 1 1 1 1–3 3 9[103][104][nb 9] 2, 10 Yes [17][107][nb 4]
3 1 0 0 0 1 1 0 0 0 1
2 0 0 1 1 1 1 1 1 0 0
1 1 1 1 0 0 0 0 1 1 1
O'Brien II 4 0 0 0 0 0 1 1 1 1 1 1–3 3 9[89][103][104][nb 9] 2, 10 Yes [109][110][111][nb 4]
3 0 0 0 1 1 1 1 0 0 0
2 0 1 1 1 0 0 1 1 1 0
1 1 1 0 0 0 0 0 0 1 1
Susskind 4 0 0 0 0 0 1 1 1 1 1 1–4 3 9[nb 9] 2, 10 Yes [4]
3 0 0 1 1 1 1 1 1 0 0
2 0 1 1 1 0 0 1 1 1 0
1 1 1 1 0 0 0 0 1 1 1
Klar 4 0 0 0 0 0 1 1 1 1 1 0–4 4 (3[nb 8]) 9[nb 9] 2, 10 Yes [126][127]
3 0 0 0 1 1 1 1 0 0 0
2 0 0 1 1 1 1 1 1 0 0
1 0 1 1 1 0 0 1 1 1 0
Tompkins II 4 0 0 0 0 0 1 1 1 1 1 1–3 2 9[nb 10] 2, 10 Yes [2][110][111]
3 0 0 1 1 1 1 1 0 0 0
2 1 1 1 0 0 0 0 0 1 1
1 0 1 1 1 0 0 1 1 1 0
Excess-3 Gray 4 0 0 0 0 0 1 1 1 1 1 1–4 4 9[103][104][nb 9] 2, 10 Yes [6][103]
3 0 1 1 1 1 1 1 1 1 0
2 1 1 1 0 0 0 0 1 1 1
1 0 0 1 1 0 0 1 1 0 0

A Gray code, also known as the reflected , is a in which any two successive values differ in only one bit position, ensuring a single-bit transition between adjacent states. This unit-distance property distinguishes it from standard binary counting, where multiple bits may change simultaneously, and makes it particularly valuable for error minimization in sequential processes. The code was developed by , a physicist and researcher at Bell Telephone Laboratories, during the 1930s and 1940s as part of efforts to improve analog-to-digital conversion techniques. Gray filed a for the reflected in 1947, which was granted on , 1953, under U.S. No. 2,632,058 for "Pulse Code Communication," where it was used to convert continuous voltage signals into digital pulses via a cathode ray tube, avoiding errors at code boundaries. Although earlier non-reflected codes with similar single-change properties appeared in as far back as the 1870s, Gray's version formalized the reflective construction method that generates the sequence systematically. Gray codes find extensive applications in and due to their error-resistant nature. In rotary encoders and position sensors, they encode angular or linear positions such that mechanical transitions produce only one-bit changes, preventing invalid intermediate readings during motion. In digital logic design, Gray codes label the axes of Karnaugh maps—graphical tools for simplifying expressions—ensuring that adjacent cells differ by exactly one variable, which aids in identifying groupings for minimal logic circuits. They are also employed in analog-to-digital converters to reduce glitches and power consumption by limiting simultaneous bit flips, as well as in error detection schemes and certain optimization algorithms where smooth transitions are essential.

Fundamentals

Definition and Properties

A Gray code is a binary encoding of numbers such that two successive values differ in exactly one bit position. The binary reflected Gray code (BRGC), often simply referred to as the standard Gray code, is a specific and prominent example of such an encoding, representing a of all 2^n distinct n-bit binary strings where each consecutive pair in the sequence differs by a single bit flip. This single-bit-change property ensures minimality in the Hamming distance between adjacent codewords, which is precisely 1, distinguishing Gray codes from standard binary representations where transitions can involve multiple bit changes. Mathematically, a Gray code for n bits consists of 2^n unique codewords that form a Hamiltonian path on the n-dimensional hypercube graph, whose vertices correspond to all possible n-bit strings and whose edges connect pairs differing in exactly one bit. In this graph-theoretic view, the codewords are the vertices visited in sequence along the path, guaranteeing that every vertex is included exactly once with adjacent visits separated by a single edge. For n ≥ 2, cyclic variants exist that form a Hamiltonian cycle, closing the path back to the starting codeword with one final bit change. Gray codes are not unique; numerous sequences satisfy the adjacent single-bit-change condition for a given n, though the BRGC is due to its recursive and ease of generation. The provides the foundational prerequisite: its 2^n vertices represent the complete set of n-bit codewords, and the Gray code ordering traverses this without revisiting vertices, emphasizing connectivity via minimal bit differences. To illustrate, the following table shows the standard 3-bit BRGC sequence, listing decimal equivalents alongside the binary codewords:
DecimalBRGC
0000
1001
2011
3010
4110
5111
6101
7100
Each transition flips exactly one bit, as seen from 000 to 001 (third bit), 001 to 011 (second bit), and so on, up to 100 back to 000 in the cyclic form (first bit).

Purpose and Advantages

The primary purpose of Gray codes is to minimize errors resulting from simultaneous bit flips during transitions in mechanical or electrical systems, particularly in applications like shaft encoders where physical movement can cause asynchronous changes in signal states. In such systems, electromechanical switches or optical sensors may not synchronize perfectly, leading to intermediate readings that do not correspond to valid states if multiple bits change at once. By ensuring that only one bit differs between consecutive codewords, Gray codes prevent these spurious outputs, making them essential for reliable position sensing and analog-to-digital conversion. A key advantage of Gray codes over standard binary representation lies in their ability to avoid invalid intermediate states during transitions. In binary counting, consecutive values often require multiple bit flips—for example, advancing from 3 (011) to 4 (100) in a 3-bit system involves changing all three bits simultaneously, which can produce erroneous intermediate values like 001, 010, or 110 if sampled mid-transition due to propagation delays. In contrast, the reflected binary Gray code for these values is 010 (3) to 110 (4), differing only in the most significant bit, ensuring that any sampled value during the transition is either the old or new valid state. This single-bit-change property directly reduces the risk of glitches and decoding errors in dynamic systems. Beyond error minimization, Gray codes offer broader benefits in terms of energy efficiency and robustness. In switching circuits, the sequential single-bit transitions lower dynamic power consumption by reducing simultaneous switching activity on bus lines, with studies showing up to 33% fewer bit switches in address buses compared to binary encoding. Additionally, their enhances tolerance to in electrical environments, as the limited bit changes make it easier to detect and correct transient faults without propagating large discrepancies. These advantages make Gray codes particularly valuable in resource-constrained or high-reliability applications.

History

Invention and Early Development

The concept of codes where successive values differ by only one unit predates the modern binary reflected Gray code, with early applications in . In 1878, French engineer employed a code—now recognized as a form of Gray code—in a demonstration of his synchronous at the Universal Exposition in , where it facilitated efficient transmission by minimizing bit transitions between characters. This precursor code, part of the , allowed for synchronized over telegraph lines and earned Baudot a for its innovation in reducing errors from mechanical switching. The formal invention of the binary reflected Gray code is attributed to Frank Gray, a researcher at Bell Laboratories. In a patent filed on November 13, 1947, and issued on March 17, 1953 (U.S. Patent 2,632,058), Gray described the "reflected binary code" as a method for pulse code communication, specifically designed to prevent spurious output errors in electromechanical devices like shaft encoders and cathode-ray tube readers. The code rearranges binary representations so that adjacent numerical values differ in only one bit position, thereby reducing the impact of single-bit errors during transmission or mechanical transitions; for example, the sequence progresses such that each step reflects the previous half-sequence prefixed with an inverted bit. This innovation was motivated by practical needs in early digital communication systems at Bell Labs, where minimizing multi-bit changes prevented amplification of noise or misalignment in pulse coding. Early development of Gray codes extended to theoretical and puzzle-solving contexts, revealing deeper combinatorial structures. The sequence of moves in the puzzle, popularized by in 1883, corresponds to a Gray code path, where each legal move changes the binary state of disk positions by exactly one bit, modeling the problem as traversals in a state graph. This connection highlights Gray codes' role in enumerating configurations with minimal changes, akin to unit-distance sequences. By the 1950s, such sequences were linked to Hamiltonian paths in the n-dimensional , where vertices represent binary strings and edges connect strings differing by one bit, providing a graph-theoretic foundation for generating Gray codes without delving into advanced combinatorial optimizations.

Historical Applications

One of the earliest applications of Gray code principles emerged in 19th-century telegraphy, where the French engineer Émile Baudot incorporated a form of Gray code into his 5-bit telegraph code demonstrated in 1878. This design facilitated efficient multiplexing of several telegraph channels over a single line and minimized errors by ensuring that consecutive characters typically differed in only one bit, reducing the likelihood that signal interference would propagate multiple bit flips.

Gray code concepts also found use in mathematical puzzles during the late 19th and early 20th centuries, particularly in solving mechanical brainteasers such as the Chinese rings puzzle (also known as baguenaudier) and the Tower of Hanoi. In these puzzles, each ring or disk position can be represented as a binary state (on or off the bar for rings, on or off a specific peg for disks), and valid moves correspond to single-bit changes in the state representation, mirroring the Gray code's unit-distance property. For the Chinese rings puzzle with 3 rings, the optimal solution requires 5 moves, following the reflected binary Gray code sequence of states 111 → 110 → 010 → 011 → 001 → 000, where each step adheres to the puzzle's movement rules. Similarly, for the Tower of Hanoi with 3 disks, the 7-move solution can be mapped to a 3-bit Gray code traversal of the configuration space, in which each legal move flips exactly one bit, progressing from 000 (all disks on the source peg) to 111 (all on the target peg) through intermediate single-bit changes.

In the 1930s and 1940s, researchers at Bell Laboratories applied Gray code to early analog-to-digital conversion systems, particularly in shaft position indicators for rotating machinery such as antennas and servomechanisms. Frank Gray developed the code to encode angular positions on a rotating disk with concentric tracks, preventing errors during transitions between adjacent positions by limiting changes to one bit at a time, which avoided transient multi-bit ambiguities in electromechanical readers. This approach was detailed in Gray's patent filing for pulse code communication systems, which included applications to position-sensing devices and was issued in 1953.

The timeline of these historical applications thus spans from Baudot's innovations in the 1870s–1880s, through puzzle-solving insights in the early 1900s, to Bell Laboratories' engineering implementations of the 1930s–1940s, culminating in adoption in early computing and conversion prototypes in the 1950s.

Construction and Conversion

Generating Standard Binary Gray Codes

The standard binary reflected Gray code (BRGC) for n bits is a Hamiltonian cycle on the n-dimensional hypercube in which consecutive codes differ by exactly one bit, with a total length of 2^n entries. This construction visits every possible n-bit binary string exactly once while limiting each transition between adjacent elements to a single bit flip.

The recursive method builds the BRGC incrementally. For the base case n = 1, the sequence is simply 0 followed by 1. For n ≥ 2, the n-bit sequence is formed by taking the (n−1)-bit BRGC and prefixing each entry with 0 to create the first half, then appending the reverse (reflection) of the (n−1)-bit BRGC with each entry prefixed by 1. The reflection confines the change at the boundary between the two halves to the new leading bit, maintaining the adjacency property throughout. The process can be expressed formally as

$G_n = 0 \cdot G_{n-1} \,\circ\, 1 \cdot \overline{G_{n-1}}$

where $G_k$ denotes the k-bit BRGC, $\cdot$ is concatenation (prefixing each entry), $\circ$ is list concatenation, and $\overline{G_{n-1}}$ is the reversed (n−1)-bit BRGC. To illustrate, the sequences for small n are as follows:
n   Binary Reflected Gray Code Sequence
1   0, 1
2   00, 01, 11, 10
3   000, 001, 011, 010, 110, 111, 101, 100
4   0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000
These examples demonstrate how the reflection doubles the length at each step while preserving single-bit differences, such as the transition from 001 to 011 (flipping the second bit) in the n = 3 sequence.

An iterative formula provides a direct computational approach without recursion. Take the binary counter values i from 0 to 2^n − 1 and, for each i, compute the corresponding Gray code as g = i XOR (i right-shifted by one bit). This generates the BRGC in natural order, relying on the property that the Gray code bits satisfy g_j = b_j XOR b_{j+1} for the binary representation b of i (with b_n = 0 above the highest bit). For instance, for i = 5 (binary 101), the right shift gives 010, and the XOR yields 111, which is the fifth entry (0-indexed) of the n = 3 sequence. This method is efficient for hardware or software implementation, producing the full list of 2^n codes in O(n · 2^n) time.
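As a concrete sketch of both constructions (an illustrative addition, not taken from the patent literature; the function names are assumptions), the following Python code builds the BRGC recursively by reflect-and-prefix and iteratively via the XOR formula, and checks that the two agree:

def brgc_recursive(n):
    # Reflect-and-prefix construction: 0-prefixed copy, then 1-prefixed mirror.
    if n == 1:
        return ["0", "1"]
    prev = brgc_recursive(n - 1)
    return ["0" + w for w in prev] + ["1" + w for w in reversed(prev)]

def brgc_iterative(n):
    # Direct formula: the i-th codeword is i XOR (i >> 1), printed on n bits.
    return [format(i ^ (i >> 1), "0{}b".format(n)) for i in range(1 << n)]

# Both methods produce the same sequence, e.g. the 16-entry list for n = 4.
assert brgc_recursive(4) == brgc_iterative(4)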

Binary-to-Gray and Gray-to-Binary Conversion

The standard binary-reflected conversion from binary to Gray computes each Gray code bit as the exclusive-OR (XOR) of the corresponding binary bit and the binary bit immediately to its left (of higher significance), with the most significant bit (MSB) left unchanged. This ensures that adjacent codes differ by exactly one bit, as originally described in the context of pulse code communication systems. Formally, for an n-bit binary number $b = b_{n-1} b_{n-2} \dots b_1 b_0$ (where $b_{n-1}$ is the MSB), the Gray code $g = g_{n-1} g_{n-2} \dots g_1 g_0$ is given by

$g_{n-1} = b_{n-1}$
$g_i = b_i \oplus b_{i+1}$ for $i = n-2, \dots, 0$

where $\oplus$ denotes the XOR operation. For example, consider the 4-bit binary number 1011 (decimal 11). The MSB gives $g_3 = 1$; then $g_2 = 0 \oplus 1 = 1$, $g_1 = 1 \oplus 0 = 1$, and $g_0 = 1 \oplus 1 = 0$, yielding the Gray code 1110. Step-by-step pseudocode for binary-to-Gray conversion is as follows:

function binary_to_gray(binary: integer, n: int) -> integer:
    gray = 0
    for i from 0 to n-1:
        if i == 0:
            gray |= (binary & (1 << (n-1)))   // MSB unchanged
        else:
            bit = ((binary >> (n-1-i)) & 1) ^ ((binary >> (n-i)) & 1)
            gray |= (bit << (n-1-i))
    return gray

For a bit-parallel implementation in software (e.g., for fixed-width machine integers), the conversion can be performed with a single shift and XOR: gray = binary ^ (binary >> 1). This works because the right shift aligns each bit with its higher-order neighbour for the XOR, assuming the integer width matches n.

The reverse conversion from Gray to binary uses a cumulative XOR starting from the MSB, where each binary bit is the XOR of the corresponding Gray bit and the already-computed next-higher binary bit. Formally,

$b_{n-1} = g_{n-1}$
$b_i = g_i \oplus b_{i+1}$ for $i = n-2, \dots, 0$

which propagates the parity of the higher bits downward. Applying this to the earlier example Gray code 1110: $b_3 = 1$; then $b_2 = 1 \oplus 1 = 0$, $b_1 = 1 \oplus 0 = 1$, and $b_0 = 0 \oplus 1 = 1$, recovering the binary number 1011. Pseudocode for Gray-to-binary conversion:

function gray_to_binary(gray: integer, n: int) -> integer:
    binary = 0
    // Start with the MSB, which is copied unchanged
    if (gray & (1 << (n-1))):
        binary |= (1 << (n-1))
    for i from n-2 downto 0:
        higher_bit = (binary >> (i+1)) & 1
        bit = ((gray >> i) & 1) ^ higher_bit
        if bit:
            binary |= (1 << i)
    return binary

Unlike the binary-to-Gray direction, Gray-to-binary conversion has a dependency chain running from the MSB downward, so a naive implementation loops over the bits; it can, however, be optimized with parallel prefix computation (e.g., carry-lookahead-like structures in hardware, or a cascade of shift-XOR steps in software) to achieve $O(\log n)$ delay. Edge cases include the all-zero code, which maps to itself under both conversions since no XOR operation alters the bits. The binary-reflected Gray code is also cyclic: the code for $2^n - 1$ (all 1s in binary, which is 100...0 in Gray for $n > 1$) and the all-zero code differ by exactly one bit (the MSB), permitting wrap-around in applications such as rotary encoders without additional logic.
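A concrete software sketch of this parallel-prefix idea (an illustrative addition, assuming values of at most 32 bits): a cascade of shift-XOR steps folds the cumulative XOR in a logarithmic number of operations instead of one loop iteration per bit.

def gray_to_binary_32(g):
    # Each step XORs in the bits 16, 8, 4, 2 and 1 positions higher, so after
    # the cascade every bit holds the XOR of itself and all higher bits.
    g ^= g >> 16
    g ^= g >> 8
    g ^= g >> 4
    g ^= g >> 2
    g ^= g >> 1
    return g

# Example: the Gray code 1110 (14) decodes back to binary 1011 (11).
assert gray_to_binary_32(0b1110) == 0b1011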

Applications

Rotary Encoders and Position Sensing

Rotary encoders are electromechanical devices that convert the angular position of a rotating shaft into digital output signals, and Gray code is a preferred encoding scheme in absolute rotary encoders for precise, reliable position feedback. In these encoders, a code disk or ring carries multiple concentric tracks, each representing one bit of the Gray code word, with the pattern of reflective or transmissive sectors corresponding to the binary-reflected Gray code value for each angular position. As the shaft rotates, stationary sensors read the tracks and output the current absolute position without requiring a homing operation or external reference, so the shaft's orientation can be determined immediately at startup.

Absolute rotary encoders using Gray code differ from incremental types, which generate quadrature pulses to track relative motion and speed but lose absolute position information when power is removed. For example, an 8-bit Gray code absolute encoder provides 256 unique positions across a full 360-degree rotation, a resolution of about 1.4 degrees per step, supporting applications that demand high precision, such as servo control. The reflected binary Gray code is inherently cyclic, with the first (000...0) and last (100...0) codes differing in only a single bit, which ensures a seamless transition across the full-rotation boundary without introducing invalid intermediate states.

In position-sensing applications such as robotics and computer numerical control (CNC) machines, Gray code encoders monitor joint angles or axis positions to enable accurate trajectory planning and closed-loop control. The key advantage lies in the single-bit transition property: during rotation, only one track changes state at a time, minimizing the risk of erroneous multi-bit readings caused by slight mechanical misalignment or sensor timing variations. This property also enhances robustness in real-world environments; if contamination or track wear temporarily disrupts one sensor, the output code typically maps to an adjacent valid position rather than a distant invalid one, limiting position errors to the nearest step.

Implementations of Gray code rotary encoders include optical and magnetic variants. Optical encoders use an LED light source and a photodetector array to read transparent or reflective patterns on the disk, offering high resolution but sensitivity to contaminants such as dust. Magnetic encoders, using a multipole magnetized disk with Hall-effect or magnetoresistive sensors, provide greater robustness against contamination, vibration, and wear, making them suitable for harsh industrial settings in robotics and CNC systems where optical types might fail prematurely.
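To illustrate how a controller might interpret such an encoder, the following sketch (an illustrative addition; the function names and the 8-bit width are assumptions, not part of any particular product) converts a raw Gray-coded track reading into an angle:

def gray_to_binary(g):
    # Cumulative XOR from the MSB downward.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def encoder_angle_degrees(gray_word, bits=8):
    # An 8-bit disk has 256 positions, so each step is 360/256, about 1.4 degrees.
    steps = 1 << bits
    return gray_to_binary(gray_word) * 360.0 / steps

# The Gray word 11000000 decodes to position 128, i.e. half a revolution.
assert encoder_angle_degrees(0b11000000) == 180.0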

Digital Error Correction and Minimization

In logic minimization, Karnaugh maps (K-maps) exploit the adjacency property of Gray codes to simplify the grouping of minterms into product terms: neighbouring cells differ in only one variable, which enables simplification rules such as the consensus theorem to be applied visually. Gray code ordering along the map's axes, such as 00, 01, 11, 10 for two variables, avoids the discontinuities that standard binary labelling would introduce, allowing designers to identify larger implicants without missing essential prime implicants. For a three-variable K-map, the rows and columns are labelled with Gray codes for the variables A, B, and C, and the map behaves like a torus because adjacency wraps around the edges; for instance, in minimizing $F(A, B, C) = \Sigma(0, 1, 4, 5)$, the 1s in the cells for minterms m0 (000), m1 (001), m4 (100), and m5 (101) form a group of four adjacent cells that simplifies to $\overline{B}$, demonstrating how single-bit adjacency reduces the number of literals in the sum-of-products expression.

Gray codes also enable error detection in digital counters: valid state transitions differ by exactly one bit, so a transition that changes several bits can be flagged immediately as invalid, which helps catch bit flips caused by noise or faults. In asynchronous or synchronous counters, if such a fault occurs during a transition, the resulting code does not match any legitimate adjacent state, allowing the system to flag the condition and, if necessary, reset the counter before the error propagates. To extend this to multi-bit error handling, parity bits can be appended to the Gray code words; for example, an even-parity extension computed over the Gray-encoded bits enables detection of up to two errors or correction of one, because the Hamming distance of 1 between adjacent valid codes combined with the parity check makes erroneous patterns deviate enough to be identified. This approach is particularly useful in fault-tolerant digital systems where reliability is paramount.

In ASIC and FPGA designs, the single-bit transition property of Gray codes (a Hamming distance of 1 between consecutive states) significantly reduces dynamic switching power by minimizing simultaneous bit flips in state machines, buses, and address pointers, which can account for up to 70% of total power in high-frequency circuits. By encoding FSM states or counter outputs in Gray code, the average number of toggling bits per cycle drops from several (in binary) to one, lowering the capacitive switching power, which is proportional to CV²f (C the switched capacitance, V the supply voltage, and f the switching frequency); FPGA implementations have reported power savings of 15–30% in UART controllers and time-to-digital converters without loss of functionality. The technique is widely adopted in low-power VLSI for applications such as embedded processors, where even small reductions in transition activity yield substantial energy-efficiency gains.

Gray codes also combine with de Bruijn sequences to generate efficient test patterns in VLSI testing, forming Hamiltonian paths on hypercubes that cover all bit combinations with minimal transitions, which suits built-in self-test (BIST) circuits targeting stuck-at faults or bridging defects. In this context, a pattern sequence based on Gray code ordering ensures that each n-bit pattern appears exactly once in a cyclic sequence of length 2^n with only one bit changing per step, reducing test time and power during scan-chain loading compared to pseudo-random patterns.
For example, in FPGA BIST, Gray-code de Bruijn sequences have been used to achieve 100% fault coverage with up to 50% less switching activity than linear-feedback shift register (LFSR)-based methods, making them suitable for at-speed testing in modern SoCs.
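The switching-activity argument can be checked directly with a short sketch (illustrative only, not tied to any particular FPGA study) that counts bit toggles over one full counting cycle for binary and Gray sequences:

def bit_flips(seq):
    # Total number of bits that toggle over one full cyclic pass of the sequence.
    return sum(bin(seq[i] ^ seq[(i + 1) % len(seq)]).count("1")
               for i in range(len(seq)))

n = 8
binary_seq = list(range(1 << n))
gray_seq = [i ^ (i >> 1) for i in binary_seq]

# The Gray counter toggles exactly one bit per step (256 flips per cycle),
# while the binary counter averages nearly two flips per step (510 here).
print(bit_flips(binary_seq), bit_flips(gray_seq))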

Communication Systems and Clock Domains

In systems with multiple clock domains, such as field-programmable gate arrays (FPGAs) or system-on-chip (SoC) designs, signals crossing between domains risk metastability when multiple bits toggle at nearly the same moment they are sampled. Gray codes address this by encoding counters or pointers so that consecutive values differ by only one bit: even if a synchronizer captures a metastable state on that bit, the sampled value resolves to either the previous or the current valid code and can be decoded correctly in the receiving domain. This approach is commonly applied to asynchronous FIFO implementations, where Gray-encoded read and write pointers are transferred across clock boundaries to reliably detect full or empty conditions. For instance, in a fast-to-slow clock domain FIFO, the write pointer in Gray code is synchronized through double flip-flop stages; because only one bit changes per increment, any metastability affects at most one bit, so the receiving domain interprets the pointer as either the previous or the current valid state without generating false flags. The Gray code's single-bit transition property prevents the invalid intermediate combinations that could otherwise lead to data loss or corruption in multi-clock environments.

In asynchronous FIFOs designed for clock domain crossing, the Gray code pointers are generated from binary counters and synchronized across domains through at least two flip-flops (typically a two-stage synchronizer) to mitigate metastability. The pointers are often converted back to binary in the receiving domain for comparison when deriving status flags. For the full flag, the FIFO is considered full if the next write pointer (in binary) matches the synchronized read pointer, accounting for wrap-around by checking that the most significant bits differ while the lower bits align; similarly, the empty flag is asserted when the synchronized write pointer matches the next read pointer. This binary-comparison method, combined with the Gray code's single-bit changes, limits errors from asynchronous sampling to at most one position, ensuring reliable operation without overruns or underruns. Multi-stage synchronizers (two or more flip-flops) resolve residual metastability, restricting errors to a single bit and allowing the system to tolerate timing variations across domains.

In serial communication protocols, Gray codes reduce the impact of bit errors by mapping adjacent data symbols to codewords that differ in a single bit, so transmission errors during demodulation affect only neighbouring states and overall reliability improves. For example, in UART-based systems, Gray coding of finite state machines or data minimizes the propagation of noise-induced flips, particularly in low-power FPGA implementations where bit transitions are also optimized to lower dynamic power. This is likewise beneficial in FPGA bus handshaking, such as Avalon or AXI interfaces crossing clock domains, where Gray-encoded control signals keep handshake acknowledgments stable despite asynchronous timing variations. The single-bit change property further mitigates metastability risk in high-speed interfaces by limiting the probability of synchronizers entering metastable states on multiple bits simultaneously.
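The following Python sketch is a behavioral model only (the 16-entry depth, the extra wrap bit, and the function names are illustrative assumptions, not a hardware design); it shows the pointer encoding and the binary-comparison full/empty checks described above:

DEPTH_BITS = 4
DEPTH = 1 << DEPTH_BITS                     # 16 entries
PTR_MASK = (1 << (DEPTH_BITS + 1)) - 1      # pointers carry one extra wrap bit

def bin_to_gray(b):
    return b ^ (b >> 1)

def gray_to_bin(g):
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def is_empty(rd_bin, wr_gray_synced):
    # Empty when the read pointer has caught up with the synchronized write pointer.
    return rd_bin == gray_to_bin(wr_gray_synced)

def is_full(wr_bin, rd_gray_synced):
    # Full when the pointers differ by exactly DEPTH; the extra wrap bit
    # distinguishes this case from the empty case.
    return ((wr_bin - gray_to_bin(rd_gray_synced)) & PTR_MASK) == DEPTH

# Only the Gray-coded pointers would cross the clock boundary in hardware.
wr_gray = bin_to_gray(18 & PTR_MASK)        # write pointer after 18 pushes
rd_gray = bin_to_gray(2 & PTR_MASK)         # read pointer after 2 pops
print(is_full(18 & PTR_MASK, rd_gray),      # True: 16 entries in flight
      is_empty(2 & PTR_MASK, wr_gray))      # False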
In DDR memory interfaces, for instance, Gray code pointers in the FIFO buffers of memory controllers prevent pointer collisions during generation and transfer across clock domains; a Gray-coded pointer with an extra wrap bit for a 64-deep FIFO allows rollover without invalid intermediate states, as demonstrated in Xilinx Spartan-6 MIG designs supporting DDR2/DDR3, where this encoding keeps burst operations reliable even under varying clock skew.

Gray codes also appear in specific protocol fields to enhance error detection in networked communication. In the Controller Area Network with Flexible Data-rate (CAN FD), the stuff bit count (SBC) field consists of three Gray-coded bits followed by a parity bit; it counts the inserted stuff bits so that the receiver can verify frame integrity and detect stuffing-related errors common on automotive buses, and the Gray encoding allows efficient localization of single-bit errors, improving the protocol's robustness over classical CAN. Similarly, in Ethernet implementations, Gray codes can encode address buses to reduce switching activity and error susceptibility during multi-gigabit transfers, though their use is more prevalent in custom PHY designs aimed at minimizing crosstalk-induced bit flips.

Computational and Algorithmic Uses

In genetic algorithms, Gray codes are used to represent chromosomes as binary strings in which successive parameter values differ by only one bit, smoothing the search landscape and mitigating the Hamming cliff problem that arises with standard binary encoding. This ensures that small mutations, such as single-bit flips, tend to produce correspondingly small changes in the encoded values, facilitating more gradual exploration of the search space and improving convergence rates over binary representations in certain optimization tasks. For instance, in solving the traveling salesman problem (TSP), Gray code encoding of tour permutations keeps the Hamming distance between similar solutions small, improving the effectiveness of genetic operators such as crossover and mutation in evolutionary algorithms.

Gray codes also enable efficient state cycling in computational simulations by providing a sequence in which transitions between states require minimal change, typically one bit, reducing computational overhead in exhaustive enumerations. In VLSI testing simulations, Gray code-based test pattern generators produce sequences with single-bit differences, lowering power dissipation during fault simulation and improving test coverage with fewer transitions than binary counters. Gray counters likewise support loopless generation of combinatorial objects, such as n-tuples over an m-ary alphabet, where each successive code is produced in constant amortized time without recursion, making them suitable for large-scale simulations in algorithm design and verification (a small enumeration example appears below).

In arithmetic computations, Gray code representations allow adders and subtracters that limit carry effects through the single-bit transition property, so software implementations can perform operations with fewer intermediate bit flips than binary arithmetic. For example, algorithms for adding two Gray-coded numbers can exploit the code's structure to propagate carries more locally, avoiding the ripple effects common in binary addition, although full efficiency often requires hybrid conversion steps. Gray codes are also applied in addressing schemes, such as encoding cache line indices, to decrease bus switching activity; shifted Gray encoding on buses can reduce transitions by up to 90% for sequential accesses, optimizing power consumption in embedded systems and cache-oblivious algorithms.

Modern computational approaches to Gray code-related puzzles emphasize efficient algorithmic solutions, such as loopless generation methods that extend historical problems like the Tower of Hanoi into generalized enumeration tasks. These algorithms generate full Gray code lists for combinatorial structures (e.g., binary reflected Gray codes for state transitions) in O(1) time per code, enabling scalable solution of optimization puzzles involving Hamiltonian paths on hypercubes without redundant computation.
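A small example of this style of enumeration (an illustrative sketch; the generator name is an assumption): because the bit that changes between consecutive BRGC codewords is the lowest set bit of the step index, all subsets of a set can be visited while adding or removing exactly one element per step.

def gray_subsets(items):
    # Yield every subset of `items`, changing one element per step (Gray order).
    current = set()
    yield set(current)
    for i in range(1, 1 << len(items)):
        # Index of the bit that flips between Gray(i-1) and Gray(i):
        # the position of the lowest set bit of i.
        j = (i & -i).bit_length() - 1
        elem = items[j]
        if elem in current:
            current.remove(elem)
        else:
            current.add(elem)
        yield set(current)

# Example: 8 subsets of {a, b, c}, each differing from the last by one element.
for s in gray_subsets(["a", "b", "c"]):
    print(sorted(s))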

Variants

Non-Binary and Short-Length Gray Codes

Gray codes extend beyond binary representations to n-ary systems, in which codewords consist of digits from a base k > 2 and consecutive codewords differ in exactly one digit position by a value of 1. These k-ary Gray codes preserve the adjacency property in the k-ary hypercube graph, whose vertices are the k^n possible codewords and whose edges connect codewords differing by one digit increment or decrement.

The standard construction for k-ary reflected Gray codes follows a recursive approach analogous to the binary case. For n digits in base k, the sequence begins with the (n−1)-digit code prefixed by 0, continues with the reflected (n−1)-digit code prefixed by 1, and proceeds in this alternating pattern up to the prefix k−1, with the reflections ensuring that every step changes a single digit. This method generates a Hamiltonian path through the k-ary n-cube. When k is even, the resulting code is cyclic; for odd k it typically is not without modification. A representative example is the 2-digit ternary (k = 3) reflected Gray code: 00, 01, 02, 12, 11, 10, 20, 21, 22. Each transition alters exactly one digit by 1, such as 02 to 12 (first digit from 0 to 1) or 10 to 20 (first digit from 1 to 2). These codes find applications in multi-level signalling and higher-radix position encoders, carrying the minimal-transition benefit over to non-binary alphabets.

Short-length Gray codes address scenarios that require enumerating fewer than k^n states, forming Hamiltonian paths in induced subgraphs of the k-ary n-cube with m < k^n vertices. These partial Gray codes ensure that consecutive elements of the subset still differ by a single digit change, which is useful for partial encodings in resource-constrained systems such as sparse data representations or limited-range sensors. For instance, in the binary case, a short-length code might traverse a subset of 5 vertices of the 3-cube instead of all 8, minimizing transitions over the selected states.

Excess Gray codes are a variant for absolute encoders whose resolution r is not a power of 2: an n-bit code with $2^n > r$ is used together with an offset chosen so that single-bit changes are preserved, including across the cyclic transition from the last state back to the first. For example, a 360-position encoder might use 9 bits (512 states) offset by 76, coding positions 76 to 435. They are used in applications such as shaft encoders that require reliable position feedback over non-power-of-two ranges.

Single-track variants of Gray codes, often derived from excess or standard forms, use a single coded track read at several points to encode multiple positions via cyclic shifts, enabling compact implementations in rotary and linear position sensors. They can be generated using linear feedback shift registers (LFSRs) for efficient hardware realization, where the feedback polynomial guarantees the cyclic property and bounds the number of changes across shifts; an LFSR-based single-track code for n positions produces a sequence in which each read head sees a rotated version of the same base pattern, supporting continuous encoding without multiple physical tracks.
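A minimal sketch of the recursive prefix-and-reflect construction for k-ary codes described above (an illustrative addition; the function name is an assumption) reproduces the 2-digit ternary example:

def kary_reflected_gray(n, k):
    # n-digit base-k reflected Gray code: for each leading digit d, append the
    # (n-1)-digit code, reversing it on every other block so that only the
    # leading digit changes at block boundaries.
    if n == 0:
        return [""]
    prev = kary_reflected_gray(n - 1, k)
    seq = []
    for d in range(k):
        block = prev if d % 2 == 0 else list(reversed(prev))
        seq.extend(str(d) + w for w in block)
    return seq

# Prints: ['00', '01', '02', '12', '11', '10', '20', '21', '22']
print(kary_reflected_gray(2, 3))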

Specialized Geometric and Balanced Codes

Balanced Gray codes are a variant of cyclic binary Gray codes constructed so that the bit transitions are spread as evenly as possible across the n bit positions, rather than concentrated on the low-order bits as in the standard reflected code. A cyclic n-bit code has 2^n transitions in total, so in a balanced code each position is flipped close to 2^n/n times, which is useful in applications requiring symmetric bit-flip behaviour so that no single signal line bears most of the switching. Such codes have been shown to exist for all positive integers n, with recursive constructions extending smaller balanced codes to larger dimensions. Long-run variants additionally keep each bit constant over extended stretches, allowing prolonged stability of individual bits while maintaining the overall transition balance.

Monotonic Gray codes are sequences in which the codewords progress through the Hamming-weight levels of the hypercube in a nearly non-decreasing manner, avoiding abrupt jumps in density; they are valuable for enumerating combinatorial objects such as subsets or combinations smoothly, where each successive set differs by adding or removing a single element in a controlled way. A related specialized form, the Beckett–Gray code, lists all n-bit strings and is inspired by Samuel Beckett's play Quad: whenever a bit is turned off, it must be the bit that has been on the longest, imposing a first-in, first-out discipline on the positions while preserving the single-bit change property.

Geometric Gray codes optimize paths in specific topologies, such as the snake-in-the-box problem, which seeks the longest induced path in an n-dimensional hypercube without chords, so that no shortcut edges connect non-adjacent vertices of the path. For the 4-bit hypercube (Q_4 with 16 vertices), the maximum snake visits 8 codewords, a path covering half the vertices while avoiding any induced extra edges. In two dimensions, Gray codes adapted for toroidal addressing generate Hamiltonian cycles in toroidal grids using mixed-radix representations, where successive codes differ in one coordinate by ±1, enabling efficient routing and embedding in wrap-around network topologies. Single-track Gray codes extend this geometry to one-dimensional circular arrangements, realizable with a single binary track for absolute position encoding, and were historically applied in rotary drum mechanisms where multiple concentric tracks would be impractical due to mechanical constraints. These specialized codes also find application in coil winding, where snake-in-the-box paths guide non-intersecting wire layouts in multi-layer inductors or transformers, maximizing coil length within a bounded volume while minimizing electromagnetic interference from adjacent turns.
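To see why balancing matters, a short sketch (illustrative only) counts how often each bit position toggles over one cyclic pass of the standard reflected code; the pronounced skew is exactly what balanced Gray codes are designed to even out.

def brgc(n):
    return [i ^ (i >> 1) for i in range(1 << n)]

def transition_counts(codes, n):
    # For each bit position, count the cyclic transitions that flip it.
    counts = [0] * n
    m = len(codes)
    for i in range(m):
        changed = codes[i] ^ codes[(i + 1) % m]
        counts[changed.bit_length() - 1] += 1
    return counts

# For the standard 4-bit BRGC this prints [8, 4, 2, 2]: bit 0 toggles eight
# times per cycle while bit 3 toggles only twice.
print(transition_counts(brgc(4), 4))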

Recent Developments in Combinatorial Gray Codes

Recent advancements in combinatorial Gray codes have focused on refining constructions with specialized properties and extending their utility to emerging computational paradigms. A comprehensive survey by Torsten Mütze, updated in 2024, reviews progress in generating Gray codes for various combinatorial objects, including de Bruijn sequences and universal cycles that enumerate subsets or permutations with minimal transitions. The update highlights loop-free algorithms for restricted growth functions and ordered trees, building on earlier work to achieve more efficient listings for polytopes and other structures. Contributions from researchers including Mansour, Nassar, and Vajnovszki have refined Gray code listings for e-restricted growth functions, emphasizing universal cycles that cover all relevant objects cyclically with adjacent changes.

A notable 2024 development addresses skew-tolerant Gray codes, which ensure that transitions between consecutive codewords occur in nearby bit positions within the codeword. The construction by Benny Applebaum and others gives asymptotically non-vanishing skew-tolerant Gray codes in dimension n, achieving a skew parameter of $\Omega(\log n / n)$ with high probability, an exponential improvement over prior constructions whose skew vanished toward zero, enabling more robust encodings for applications requiring localized changes in high-dimensional spaces.

In optical measurement, ternary Gray codes have enhanced phase unwrapping for 3D shape reconstruction with binary defocusing techniques. A 2021 algorithm employs ternary Gray coding with fewer binary patterns (six for 256 unique fringe orders), reducing acquisition time while maintaining sub-pixel accuracy in fringe projection profilometry. Building on this, a 2024 method introduces complementary Gray codes to further mitigate errors from defocusing and noise; by decomposing the quaternary pattern into complementary binary pairs and integrating reliability maps, the approach achieves error rates below 0.1% for complex surfaces, outperforming ternary methods in dynamic scenes.

Emerging applications use combinatorial Gray codes in quantum-inspired and AI optimization. In parameterized quantum circuits, a gradient-free optimization strategy uses Gray code sequences to explore parameter spaces systematically, reducing convergence time by up to 50% compared with conventional methods for variational quantum eigensolvers. Similarly, Gray code encodings have improved Hamiltonian formulations for problems such as the traveling salesman, enabling space-efficient representations on fewer qubits and facilitating hybrid quantum-classical solvers since 2022. These integrations highlight the role of Gray codes in bridging classical combinatorics with quantum computing for scalable AI-driven optimization.

Mathematical Foundations

Gray Code Isometry

A Gray code can be viewed as a distance-preserving mapping that embeds a linear ordering of symbols into the vertices of a hypercube, where the Hamming distance in the hypercube corresponds to a predefined metric on the symbols, such as the Lee distance for non-binary alphabets. In the binary case this reduces to ordering the $2^n$ binary strings so that consecutive elements differ in exactly one bit position, forming a Hamiltonian path in the n-dimensional hypercube $Q_n$. More generally, for an alphabet of size $q = 2^k$, a Gray code provides an isometry from the q-ary n-cube equipped with an appropriate metric (e.g., Lee or Manhattan distance) to the kn-dimensional binary hypercube under Hamming distance.

The key mathematical property is the preservation of adjacency: for a sequence $g_0, g_1, \dots, g_{q^n-1}$ constituting the Gray code, the Hamming distance satisfies $d_H(g_i, g_{i+1}) = 1$ in the binary case, or a constant multiple of the symbol distance in generalized cases, so the embedding maintains local distances along the linear order. This establishes an isometry between the path graph $P_{q^n}$ (vertices ordered linearly, with edges between consecutive indices) and the induced path subgraph of the target hypercube, where distances are measured along the embedding path rather than by the full graph metric. In the non-binary setting, such as $q = 4$ over $\mathbb{Z}_4^n$ with Lee distance $d_L$, the Gray map $\phi: \mathbb{Z}_4^n \to \mathbb{F}_2^{2n}$, defined componentwise by $\phi(0)=(0,0)$, $\phi(1)=(0,1)$, $\phi(2)=(1,1)$, $\phi(3)=(1,0)$, satisfies $d_H(\phi(\mathbf{x}), \phi(\mathbf{y})) = d_L(\mathbf{x}, \mathbf{y})$ for all $\mathbf{x}, \mathbf{y} \in \mathbb{Z}_4^n$, making it a true isometry between the metric spaces.

A proof sketch for the binary reflected Gray code relies on the closed-form expression $g(i) = i \oplus \lfloor i/2 \rfloor$ (where $\oplus$ denotes bitwise XOR), which ensures that $g(i)$ and $g(i+1)$ differ in exactly one bit. When $i+1$ is obtained by adding 1 to $i$ in binary (flipping a run of trailing 1s to 0s and the next 0 to 1), the XOR with the right-shifted value aligns the changes so that all but one bit cancel, since the shift propagates the carry effect. This is verified by induction on n: the base case n = 1 holds trivially, and the recursive reflection preserves the property by mirroring the (n−1)-bit code and prefixing a fixed bit, flipping only the prefix bit at the junction.

For extensions to n-ary (or q-ary) isometries, similar componentwise mappings are constructed for rings such as $\mathbb{Z}_{p^k}$ or finite p-groups, carrying a generalized Lee metric to the Hamming metric via tensor-product constructions or iterated Gray maps, as with the type-I and type-II Gray maps for p-groups. In coding theory, these isometries are optimal for embeddings because they allow linear codes over rings (e.g., the $\mathbb{Z}_4$-linear Kerdock codes) to be represented equivalently as nonlinear binary codes with preserved minimum distance, enabling analysis of duality and nonlinearity through the isometric image in the binary Hamming space. This facilitates constructions in which the minimum Lee distance translates directly into minimum Hamming distance, optimizing error-correcting capability without metric distortion.

Gray codes, particularly the reflected binary variant, also differ fundamentally from natural binary representations in their structure and utility.
In natural binary encoding, increments between consecutive integers often flip multiple bits; for instance, the transition from 3 (011) to 4 (100) changes all three bits, which can cause errors in hardware applications like position encoders. The reflected binary Gray code, by contrast, ensures that successive codewords differ in exactly one bit position, such as 011 (2) to 010 (3) or 010 (3) to 110 (4), minimizing transition errors and improving reliability in mechanical or optical sensing systems. This single-bit adjacency property positions Gray codes as a sequential ordering tool, distinct from natural binary's direct numerical mapping.

Hamming codes are a class of error-correcting codes that add parity bits to detect and correct single-bit errors, or to detect double-bit errors, within data blocks, achieving a minimum Hamming distance of 3 through a systematic linear construction. Developed for reliable data transmission in computing systems, these codes add redundancy via parity checks rather than reordering codewords for adjacency, allowing error localization and recovery at the cost of increased code length. Gray codes, by comparison, prioritize minimal bit changes between neighbouring codewords to prevent error-prone transitions in dynamic environments such as rotary encoders, and carry no inherent error-correction mechanism; while both rest on the notion of Hamming distance, Hamming codes pursue reliability through redundancy and verification, whereas Gray codes pursue transition efficiency.

De Bruijn sequences provide a cyclic arrangement of symbols in which every possible substring of a fixed length appears exactly once, giving a compact representation of all k-ary strings of length n in a sequence of length k^n. This contrasts with a Gray code's linear or cyclic listing of all 2^n binary strings, where the emphasis is on single-bit differences between consecutive full codewords rather than on overlapping substrings. Although both structures support exhaustive enumeration with controlled change, Gray codes via hypercube traversals and de Bruijn sequences via graph cycles, they serve divergent purposes: de Bruijn sequences for applications such as sequence design and exhaustive pattern coverage in testing, Gray codes for low-error orderings of adjacent states.

Beyond these, Gray codes connect to cyclic codes in coding theory through the Gray map, which transforms non-binary cyclic codes into binary equivalents while approximately preserving minimum distance, aiding the analysis of error-correcting properties in higher-radix systems. Loopless generation techniques for Gray codes additionally allow the next codeword of a sequence to be computed in constant time, avoiding recursive overhead and supporting efficient algorithmic enumeration of combinatorial objects. These links underscore the role of Gray codes as minimal-transition orderings, distinct from parity-based correction in Hamming schemes and substring coverage in de Bruijn constructions, with Gray codes optimized for scenarios demanding smooth bit-level progressions.
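As a concrete check of the isometry described earlier (an illustrative sketch; the exhaustive verification over Z4^2 is for demonstration only), the componentwise Gray map can be confirmed to carry Lee distance to Hamming distance:

from itertools import product

# Componentwise Gray map Z4 -> F2^2, as defined above.
PHI = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_distance(x, y):
    # Lee distance on Z4^n: per coordinate, the shorter way around the 4-cycle.
    return sum(min((a - b) % 4, (b - a) % 4) for a, b in zip(x, y))

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def gray_map(x):
    # Apply the Gray map componentwise and concatenate the bit pairs.
    return tuple(bit for a in x for bit in PHI[a])

# Exhaustively verify d_H(phi(x), phi(y)) == d_L(x, y) for n = 2.
for x in product(range(4), repeat=2):
    for y in product(range(4), repeat=2):
        assert hamming_distance(gray_map(x), gray_map(y)) == lee_distance(x, y)
print("The Gray map is a Lee-to-Hamming isometry on Z4^2")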
