Counter machine

from Wikipedia
A counter machine or counter automaton is an abstract machine used in formal logic and theoretical computer science to model computation. It is the most primitive of the four types of register machines. A counter machine comprises a set of one or more unbounded registers, each of which can hold a single non-negative integer, and a list of (usually sequential) arithmetic and control instructions for the machine to follow. The counter machine is typically used in the process of designing parallel algorithms in relation to the mutual exclusion principle. When used in this manner, the counter machine models the discrete time-steps of a computational system in relation to memory accesses. By modeling computations in relation to the memory accesses for each respective computational step, parallel algorithms may be designed in such a manner as to avoid interlocking, i.e. the simultaneous writing operation by two (or more) threads to the same memory address.

Counter machines with three counters can compute any partial recursive function of a single variable. Counter machines with two counters are Turing complete: they can simulate any appropriately encoded Turing machine. Counter machines with only a single counter can recognize a proper superset of the regular languages and a subset of the deterministic context-free languages.[1]

Basic features

For a given counter machine model the instruction set is tiny—from just one to six or seven instructions. Most models contain a few arithmetic operations and at least one conditional operation (if condition is true, then jump). Three base models, each using three instructions, are drawn from the following collection. (The abbreviations are arbitrary.)

  • CLR (r): CLeaR register r. (Set r to zero.)
  • INC (r): INCrement the contents of register r.
  • DEC (r): DECrement the contents of register r.
  • CPY (rj, rk): CoPY the contents of register rj to register rk leaving the contents of rj intact.
  • JZ (r, z): IF register r contains Zero THEN Jump to instruction z ELSE continue in sequence.
  • JE (rj, rk, z): IF the contents of register rj Equals the contents of register rk THEN Jump to instruction z ELSE continue in sequence.

In addition, a machine usually has a HALT instruction, which stops the machine (normally after the result has been computed).
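The instruction collection above can be made concrete with a small interpreter. This is an illustrative sketch, not a formalization from the literature: the encoding of a program as a list of tuples, and the helper name `run`, are assumptions of the example. Registers are assumed non-negative, and DEC is only executed behind a JZ guard, as in the programs discussed below.

```python
# A minimal counter-machine interpreter for the instruction collection
# above (CLR, INC, DEC, CPY, JZ, JE, HALT). Illustrative sketch; the
# tuple encoding of programs is an assumption of this example.

def run(program, registers, max_steps=10_000):
    """Execute a list of instruction tuples; return the final registers."""
    ic = 0  # instruction counter, 0-based
    for _ in range(max_steps):
        op, *args = program[ic]
        if op == "HALT":
            return registers
        elif op == "CLR":
            registers[args[0]] = 0
        elif op == "INC":
            registers[args[0]] += 1
        elif op == "DEC":
            registers[args[0]] -= 1   # assumed guarded by a prior JZ
        elif op == "CPY":
            registers[args[1]] = registers[args[0]]
        elif op == "JZ":
            if registers[args[0]] == 0:
                ic = args[1]
                continue
        elif op == "JE":
            if registers[args[0]] == registers[args[1]]:
                ic = args[2]
                continue
        ic += 1
    raise RuntimeError("step limit exceeded")

# Clear register 0 using only base set (1): { INC, DEC, JZ }.
# Register 1 is kept at 0, so JZ(1, 0) acts as an unconditional jump.
clear_r0 = [
    ("JZ", 0, 3),   # 0: if r0 == 0, jump to HALT
    ("DEC", 0),     # 1: r0 -= 1
    ("JZ", 1, 0),   # 2: unconditional jump back to 0
    ("HALT",),      # 3
]
print(run(clear_r0, [5, 0]))  # -> [0, 0]
```

The small demo shows why the three-instruction base set (1) already subsumes CLR: a loop of guarded decrements clears any register.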

Using the instructions mentioned above, various authors have discussed certain counter machines:

  • set 1: { INC (r), DEC (r), JZ (r, z) }, (Minsky (1961, 1967), Lambek (1961))
  • set 2: { CLR (r), INC (r), JE (rj, rk, z) }, (Ershov (1958), Peter (1958) as interpreted by Shepherdson–Sturgis (1964); Minsky (1967); Schönhage (1980))
  • set 3: { INC (r), CPY (rj, rk), JE (rj, rk, z) }, (Elgot–Robinson (1964), Minsky (1967))

The three counter machine base models have the same computational power since the instructions of one model can be derived from those of another. All are equivalent to the computational power of Turing machines. Due to their unary processing style, counter machines are typically exponentially slower than comparable Turing machines.

Alternative names, alternative models

The counter machine models go by a number of different names that may help to distinguish them by their peculiarities. In the following, the instruction "JZDEC ( r )" is a compound instruction that tests whether register r is empty; if so, it jumps to instruction Iz; if not, it DECrements the contents of r:

  • Minsky machine, because Marvin Minsky (1961) formalized the model. Abacus machine, the name Lambek (1961) gave to his simplification of the Melzak (1961) model, and what Boolos–Burgess–Jeffrey (1974) call it. Lambek machine, an alternative name Boolos–Burgess–Jeffrey (1974) gave to the abacus machine. Usually uses instruction set (1), but instruction-execution is not default-sequential so the additional parameter 'z' appears to specify the next instruction after INC and as the alternative in JZDEC:
    { INC ( r, z ), JZDEC ( r, ztrue, zfalse) }
  • Program machine, program computer, the names Minsky (1967) gave the model because, like a computer, its instructions proceed sequentially unless a conditional jump is successful. Uses (usually) instruction set (1) but may be augmented similarly to the Shepherdson–Sturgis model. JZDEC is often split apart:
    { INC ( r ), DEC ( r ), JZ ( r, ztrue )}
  • Successor machine, because it uses the 'successor operation' of, and closely resembles, the Peano axioms. Used as a base for the successor RAM model. Uses instruction set (2) by e.g. Schönhage as a base for his RAM0 and RAM1 models that lead to his pointer machine SMM model,[2][3] also discussed briefly by Van Emde Boas:[4][5]
    { CLR ( r ), INC ( r ), JE ( rj, rk, z ) }
  • Elgot–Robinson model, used to define their RASP model (1964). This model requires one empty register at the start (e.g. [r0] =0). (They augmented the same model with indirect addressing by use of an additional register to be used as an "index" register.)
    { INC (r), CPY ( rs, rd ), JE ( rj, rk, z ) }
  • Shepherdson–Sturgis machine, because these authors formally stated their model in an easily accessible exposition (1963). Uses instruction set (1) augmented with additional convenience instructions (JNZ is "Jump if Not Zero", used in place of JZ):
    { INC ( r ), DEC ( r ), CLR ( r ), CPY ( rj, rk ), JNZ ( r, z ), J ( z ) }
  • Other counter machines: Minsky (1967) demonstrates how to build the three base models (program/Minsky/Lambek-abacus, successor, and Elgot–Robinson) from the superset of available instructions described in the lead paragraph of this article. The Melzak (1961) model is quite different from the above because it includes 'add' and 'subtract' rather than 'increment' and 'decrement'. The proofs of Minsky (1961, 1967) that a single register will suffice for Turing equivalence require the two instructions { MULtiply k, and DIV k } to encode and decode the Gödel number in the register that represents the computation. Minsky shows that if two or more registers are available then the simpler INC, DEC etc. are adequate (but the Gödel number is still required to demonstrate Turing equivalence; also demonstrated in Elgot–Robinson 1964).

Formal definition

A counter machine consists of:

  1. Labeled unbounded integer-valued registers: a finite (or infinite in some models) set of registers r0 ... rn each of which can hold any single non-negative integer (0, 1, 2, ... - i.e. unbounded). The registers do their own arithmetic; there may or may not be one or more special registers e.g. "accumulator" (See Random-access machine for more on this).
  2. A state register that stores/identifies the current instruction to be executed. This register is finite and separate from the registers above; thus the counter machine model is an example of the Harvard architecture.
  3. List of labelled, sequential instructions: A finite list of instructions I0 ... Im. The program store (instructions of the finite-state machine) is not in the same physical "space" as the registers. Usually, but not always, like computer programs the instructions are listed in sequential order; unless a jump is successful, the default sequence continues in numerical order. Each of the instructions in the list is from a (very) small set, but this set does not include indirection. Historically most models drew their instructions from this set:
{ Increment (r), Decrement (r), Clear (r); Copy (rj,rk), conditional Jump if contents of r=0, conditional Jump if rj=rk, unconditional Jump, HALT }
Some models have either further atomized some of the above into no-parameter instructions, or combined them into a single instruction such as "Decrement" preceded by conditional jump-if-zero "JZ ( r, z )" . The atomization of instructions or the inclusion of convenience instructions causes no change in conceptual power, as any program in one variant can be straightforwardly translated to the other.
Alternative instruction-sets are discussed in the supplement Register-machine models.

Example: COPY the count from register #2 to #3

This example shows how to create three more useful instructions: clear, unconditional jump, and copy.

Afterward the source register (here #2) will contain its original count (unlike MOVE, which empties the source register, i.e., clears it to zero).

The basic set (1) is used as defined here:

Each instruction is given with its effect on register j, its effect on the instruction counter register (IC), and a summary:

  • INC ( j ): [j] +1 → j; [IC] +1 → IC. Increment the contents of register j; continue with the next instruction.
  • DEC ( j ): [j] -1 → j; [IC] +1 → IC. Decrement the contents of register j; continue with the next instruction.
  • JZ ( j, z ): IF [j] = 0 THEN Iz → IC ELSE [IC] +1 → IC. If the contents of register j equal 0 then jump to instruction z, else continue with the next instruction.
  • HALT: stop the machine.

Initial conditions

Initially, register #2 contains "2". Registers #0, #1 and #3 are empty (contain "0"). Register #0 remains unchanged throughout calculations because it is used for the unconditional jump. Register #1 is a scratch pad. The program begins with instruction 1.

Final conditions

The program HALTs with the contents of register #2 at its original count and the contents of register #3 equal to the original contents of register #2, i.e.,

[2] = [3].

Program high-level description

The program COPY ( #2, #3) has two parts. In the first part the program moves the contents of source register #2 to both scratch-pad #1 and to destination register #3; thus #1 and #3 will be copies of one another and of the original count in #2, but #2 is cleared in the process of decrementing it to zero. Unconditional jumps J (z) are done by tests of register #0, which always contains the number 0:

[#2] →#3; [#2] →#1; 0 →#2

In the second part the program moves (returns, restores) the contents of scratch-pad #1 back to #2, clearing the scratch-pad #1 in the process:

[#1] →#2; 0 →#1
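The two-phase move-and-restore just described can be paraphrased directly in Python. This is an illustrative sketch, not the machine code itself: each while-loop below corresponds to one JZ/DEC/INC/…/JZ loop of the counter-machine program, and the helper name `copy_2_to_3` is an assumption of the example.

```python
# High-level paraphrase of COPY(#2, #3): move-and-restore via scratch #1.

def copy_2_to_3(r):
    # Phase 1: move [#2] to both #1 (scratch) and #3; #2 ends at 0.
    while r[2] != 0:
        r[2] -= 1
        r[3] += 1
        r[1] += 1
    # Phase 2: move [#1] back to #2; #1 ends at 0.
    while r[1] != 0:
        r[1] -= 1
        r[2] += 1
    return r

print(copy_2_to_3([0, 0, 2, 0]))  # -> [0, 0, 2, 2]
```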

Program

A "run" of the program is shown below, with time running down the page. The program listing is flipped 90 degrees: the instruction numbers (addresses) run along the top, the instruction mnemonics under the addresses, and the instruction parameters under the mnemonics (one per cell):

1 2 3 4 5 6 7 8 9 10 ← Instruction number (address)
JZ DEC INC INC JZ JZ DEC INC JZ H ← Instruction
2 2 3 1 0 1 1 2 0 ← Register number
6 1 10 6 ← Jump-to instruction number
step IC Inst # reg # J-addr reg0 reg1 reg2 reg3 reg4 IC
start 0 0 2 0 0 1 move [#2] to #1 and #3:
1 1 JZ 2 6 0 0 2 0 0 1→2 JZ Jump fails: register #2 contains 2
2 2 DEC 2 0 0 0 2→1 0 0 2→3 DEC Decrement register #2 from 2 to 1
3 3 INC 3 0 0 0 1 0→1 0 3→4 INC Increment register #3 from 0 to 1
4 4 INC 1 0 0 0→1 1 1 0 4→5 INC Increment register #1 from 0 to 1
5 5 JZ 0 1 0 1 1 1 0 5→1 JZ U-Jump: register #0 is empty
6 1 JZ 2 6 0 1 1 1 0 1→2 JZ Jump fails: register #2 contains 1
7 2 DEC 2 0 0 1 1→0 1 0 2→3 DEC Decrement register #2 from 1 to 0
8 3 INC 3 0 0 1 0 1→2 0 3→4 INC Increment register #3 from 1 to 2
9 4 INC 1 0 0 1→2 0 2 0 4→5 INC Increment register #1 from 1 to 2
10 5 JZ 0 1 0 2 0 2 0 5→1 JZ U-Jump: register #0 is empty
11 1 JZ 2 6 0 2 0 2 0 1→6 JZ Jump succeeds: register #2 is empty
move [1] to 2:
12 6 JZ 1 10 0 2 0 2 0 6→7 JZ Jump fails: register #1 contains 2
13 7 DEC 1 0 0 2→1 0 2 0 7→8 DEC Decrement register #1 from 2 to 1
14 8 INC 2 0 0 1 0→1 2 0 8→9 INC Increment register #2 from 0 to 1
15 9 JZ 0 6 0 1 1 2 0 9→6 JZ U-Jump: register #0 is empty
16 6 JZ 1 10 0 1 1 2 0 6→7 JZ Jump fails: register #1 contains 1
17 7 DEC 1 0 0 1→0 1 2 0 7→8 DEC Decrement register #1 from 1 to 0
18 8 INC 2 0 0 0 1→2 2 0 8→9 INC Increment register #2 from 1 to 2
19 9 JZ 0 6 0 0 2 2 0 9→6 JZ U-Jump: register #0 is empty
20 6 JZ 1 10 0 0 2 2 0 6→10 JZ Jump succeeds: register #1 is empty
21 10 H 0 0 0 0 2 2 0 10→10 H HALT
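The ten-instruction program traced above can be checked mechanically. The sketch below encodes the program exactly as listed (addresses 1-10, jump targets 6, 1, 10, 6) and runs it on a tiny interpreter for base set (1) plus HALT; the tuple encoding is an assumption of this example, not part of the original model.

```python
# The COPY(#2, #3) program from the run table, addresses 1-based.

def run(program, regs):
    ic = 1
    while True:
        op, *args = program[ic - 1]
        if op == "H":
            return regs
        if op == "INC":
            regs[args[0]] += 1
        elif op == "DEC":
            regs[args[0]] -= 1
        elif op == "JZ" and regs[args[0]] == 0:
            ic = args[1]
            continue
        ic += 1

program = [
    ("JZ", 2, 6),   # 1: done moving [#2]?
    ("DEC", 2),     # 2
    ("INC", 3),     # 3
    ("INC", 1),     # 4
    ("JZ", 0, 1),   # 5: unconditional jump (register #0 is always 0)
    ("JZ", 1, 10),  # 6: done restoring [#1]?
    ("DEC", 1),     # 7
    ("INC", 2),     # 8
    ("JZ", 0, 6),   # 9: unconditional jump
    ("H",),         # 10
]
print(run(program, [0, 0, 2, 0]))  # -> [0, 0, 2, 2]
```

Running it reproduces the final state of the trace: #2 restored to its original count and #3 holding the copy.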

The partial recursive functions: building "convenience instructions" using recursion

The example above demonstrates how the basic instructions { INC, DEC, JZ } can create three more instructions: unconditional jump J, CLR, and CPY. In a sense CPY used both CLR and J plus the base set. If register #3 had had contents initially, the sum of the contents of #2 and #3 would have ended up in #3. So, to be fully accurate, the CPY program should have preceded its moves with CLR (1) and CLR (3).

However, we do see that ADD would have been easily possible. And in fact the following is a summary of how the primitive recursive functions such as ADD, MULtiply and EXPonent can come about.[6]

  • Beginning instruction set: { DEC, INC, JZ, H }
  • Define unconditional "Jump J (z)" in terms of JZ ( r0, z ) given that r0 contains 0.
{ J, DEC, INC, JZ, H }
  • Define "CLeaR ( r )" in terms of the above:
{ CLR, J, DEC, INC, JZ, H }
  • Define "CoPY ( rj, rk )" while preserving contents of rj in terms of the above:
{ CPY, CLR, J, DEC, INC, JZ, H }
The above is the instruction set of Shepherdson–Sturgis (1963).
  • Define "ADD ( rj, rk, ri )", (perhaps preserving the contents of rj, and rk ), by use of the above:
{ ADD, CPY, CLR, J, DEC, INC, JZ, H }
  • Define " MULtiply ( rj, rk, ri )" (MUL) (perhaps preserving the contents of rj, rk), in terms of the above:
{ MUL, ADD, CPY, CLR, J, DEC, INC, JZ, H }
  • Define "EXPonential ( rj, rk, ri )" (EXP) (perhaps preserving the contents of rj, rk ) in terms of the above,
{ EXP, MUL, ADD, CPY, CLR, J, DEC, INC, JZ, H }
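The first steps of this hierarchy can be sketched in Python, staying close to the counter-machine discipline (only increment, decrement, and test-for-zero on individual registers). The helper names `add_into`, `add`, and `mul`, and the use of explicitly passed scratch registers, are assumptions of this example, not the authors' notation.

```python
# Building ADD and MUL from INC/DEC/JZ-style loops, preserving operands
# by the move-and-restore trick. Registers are cells of the list r.

def add_into(r, src, dst, scratch):
    # dst += src, preserving src: move src into dst and scratch, then
    # restore src from scratch. Assumes scratch starts at 0.
    while r[src] != 0:
        r[src] -= 1
        r[dst] += 1
        r[scratch] += 1
    while r[scratch] != 0:
        r[scratch] -= 1
        r[src] += 1

def add(r, j, k, i, scratch):
    # ADD(rj, rk, ri): ri := rj + rk, preserving rj and rk.
    r[i] = 0                     # CLR(ri)
    add_into(r, j, i, scratch)
    add_into(r, k, i, scratch)

def mul(r, j, k, i, s1, s2):
    # MUL(rj, rk, ri): ri := rj * rk by repeated preserved addition,
    # looping over a working copy of rk so rk itself survives.
    r[i] = 0
    r[s1] = 0
    add_into(r, k, s1, s2)       # s1 := rk (working counter)
    while r[s1] != 0:
        r[s1] -= 1
        add_into(r, j, i, s2)    # ri += rj

r = [0, 3, 4, 0, 0, 0]
add(r, 1, 2, 3, 4)
print(r[3])          # -> 7
mul(r, 1, 2, 3, 4, 5)
print(r[3])          # -> 12
```

EXP follows the same pattern one level up: loop a working copy of the exponent, multiplying into the result each time.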

In general, we can build any partial or total primitive recursive function that we wish, by using the same methods. Indeed, Minsky (1967), Shepherdson–Sturgis (1963) and Boolos–Burgess–Jeffrey (1974) give demonstrations of how to form the five primitive recursive function "operators" (1-5 below) from base set (1).

But what about full Turing equivalence? We need to add the sixth operator—the μ operator—to obtain the full equivalence, capable of creating the total- and partial- recursive functions:

  1. Zero function (or constant function)
  2. Successor function
  3. Identity function
  4. Composition function
  5. Primitive recursion (induction)
  6. μ operator (unbounded search operator)

The authors show that this is done easily within any of the available base sets (1, 2, or 3) (an example can be found at μ operator). This means that any mu recursive function can be implemented as a counter machine,[7] despite the finite instruction set and program size of the counter machine design. However, the required construction may be counter-intuitive, even for functions that are relatively easy to define in more complex register machines such as the random-access machine. This is because the μ operator can iterate an unbounded number of times, but any given counter machine cannot address an unbounded number of distinct registers due to the finite size of its instruction list.

For instance, the above hierarchy of primitive recursive operators can be extended past exponentiation into the higher-order arrow operations of Knuth's up-arrow notation. For any fixed n, the function a ↑^n b is primitive recursive and can be implemented as a counter machine in a straightforward way. But the function that takes n itself as an argument is not primitive recursive. One may be tempted to implement the up-arrow operator using a construction similar to the successor, addition, multiplication, and exponentiation instructions above, by implementing a call stack so that the operator can be applied recursively on smaller values of n. This idea is similar to how one may implement such a function in practice in many programming languages. However, the counter machine cannot use an unbounded number of registers in its computation, which would be necessary to implement a call stack that can grow arbitrarily large. The up-arrow operation can still be implemented as a counter machine, since it is mu recursive, but it would have to be implemented by encoding an unbounded amount of information inside a finite number of registers, such as by using Gödel numbering.
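The recursion in question can be written out in Python, where the language runtime supplies the call stack that a counter machine lacks. The helper name `up` and the convention that n = 1 means ordinary exponentiation are assumptions of this example.

```python
# Knuth's up-arrow a ↑^n b via the double recursion described above.
# The recursion depth here is exactly the unbounded call stack a counter
# machine cannot maintain without Gödel-coding it into registers.

def up(a, n, b):
    if n == 1:
        return a ** b            # one arrow = exponentiation
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 1, 3))   # -> 8
print(up(2, 2, 3))   # -> 16   (2 ↑↑ 3 = 2^(2^2))
```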

Problems with the counter machine model

The problems are discussed in detail in the article Random-access machine. The problems fall into two major classes and a third "inconvenience" class:

(1) Unbounded capacities of registers versus bounded capacities of state-machine instructions: How will the machine create constants larger than the capacity of its finite-state machine?

(2) Unbounded numbers of registers versus bounded numbers of state-machine instructions: How will the machine access registers with address-numbers beyond the reach/capability of its finite-state machine?

(3) The fully reduced models are cumbersome:

Shepherdson and Sturgis (1963) are unapologetic about their 6-instruction set. They have made their choice based on "ease of programming... rather than economy" (p. 219 footnote 1).

Shepherdson and Sturgis' instructions ( [r] indicates "contents of register r"):

    • INCREMENT ( r ) ; [r] +1 → r
    • DECREMENT ( r ) ; [r] -1 → r
    • CLEAR ( r ) ; 0 → r
    • COPY ( rs to rd ) ; [rs] → rd
    • JUMP-UNCONDITIONAL to instruction Iz
    • JUMP IF [r] =0 to instruction Iz

Minsky (1967) expanded his 2-instruction set { INC (z), JZDEC (r, Iz) } to { CLR (r), INC (r), JZDEC (r, Iz), J (Iz) } before his proof that a "Universal Program Machine" can be built with only two registers (p. 255ff).

Two-counter machines are Turing equivalent (with a caveat)

For every Turing machine, there is a 2CM that simulates it, given that the 2CM's input and output are properly encoded. This is proved in Minsky's book (Computation, 1967, pp. 255-258), and an alternative proof is sketched below in three steps. First, a Turing machine can be simulated by a finite-state machine (FSM) equipped with two stacks. Then, two stacks can be simulated by four counters. Finally, four counters can be simulated by two counters. The two-counter machine uses the instruction set { INC ( r, z ), JZDEC ( r, ztrue, zfalse) }.

Step 1: A Turing machine can be simulated by two stacks.

A Turing machine consists of an FSM and an infinite tape, initially filled with zeros, upon which the machine can write ones and zeros. At any time, the read/write head of the machine points to one cell on the tape. This tape can be conceptually cut in half at that point. Each half of the tape can be treated as a stack, where the top is the cell nearest the read/write head, and the bottom is some distance away from the head, with all zeros on the tape beyond the bottom. Accordingly, a Turing machine can be simulated by an FSM plus two stacks. Moving the head left or right is equivalent to popping a bit from one stack and pushing it onto the other. Writing is equivalent to changing the bit before pushing it.
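The cut-the-tape-at-the-head idea can be sketched with two Python lists standing in for the stacks. The encoding below (head cell held separately, empty stacks yielding the blank symbol 0) is an illustrative assumption of this example, not a construction quoted from the source.

```python
# Tape as two stacks: cells left of the head on one stack, cells right
# of the head on the other; the tape's infinite run of zeros is modeled
# by popping 0 from an empty stack.

def pop(stack):
    return stack.pop() if stack else 0   # beyond the bottom: all zeros

def move_left(left, right, head):
    right.append(head)        # current cell goes onto the right stack
    return pop(left)          # new head comes off the left stack

def move_right(left, right, head):
    left.append(head)
    return pop(right)

# Write a 1, step right, step back left: the written bit is still there.
left, right, head = [], [], 0
head = 1                      # "write" = replace the head bit
head = move_right(left, right, head)
print(head)                   # -> 0 (a fresh blank cell)
head = move_left(left, right, head)
print(head)                   # -> 1 (the bit written earlier)
```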

Step 2: A stack can be simulated by two counters.

A stack containing zeros and ones can be simulated by two counters when the bits on the stack are thought of as representing a binary number (the topmost bit on the stack being the least significant bit). Pushing a zero onto the stack is equivalent to doubling the number. Pushing a one is equivalent to doubling and adding 1. Popping is equivalent to dividing by 2, where the remainder is the bit that was popped. Two counters can simulate this stack: one counter holds the number whose binary representation encodes the bits on the stack, and the other is used as a scratchpad. To double the number in the first counter, the FSM initializes the second counter to zero, then repeatedly decrements the first counter once and increments the second counter twice, until the first counter reaches zero; at that point the second counter holds the doubled number. Halving is performed by repeatedly decrementing the first counter twice and incrementing the second counter once, until the first counter reaches zero. The remainder (the popped bit) is determined by whether the first counter reached zero after an even or an odd number of decrements, the parity being encoded in the state of the FSM.
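The push and pop operations can be sketched with the doubling and halving loops written out as described (repeated unit decrements and increments, never direct arithmetic). The helper names `double_plus` and `halve` are assumptions of this example.

```python
# Stack-as-a-number: push by doubling (+ pushed bit), pop by halving.

def double_plus(bit, a):
    # Push: move counter a into b, two increments per decrement, then
    # add the pushed bit.
    b = 0
    while a != 0:
        a -= 1
        b += 2
    return b + bit

def halve(a):
    # Pop: remove 2 per increment of b; a leftover 1 means the popped
    # bit was 1 (the parity an FSM would track in its state).
    b = 0
    bit = 0
    while a != 0:
        if a == 1:
            a -= 1
            bit = 1
        else:
            a -= 2
            b += 1
    return b, bit

n = 0
for bit in [1, 0, 1]:        # push 1, then 0, then 1 (top = LSB)
    n = double_plus(bit, n)
print(n)                     # -> 5 (binary 101)
print(halve(n))              # -> (2, 1): pop returns the last-pushed 1
```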

Step 3: Four counters can be simulated by two counters.

As before, one of the counters is used as a scratchpad. The other holds an integer whose prime factorization is 2^a 3^b 5^c 7^d. The exponents a, b, c, and d can be thought of as four virtual counters that are being packed (via Gödel numbering) into a single real counter. If the real counter is set to zero then incremented once, that is equivalent to setting all the virtual counters to zero. If the real counter is doubled, that is equivalent to incrementing a, and if it's halved, that's equivalent to decrementing a. By a similar procedure, it can be multiplied or divided by 3, which is equivalent to incrementing or decrementing b. Similarly, c and d can be incremented or decremented. To check if a virtual counter such as c is equal to zero, just divide the real counter by 5, see what the remainder is, then multiply by 5 and add back the remainder. That leaves the real counter unchanged. The remainder will have been nonzero if and only if c was zero.
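The packing scheme is easy to check numerically. In the sketch below, multiplication and division stand in for the doubling/halving loops of the previous step; the helper names `inc`, `dec`, and `is_zero` are assumptions of this example.

```python
# Four virtual counters a, b, c, d packed into one integer
# n = 2^a * 3^b * 5^c * 7^d (Gödel numbering).

PRIMES = {"a": 2, "b": 3, "c": 5, "d": 7}

def inc(n, name):
    return n * PRIMES[name]           # bump the exponent

def dec(n, name):
    p = PRIMES[name]
    assert n % p == 0, "virtual counter already zero"
    return n // p

def is_zero(n, name):
    # Divide by the prime and inspect the remainder: a nonzero
    # remainder means the exponent (virtual counter) is zero.
    return n % PRIMES[name] != 0

n = 1                    # all four virtual counters at zero
n = inc(n, "c")
n = inc(n, "c")
n = inc(n, "a")          # now a=1, c=2
print(n)                 # -> 50  (2^1 * 5^2)
print(is_zero(n, "b"))   # -> True
print(is_zero(n, "c"))   # -> False
```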

As a result, an FSM with two counters can simulate four counters, which are in turn simulating two stacks, which are simulating a Turing machine. Therefore, an FSM plus two counters is at least as powerful as a Turing machine. A Turing machine can easily simulate an FSM with two counters, therefore the two machines have equivalent power.

The caveat: *If* its counters are initialised to N and 0, then a 2CM cannot calculate 2^N

This result, together with a list of other functions of N that are not calculable by a two-counter machine — when initialised with N in one counter and 0 in the other — such as N^2, sqrt(N), log_2(N), etc., appears in a paper by Schroeppel (1972). The result is not surprising, because the two-counter machine model was proved (by Minsky) to be universal only when the argument N is appropriately encoded (by Gödelization) to simulate a Turing machine whose initial tape contains N encoded in unary; moreover, the output of the two-counter machine will be similarly encoded. This phenomenon is typical of very small bases of computation whose universality is proved only by simulation (e.g., many Turing tarpits, the smallest-known universal Turing machines, etc.).

The proof is preceded by some interesting theorems:

  • "Theorem: A three-counter machine can simulate a Turing machine" (p. 2, also cf Minsky 1967:170-174)
  • "Theorem: A 3CM [three-counter machine] can compute any partial recursive function of one variable. It starts with the argument [i.e. N] in a counter, and (if it halts) leaves the answer [i.e. F(N)] in a counter." (p. 3)
  • "Theorem: Any counter machine can be simulated by a 2CM [two-counter machine], provided an obscure coding is accepted for the input and output." (p. 3; the "obscure coding" is 2^W 3^X 5^Y 7^Z, where the simulated counters are W, X, Y, Z)
    • "Corollary: The halting problem for 2CMs is unsolvable." (p. 3)
    • "Corollary: A 2CM can compute any partial recursive function of one argument, provided the input is coded as 2^N and the output (if the machine halts) is coded as 2^answer." (p. 3)
  • "Theorem: There is no two-counter machine that calculates 2^N [if one counter is initialised to N]." (p. 11)

With regard to the second theorem that "A 3CM can compute any partial recursive function" the author challenges the reader with a "Hard Problem: Multiply two numbers using only three counters" (p. 2). The main proof involves the notion that two-counter machines cannot compute arithmetic sequences with non-linear growth rates (p. 15), i.e. "the function 2^X grows more rapidly than any arithmetic progression." (p. 11).

A practical example of calculation by counting

The Friden EC-130 calculator had no adder logic as such. Its logic was highly serial, doing arithmetic by counting. Internally, decimal digits were represented in unary (radix-1): for instance, a six was represented by six consecutive pulses within the time slot allocated for that digit. Each time slot carried one digit, least significant first. Carries set a flip-flop which caused one count to be added to the digit in the next time slot.[citation needed]

Adding was done by an up-counter, while subtracting was done by a down-counter, with a similar scheme for dealing with borrows.[citation needed]

The time slot scheme defined six registers of 13 decimal digits, each with a sign bit. Multiplication and division were done basically by repeated addition and subtraction. The square root version, the EC-132, effectively subtracted consecutive odd integers, each decrement requiring two consecutive subtractions. After the first, the minuend was incremented by one before the second subtraction.[citation needed]

from Grokipedia
A counter machine is an abstract model of computation consisting of a finite-state controller and a finite number of unbounded counters that store non-negative integers, with instructions that allow incrementing or decrementing counters (the latter only if non-zero), unconditional jumps between instructions, and conditional jumps based on whether a specific counter is zero.[1][2] These machines process inputs sequentially and can simulate any Turing machine computation when equipped with at least two counters, establishing their Turing completeness.[3][4] Independently introduced in 1961 by Z. A. Melzak, J. Lambek, and Marvin Minsky as part of work on recursive unsolvability in Turing machine theory, counter machines provide a simple yet powerful framework for studying computability and complexity, often used to prove undecidability results due to their straightforward structure.[3][5][6]

Variants include one-way counter machines, which process inputs in real-time without rewinding, and multi-counter machines with restrictions on counter operations, leading to hierarchies of computational power where k-counter machines recognize strictly more languages than (k-1)-counter machines for certain classes.[2][7] Counter machines are equivalent to other universal models like register machines under specific encodings, with applications in formal language theory for analyzing counter languages and in verification problems for systems with bounded counters.[1][8]

Their minimal instruction set — typically limited to increment, decrement-if-nonzero with jump, and unconditional jump — facilitates proofs of equivalence to Turing machines via simulation techniques that encode tape contents into counter values.[5][4]

Fundamentals

Basic Features

A counter machine is an abstract computational model featuring a finite set of registers, each capable of storing a non-negative integer of arbitrary size, along with a program counter that sequences the execution of a finite program consisting of labeled instructions stored in a separate, read-only memory space. This separation of program instructions from data registers exemplifies a Harvard architecture, distinguishing it from models like the von Neumann architecture where code and data share the same memory. The machine operates by sequentially fetching and executing instructions pointed to by the program counter, which advances unless explicitly altered by a jump.

The core instructions of a counter machine are minimal and focused on basic arithmetic and control flow: increment (INC), which adds 1 to a specified register and proceeds to the next instruction; decrement-if-non-zero (DEC), which subtracts 1 from a register only if its value is greater than zero and jumps accordingly, or skips the decrement and jumps elsewhere if zero; and jump-if-zero (JZ), which tests a register for zero and branches to a different instruction based on the result. These operations, often combined in the DEC instruction to include implicit zero-testing, enable the machine to perform conditional branching and simple arithmetic. Numbers in the registers are represented in unary form, where the value is encoded by the count of units, leading to computations that are conceptually simple but inefficient for large values compared to binary representations in other models.[9]

Despite their simplicity, counter machines with as few as two to seven instructions — such as the canonical set of INC and DEC with zero-test — are computationally equivalent to Turing machines, capable of simulating any Turing-computable function, though with reduced efficiency due to the unary encoding. For instance, simulating a Turing machine running in time t on a counter machine requires O(t^2) steps, as operations like copying or comparing large counter values necessitate linear-time loops over their unary contents. This equivalence underscores the model's theoretical power while highlighting its practical limitations in speed relative to binary-based models like random-access machines.[10][11]

Historical Development

The counter machine model was first introduced by Marvin Minsky in 1961 as a simplified abstract device capable of simulating any Turing machine, thereby establishing its Turing-completeness for studying recursive unsolvability problems.[12] In his seminal paper, Minsky described the model using a finite set of registers holding non-negative integers and basic operations including increment, decrement, and conditional transfer based on zero-testing, positioning it as an accessible alternative to tape-based machines for theoretical analysis.[13] Concurrently, John Lambek developed a similar formalism in 1961, referring to it as an "abacus machine" with instructions for incrementing, decrementing registers, and branching on zero, proving its equivalence to Turing computability while emphasizing its pedagogical value for infinite computation on finite instructions. This work built on earlier ideas of register-based computation but refined the model for proving properties of recursive functions. Shortly thereafter, J.C. Shepherdson and H.E. Sturgis provided a rigorous formalization in 1963, defining register machines with clear syntax for instructions like clear, increment, decrement, and zero-test, and demonstrating that they compute precisely the partial recursive functions. 
These early contributions were influenced by prior models of random-access computation, such as those implicit in von Neumann's 1940s architecture descriptions, where registers allow direct access without sequential scanning, paving the way for counter machines as restricted variants focused on counter operations.[14] Throughout the 1960s and 1970s, the model evolved as a tool for investigating primitive recursive functions—those computable by bounded iterations—and their extension to partial recursive functions via unbounded minimization, with key proofs establishing equivalence to other foundational systems like Turing machines and lambda calculus.[15] This period saw counter machines adopted in computability theory texts for their intuitive structure in analyzing function classes and undecidability.

Variants

Alternative Names

Counter machines are also known by several synonymous terms in the theoretical computer science literature, reflecting their conceptual roots in modeling computation with unbounded registers. The term "register machine" is frequently used interchangeably, particularly in early foundational works, where it describes abstract devices with an unbounded number of registers holding non-negative integers and supporting basic operations like increment, decrement, and zero-testing.[16] However, the broader "register machine" concept can encompass variants with additional operations such as direct addition or subtraction of register values, distinguishing it from the stricter counter machine model limited to unit increments and decrements.[1]

Another common designation is "Minsky machine," named after Marvin Minsky's 1961 formalization of a two-counter model capable of simulating Turing machines, which established the computational equivalence of such devices.[13] Similarly, "abacus machine" originates from Joachim Lambek's 1961 paper, where he simplified Z. A. Melzak's earlier proposal into an "infinite abacus" with unary operations on counters, emphasizing its analogy to a mechanical computing device with beads or counters.[17] In some European theoretical texts, the model is referred to as a "counter automaton," highlighting its automaton-like structure augmented with counters for universal computation.

The terminology evolved during the 1960s through seminal papers: Minsky's work introduced the counter-based framework, Lambek's abacus variant provided a simplified exposition, and Shepherdson and Sturgis's 1963 model popularized "register machine" for programming recursive functions, with "counter machine" gaining prominence in later analyses of Turing equivalence.[3] Modern references often standardize on "counter machine" to denote the basic unbounded variant, while retaining historical names for specific contributions.

Alternative Models

The Minsky register machine, introduced by Marvin Minsky, operates with a minimal set of instructions: increment a specified register by one (optionally jumping to a label), conditional decrement of a register (decrement if non-zero and proceed or jump to one label, else jump to another on zero), and halt execution.[10] This design emphasizes simplicity in register operations while enabling simulation of more complex computations through conditional branching on zero values.[18]

The Shepherdson-Sturgis model extends the basic counter machine framework by incorporating a clear instruction that sets a register to zero, alongside successor (increment), transfer (copying the contents of one register to another as a primitive operation), and unconditional jump operations. Developed to facilitate the computation of partial recursive functions, this variant prioritizes ease of programming recursive algorithms over minimalism, allowing direct representation of primitive recursive operations like zeroing and copying registers.

In contrast, the abacus machine, as formalized by Boolos, Burgess, and Jeffrey, focuses on successor and predecessor operations as core primitives, where the successor adds one to a register and the predecessor subtracts one only if the register is non-zero, with control flow directed via labeled instructions. This model draws inspiration from manual calculation devices, modeling computation through bead-like manipulations on an abstract abacus, and uses non-sequential instruction execution to simulate general-purpose processing.
The program counter machine variant treats the program counter itself as a manipulable register, enabling explicit instructions to increment, decrement, or set the counter to values stored in other registers, thus allowing dynamic control flow and self-modifying programs within the counter framework.[19] Regarding Turing equivalence, all these counter machine variants achieve full Turing completeness with at least two counters, as demonstrated by Minsky's simulation of a Turing machine using two registers, though some models like the abacus machine may require additional counters for certain optimizations in equivalence proofs. Variations in minimal counter requirements arise from differences in instruction sets, but two counters suffice across models to compute any partial recursive function.
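Minsky's two-instruction model (plus halt) can be made concrete with a small interpreter. The following Python sketch is illustrative only: the tuple-based encoding and function names are assumptions for exposition, not part of Minsky's formulation.

```python
# Illustrative interpreter for Minsky's instruction set (encoding assumed):
#   ("inc", r, z)      -- add 1 to register r, then go to instruction z
#   ("dec", r, z, z0)  -- if r > 0: subtract 1 and go to z; else go to z0
#   ("halt",)          -- stop
def run_minsky(program, registers, max_steps=10_000):
    regs = list(registers)
    pc = 0
    for _ in range(max_steps):
        ins = program[pc]
        if ins[0] == "inc":
            regs[ins[1]] += 1
            pc = ins[2]
        elif ins[0] == "dec":
            if regs[ins[1]] > 0:
                regs[ins[1]] -= 1
                pc = ins[2]
            else:
                pc = ins[3]
        else:  # halt
            return regs
    raise RuntimeError("step limit exceeded")

# Destructively add R1 into R0: while R1 > 0, move one unit across.
add = [("dec", 1, 1, 2), ("inc", 0, 0), ("halt",)]
print(run_minsky(add, [2, 3]))  # → [5, 0]
```

Note how every loop is driven by the zero branch of the `dec` instruction, the model's only conditional.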

Formal Aspects

Components and Architecture

A counter machine is defined by a finite (but arbitrary) number of registers, typically denoted as $ R_1, R_2, \dots, R_k $, where each register holds a non-negative integer value representing the count of some abstract "tokens" or units. These registers provide the machine's memory, allowing arbitrary storage capacity since the number of registers can be chosen as large as needed and each can store arbitrarily large integers. The model assumes registers start at zero unless initialized otherwise, and operations are restricted to those preserving non-negativity.[20][21]

The program of a counter machine consists of a finite sequence of labeled instructions, stored in a separate memory space from the registers, which aligns with a Harvard architecture separating code from data. Each instruction is uniquely identified by a label (often a numerical index), enabling jumps and sequential execution. This fixed program dictates the machine's behavior, with no mechanism for self-modification in the basic model. The program counter, an implicit or explicit component, points to the label of the currently executing instruction and advances or branches based on the instruction's semantics.[20][3]

The state of a counter machine at any instant is the tuple comprising the current program counter value and the contents of all registers, formally represented as $ (pc, r_1, r_2, \dots, r_k) $, where $ pc $ is the program counter and each $ r_i $ is the non-negative integer in register $ R_i $. This configuration fully captures the machine's internal status, determining the next transition. Computation begins from an initial state with the program counter at the first instruction and proceeds until reaching a halt instruction.[21][22]

Input to the counter machine is provided by loading the input value into a designated register, such as $ R_1 $, with all other registers initialized to zero, while the program counter starts at the entry point. Output is similarly extracted from a specific register, for example $ R_2 $, upon halting, with the final state indicating the computed result. This register-based input/output mechanism simplifies the model by avoiding auxiliary storage like tapes.[20][21]

Instruction Set and Execution

A counter machine operates using a small set of primitive instructions that manipulate the contents of its registers, which store non-negative integers. The standard instruction set consists of five basic operations: increment (INC i), which adds 1 to register R_i; decrement (DEC i), which subtracts 1 from R_i if R_i > 0 and otherwise leaves the register unchanged; zero-test jump (JZ i, j), which branches to instruction label j if R_i = 0 and otherwise proceeds to the next instruction; unconditional jump (JMP j), which branches to instruction label j; and halt (HALT), which terminates the computation. These instructions enable basic arithmetic and conditional control flow without support for indirect addressing or operations beyond single-unit increments and decrements.[2]

Execution proceeds in a sequential fetch-execute cycle, where the machine maintains a program counter pointing to the current instruction in a fixed finite program. Starting from the initial counter position (typically instruction 1), the machine fetches the instruction, applies its effect to the registers and control flow, and advances the program counter to the next position or to a specified label for jumps. Conditional instructions like DEC i and JZ i, j introduce branching based on the register value, allowing the machine to test for zero or positive contents without explicit comparison operators. The process continues until the HALT instruction is reached or no applicable transition exists, at which point the computation stops and the final register values represent the output.

Formally, the execution can be modeled by a transition function δ that maps the current state—comprising the program counter p and register values (r_1, ..., r_k)—and the instruction at p to a new state.
For example, for INC i at p, δ((p, r), INC i) = (p+1, r with r_i incremented); for DEC i at p, δ((p, r), DEC i) = (p+1, r with r_i decremented) if r_i > 0, or (p+1, r) otherwise; for JZ i, j at p, δ((p, r), JZ i, j) = (j, r) if r_i = 0, or (p+1, r) otherwise; for JMP j at p, δ((p, r), JMP j) = (j, r); and the HALT instruction yields no successor state, ending the run. This deterministic transition captures the machine's step-by-step behavior, with the overall computation being the transitive closure of these single steps. Registers serve as the primary components for data storage, interacting solely through these instructions during execution.
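The fetch-execute cycle and transition function δ described above can be rendered as a small interpreter. This is an illustrative sketch, not a standard implementation; the tuple-based instruction encoding and the step limit are assumptions.

```python
# Minimal five-instruction counter machine interpreter (encoding assumed).
def run(program, registers, max_steps=10_000):
    """Execute a program given as a list of tuples: ("INC", i), ("DEC", i),
    ("JZ", i, j), ("JMP", j), ("HALT",). Labels are 0-based indices into
    the program list. Returns the final register list on HALT."""
    regs = list(registers)
    pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == "INC":
            regs[op[1]] += 1
            pc += 1
        elif op[0] == "DEC":
            if regs[op[1]] > 0:          # leave the register unchanged at 0
                regs[op[1]] -= 1
            pc += 1
        elif op[0] == "JZ":
            pc = op[2] if regs[op[1]] == 0 else pc + 1
        elif op[0] == "JMP":
            pc = op[1]
        elif op[0] == "HALT":
            return regs
    raise RuntimeError("step limit exceeded")

# Tiny demo: destructively move R0 into R1.
prog = [("JZ", 0, 4), ("DEC", 0), ("INC", 1), ("JMP", 0), ("HALT",)]
print(run(prog, [3, 0]))  # → [0, 3]
```

Each branch of the `if` chain is a direct transcription of one clause of the transition function δ given in the text.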

Examples

Copying Register Contents

One common basic operation in counter machines is copying the contents of one register to another while preserving the original value, which requires an auxiliary register to temporarily track the count. This example copies the value $ n $ from register R₂ to register R₃, utilizing register R₁ as auxiliary storage, under the initial conditions R₁ = 0, R₂ = $ n $, R₃ = 0, yielding final conditions R₁ = 0, R₂ = $ n $, R₃ = $ n $. The high-level process begins by ensuring R₃ and R₁ are cleared (though assumed zero here). It then enters a loop that executes $ n $ times: decrement R₂, increment R₃, and increment R₁, transferring the count from R₂ to both R₃ and R₁ while reducing R₂ to zero. A second loop then reverses this for R₁ and R₂, decrementing R₁ and incrementing R₂ $ n $ times until R₁ returns to zero, thereby restoring R₂ without affecting R₃. This non-destructive copy leverages the auxiliary register to avoid permanent alteration of the source. The following program implements this using the standard counter machine instructions: INC $ k $ (increment register $ k $, proceed to next), DEC $ k $ (decrement register $ k $, proceed to next), JZ $ k $, $ L $ (if register $ k $ = 0 jump to label $ L $, else proceed), and JUMP $ L $ (unconditional jump to label $ L $). Although the clears for R₃ and R₁ are included for completeness, they are redundant given the initial conditions.
L0: JZ 3, L1        // Clear R₃ if nonzero
    DEC 3
    JUMP L0
L1: JZ 1, L2        // Clear R₁ if nonzero
    DEC 1
    JUMP L1
L2: JZ 2, L3        // Test R₂; if 0, proceed to restore
    DEC 2
    INC 3
    INC 1
    JUMP L2          // Loop back to test R₂
L3: JZ 1, end       // Test R₁; if 0, halt
    DEC 1
    INC 2
    JUMP L3          // Loop back to test R₁
end:                // Halt
This program executes in $ O(n) $ steps.
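The routine above can be transcribed into Python, with each `while` loop standing in for one JZ-guarded loop. This is an illustrative sketch; registers are modeled as a dict keyed by register number.

```python
def copy_preserving(r):
    """Copy r[2] to r[3] via auxiliary r[1], restoring r[2] afterwards.
    Uses only unit increments/decrements and zero tests, mirroring the
    counter machine program above."""
    while r[3] > 0:          # L0: clear R3
        r[3] -= 1
    while r[1] > 0:          # L1: clear R1
        r[1] -= 1
    while r[2] > 0:          # L2: drain R2 into both R3 and R1
        r[2] -= 1
        r[3] += 1
        r[1] += 1
    while r[1] > 0:          # L3: drain R1 back into R2, restoring it
        r[1] -= 1
        r[2] += 1

regs = {1: 0, 2: 5, 3: 0}
copy_preserving(regs)
print(regs)  # → {1: 0, 2: 5, 3: 5}
```

The two drain loops make the O(n) step count visible: each executes exactly n iterations.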

Computing Basic Functions

Counter machines compute basic arithmetic functions such as addition and subtraction through sequences of increment (INC), decrement (DEC), and zero-jump (JZ) instructions, simulating these operations without dedicated arithmetic operators. These instructions allow the machine to iteratively build or reduce register values by looping based on counter contents. Consider the task of adding the contents of registers R₂ (initially m) and R₃ (initially n) into R₄ (initially 0), while preserving the original values in R₂ and R₃. This can be achieved by first copying R₂ to R₄ using the copy subroutine from the "Copying Register Contents" section, which sets R₄ = m without altering R₂. Next, add n from R₃ to R₄ using an auxiliary temporary register R₅ (initially 0), ensuring R₃ remains unchanged. The addition proceeds via a nested loop structure: an outer conceptual loop over m (handled by the prior copy) combined with an inner loop over n to increment R₄ n times. The detailed steps for the inner addition (adding R₃ to R₄ via R₅) are:
  1. Copy R₃ to R₅ (preserving R₃, setting R₅ = n).
  2. Label start: JZ R₅, end (jump to end if R₅ = 0).
  3. INC R₄ (increment the target sum).
  4. DEC R₅ (decrement the temporary counter).
  5. Jump to start (loop back).
  6. Label end: (R₅ now 0, R₄ = m + n).
This loop executes exactly n times, incrementing R₄ each iteration until R₅ reaches zero. The overall result is R₄ = m + n, with R₂ and R₃ restored to their initial values, and R₅ = 0. Subtraction can be extended similarly: to compute R₄ = m - n (assuming m ≥ n), first copy R₂ to R₄ (R₄ = m), then copy R₃ to R₅ (R₅ = n), and replace the inner INC R₄ with DEC R₄ in the loop above. This decrements R₄ exactly n times, yielding R₄ = m - n while preserving inputs. The JZ ensures the loop halts precisely when the temporary counter empties, avoiding invalid decrements.
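A Python transcription of this copy-then-drain scheme, using only unit increments and decrements; this is an illustrative sketch in which plain variables stand in for the registers named above.

```python
def add_preserving(m, n):
    """Compute m + n using only unit operations, restoring both inputs,
    mirroring the copy-then-drain scheme described above."""
    r4 = 0                        # result register (R4)
    t = 0                         # auxiliary counter
    while m > 0:                  # copy m into r4, tracking the count in t
        m -= 1; r4 += 1; t += 1
    while t > 0:                  # restore m from t
        t -= 1; m += 1
    r5 = 0                        # temporary copy of n (R5)
    while n > 0:                  # copy n into r5, tracking the count in t
        n -= 1; r5 += 1; t += 1
    while t > 0:                  # restore n from t
        t -= 1; n += 1
    while r5 > 0:                 # inner loop: DEC r5 / INC r4 until JZ
        r5 -= 1; r4 += 1
    return r4

print(add_preserving(4, 3))  # → 7
```

Swapping the final `r4 += 1` for `r4 -= 1` (guarded against going below zero) gives the truncated subtraction variant described above.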

Computational Capabilities

Simulating Partial Recursive Functions

Partial recursive functions are those computable functions from the natural numbers to the natural numbers (or undefined) that can be generated from basic functions via composition, primitive recursion, and minimization, potentially failing to terminate on some inputs.[15] Counter machines with three or more counters, often termed three-counter machines (3CMs), can simulate the computation of all partial recursive functions by encoding the μ-recursive schema through iterative loops that implement the minimization operator. In this model, the basic instructions—increment (INC), decrement (DEC, which subtracts 1 if the counter is positive, otherwise leaves it unchanged), and jump-if-zero (JZ)—allow for the construction of auxiliary operations needed to mimic recursion. For instance, setting a counter to zero can be achieved by repeatedly decrementing until a JZ test halts the loop, while the successor function is directly provided by INC; these compositions enable the machine to handle projections and basic arithmetic primitives like addition in a short sequence of instructions.[20] The simulation proceeds by allocating counters for input arguments (e.g., in the first few counters), the result (in an output counter), and temporary storage for recursion depth or search variables. To implement primitive recursion, a loop iterates over the recursion parameter, updating the result using the base and step functions encoded as subroutines. For the μ-operator, which finds the least y such that a function g(x, y) = 0 (or diverges if no such y exists), the machine employs an unbounded search loop: it increments a search counter y, computes g in auxiliary space, and tests for zero via JZ; if zero is reached, it outputs y and halts, otherwise continues indefinitely to reflect non-termination.
This loop-based minimization, combined with composition, ensures all partial recursive functions are computable.[20] The seminal result establishing this equivalence is due to Shepherdson and Sturgis, who proved that a register machine (equivalent to a counter machine) with three registers computes precisely the class of partial recursive functions, using the above constructions to simulate the full μ-recursive schemata without requiring unbounded registers beyond the fixed three.[20] This demonstrates that counter machines with a small fixed number of counters suffice for universal partial computability, distinguishing them from models needing more resources for full Turing equivalence.[23]
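The unbounded μ-search described above can be sketched in Python. Here `g` stands in for any function the machine can already compute, and the `max_y` guard is an assumption added purely for demonstration; the abstract machine has no such guard and genuinely diverges when no zero exists.

```python
def mu(g, *x, max_y=None):
    """Least y with g(*x, y) == 0, by the unbounded search described above:
    increment a search counter, evaluate g, and test for zero.
    max_y is a demonstration-only guard (assumption); the abstract
    machine simply runs forever when no such y exists."""
    y = 0
    while max_y is None or y <= max_y:
        if g(*x, y) == 0:
            return y
        y += 1
    raise RuntimeError("no zero found within guard")

# Least y with y*y >= 10, via the truncated difference g(x, y) = max(0, x - y*y):
print(mu(lambda x, y: max(0, x - y * y), 10))  # → 4
```

Truncated difference (`max(0, x - y*y)`) is used because counter machine registers cannot go negative, matching the DEC semantics in the text.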

Turing Equivalence with Two Counters

A two-counter machine is Turing complete, meaning it can simulate the computation of any Turing machine. This equivalence was established by Marvin Minsky, who demonstrated that any Turing machine can be simulated by a two-counter machine.[10] The simulation proceeds in three main steps. First, any Turing machine can be simulated by a machine equipped with two stacks, where one stack represents the portion of the tape to the left of the head and the other represents the portion to the right, using push and pop operations to manage tape movements and updates.[10] Second, each stack can be simulated using two counters: one counter tracks the current position or length of the stack, while the other encodes the stack's contents through a numerical representation that allows push and pop operations to be performed via increment, decrement, and zero-testing instructions.[10] Third, the four counters required for the two stacks are reduced to two by employing a pairing function to combine pairs of counter values into single counters. A common such function is the Cantor pairing function, defined as
$ \pi(x, y) = \frac{(x + y)(x + y + 1)}{2} + y, $
which injectively maps pairs of natural numbers to natural numbers, enabling the encoding and decoding of multiple counter states within two registers.[10] Overall, this construction encodes the entire state of the Turing machine—including its tape contents, head position, and internal state—into the values of the two counters, allowing the two-counter machine to replicate the Turing machine's behavior step by step.[10]
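The pairing function and its inverse can be checked with a short Python sketch; `math.isqrt` recovers the diagonal index $ x + y $ from the triangular-number structure of the encoding.

```python
import math

def pair(x, y):
    """Cantor pairing: injectively map a pair of naturals to one natural."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Invert the Cantor pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # diagonal index w = x + y
    t = w * (w + 1) // 2                   # triangular number below z
    y = z - t
    return w - y, y

print(pair(3, 5))   # → 41
print(unpair(41))   # → (3, 5)
```

In the two-counter construction, pushing onto or popping from an encoded stack amounts to computing `pair`/`unpair` with only increments, decrements, and zero tests, which is why the simulation incurs a large slowdown.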

Limitations

Model Problems

Counter machines utilize unary representation for their counters, where numerical values are stored as the count of increments from zero. This encoding inherently leads to significant inefficiencies in time complexity. Specifically, operations on numbers requiring n bits of precision demand time on the order of 2^n in counter machines, as the counter must be incremented or decremented that many times, whereas binary models such as random access machines (RAMs) achieve the same operations in O(n) time or better.[2] The model's instruction set further exacerbates its cumbersomeness, necessitating verbose programs for even basic tasks. For example, computing multiplication involves nested loops that repeatedly add one operand to a result counter for each unit in the other operand, producing code lengths that scale poorly with input sizes. Additionally, the reliance on direct register operations precludes native support for indirect addressing, hindering the efficient handling of complex data structures like arrays or pointers without cumbersome workarounds via auxiliary counters.[24][25] Unbounded registers, essential for theoretical universality, pose practical challenges for implementation, as physical devices cannot accommodate arbitrarily large counter values without finite memory constraints. Compared to RAM models, counter machines lack random access capabilities, forcing sequential modifications that amplify runtime overhead for non-local operations. Despite these limitations, counter machines with two or more counters achieve Turing equivalence, underscoring their theoretical power at the expense of efficiency.[24][25]

Initialization Caveat

A significant limitation of the two-counter machine (2CM) arises in its handling of standard input encoding, where one counter is initialized to a value $ N $ and the other to 0. Under these conditions, a 2CM cannot compute the function $ N \mapsto 2^N $, i.e., halt with the counters set to $ 2^N $ and 0.[26] This caveat, established by Schroeppel in 1972, stems from the inherent structure of 2CM operations, which are limited to incrementing or decrementing a counter by 1 and testing for zero. Simulations within a 2CM preserve specific parity or modulo properties of the counter states relative to the initial configuration; achieving $ 2^N $ would require disrupting this input encoding in a manner incompatible with the model's finite instruction set and register constraints.[26] Formally, the input is encoded solely in the first counter as $ N $, with the second counter at 0, and the halting state must yield exactly $ 2^N $ in one counter and 0 in the other, without auxiliary registers for temporary storage. The available instructions cannot universally effect this transformation while maintaining non-negative counter values and avoiding invalid decrements.[26] As a result, the Turing completeness of the 2CM—its ability to simulate any Turing machine—applies only to scenarios permitting arbitrary initial counter configurations, rather than the constrained fixed-input format typical for function computation.[26] This restriction can be circumvented by employing a three-counter machine (3CM), whose additional register provides the scratch space needed to copy and accumulate values, thereby enabling direct computation of functions like $ 2^N $ with the standard input encoding.[27]

Practical Illustrations

Calculation by Counting

One illustrative computation in counter machines is the integer square root, computed via successive approximation by accumulating sums of odd numbers until exceeding the input value. This method leverages the mathematical identity that the square of an integer $ n $ equals the sum of the first $ n $ odd numbers: $ 1 + 3 + \dots + (2n-1) = n^2 $. To find $ \lfloor \sqrt{N} \rfloor $ for input $ N $, the machine increments a guess counter while adding the next odd number to an accumulator, stopping when the addition would exceed $ N $. This approach demonstrates arithmetic through repeated unit operations, aligning with the unary representation inherent in counter machines where register values denote counts of increments.[28] The program uses three main counters: one for the input $ N $ (or a copy thereof), one for the current guess $ g $ (initialized to 0), and one for the accumulator $ s $ (initialized to 0). An auxiliary counter handles the current odd increment $ o = 2g + 1 $, computed by copying $ g $ twice and adding 1 via increments. The core loop tests whether $ s + o \leq N $ by temporarily adding $ o $ to a copy of $ s $ and comparing via subtraction or zero-checks; if true, it updates $ s $ and increments $ g $, else it halts with $ g $ as the result.[28] A high-level sketch in counter machine instructions (using INC for increment, DEC for decrement, JZ for jump if zero, and labels for control flow) proceeds as follows:
; Assume registers: R0 = N (input), R1 = g (guess=0), R2 = s (sum=0), R3 = temp, R4 = o (odd), R5 = temp
LOOP:  ; Compute o = 2*g + 1
       ; Copy R1 to R3 (add R1 to R3=0, loop DEC R1 INC R3 JNZ, restore R1 if needed)
       ; Copy R3 to R4 (similar, 2*g)
       ; INC R4  ; +1 for odd
       ; Test if s + o > N: Copy R2 to R3, add R4 to R3 (loop DEC R4 INC R3 JNZ, restore R4)
       ; Copy R0 to R5, DEC R5 loop R3 times (simulate subtract R3 from R0 copy)
       ; If R5 !=0 after, then s+o > N, JZ done
       ; Else: add o to s (loop DEC R4 INC R2 JNZ)
       ; INC R1  ; g++
       ; Goto LOOP
DONE: ; Output R1
Step-by-step execution for $ N = 9 $: Initialize $ g = 0 $, $ s = 0 $. First iteration: $ o = 1 $, $ s + 1 = 1 \leq 9 $, so $ s = 1 $, $ g = 1 $. Second: $ o = 3 $, $ s + 3 = 4 \leq 9 $, $ s = 4 $, $ g = 2 $. Third: $ o = 5 $, $ s + 5 = 9 \leq 9 $, $ s = 9 $, $ g = 3 $. Fourth: $ o = 7 $, $ s + 7 = 16 > 9 $, halt with $ g = 3 $ (since $ 3^2 = 9 $). For $ N = 10 $, the machine likewise halts at $ g = 3 $, as $ 16 > 10 $. This process requires nested loops for the additions and tests, with roughly $ \sqrt{N} $ iterations, each carrying out its additions and comparisons unit by unit.[28] The outcome yields the floor of the square root without direct multiplication or division, relying solely on counting increments and decrements to simulate accumulation. Each step embodies the unary nature of counter machines, where numerical values are built and compared through unit-by-unit operations, highlighting how complex functions emerge from primitive counting.[28]
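The same successive-approximation scheme can be written directly in Python as a cross-check; this illustrative sketch uses ordinary arithmetic in place of the unit-by-unit register copies described above.

```python
def isqrt_by_odds(N):
    """Floor of sqrt(N) via the sum-of-odd-numbers identity
    1 + 3 + ... + (2g-1) = g^2, using only addition and comparison."""
    g, s = 0, 0            # guess and accumulator, as in the register sketch
    while True:
        o = 2 * g + 1      # next odd number
        if s + o > N:      # adding it would overshoot: g is the answer
            return g
        s += o
        g += 1

print(isqrt_by_odds(9))    # → 3
print(isqrt_by_odds(10))   # → 3
```

The loop body corresponds one-to-one with an iteration of the counter machine sketch: computing the odd increment, testing against $ N $, and either accumulating or halting.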

Historical Implementations

The Zuse Z3, completed in 1941 by German engineer Konrad Zuse, represented an early physical embodiment of counter-like operations through its use of electromechanical relays functioning as binary counters and registers.[29] Built with approximately 2,300 relays implementing a 22-bit word length, the Z3 performed arithmetic and control functions via relay-based increment and decrement mechanisms, serving as a precursor to more advanced register machines despite its limited number of registers.[30] This relay implementation allowed for programmed calculations, such as solving differential equations, highlighting the practical application of bounded counter structures in early computing hardware.[29] In the 1960s, the Friden EC-130 electronic calculator utilized transistorized counters to perform arithmetic operations, effectively simulating the increment and decrement loops central to counter machine models.[31] Introduced in 1964 as one of the first fully solid-state desktop calculators, the EC-130 employed a four-counter design with 5-bit registers to handle addition, subtraction, multiplication, and division through sequential counter updates, demonstrating how mechanical and electronic counters could execute basic computational instructions without vacuum tubes.[32] Its reliance on counter-based logic for reverse Polish notation processing underscored the transition from purely mechanical to electronic realizations of counter operations in commercial devices.[31] Early stored-program computers like the EDSAC (Electronic Delay Storage Automatic Calculator), operational from 1949 at the University of Cambridge, incorporated partial counter-like operations in their microcode for managing memory addressing and arithmetic sequencing.[33] EDSAC's instruction set included add and subtract commands that manipulated an accumulator and short registers, akin to increment/decrement actions in looped computations, enabling the execution of subroutines for numerical analysis 
tasks. These operations, executed at around 600 instructions per second using mercury delay lines for storage, illustrated how counter principles influenced the micro-level control in vacuum-tube based systems.[34] While no dedicated modern hardware directly implements unbounded counter machines due to their inefficiency in handling large values—often requiring exponential resources compared to binary representations—the conceptual influence persists in embedded systems through timer/counter peripherals in microcontrollers.[35] For instance, devices like those in ARM-based embedded processors use hardware counters for event timing and interrupt generation, echoing the foundational increment/decrement paradigm but optimized for practical, fixed-register constraints.[35] In practice, these historical implementations were limited by finite registers, contrasting with the theoretical unbounded counters of the model and restricting them to specific arithmetic tasks rather than general computability.[30]

References
