Stochastic computing
from Wikipedia

Stochastic computing is a collection of techniques that represent continuous values by streams of random bits. Complex computations can then be carried out by simple bit-wise operations on the streams. Stochastic computing is distinct from the study of randomized algorithms.

Motivation and a simple example


Suppose that p, q ∈ [0, 1] are given, and we wish to compute p × q. Stochastic computing performs this operation using probability instead of arithmetic.

Specifically, suppose that there are two random, independent bit streams called stochastic numbers (i.e. Bernoulli processes), where the probability of a 1 in the first stream is p, and the probability of a 1 in the second stream is q. We can take the logical AND of the two streams:

1 0 1 1 0 1 ...   (first stream, probability of a 1 is p)
1 1 0 1 1 0 ...   (second stream, probability of a 1 is q)
1 0 0 1 0 0 ...   (bit-wise AND of the two streams)

The probability of a 1 in the output stream is pq. By observing enough output bits and measuring the frequency of 1s, it is possible to estimate pq to arbitrary accuracy.

The operation above converts a fairly complicated computation (multiplication of p and q) into a series of very simple operations (evaluation of aᵢ ∧ bᵢ) on random bits. To put it another way, consider the truth table of an AND gate. The conventional interpretation is that the output is true if and only if inputs A and B are both true. However, if the table is read vertically, (0011) AND (0101) is (0001), i.e., 1/2 × 1/2 = 1/4, which is exactly an arithmetic multiplication. Because the information is presented as a probability distribution, probability multiplication is literally an AND operation.

A B Out
0 0 0
0 1 0
1 0 0
1 1 1
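
The vertical reading of the truth table is easy to reproduce in software. The following minimal sketch (in Python; the helper names are ours, not part of any standard library) generates two independent Bernoulli bit streams and recovers their product from the bit-wise AND:

    import random

    def stochastic_stream(p, n, rng=random):
        """Bernoulli bit stream of length n with P(bit = 1) = p."""
        return [1 if rng.random() < p else 0 for _ in range(n)]

    def estimate(stream):
        """Reconstruct the encoded value as the frequency of 1s."""
        return sum(stream) / len(stream)

    p, q, n = 0.5, 0.5, 10_000
    a = stochastic_stream(p, n)
    b = stochastic_stream(q, n)
    product = [x & y for x, y in zip(a, b)]  # bit-wise AND of the streams
    print(estimate(product))                 # close to p * q = 0.25

With 10,000 bits the estimate is typically within about 0.01 of the true product; shorter streams give correspondingly noisier results.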

More generally speaking, stochastic computing represents numbers as streams of random bits and reconstructs numbers by calculating frequencies. The computations are performed on the streams and translate complicated operations on p and q into simple operations on their stream representations. (Because of the method of reconstruction, devices that perform these operations are sometimes called stochastic averaging processors.) In modern terms, stochastic computing can be viewed as an interpretation of calculations in probabilistic terms, which are then evaluated with a Gibbs sampler. It can also be interpreted as a hybrid analog/digital computer.

History

[Image: The RASCEL stochastic computer, circa 1969]

Stochastic computing was first introduced in a pioneering paper by John von Neumann in 1953.[1] However, the theory could not be fully developed until advances in computing of the 1960s,[2] [3] mostly through a series of simultaneous and parallel efforts in the US[4] and the UK.[5] By the late 1960s, attention turned to the design of special-purpose hardware to perform stochastic computation. A host[6] of these machines were constructed between 1969 and 1974; RASCEL[7] is pictured in this article.

Despite the intense interest in the 1960s and 1970s, stochastic computing ultimately failed to compete with more traditional digital logic, for reasons outlined below. The first (and last) International Symposium on Stochastic Computing[8] took place in 1978; active research in the area dwindled over the next few years.

Although stochastic computing declined as a general method of computing, it has shown promise in several applications. Research has traditionally focused on certain tasks in machine learning and control.[9][10] Somewhat recently, interest has turned towards stochastic decoding, which applies stochastic computing to the decoding of error correcting codes.[11] More recently, stochastic circuits have been successfully used in image processing tasks such as edge detection[12] and image thresholding.[13] Recent advances in stochastic circuits also show promising speed and energy-efficiency advantages for artificial intelligence (AI) hardware acceleration in edge computing.

Strengths and weaknesses


Although stochastic computing was a historical failure, it may still remain relevant for solving certain problems. To understand when it remains relevant, it is useful to compare stochastic computing with more traditional methods of digital computing.

Strengths


Suppose we wish to multiply two numbers each with n bits of precision. Using the typical long multiplication method, we need to perform n² operations. With stochastic computing, we can AND together any number of bits and the expected value will always be correct. (However, with a small number of samples the variance will render the actual result highly inaccurate.)

Moreover, the underlying operations in a digital multiplier are full adders, whereas a stochastic computer requires only an AND gate. Additionally, a digital multiplier would naively require 2n input wires, whereas a stochastic multiplier would require only two input wires[citation needed]. (If the digital multiplier serialized its output, however, it would also require only two input wires.)

Additionally, stochastic computing is robust against noise; if a few bits in a stream are flipped, those errors will have no significant impact on the solution.

Furthermore, stochastic computing elements can tolerate skew in the arrival time of the inputs. Circuits work properly even when the inputs are misaligned temporally. As a result, stochastic systems can be designed to work with inexpensive locally generated clocks instead of using a global clock and an expensive clock distribution network.[14]

Finally, stochastic computing provides an estimate of the solution that grows more accurate as we extend the bit stream. In particular, it provides a rough estimate very rapidly. This property is usually referred to as progressive precision, which suggests that the precision of stochastic numbers (bit streams) increases as computation proceeds.[15] It is as if the most significant bits of the number arrive before its least significant bits, unlike conventional arithmetic circuits, where the most significant bits usually arrive last. In some iterative systems the partial solutions obtained through progressive precision can provide faster feedback than through traditional computing methods, leading to faster convergence.
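
This behaviour is easy to observe by tracking the running estimate as bits arrive. The short sketch below (Python, purely illustrative) prints partial estimates of a single encoded value after successively longer prefixes of its stream:

    import random

    rng = random.Random(1)
    p = 0.7                       # value encoded by the stream
    ones = 0
    for i in range(1, 10_001):
        ones += rng.random() < p  # generate the next bit of the stream
        if i in (10, 100, 1_000, 10_000):
            print(f"after {i:>5} bits: estimate = {ones / i:.3f}")

The early estimates are coarse but available almost immediately, and they sharpen steadily as the stream lengthens.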

Weaknesses


Stochastic computing is, by its very nature, random. When we examine a random bit stream and try to reconstruct the underlying value, the effective precision can be measured by the variance of our sample. In the example above, the digital multiplier computes a number to 2n bits of accuracy, so the precision is 2^(−2n). If we are using a random bit stream to estimate a number and want the standard deviation of our estimate of the solution to be at most 2^(−2n), we would need about 2^(4n) samples. This represents an exponential increase in work. In certain applications, however, the progressive precision property of stochastic computing can be exploited to compensate for this exponential loss.
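
A quick simulation (a sketch, not tied to any particular hardware) illustrates how slowly the spread of the estimate shrinks: the count of 1s is binomial, with variance roughly p(1 − p)/N, so the standard deviation falls only as 1/√N.

    import random
    import statistics

    rng = random.Random(0)
    p = 0.3

    def estimate(n):
        """Estimate p from one n-bit Bernoulli stream."""
        return sum(rng.random() < p for _ in range(n)) / n

    for n in (100, 1_000, 10_000):
        spread = statistics.stdev(estimate(n) for _ in range(200))
        print(n, round(spread, 4))  # falls roughly as 1/sqrt(n)

Quadrupling the precision of the estimate therefore requires roughly sixteen times as many bits.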

Second, stochastic computing requires a method of generating random biased bit streams. In practice, these streams are generated with pseudo-random number generators. Unfortunately, generating (pseudo-)random bits is fairly costly (compared to the expense of, e.g., a full adder). Therefore, the gate-level advantage of stochastic computing is typically lost.

Third, the analysis of stochastic computing assumes that the bit streams are independent (uncorrelated). If this assumption does not hold, stochastic computing can fail dramatically. For instance, if we try to compute p² by multiplying a bit stream for p by itself, the process fails: since a ∧ a = a, the stochastic computation would yield p rather than p², which is not correct in general (unless p is 0 or 1). In systems with feedback, the problem of decorrelation can manifest in more complicated ways. Systems of stochastic processors are prone to latching, where feedback between different components can achieve a deadlocked state.[16] A great deal of effort must be spent decorrelating the system to attempt to remediate latching.
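
The failure is easy to reproduce in simulation. In the sketch below (Python, illustrative only), ANDing a stream with itself returns the stream unchanged and therefore estimates p, while ANDing it with an independently generated stream for the same value recovers p²:

    import random

    def stream(p, n, rng):
        return [rng.random() < p for _ in range(n)]

    rng = random.Random(42)
    p, n = 0.5, 10_000
    a = stream(p, n, rng)
    b = stream(p, n, rng)  # independent stream encoding the same p

    self_and  = [x & x for x in a]             # correlated: estimates p
    indep_and = [x & y for x, y in zip(a, b)]  # independent: estimates p**2

    print(sum(self_and) / n)   # ~0.5
    print(sum(indep_and) / n)  # ~0.25

In hardware, the same effect is usually obtained by regenerating or delaying one of the streams rather than storing a second copy.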

Fourth, although some digital functions have very simple stochastic counterparts (such as the translation between multiplication and the AND gate), many do not. Trying to express these functions stochastically may cause various pathologies. For instance, stochastic decoding requires the computation of the function p₁p₂ / (p₁p₂ + (1 − p₁)(1 − p₂)). There is no single bit operation that can compute this function; the usual solution involves producing correlated output bits, which, as we have seen above, can cause a host of problems.

Other functions (such as the averaging operator) require either stream decimation or inflation. Tradeoffs between precision and memory can be challenging.

Stochastic decoding


Although stochastic computing has a number of defects when considered as a method of general computation, there are certain applications that highlight its strengths. One notable case occurs in the decoding of certain error correcting codes.

In developments unrelated to stochastic computing, highly effective methods of decoding LDPC codes using the belief propagation algorithm were developed. Belief propagation in this context involves iteratively reestimating certain parameters using two basic operations (essentially, a probabilistic XOR operation and an averaging operation).

In 2003, researchers realized that these two operations could be modeled very simply with stochastic computing.[17] Moreover, since the belief propagation algorithm is iterative, stochastic computing provides partial solutions that may lead to faster convergence. Hardware implementations of stochastic decoders have been built on FPGAs.[18] The proponents of these methods argue that the performance of stochastic decoding is competitive with digital alternatives.
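
One of the two operations, the parity-check (XOR) update, maps directly onto an XOR gate: for independent input streams with probabilities p1 and p2, the output stream has probability p1(1 − p2) + p2(1 − p1). A minimal sketch (Python, illustrative only; not taken from any particular decoder implementation):

    import random

    rng = random.Random(7)
    n = 20_000
    p1, p2 = 0.8, 0.6

    a = [rng.random() < p1 for _ in range(n)]
    b = [rng.random() < p2 for _ in range(n)]

    xor_stream = [x ^ y for x, y in zip(a, b)]  # parity-check (XOR) node
    print(sum(xor_stream) / n)                  # ~ p1*(1-p2) + p2*(1-p1) = 0.44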

Deterministic Methods to Stochastic Computing


Deterministic methods of SC have been developed to perform completely accurate computation with SC circuits.[19] The essential principle of these methods is that every bit of one bit-stream interacts with every bit of the other bit-stream exactly once. To produce a completely accurate result with these methods, the operation must run for the product of the lengths of the input bit-streams. Deterministic methods have been developed based on unary bit-streams,[20][21] pseudo-random bit-streams,[22] and low-discrepancy bit-streams.[23]
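
The "every bit meets every other bit exactly once" principle can be sketched with unary bit-streams (all 1s grouped at the front), where one stream is cycled while each bit of the other is held, giving an output whose length is the product of the input lengths. The Python sketch below is a simplified illustration, not a description of any specific published circuit:

    def unary(value, length):
        """Unary bit-stream: the first round(value * length) bits are 1."""
        ones = round(value * length)
        return [1] * ones + [0] * (length - ones)

    def deterministic_multiply(a, b):
        """AND every bit of a with every bit of b exactly once."""
        out = [x & y for y in b for x in a]  # length len(a) * len(b)
        return sum(out) / len(out)

    a = unary(0.4, 5)  # 1 1 0 0 0
    b = unary(0.6, 5)  # 1 1 1 0 0
    print(deterministic_multiply(a, b))  # exactly 0.4 * 0.6 = 0.24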

Variants of stochastic computing


There are a number of variants of the basic stochastic computing paradigm. Further information can be found in the referenced book by Mars and Poppelbaum.

Bundle Processing involves sending a fixed number of bits instead of a stream. One of the advantages of this approach is that the precision is improved. To see why, suppose we transmit n bits. In regular stochastic computing, we can distinguish only about √n different values, because of the variance of the estimate. In bundle processing, we can represent a precision of 1/n. However, bundle processing retains the same robustness to error of regular stochastic processing.

Ergodic Processing involves sending a stream of bundles, which captures the benefits of regular stochastic and bundle processing.

Burst Processing encodes a number as a non-decreasing stream of digits in a higher base. For instance, we would encode 4.3 with ten decimal digits as

4444444555

since the average value of the preceding stream is 4.3. This representation offers various advantages: there is no randomization since the numbers appear in increasing order, so the PRNG issues are avoided, but many of the advantages of stochastic computing are retained (such as partial estimates of the solution). Additionally, it retains the linear precision of bundle and ergodic processing.
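
A burst encoding of this kind takes only a few lines to produce. The sketch below (Python; the function name and the choice of ten base-10 digits are illustrative) emits the smaller digit first so the stream is non-decreasing and its average equals the encoded value:

    def burst_encode(value, length=10):
        """Encode value as `length` non-decreasing digits whose mean is value."""
        low = int(value)                       # e.g. 4 for 4.3
        highs = round((value - low) * length)  # number of (low + 1) digits, here 3
        return [low] * (length - highs) + [low + 1] * highs

    digits = burst_encode(4.3)
    print(digits)                     # [4, 4, 4, 4, 4, 4, 4, 5, 5, 5]
    print(sum(digits) / len(digits))  # 4.3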

from Grokipedia
Stochastic computing is a computational paradigm that encodes numerical values as probabilities within random bit-streams, where a number between 0 and 1 is represented by the proportion of 1s in the stream, enabling arithmetic operations to be performed using simple digital logic gates such as AND gates for multiplication and multiplexers for scaled addition. This approach, which mimics probabilistic processes with digital hardware, was independently conceived in the 1960s by researchers including William G. Thistle, Sergio Ribeiro under Ted Poppelbaum at the University of Illinois, and Brian Gaines at Standard Telecommunication Laboratories in the UK, with foundational publications appearing in 1967. Early implementations, such as Gaines's ADDIE module, demonstrated potential for applications in image processing and learning machines, though limited by the era's technology. Despite initial promise, stochastic computing saw limited adoption due to challenges such as the long computation times required for precision, which grow exponentially with bit length, and errors from bit-stream correlations that degrade accuracy.

Its key advantages include low hardware complexity (e.g., multipliers using single gates versus multi-bit adders in binary computing), high error tolerance (a single bit flip alters the value only minimally), compact area, low power consumption, and inherent support for parallelism, making it suitable for error-prone nanoscale devices.

As of 2024, stochastic computing has experienced renewed interest, particularly for energy-efficient neural-network hardware and edge computing, where it enables convolutional neural networks like VGG16 and ResNet18 with up to 90% energy savings and accuracies exceeding 90% on benchmarks such as MNIST. Modern advancements, including delta-sigma modulator-based dividers and dynamic stochastic sequences, address prior limitations in latency and precision, expanding applications to low-density parity-check decoding in communications standards like IEEE 802.11n, image processing tasks such as edge detection, and control systems. Further advances in 2025 include all-in-memory implementations using resistive RAM (ReRAM) for improved efficiency. However, ongoing challenges persist in managing correlations, extending the value range beyond [0, 1], and achieving real-time performance comparable to deterministic methods.

Introduction

Motivation

Stochastic computing represents numerical values as streams of random bits, where the probability of a bit being 1 corresponds to the value being encoded, allowing complex operations like multiplication to be performed using simple logic gates such as AND gates. This approach contrasts with deterministic binary computing by leveraging probabilistic bit streams generated from random or pseudorandom sources, enabling hardware implementations that require minimal components compared to traditional multi-bit arithmetic units. For instance, multiplying two independent bit streams representing probabilities p₁ and p₂ yields a stream with probability p₁ × p₂ when processed through an AND gate, demonstrating the paradigm's elegance in reducing hardware complexity.

The primary motivations for developing stochastic computing stem from its potential to achieve low-cost, fault-tolerant computation in resource-constrained settings, where conventional digital systems struggle with high precision demands and error sensitivity. By avoiding the need for elaborate carry-propagate adders or multipliers, stochastic circuits can be built with significantly fewer transistors, making them ideal for low-power devices and unreliable hardware environments. Inherent fault tolerance arises from the probabilistic nature of the representations, where errors in individual bits average out over long streams, allowing robust operation even with noisy components or transient faults, with error rates of up to 20% tolerated in some applications without substantial performance degradation. This makes it particularly suitable for early neural networks, image processing, and control systems operating in noisy conditions, such as those involving large-scale parallel data from sensors.

In the 1960s, stochastic computing gained initial promise as a solution for learning and control systems, addressing the limitations of deterministic binary computing in handling imprecise, high-dimensional data streams akin to human perception. For example, a value of p = 0.5 can be encoded as a bit stream with 50% 1s, but accuracy depends on the independence of streams; correlated bits introduce bias, necessitating techniques like sequential delays to ensure uncorrelated inputs and maintain reliable probabilistic averaging. This foundational appeal highlighted its role in enabling real-time, parallel computations in environments where exactness is less critical than scalability and resilience.

Basic Principles

Stochastic computing represents numerical values as the probability that a bit in a random binary stream is 1: for a stream of length N, the encoded value is p = k/N, where k is the number of 1s. This probabilistic encoding contrasts sharply with deterministic binary computing, which uses fixed-position bits to represent exact values with positional weights. In stochastic computing, all bits in the stream carry equal weight, enabling computations that are inherently tolerant to bit errors but reliant on statistical properties for accuracy. The approach assumes familiarity with basic probability, as values are treated as expectations over Bernoulli trials rather than precise fixed-point numbers.

A fundamental requirement for reliable operations in stochastic computing is that input bit streams remain uncorrelated, meaning the occurrence of a 1 in one stream does not influence the other. Correlation between streams can introduce systematic biases and amplify errors in computations, as the probabilistic encoding assumes that joint probabilities align with expected outcomes. To achieve this, streams are typically generated using independent random number sources, such as linear feedback shift registers, and any necessary isolation is provided through simple delay elements like flip-flops.

The computational model in stochastic computing processes these bit streams using basic logic gates to produce new streams that represent the results of operations such as multiplication or scaled addition. For instance, an AND gate applied to two independent streams yields an output stream whose probability is the product of the input probabilities, while multiplexers or scaled adders handle addition by probabilistically selecting or combining bits. These operations leverage the streams' statistical nature, transforming inputs into outputs that converge to the correct probabilistic value over time, without requiring complex arithmetic hardware.

Error analysis in stochastic computing reveals that the value estimated from a finite stream exhibits variance approximately equal to p(1 − p)/N, reflecting the binomial nature of the bit counts. This variance decreases as 1/N, so the standard deviation of the estimate falls only as 1/√N, meaning accuracy improves slowly with stream length.
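
The multiplexer-based scaled addition mentioned above can be sketched in the same probabilistic style: a select stream with probability 1/2 picks one of the two inputs at each step, so the output encodes (p1 + p2)/2. The code below is an illustrative Python model, not a hardware description; all names and parameter values are ours:

    import random

    rng = random.Random(3)
    n = 20_000
    p1, p2, p_sel = 0.2, 0.6, 0.5

    a   = [rng.random() < p1 for _ in range(n)]
    b   = [rng.random() < p2 for _ in range(n)]
    sel = [rng.random() < p_sel for _ in range(n)]  # multiplexer select line

    mux_out = [x if s else y for x, y, s in zip(a, b, sel)]
    print(sum(mux_out) / n)  # ~ (p1 + p2) / 2 = 0.4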