Canonical normal form
from Wikipedia

In Boolean algebra, any Boolean function can be expressed in the canonical disjunctive normal form (CDNF),[1] minterm canonical form, or Sum of Products (SoP or SOP) as a disjunction (OR) of minterms. The De Morgan dual is the canonical conjunctive normal form (CCNF), maxterm canonical form, or Product of Sums (PoS or POS) which is a conjunction (AND) of maxterms. These forms can be useful for the simplification of Boolean functions, which is of great importance in the optimization of Boolean formulas in general and digital circuits in particular.

Other canonical forms include the complete sum of prime implicants or Blake canonical form (and its dual), and the algebraic normal form (also called Zhegalkin or Reed–Muller).

Minterms


For a Boolean function of n variables x1, …, xn, a minterm is a product term in which each of the n variables appears exactly once (either in its complemented or uncomplemented form). Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator (logical AND). A minterm gives a true value for just one combination of the input variables, the minimum nontrivial amount. For example, a b′ c is true only when a and c are both true and b is false: the input arrangement a = 1, b = 0, c = 1 results in 1.

Indexing minterms


There are 2^n minterms of n variables, since a variable in the minterm expression can be in either its direct or its complemented form (two choices per variable). Minterms are often numbered by a binary encoding of the complementation pattern of the variables, where the variables are written in a standard order, usually alphabetical. This convention assigns the value 1 to the direct form (e.g. a) and 0 to the complemented form (e.g. a′); the minterm's number is then the binary number formed by these digits. For example, the minterm a b c′ of the variables a, b, c is numbered 110₂ = 6₁₀ and denoted m6.
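This numbering convention is mechanical enough to sketch in a few lines of Python (the function name `minterm_name` is ours, for illustration):

```python
def minterm_name(index, variables):
    """Return the product term for a minterm index, e.g. 6 -> "a b c'"."""
    n = len(variables)
    bits = format(index, f"0{n}b")          # MSB corresponds to the first variable
    return " ".join(v if b == "1" else v + "'"
                    for v, b in zip(variables, bits))

# Minterm 110 (binary) = 6 (decimal) over variables a, b, c:
print(minterm_name(6, ["a", "b", "c"]))     # a b c'
```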

Minterm canonical form


Given the truth table of a logical function, it is possible to write the function as a "sum of products" or "sum of minterms". This is a special form of disjunctive normal form. For example, if given the truth table for the arithmetic sum bit u of one bit position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:

ci x y u(ci,x,y)
0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 0
1 0 0 1
1 0 1 0
1 1 0 0
1 1 1 1

Observing that the rows that have an output of 1 are the 2nd, 3rd, 5th, and 8th, we can write u as a sum of minterms: u(ci, x, y) = m1 + m2 + m4 + m7 = ci′x′y + ci′xy′ + ci x′y′ + ci x y. If we wish to verify this, the expression evaluated for all 8 combinations of the three variables will match the table.
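The verification over all 8 input combinations can be done exhaustively in a few lines; the helper names below (`u_table`, `u_minterms`) are ours, and the sum bit is computed as the full-adder sum:

```python
from itertools import product

def u_table(ci, x, y):
    return (ci + x + y) % 2                  # sum bit from the truth table

def u_minterms(ci, x, y):
    m1 = (1 - ci) * (1 - x) * y              # ci' x' y
    m2 = (1 - ci) * x * (1 - y)              # ci' x  y'
    m4 = ci * (1 - x) * (1 - y)              # ci  x' y'
    m7 = ci * x * y                          # ci  x  y
    return max(m1, m2, m4, m7)               # OR of the minterms

assert all(u_table(*v) == u_minterms(*v) for v in product((0, 1), repeat=3))
```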

Maxterms


For a boolean function of n variables , a maxterm is a sum term in which each of the n variables appears exactly once (either in its complemented or uncomplemented form). Thus, a maxterm is a logical expression of n variables that employs only the complement operator and the disjunction operator (logical OR). Maxterms are a dual of the minterm idea, following the complementary symmetry of De Morgan's laws. Instead of using ANDs and complements, we use ORs and complements and proceed similarly. It is apparent that a maxterm gives a false value for just one combination of the input variables, i.e. it is true at the maximal number of possibilities. For example, the maxterm a′ + b + c′ is false only when a and c both are true and b is false—the input arrangement where a = 1, b = 0, c = 1 results in 0.

Indexing maxterms


There are again 2^n maxterms of n variables, since a variable in the maxterm expression can also be in either its direct or its complemented form (two choices per variable). The numbering is chosen so that the complement of a minterm is the respective maxterm; that is, each maxterm is assigned an index based on the opposite of the conventional binary encoding used for minterms. The maxterm convention assigns the value 0 to the direct form (e.g. a) and 1 to the complemented form (e.g. a′). For example, we assign the index 6 to the maxterm a′ + b′ + c (110) and denote that maxterm as M6. The complement M6′ is the minterm m6 = a b c′, by De Morgan's law.
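The complement relation between like-indexed minterms and maxterms can be checked exhaustively. In this sketch (helper functions are ours), each convention is encoded directly: bit 1 selects the direct literal in a minterm but the complemented literal in a maxterm.

```python
from itertools import product

def minterm(k, bits):
    # AND of literals: bit 1 of k -> direct variable, bit 0 -> complement
    n = len(bits)
    lits = [bits[i] if (k >> (n - 1 - i)) & 1 else 1 - bits[i] for i in range(n)]
    return min(lits)

def maxterm(k, bits):
    # OR of literals: bit 0 of k -> direct variable, bit 1 -> complement
    n = len(bits)
    lits = [1 - bits[i] if (k >> (n - 1 - i)) & 1 else bits[i] for i in range(n)]
    return max(lits)

# M_k is the complement of m_k for every index and every input:
assert all(maxterm(k, bits) == 1 - minterm(k, bits)
           for k in range(8) for bits in product((0, 1), repeat=3))
```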

Maxterm canonical form


If one is given a truth table of a logical function, it is possible to write the function as a "product of sums" or "product of maxterms". This is a special form of conjunctive normal form. For example, if given the truth table for the carry-out bit co of one bit position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:

ci x y co(ci,x,y)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1

Observing that the rows that have an output of 0 are the 1st, 2nd, 3rd, and 5th, we can write co as a product of maxterms: co(ci, x, y) = M0 · M1 · M2 · M4 = (ci + x + y)(ci + x + y′)(ci + x′ + y)(ci′ + x + y). If we wish to verify this, the expression evaluated for all 8 combinations of the three variables will match the table.
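As with the sum bit, this product of maxterms can be verified mechanically; the helper names are ours, and the carry bit is computed as the full-adder carry:

```python
from itertools import product

def co_table(ci, x, y):
    return (ci + x + y) // 2                 # carry bit from the truth table

def co_maxterms(ci, x, y):
    M0 = max(ci, x, y)                       # ci + x + y
    M1 = max(ci, x, 1 - y)                   # ci + x + y'
    M2 = max(ci, 1 - x, y)                   # ci + x' + y
    M4 = max(1 - ci, x, y)                   # ci' + x + y
    return min(M0, M1, M2, M4)               # AND of the maxterms

assert all(co_table(*v) == co_maxterms(*v) for v in product((0, 1), repeat=3))
```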

Minimal PoS and SoP forms


It is often the case that the canonical minterm form is equivalent to a smaller SoP form. This smaller form would still consist of a sum of product terms, but have fewer product terms and/or product terms that contain fewer variables. For example, the following 3-variable function:

a b c f(a,b,c)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 0
1 1 0 0
1 1 1 1

has the canonical minterm representation f = a′bc + abc, but it has an equivalent SoP form f = bc. In this trivial example it is obvious that a′bc + abc = bc, and the smaller form has both fewer product terms and fewer variables within each term. The smallest SoP representations of a function according to this notion of "smallest" are referred to as minimal SoP forms. In general, there may be multiple minimal SoP forms, none clearly smaller or larger than another.[2] In a similar manner, a canonical maxterm form can be reduced to various minimal PoS forms.
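The equivalence of the canonical form a′bc + abc and the reduced form bc from the truth table above can be confirmed by brute force (lambda names are ours):

```python
from itertools import product

canonical = lambda a, b, c: ((1 - a) & b & c) | (a & b & c)   # a'bc + abc
minimal   = lambda a, b, c: b & c                             # bc

assert all(canonical(*v) == minimal(*v) for v in product((0, 1), repeat=3))
```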

While this example was simplified by applying ordinary algebraic methods (f = (a′ + a)bc = bc), in less obvious cases a convenient method for finding minimal PoS/SoP forms of a function with up to four variables is the Karnaugh map. The Quine–McCluskey algorithm can solve slightly larger problems. The field of logic optimization developed from the problem of finding optimal implementations of Boolean functions, such as minimal PoS and SoP forms.

Application example


The sample truth tables for minterms and maxterms above are sufficient to establish the canonical form for a single bit position in the addition of binary numbers, but are not sufficient to design the digital logic unless your inventory of gates includes AND and OR. Where performance is an issue (as in the Apollo Guidance Computer), the available parts are more likely to be NAND and NOR because of the complementing action inherent in transistor logic. The values are defined as voltage states, one near ground and one near the DC supply voltage Vcc, e.g. +5 VDC. If the higher voltage is defined as the 1 "true" value, a NOR gate is the simplest possible useful logical element.

Specifically, a 3-input NOR gate may consist of 3 bipolar junction transistors with their emitters all grounded, their collectors tied together and linked to Vcc through a load impedance. Each base is connected to an input signal, and the common collector point presents the output signal. Any input that is a 1 (high voltage) to its base shorts its transistor's emitter to its collector, causing current to flow through the load impedance, which brings the collector voltage (the output) very near to ground. That result is independent of the other inputs. Only when all 3 input signals are 0 (low voltage) do the emitter-collector impedances of all 3 transistors remain very high. Then very little current flows, and the voltage-divider effect with the load impedance imposes on the collector point a high voltage very near to Vcc.

The complementing property of these gate circuits may seem like a drawback when trying to implement a function in canonical form, but there is a compensating bonus: such a gate with only one input implements the complementing function, which is required frequently in digital logic.

This example assumes the Apollo parts inventory: 3-input NOR gates only, but the discussion is simplified by supposing that 4-input NOR gates are also available (in Apollo, those were compounded out of pairs of 3-input NORs).

Canonical and non-canonical consequences of NOR gates


A set of 8 NOR gates, if their inputs are all combinations of the direct and complement forms of the 3 input variables ci, x, and y, always produce minterms, never maxterms—that is, of the 8 gates required to process all combinations of 3 input variables, only one has the output value 1. That's because a NOR gate, despite its name, could better be viewed (using De Morgan's law) as the AND of the complements of its input signals.

The reason this is not a problem is the duality of minterms and maxterms, i.e. each maxterm is the complement of the like-indexed minterm, and vice versa.

In the minterm example above, we wrote u(ci, x, y) = m1 + m2 + m4 + m7, but to perform this with a 4-input NOR gate we need to restate it as a product of sums (PoS), where the sums are the opposite maxterms. That is, u(ci, x, y) = AND(M0, M3, M5, M6) = NOR(m0, m3, m5, m6).

Truth tables
ci x y M0 M3 M5 M6 AND u(ci,x,y)
0 0 0 0 1 1 1 0 0
0 0 1 1 1 1 1 1 1
0 1 0 1 1 1 1 1 1
0 1 1 1 0 1 1 0 0
1 0 0 1 1 1 1 1 1
1 0 1 1 1 0 1 0 0
1 1 0 1 1 1 0 0 0
1 1 1 1 1 1 1 1 1
ci x y m0 m3 m5 m6 NOR u(ci,x,y)
0 0 0 1 0 0 0 0 0
0 0 1 0 0 0 0 1 1
0 1 0 0 0 0 0 1 1
0 1 1 0 1 0 0 0 0
1 0 0 0 0 0 0 1 1
1 0 1 0 0 1 0 0 0
1 1 0 0 0 0 1 0 0
1 1 1 0 0 0 0 1 1
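The identity behind these truth tables, that a NOR of the opposite minterms yields u because NOR acts as the AND of its inputs' complements, can be checked exhaustively (helper names are ours):

```python
from itertools import product

def minterm(k, ci, x, y):
    bits = (ci, x, y)
    return int(all(b == (k >> (2 - i)) & 1 for i, b in enumerate(bits)))

def nor(*inputs):
    return int(not any(inputs))

def u(ci, x, y):
    return (ci + x + y) % 2                  # sum bit from the truth table

# u = NOR(m0, m3, m5, m6) for every input combination:
assert all(u(*v) == nor(*(minterm(k, *v) for k in (0, 3, 5, 6)))
           for v in product((0, 1), repeat=3))
```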

In the maxterm example above, we wrote co(ci, x, y) = M0 · M1 · M2 · M4, but to perform this with a 4-input NOR gate we need to notice the equality to the NOR of the same-indexed minterms. That is, co(ci, x, y) = AND(M0, M1, M2, M4) = NOR(m0, m1, m2, m4).

Truth tables
ci x y M0 M1 M2 M4 AND co(ci,x,y)
0 0 0 0 1 1 1 0 0
0 0 1 1 0 1 1 0 0
0 1 0 1 1 0 1 0 0
0 1 1 1 1 1 1 1 1
1 0 0 1 1 1 0 0 0
1 0 1 1 1 1 1 1 1
1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
ci x y m0 m1 m2 m4 NOR co(ci,x,y)
0 0 0 1 0 0 0 0 0
0 0 1 0 1 0 0 0 0
0 1 0 0 0 1 0 0 0
0 1 1 0 0 0 0 1 1
1 0 0 0 0 0 1 0 0
1 0 1 0 0 0 0 1 1
1 1 0 0 0 0 0 1 1
1 1 1 0 0 0 0 1 1

Design trade-offs considered in addition to canonical forms


One might suppose that the work of designing an adder stage is now complete, but we haven't addressed the fact that all 3 of the input variables have to appear in both their direct and complement forms. There's no difficulty about the addends x and y in this respect, because they are static throughout the addition and thus are normally held in latch circuits that routinely have both direct and complement outputs. (The simplest latch circuit made of NOR gates is a pair of gates cross-coupled to make a flip-flop: the output of each is wired as one of the inputs to the other.) There is also no need to create the complement form of the sum u. However, the carry out of one bit position must be passed as the carry into the next bit position in both direct and complement forms. The most straightforward way to do this is to pass co through a 1-input NOR gate and label the output co′, but that would add a gate delay in the worst possible place, slowing down the rippling of carries from right to left. An additional 4-input NOR gate building the canonical form of co′ (out of the opposite minterms as co) solves this problem.

Truth tables
ci x y M3 M5 M6 M7 AND co'(ci,x,y)
0 0 0 1 1 1 1 1 1
0 0 1 1 1 1 1 1 1
0 1 0 1 1 1 1 1 1
0 1 1 0 1 1 1 0 0
1 0 0 1 1 1 1 1 1
1 0 1 1 0 1 1 0 0
1 1 0 1 1 0 1 0 0
1 1 1 1 1 1 0 0 0
ci x y m3 m5 m6 m7 NOR co'(ci,x,y)
0 0 0 0 0 0 0 1 1
0 0 1 0 0 0 0 1 1
0 1 0 0 0 0 0 1 1
0 1 1 1 0 0 0 0 0
1 0 0 0 0 0 0 1 1
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
1 1 1 0 0 0 1 0 0

The trade-off to maintain full speed in this way includes an unexpected cost (in addition to having to use a bigger gate). If we'd just used that 1-input gate to complement co, there would have been no use for the minterm m7 (= ci x y), and the gate that generated it could have been eliminated. Nevertheless, it is still a good trade.

Now we could have implemented those functions exactly according to their SoP and PoS canonical forms, by turning NOR gates into the functions specified. A NOR gate is made into an OR gate by passing its output through a 1-input NOR gate; and it is made into an AND gate by passing each of its inputs through a 1-input NOR gate. However, this approach not only increases the number of gates used, but also doubles the number of gate delays processing the signals, cutting the processing speed in half. Consequently, whenever performance is vital, going beyond canonical forms and doing the Boolean algebra to make the unenhanced NOR gates do the job is well worthwhile.

Top-down vs. bottom-up design


We have now seen how the minterm/maxterm tools can be used to design an adder stage in canonical form with the addition of some Boolean algebra, costing just 2 gate delays for each of the outputs. That's the "top-down" way to design the digital circuit for this function, but is it the best way? The discussion has focused on identifying "fastest" as "best," and the augmented canonical form meets that criterion flawlessly, but sometimes other factors predominate. The designer may have a primary goal of minimizing the number of gates, and/or of minimizing the fanouts of signals to other gates since big fanouts reduce resilience to a degraded power supply or other environmental factors. In such a case, a designer may develop the canonical-form design as a baseline, then try a bottom-up development, and finally compare the results.

The bottom-up development involves noticing that u = ci XOR (x XOR y), where XOR means eXclusive OR [true when either input is true but not when both are true], and that co = ci x + x y + y ci. One such development takes twelve NOR gates in all: six 2-input gates and two 1-input gates to produce u in 5 gate delays, plus three 2-input gates and one 3-input gate to produce co′ in 2 gate delays. The canonical baseline took eight 3-input NOR gates plus three 4-input NOR gates to produce u, co and co′ in 2 gate delays. If the circuit inventory actually includes 4-input NOR gates, the top-down canonical design looks like a winner in both gate count and speed. But if (contrary to our convenient supposition) the circuits are actually 3-input NOR gates, of which two are required for each 4-input NOR function, then the canonical design takes 14 gates compared to 12 for the bottom-up approach, but still produces the sum digit u considerably faster. The fanout comparison is tabulated as:

Variables Top-down Bottom-up
x 4 1
x' 4 3
y 4 1
y' 4 3
ci 4 1
ci' 4 3
M or m 4@1,4@2 N/A
x XOR y N/A 2
Misc N/A 5@1
Max 4 3

The description of the bottom-up development mentions co′ as an output but not co. Does that design simply never need the direct form of the carry out? Well, yes and no. At each stage, the calculation of co′ depends only on ci′, x′ and y′, which means that the carry propagation ripples along the bit positions just as fast as in the canonical design without ever developing co. The calculation of u, which does require ci to be made from ci′ by a 1-input NOR, is slower but for any word length the design only pays that penalty once (when the leftmost sum digit is developed). That's because those calculations overlap, each in what amounts to its own little pipeline without affecting when the next bit position's sum bit can be calculated. And, to be sure, the co′ out of the leftmost bit position will probably have to be complemented as part of the logic determining whether the addition overflowed. But using 3-input NOR gates, the bottom-up design is very nearly as fast for doing parallel addition on a non-trivial word length, cuts down on the gate count, and uses lower fanouts ... so it wins if gate count and/or fanout are paramount!

We'll leave the exact circuitry of the bottom-up design of which all these statements are true as an exercise for the interested reader, assisted by one more algebraic formula: u = [ci(x XOR y) + ci′(x XOR y)′]′. Decoupling the carry propagation from the sum formation in this way is what elevates the performance of a carry-lookahead adder over that of a ripple-carry adder.
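The closing formula u = [ci(x XOR y) + ci′(x XOR y)′]′ can be confirmed against the full-adder truth table exhaustively (helper names are ours):

```python
from itertools import product

def u(ci, x, y):
    return (ci + x + y) % 2                        # sum bit from the truth table

def u_formula(ci, x, y):
    s = x ^ y                                      # x XOR y
    return 1 - ((ci & s) | ((1 - ci) & (1 - s)))   # [ci s + ci' s']'

assert all(u(*v) == u_formula(*v) for v in product((0, 1), repeat=3))
```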

Application in digital circuit design


One application of Boolean algebra is digital circuit design, with one goal to minimize the number of gates and another to minimize the settling time.

There are sixteen possible functions of two variables, but in digital logic hardware, the simplest gate circuits implement only four of them: conjunction (AND), disjunction (inclusive OR), and the respective complements of those (NAND and NOR).

Most gate circuits accept more than 2 input variables; for example, the spaceborne Apollo Guidance Computer, which pioneered the application of integrated circuits in the 1960s, was built with only one type of gate, a 3-input NOR, whose output is true only when all 3 inputs are false.[3][page needed][4]

from Grokipedia
In mathematics and computer science, a canonical normal form, often simply called a canonical form, is a standardized, unique representation of objects within a given class, ensuring that each distinct object corresponds to exactly one such form for purposes of comparison, simplification, and analysis. This concept provides a "clear-cut, one-to-one way to describe every object in a class," as formalized in the study of symbolic computation and algebraic identities. While normal forms in general standardize representations without necessarily guaranteeing uniqueness, canonical normal forms impose additional structure to achieve this bijective mapping, making them essential tools across diverse mathematical domains. In Boolean algebra and propositional logic, the canonical disjunctive normal form (DNF), also known as the minterm canonical form, expresses any Boolean function as a disjunction (OR) of conjunctions (ANDs) of literals, with each conjunction corresponding precisely to a row in the function's truth table where the output is true. This form is unique up to the ordering of terms and ensures a complete, non-redundant description derived directly from the truth table, facilitating equivalence checks and circuit optimization. Similarly, the canonical conjunctive normal form (CNF) reverses this structure, representing the function as a conjunction of disjunctions (clauses), each capturing a false output row.

Fundamentals

Definition and overview

In Boolean algebra, a Boolean function represents a mapping from a set of binary input values to a binary output, where each input is composed of one or more variables that can take values 0 or 1. For a function with n variables, there are 2^n possible input combinations, each of which determines whether the function evaluates to 1 or 0. A canonical normal form provides a standardized and unique representation of any Boolean function, expressed either as a sum-of-products (SOP) using all relevant minterms or as a product-of-sums (POS) using all relevant maxterms, thereby explicitly accounting for every possible combination of the variables. This form ensures completeness by including terms for all variables in each product or sum, distinguishing it from general normal forms that may omit variables or include redundant terms. Unlike non-canonical normal forms, which can vary in structure and allow for omissions or redundancies leading to multiple equivalent expressions, the canonical form is unique up to the ordering of variables and terms, facilitating direct comparison of function equivalence. The general SOP canonical form is given by

F = Σ m_i,

where the m_i are the minterms corresponding to the input combinations for which F = 1. Similarly, the POS canonical form is

F = ∏ M_j,

where the M_j are the maxterms corresponding to the input combinations for which F = 0. Minterms and maxterms, which form the building blocks of these expressions, are detailed in subsequent sections.
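The two index sets can be read straight off a truth table: minterm indices where F = 1, maxterm indices where F = 0. A minimal sketch, with an arbitrary example function (not from the text) and helper names of our own:

```python
from itertools import product

def canonical_indices(f, n):
    """Split the 2**n input rows into minterm (F=1) and maxterm (F=0) indices."""
    minterms, maxterms = [], []
    for k, bits in enumerate(product((0, 1), repeat=n)):
        (minterms if f(*bits) else maxterms).append(k)
    return minterms, maxterms

f = lambda a, b, c: (a & b) | c            # example function
sop, pos = canonical_indices(f, 3)
print("F = sum  m", sop)                   # indices where F = 1
print("F = prod M", pos)                   # indices where F = 0
```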

Historical development

The foundations of canonical normal forms in Boolean algebra emerged in the mid-19th century alongside George Boole's formalization of logic as an algebraic system in his 1854 work An Investigation of the Laws of Thought, which laid the groundwork for expressing logical expressions systematically. However, the explicit development and popularization of canonical forms for Boolean functions occurred in the mid-20th century within digital logic and switching theory. Claude Shannon's 1938 master's thesis, A Symbolic Analysis of Relay and Switching Circuits, marked a pivotal advancement by applying Boolean algebra to electrical circuits and introducing the disjunctive normal form as a standard representation for relay networks, bridging theoretical logic to practical engineering. During the 1930s and 1940s, canonical forms gained traction in switching theory for designing relay-based circuits, reflecting the era's focus on electromechanical computing devices. Post-World War II, these concepts were formalized in computer science literature, notably through Willard V. Quine's 1952 paper "The Problem of Simplifying Truth Functions" and Edward J. McCluskey's 1956 paper on Boolean function minimization, both of which utilized canonical sum-of-products representations as a starting point for optimization algorithms. Concurrently, Maurice Karnaugh's 1953 introduction of the map method for combinational logic synthesis further embedded canonical forms in simplification techniques, emphasizing their role in efficient circuit design. The term "canonical" in this context was borrowed from 19th-century mathematics, where it denoted unique standard representations, such as Camille Jordan's canonical form for matrices in the 1870s. 
By the 1970s, canonical normal forms shifted from theoretical constructs to practical tools in very-large-scale integration (VLSI) design, enabling automated logic synthesis in integrated circuits as detailed in Carver Mead and Lynn Conway's influential 1980 textbook Introduction to VLSI Systems. From the 1960s onward, their uniqueness property was increasingly recognized in artificial intelligence for automated theorem proving, where conjunctive and disjunctive normal forms facilitate satisfiability checking and resolution-based inference.

Canonical Sum-of-Products Form

Minterms

In Boolean algebra, a minterm is defined as a product (logical AND) of all n variables in a Boolean function, where each variable appears exactly once, either in its complemented form (denoted by a prime or overbar, e.g., A′) or uncomplemented form (e.g., A), corresponding to a specific input combination that evaluates the function to true. This representation ensures that the minterm is true only for that exact assignment of variable values and false otherwise, serving as the fundamental building block for expressing Boolean functions in their most disaggregated product form. A key property of minterms is that each one uniquely corresponds to a single row in the truth table of the Boolean function where the output is 1, capturing precisely those input conditions under which the function holds. For a function with n variables, there are exactly 2^n possible minterms, as each variable can be either complemented or not, exhausting all possible input combinations. Minterms are typically notated using literals, which are the variables or their complements; for example, with two variables A and B, the minterms are expressed as A′B′, A′B, AB′, and AB. These literals are combined via the AND operation to form the product term. To illustrate, consider a two-variable case: the minterm m0 = A′B′ is true only when both A = 0 and B = 0; m1 = A′B when A = 0 and B = 1; m2 = AB′ when A = 1 and B = 0; and m3 = AB when A = 1 and B = 1. Minterms constitute the atomic conjunctions in the disjunctive normal form (DNF) of a Boolean function, where the full expression is a disjunction (logical OR) of selected minterms corresponding to the function's true outputs.

Indexing minterms

In Boolean functions with n variables, minterms are systematically indexed using numbers ranging from 0 to 2^n − 1, where each index corresponds to a unique binary representation that determines the literal form of the minterm. The binary digits of the index indicate whether each variable is included in its uncomplemented or complemented form: a bit value of 1 includes the uncomplemented variable, while a bit value of 0 includes its complement. For instance, with three variables A, B, and C (where A is the most significant bit), the index 4 in binary is 100, yielding the minterm m4 = AB′C′, as the first bit (1) selects A, the second bit (0) selects B′, and the third bit (0) selects C′. This indexing enables compact notation for expressing the canonical sum-of-products form using the summation symbol Σ, where the function is written as F(A, B, C, …) = Σm(i1, i2, …, ik), listing the decimal indices of the minterms that evaluate to 1. For example, if a function F(A, B, C) is true for input combinations corresponding to indices 0, 2, and 3, it is denoted as F(A, B, C) = Σm(0, 2, 3), equivalent to the disjunction m0 + m2 + m3. This shorthand directly maps to the rows of a truth table where the output is 1. The decimal indexing follows the natural binary order, facilitating alignment with truth table rows and serving as a convenient shorthand in digital design tools for specifying functions without expanding full product terms. It also aligns with the positioning of minterms in Karnaugh maps, where cell labels reflect these binary-derived indices, though the map uses Gray code ordering for adjacency. Formally, for an index k with binary representation b_{n−1} b_{n−2} … b_0, the corresponding minterm is

m_k = ∏_{i=0}^{n−1} (x_i if b_i = 1, else x_i′),

where the x_i are the variables ordered from least to most significant bit. This system underpins the construction of canonical sum-of-products expressions.
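The expansion of a Σm index list into product terms follows directly from this rule; a sketch with an illustrative helper name of our own:

```python
def minterm_from_index(k, variables):
    """Expand a minterm index, e.g. 4 over "ABC" -> "AB'C'" (A is the MSB)."""
    n = len(variables)
    bits = format(k, f"0{n}b")
    return "".join(v if b == "1" else v + "'" for v, b in zip(variables, bits))

# F(A,B,C) = sum m(0, 2, 3) expands to:
expr = " + ".join(minterm_from_index(k, "ABC") for k in (0, 2, 3))
print(expr)                                  # A'B'C' + A'BC' + A'BC
```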

Minterm canonical form

The minterm canonical form, also referred to as the canonical sum-of-products (SOP) form, is constructed directly from a function's truth table by identifying all input combinations where the function output is 1 and forming the corresponding minterms, then summing these minterms using the OR operation. Each minterm in this sum is a complete product term that incorporates every variable of the function exactly once, either in its uncomplemented or complemented form, based on the binary value of the input row (1 for uncomplemented, 0 for complemented); this guarantees no variables are omitted, providing a fully expanded and standardized representation. The full expression for the minterm canonical form is written as F = Σ m_i, where the summation is over all indices i corresponding to the rows with output 1, and m_i denotes the minterm associated with the binary encoding of i (with the most significant bit typically assigned to the first variable). This notation compactly captures the disjunctive normal form (DNF) while explicitly linking back to the truth table for verification and implementation in logic circuits. Every Boolean function has exactly one unique minterm canonical form, up to reordering of the summands, owing to the fact that the set of minterms where the function is 1 uniquely determines its truth table and thus its algebraic representation in this expanded SOP structure. To illustrate, consider the two-variable exclusive-OR function F(A, B) = A ⊕ B, whose truth table yields 1 for inputs (0,1) and (1,0), corresponding to minterms m1 and m2; the canonical form is thus F = m1 + m2 = A′B + AB′. For a three-variable example, if F(x, y, z) = Σm(2, 3, 6, 7) based on the truth table, the expanded form is F(x, y, z) = x′yz′ + x′yz + xyz′ + xyz, where each term fully specifies the variable polarities for the respective minterm indices (2: 010, 3: 011, 6: 110, 7: 111).

Canonical Product-of-Sums Form

Maxterms

In Boolean algebra, a maxterm is defined as a sum (logical OR) of all n variables in a function, where each variable appears exactly once, either in its uncomplemented form or complemented form (denoted by a prime or overbar), such that the maxterm evaluates to false (0) for one specific input combination, one on which the overall function is false. This structure ensures that the maxterm represents the negation of a single minterm, capturing the conditions under which the function does not hold. Each maxterm corresponds precisely to one row in the truth table of an n-variable Boolean function where the output is 0, and there are exactly 2^n possible maxterms for n variables, forming the complete set of atomic disjunctions that can express any such function's falsifying assignments. These maxterms serve as the fundamental building blocks for the canonical product-of-sums (POS) form, also known as conjunctive normal form (CNF) in propositional logic, where the function is the product (logical AND) of selected maxterms. Maxterms are typically denoted using literals, such as A and A′ (or Ā) for variable A, combined with the OR operator. For example, with two variables A and B, the four maxterms are:
  • M0 = A + B (true except when A=0, B=0)
  • M1 = A + B′ (true except when A=0, B=1)
  • M2 = A′ + B (true except when A=1, B=0)
  • M3 = A′ + B′ (true except when A=1, B=1)
These examples illustrate how each maxterm excludes exactly one input combination by being false only for that case.

Indexing maxterms

Maxterms in a canonical product-of-sums expression for an n-variable Boolean function are indexed numerically from 0 to 2^n − 1. The binary representation of each index k determines the literals in the corresponding maxterm M_k: for each variable x_i, a bit value of 0 indicates the uncomplemented literal x_i, while a bit value of 1 indicates the complemented literal x_i′. Thus, M_k = Σ_{i=0}^{n−1} l_i, where l_i = x_i′ if the i-th bit of k is 1, and l_i = x_i otherwise. For n = 3 variables A (MSB), B, C (LSB), the index 3 corresponds to binary 011, yielding M3 = A + B′ + C′. This maxterm evaluates to 0 only for the input combination A=0, B=1, C=1 (binary 011), aligning with the row in the truth table where the function value is 0 for that position. The product-of-maxterms form is compactly notated as F = ΠM(list), where list enumerates the indices of the selected maxterms; for instance, F(A, B, C) = ΠM(1, 5, 6) denotes F = M1 · M5 · M6. This notation facilitates concise specification of the POS expression. This indexing scheme directly mirrors the truth table rows where the function equals 0, enabling straightforward selection of maxterms from the function's specification. It proves advantageous for dual representations, as the maxterm indices for F correspond to the minterm indices for the complementary function F′; separately, the relation k = 2^n − 1 − m connects a maxterm M_k to the minterm m_m built from the same literals, accounting for the inverted binary encoding of literals between minterms and maxterms. Such duality supports transformations and verifications in analysis. The approach is detailed in standard treatments of digital logic design, where it underpins the construction of canonical product-of-sums expressions.
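The maxterm indexing rule, and the k = 2^n − 1 − m relation between a maxterm and the minterm built from the same literals, can be illustrated in a short sketch (helper names are ours):

```python
def maxterm_from_index(k, variables):
    """Expand a maxterm index: bit 1 selects the complemented literal."""
    n = len(variables)
    bits = format(k, f"0{n}b")               # MSB = first variable
    return " + ".join(v + "'" if b == "1" else v
                      for v, b in zip(variables, bits))

def minterm_from_index(k, variables):
    """Expand a minterm index: bit 1 selects the direct literal."""
    n = len(variables)
    bits = format(k, f"0{n}b")
    return " ".join(v if b == "1" else v + "'" for v, b in zip(variables, bits))

print(maxterm_from_index(3, "ABC"))          # A + B' + C'
# The minterm with the same literals as M_3 is m_(2^3 - 1 - 3) = m_4:
print(minterm_from_index(4, "ABC"))          # A B' C'
```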

Maxterm canonical form

The maxterm canonical form, also known as the canonical product-of-sums expression, represents a Boolean function as the logical product (AND) of all maxterms corresponding to the input combinations where the function output is 0. This form ensures that every maxterm includes all variables of the function, either in complemented or uncomplemented form, with no omissions, guaranteeing a complete and standardized representation. Construction begins with the truth table of the function: for each row where the output is 0, identify the binary index j of that input combination, and select the maxterm M_j, which is a sum (OR) of literals that evaluates to 0 precisely for that input. The canonical form is then the product of these selected maxterms, ensuring the overall expression is 0 exactly when the function is 0 and 1 otherwise. The full mathematical expression for a function F with n variables is F(x_1, x_2, …, x_n) = ∏_{j∈Z} M_j, where Z denotes the set of indices j (from 0 to 2^n − 1) for which F = 0 in the truth table, and each M_j is the maxterm defined by the binary representation of j. This representation is unique for any given Boolean function, up to the commutative property of the product, as it exhaustively captures the function's zero locations without omission or redundancy. For example, consider the two-variable exclusive-OR function F(A, B) = A ⊕ B, with outputs of 0 at inputs (0,0) and (1,1), corresponding to indices 0 and 3. The canonical form is F(A, B) = M0 · M3 = (A + B)(A′ + B′). To expand to three variables, suppose F(x, y, z) has outputs of 0 at indices 0, 1, 2, and 4 (binary 000, 001, 010, 100). The canonical form is F(x, y, z) = ∏ M_j = (x + y + z)(x + y + z′)(x + y′ + z)(x′ + y + z), where each maxterm is constructed by complementing literals according to the 1-bits in the index j.

Properties and Transformations

Uniqueness and equivalence

The canonical sum-of-products (SOP) form of a Boolean function F with n variables is unique because it is constructed as the disjunction of exactly those minterms corresponding to the input combinations where F = 1, ensuring an exhaustive and non-redundant representation that directly mirrors the function's truth table without any omissions or duplicates. Similarly, the canonical product-of-sums (POS) form is unique, comprising the conjunction of precisely those maxterms where F = 0, as each maxterm includes all variables in complemented or uncomplemented form, making the overall product distinct for the given function. This uniqueness stems from the atomic nature of minterms and maxterms in Boolean algebra: for n variables, there are exactly 2^n minterms, each true for one unique input assignment, allowing the canonical SOP to include only the relevant subset without overlap or redundancy. The canonical forms are logically equivalent to the original function F because they evaluate to the same output for all 2^n possible inputs; the SOP covers precisely the cases where F = 1 by activating the corresponding minterm, while the POS enforces F = 0 only at the specified maxterms. To verify equivalence formally by perfect induction, consider any input vector x = (x_1, …, x_n). In the canonical SOP, exactly one minterm m_x (the product of literals matching x) evaluates to 1, while all others are 0; thus the disjunction equals 1 if and only if m_x is included, which by construction occurs precisely when F(x) = 1. A parallel argument holds for the POS, where exactly one maxterm evaluates to 0 for x, making the conjunction 0 if and only if that maxterm is included (i.e., when F(x) = 0).
In canonical forms, Boolean simplification laws such as idempotence (duplicating terms) and absorption (one term subsuming another) do not apply, as the full expansion into distinct minterms or maxterms eliminates any literal overlaps or redundancies present in non-canonical expressions. The two forms are inter-related through De Morgan's laws: the canonical POS for F is equivalent to the complemented and dualized canonical SOP for F', preserving logical equivalence. Notably, both forms exhibit exponential size complexity of O(2^n) in the worst case, underscoring the blow-up inherent in exhaustive representations for large n.
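The perfect-induction argument is easy to replay by machine: build the minterm index set of an arbitrary function, then check that the canonical SOP it induces agrees with the function on every input. The helper names below are illustrative:

```python
from itertools import product

def sop_value(minterms, bits):
    """Evaluate a canonical SOP given only its minterm index set."""
    idx = int("".join(map(str, bits)), 2)  # truth-table row, MSB first
    return 1 if idx in minterms else 0

# An arbitrary 3-variable function used as the test subject: F = A(B + C)
f = lambda b: b[0] & (b[1] | b[2])
minterms = {int("".join(map(str, b)), 2)
            for b in product((0, 1), repeat=3) if f(b)}

# Perfect induction: the canonical SOP agrees with f on all 2^n inputs.
assert all(sop_value(minterms, b) == f(b) for b in product((0, 1), repeat=3))
print(sorted(minterms))  # [5, 6, 7]
```

Exactly one minterm fires per input row, so membership of that row's index in the set fully determines the SOP's value, which is the uniqueness argument in miniature.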

Conversion between forms

Converting between the canonical sum-of-products (SOP) form and the canonical product-of-sums (POS) form relies on the dual relationship inherent in Boolean algebra. The canonical POS form of a function F is obtained by complementing the canonical SOP form of its complement F' and applying De Morgan's theorem. This duality arises because minterms of F correspond to the truth-table rows where F = 1, while maxterms of F correspond to the rows where F = 0. To convert from canonical SOP to POS, first identify the minterms of F' by taking the complement of the index set used in the SOP of F (i.e., the indices where F = 0). These indices directly become the maxterm indices of F, as the maxterm indices for F are the positions where F = 0. The resulting POS form is the product of these maxterms. The reverse conversion from POS to SOP follows by identifying the maxterms of F' (the complement index set) and treating them as minterms of F. For example, consider a two-variable function in canonical SOP form: F = Σm(1, 2) = A'B + AB'. The minterms are at indices 1 and 2, so F = 0 at indices 0 and 3. Thus the POS form is F = ΠM(0, 3) = (A + B)(A' + B'). In general, if the SOP form is F = Σ m_i for indices i ∈ I, then the POS form is F = Π M_j for j ∉ I (the complement set of indices). This conversion preserves the canonicity of the forms, ensuring each represents the complete disjunctive or conjunctive expansion without simplification, though the resulting expression may not be minimal. It is particularly useful for verifying duality or checking equivalence between representations.
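The index-set complementation at the heart of the conversion is a one-liner; this sketch (the name `sop_to_pos_indices` is illustrative) reproduces the worked example:

```python
def sop_to_pos_indices(minterms, n):
    """Maxterm indices of F: the truth-table rows absent from its minterm list."""
    return sorted(set(range(2 ** n)) - set(minterms))

# F = sum m(1, 2) over two variables  ->  F = prod M(0, 3)
print(sop_to_pos_indices([1, 2], 2))  # [0, 3]
```

Applying the function twice returns the original index set, mirroring the fact that the SOP-to-POS and POS-to-SOP conversions are inverses.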

Minimization Techniques

Minimal SOP and POS forms

Minimal sum-of-products (SOP) and product-of-sums (POS) forms represent the most compact equivalent expressions for a Boolean function in their respective normal forms, achieved by eliminating redundant terms and literals from the canonical representations. These minimal forms retain the behavior of the original function but use fewer product terms in SOP (or sum terms in POS) and reduced literal counts overall, making them preferable for practical implementations where efficiency is key. In contrast to canonical SOP and POS forms, which exhaustively include all relevant minterms or maxterms and can require up to 2^n terms for n variables, minimal forms significantly reduce complexity by merging overlapping terms through simplification principles. This reduction directly translates to fewer logic gates and interconnections in digital circuits, lowering costs, power consumption, and propagation delays. For instance, canonical forms ensure universality but are inefficient for synthesis, whereas minimal forms optimize hardware resource utilization without altering functionality. Key properties of minimal SOP and POS forms include the possibility of multiple equivalent minimal expressions for the same function, as different combinations of terms may achieve the same literal or term count. Minimality is generally evaluated by the total number of literals (counting each variable appearance) or the number of terms, with the former often prioritized in two-level logic design to minimize gate inputs. Additionally, while canonical forms are unique up to term ordering, minimal forms may not be, requiring selection criteria based on implementation constraints. Obtaining the absolute minimal form is computationally intensive, as the exact minimization problem for Boolean functions in SOP or POS is NP-hard. A representative example illustrates this reduction: the canonical SOP expression F(A, B) = A'B' + A'B + AB' simplifies to the minimal SOP F(A, B) = A' + B'.
Here, the original three-term form with six literals collapses to two terms with two literals by combining adjacent terms and applying absorption, demonstrating how redundancies in canonical expansions are removed. In terms of scale, for functions with n variables, canonical forms grow exponentially, with O(2^n) potential terms, whereas minimal forms can achieve linear size O(n) in the best case, highlighting their practical superiority despite the hardness of optimization.
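The soundness of the reduction above is verifiable by exhaustive evaluation, since both expressions are functions of only two variables:

```python
from itertools import product

# Canonical SOP: F = A'B' + A'B + AB'      Minimal SOP: F = A' + B'
canonical = lambda a, b: ((1 - a) & (1 - b)) | ((1 - a) & b) | (a & (1 - b))
minimal   = lambda a, b: (1 - a) | (1 - b)

# The two expressions agree on every input, so the reduction is sound.
assert all(canonical(a, b) == minimal(a, b)
           for a, b in product((0, 1), repeat=2))
```

Exhaustive checks like this stay tractable only for small n; for larger functions this is exactly where equivalence-checking tools take over.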

Simplification methods

Simplification methods for canonical normal forms in Boolean algebra aim to reduce sum-of-products (SOP) or product-of-sums (POS) expressions to more compact forms while preserving functionality. Algebraic techniques, such as the consensus theorem, provide a foundational approach by identifying and eliminating redundant terms. The consensus theorem states that for variables A, B, and C, AB + A'C + BC = AB + A'C, allowing the removal of the consensus term BC without altering the function's output. This theorem facilitates simplification by adding temporary consensus terms to enable absorption or further reduction, as demonstrated in examples where intermediate terms are introduced to absorb others. Graphical methods like Karnaugh maps (K-maps) offer a visual aid for grouping adjacent minterms that differ by one literal, enabling intuitive simplification for up to four or five variables. Introduced by Maurice Karnaugh, K-maps arrange minterms in a grid where adjacency reflects single-variable changes, allowing groups of 2, 4, or 8 ones to form prime implicants by eliminating shared literals. For more variables, the tabular Quine-McCluskey algorithm systematically combines minterms into implicants through iterative pairing and selection of prime implicants, and is suitable for computer implementation beyond manual limits. Developed from Willard V. Quine's theoretical framework on simplification and extended by Edward J. McCluskey, this method ensures exhaustive coverage by generating all prime implicants and selecting a minimal set. The general process involves identifying groups of minterms that share common literals to form larger implicants, then covering all required minterms with a minimal set of prime implicants to avoid redundancy. Prime implicants are the largest possible groups that cannot be further expanded, and the covering step selects a subset that fully represents the function with the fewest terms.
For instance, consider a three-variable function F(A, B, C) = Σm(0, 1, 3, 7), where the minterms are A'B'C', A'B'C, A'BC, and ABC. Using a K-map:

        B'C'   B'C   BC    BC'
  A'     1      1     1     0
  A      0      0     1     0
Grouping the two cells in the top row (minterms 0 and 1) yields A'B', and the two cells in the BC column (minterms 3 and 7) yield BC, simplifying to F = A'B' + BC. This reduces the number of terms from four to two and the total literals from twelve to four relative to the canonical form. For complex functions, heuristic algorithms like Espresso, developed in the 1980s by Robert Brayton and colleagues, automate minimization in electronic design automation (EDA) tools by iteratively expanding, covering, and reducing implicants while handling don't-care conditions to optimize for area and speed. Espresso improves upon exact methods by providing near-optimal results efficiently for large variable counts. The resulting simplified form can be expressed as F = Σ p_k, where each p_k is a selected prime implicant covering the function.
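The K-map result above can be confirmed by exhaustive comparison against the minterm specification; the function names here are illustrative:

```python
from itertools import product

# F(A, B, C) = sum m(0, 1, 3, 7); the K-map grouping gives F = A'B' + BC.
MINTERMS = {0, 1, 3, 7}

def canonical(a, b, c):
    """Truth-table lookup via the minterm index (A is the MSB)."""
    return 1 if ((a << 2) | (b << 1) | c) in MINTERMS else 0

def simplified(a, b, c):
    """The grouped form A'B' + BC."""
    return ((1 - a) & (1 - b)) | (b & c)

assert all(canonical(*v) == simplified(*v)
           for v in product((0, 1), repeat=3))
```

Each group in the map corresponds to one term of `simplified`: the top-row pair drops C, and the BC-column pair drops A.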

Applications

Digital circuit design

In digital circuit design, canonical sum-of-products (SOP) forms directly map to two-level AND-OR gate implementations, where each minterm is realized as an AND gate feeding into a multi-input OR gate, providing a straightforward hardware realization of Boolean functions derived from truth tables. Similarly, canonical product-of-sums (POS) forms correspond to OR-AND structures, with maxterms implemented using OR gates followed by an AND gate, enabling dual representations for the same logic function. These forms are commonly employed in hardware description languages (HDLs) such as Verilog for initial behavioral specifications, allowing designers to express complete logic coverage before optimization. The systematic nature of canonical forms facilitates automation in synthesis tools, particularly for field-programmable gate arrays (FPGAs), where they enable algorithmic mapping of logic to lookup tables (LUTs) and ensure exhaustive coverage of input combinations during place-and-route. This approach also verifies the completeness of a design by confirming that all minterms or maxterms align with the specified truth table, reducing errors in early-stage validation. However, the exponential growth in the number of terms, up to 2^n minterms for n variables, renders canonical forms impractical for circuits with more than a few inputs, often necessitating subsequent minimization to control gate count and delay. In application-specific integrated circuit (ASIC) design, canonical forms support formal verification techniques, such as logic equivalence checking, by providing a unique, exhaustive representation that allows tools to compare pre- and post-optimization netlists against a reference specification using canonical structures like reduced ordered binary decision diagrams (ROBDDs).
Specifically, programmable logic array (PLA) and programmable array logic (PAL) devices leverage canonical SOP forms, as their architectures—with programmable AND planes generating product terms and OR planes summing them—naturally accommodate unminimized minterm expansions for implementing sum-of-products logic without additional restructuring. This integration highlights the foundational role of canonical forms in programmable logic, though practical designs typically proceed to minimization for efficiency.

Implementation examples

A canonical sum-of-products (SOP) form can be directly implemented using AND gates for each minterm followed by a single OR gate to combine them. For example, consider the function F(A, B, C) = Σm(1, 3, 5) = A'B'C + A'BC + AB'C, which requires three 3-input AND gates (one for each minterm) and a 3-input OR gate. This two-level structure realizes the function, but high fan-in requirements for the AND and OR gates (e.g., more than four inputs) often necessitate cascading multiple smaller gates, increasing propagation delay and complexity. Implementing an SOP form using only NOR gates involves applying De Morgan's theorems to transform the expression, typically resulting in a multi-level circuit rather than the direct two-level AND-OR structure. For instance, the SOP F = XY + XZ can be rewritten using double complementation as F = ((X' + Y')(X' + Z'))', forming an inverted product-of-sums structure that requires NOR gates for the OR terms and an outer inversion for the AND equivalent, yielding three levels instead of two. A pure NOR-only realization often demands additional inversion levels, elevating the gate count and delay compared to native OR-AND. While hybrid NAND-NOR configurations can approximate SOP more efficiently, restricting to NOR gates alone complicates the canonical mapping. Dually, a canonical product-of-sums (POS) form can be implemented using OR gates for each maxterm followed by a single AND gate, and when realized with only NAND gates it mirrors the SOP-NOR transformation via De Morgan's theorems, which suits NAND-dominant technologies. For example, the POS F = (A + B + C')(A' + B' + C) is zero exactly at the inputs 001 and 110, and is therefore expressible as the complemented SOP F = (A'B'C + ABC')'.
This form allows implementation as a complemented SOP using a two-level NAND-NAND structure, with an additional inverter if the uncomplemented output is required, often achieving two levels with complemented inputs in NAND-optimized technologies. This duality makes POS with NANDs suitable for technologies where NAND gates are optimized, reducing transistor count in complementary setups. In CMOS technology, canonical POS forms can favor balanced PMOS and NMOS arrangements by distributing series and parallel configurations across OR-AND levels, potentially improving drive-strength symmetry compared to unbalanced implementations. Similarly, 1960s-era TTL logic favored canonical SOP forms because of the foundational use of NAND gates in early TTL families such as the 7400 series, enabling straightforward two-level NAND-NAND realizations. For a concrete illustration, consider the canonical SOP for a two-variable XOR function, F(A, B) = A'B + AB' = Σm(1, 2). This requires two 2-input AND gates (one for A'·B, one for A·B') feeding into a 2-input OR gate. A text-based diagram of the gate structure is as follows:

A ──┬──────────────────── AND2  (A·B')
    └── NOT ── A' ─────── AND1  (A'·B)
B ──┬──────────────────── AND1
    └── NOT ── B' ─────── AND2

AND1 ──┐
       ├── OR ─── F
AND2 ──┘

This direct mapping highlights the minimal two-level realization, with inverters for complemented literals.
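The three-level NOR-only rewriting of F = XY + XZ described above can also be checked exhaustively. The sketch below models ideal gates with illustrative helpers (a 1-input NOR acts as an inverter):

```python
from itertools import product

NOT = lambda x: 1 - x
NOR = lambda *xs: NOT(max(xs))   # a 1-input NOR is just an inverter

# F = XY + XZ = ((X' + Y')(X' + Z'))', built from NOR gates only.
def nor_only(x, y, z):
    a1 = NOR(NOT(x), NOT(y))     # (X' + Y')' = X·Y
    a2 = NOR(NOT(x), NOT(z))     # (X' + Z')' = X·Z
    return NOR(NOR(a1, a2))      # invert (XY + XZ)' back to F

for x, y, z in product((0, 1), repeat=3):
    assert nor_only(x, y, z) == (x & y) | (x & z)
```

Counting levels past the input inverters, the signal passes through three NOR stages, matching the "three levels instead of two" noted in the text.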

Trade-offs

While canonical normal forms, such as sum-of-products (SOP) or product-of-sums (POS), guarantee a unique and correct representation of any Boolean function, they often incur high implementation costs in terms of gate count, propagation delay, and power consumption compared to minimal forms. For instance, a canonical SOP can require up to 2^n product terms for an n-variable function, each with n literals, leading to significantly more gates than a minimized form that eliminates redundancies. This literal explosion increases area and wiring complexity, though it simplifies verification by providing a standardized form for equivalence checking against specifications. In contrast, minimal forms reduce these costs but introduce verification challenges, as multiple equivalent expressions may exist, complicating proofs of functional correctness. Key factors influencing these trade-offs include delay and power efficiency, particularly in resource-constrained environments. Two-level implementations offer minimal logical depth but suffer from high fan-in at the output gate, resulting in longer delays due to increased capacitive loading; multi-level minimal forms, while potentially deeper, distribute fan-in for faster signal propagation in practice. For battery-powered devices, where power is paramount, minimal forms are preferred as they lower dynamic power through fewer transistors and reduced switching activity, extending operational life without sacrificing functionality. Alternatives to canonical forms often favor non-canonical structures for scalability in large functions, such as multiplexers (MUXes) that select outputs based on control signals or look-up tables (LUTs) in field-programmable gate arrays (FPGAs), which precompute any function without explicit minterm expansion. These approaches enable compact implementations for complex logic, bypassing the exponential growth of canonical representations.
Design methodologies also reflect these choices: top-down synthesis starts from behavioral specifications and maps to canonical two-level netlists for reliability, whereas bottom-up assembly builds from pre-optimized libraries toward the specification, prioritizing efficiency over strict canonicity. A specific trade-off arises in technology-specific designs, such as NOR-only implementations, where universality is achieved but at the cost of increased literals and gates to emulate AND/OR operations, fitting early integrated circuits like resistor-transistor logic (RTL) that favored NOR gates for simpler fabrication.
