Algorithms + Data Structures = Programs
Algorithms + Data Structures = Programs[1] is a 1976 book written by Niklaus Wirth covering some of the fundamental topics of computer programming, particularly the idea that algorithms and data structures are inherently related. For example, if one has a sorted list, one will use a search algorithm optimal for sorted lists.
The book is one of the most influential computer science books of its time and, like Wirth's other work, has been used extensively in education.[2]
The Turbo Pascal compiler written by Anders Hejlsberg was largely inspired by the Tiny Pascal compiler in Niklaus Wirth's book.
Chapter outline
- Chapter 1: Fundamental Data Structures
- Chapter 2: Sorting
- Chapter 3: Recursive Algorithms
- Chapter 4: Dynamic Information Structures
- Chapter 5: Language Structures and Compilers
Later editions
A revised edition was published in 1985 under the title Algorithms and Data Structures (288 pages). It used Modula-2 instead of Pascal. A later version, available in digital form, uses Oberon; in it, Chapter 5 has been replaced with a chapter titled "Key Transformations (Hashing)".
References
- ^ Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. ISBN 978-0-13-022418-7.
- ^ Citations collected by the ACM
External links
- ETH Zurich / N. Wirth / Books / Compilerbau: Algorithms + Data Structures = Programs (archive.org link)
- N. Wirth, Algorithms and Data Structures (1985 edition, updated for Oberon in August 2004; PDF at ETH Zurich) (archive.org link)
- Wirth, Niklaus (2004) [updated 2014]. Algorithms and Data Structures (PDF). Oberon version with corrections and authorized modifications, prepared by Fyodor Tkachov, Institute for Nuclear Research, Moscow.
Overview
Publication History
Algorithms + Data Structures = Programs was originally published in 1976 by Prentice-Hall, Englewood Cliffs, New Jersey.[3] The first edition bears the ISBN 0130224189. The book was written during Wirth's professorship at ETH Zurich, where he joined as a full professor of computer science in 1968 and remained until his retirement in 1999.[4] It was significantly influenced by his earlier efforts in developing the Pascal programming language, which he designed between 1968 and 1969 and formally reported in 1970.[5] Pascal's structured approach to programming provided a foundational framework for the book's emphasis on systematic algorithm and data structure design. The 1976 edition saw multiple printings, and a revised edition followed in 1985.[6] The work quickly gained recognition as a classic text in computer science education, shaping curricula on algorithms and data structures for decades.[7]
Author Background
Niklaus Emil Wirth was born on February 15, 1934, in Winterthur, Switzerland.[8] He earned a degree in electronics engineering from the Swiss Federal Institute of Technology (ETH Zurich) in 1959, followed by a master's degree from Laval University in Quebec in 1960, and a PhD in electrical engineering and computer science from the University of California, Berkeley, in 1963, where his dissertation on a generalization of ALGOL led to the Euler language.[9] After an assistant professorship at Stanford University from 1963 to 1967 and a position at the University of Zurich from 1967 to 1968, Wirth joined ETH Zurich as a full professor of informatics in 1968, a role he held until his retirement in 1999. He died on January 1, 2024.[4] During this period, he developed several influential programming languages, including Euler in the mid-1960s, ALGOL W in 1966 as a proposed successor to ALGOL 60, and Pascal in 1970 to promote structured programming education.[10]
Wirth's motivation for writing Algorithms + Data Structures = Programs stemmed from the widespread adoption of unstructured programming practices in the 1960s and 1970s, which led to increasingly error-prone and unmaintainable code as software systems grew more complex following the introduction of hardware like the IBM System/360.[6] He sought to advocate for modularity and systematic design principles to address the emerging software crisis, emphasizing that effective programs arise from the careful integration of algorithms and data structures rather than ad hoc coding.[6] This work was part of Wirth's broader effort to instill disciplined programming methods amid rising computational demands and limited hardware resources, such as sequential storage on tapes and disks.[6] Wirth's perspectives were shaped by his mentorship under Friedrich L.
Bauer, a pioneer in compiler design and a key figure in early European computing, as well as the structured programming paradigm introduced by ALGOL 60, which influenced his designs for subsequent languages.[11] Bauer's work on stack-based processing and language hierarchies provided foundational guidance during Wirth's formative years in the field.[11]
Core Thesis
The core thesis of Niklaus Wirth's Algorithms + Data Structures = Programs is captured in its titular equation, which serves as a foundational mantra asserting that robust programs arise from the deliberate combination of algorithms and data structures rather than ad hoc coding alone. This perspective frames programs as engineered solutions where algorithmic logic provides the computational steps for problem resolution, and data structures ensure efficient representation and access to information, enabling scalability and performance in real-world applications. Wirth's argument underscores that neglecting either component results in brittle or inefficient software, positioning the book as a call to elevate programming to a disciplined science focused on synergy over isolated implementation.[6] At its heart, the thesis delineates algorithms as the "what"—the procedural sequences defining how to transform inputs into outputs—and data structures as the "how"—the organizational frameworks that facilitate manipulation, retrieval, and storage with minimal overhead. Without this integration, programs cannot handle growing complexity or resource constraints effectively, as pure procedural code without optimized data may lead to exponential time costs, while unstructured data undermines algorithmic efficacy. Wirth illustrates this interdependence through examples emphasizing that scalability emerges only when both elements are refined in tandem, a principle drawn from structured programming paradigms that prioritize verifiable and modular design.[6] Philosophically, the thesis rejects a "code-only" mindset prevalent in early computing, advocating instead for layered abstractions—procedural for control flow and data-oriented for encapsulation—to tame software complexity. 
This approach aligns with Wirth's broader advocacy for systematic program development, where abstraction levels allow programmers to manage intricate systems without descending into low-level details prematurely. The preface emphasizes that programs are concrete formulations of abstract algorithms based on particular data representations and structures.[6]
Key Concepts
Role of Algorithms
In Niklaus Wirth's framework, algorithms form the procedural core of programming, defined as finite sequences of unambiguous instructions designed to solve specific problems by transforming input into output, independent of underlying hardware details. These sequences must satisfy three essential criteria: correctness, ensuring the output meets the problem specifications for all valid inputs; efficiency, balancing computational resources with performance needs; and verifiability, allowing formal or informal proofs of behavior. Wirth stresses that algorithms abstract the logical steps of computation, making them portable across systems while emphasizing rigorous design to avoid errors common in ad-hoc coding.[12] A central principle in algorithm design is stepwise refinement, a top-down approach where the overall task is decomposed into successively more detailed subtasks through explicit decisions, refining both the procedure and its data handling in parallel. This method promotes clarity by starting with a high-level outline and iteratively adding specifics, such as control structures or conditional branches, until the algorithm is fully specified in a programming language. Wirth introduced this technique to counter the complexity of large programs, enabling developers to manage design choices systematically and revise them without overhauling the entire structure. Data structures serve as enablers for algorithm efficiency by optimizing data access patterns during these refinements.[13] To ensure correctness, Wirth advocates the use of invariants—logical assertions that hold true at key points, such as the start, end, or each iteration of a loop—facilitating proofs that the algorithm preserves desired properties throughout execution. 
Invariants act as checkpoints for verification, transforming informal reasoning into structured arguments; for instance, they confirm that partial results remain valid as the algorithm progresses, reducing the risk of subtle bugs in repetitive operations. This practice aligns with Wirth's broader goal of making algorithms not just functional but provably reliable, drawing from structured programming principles.[14] Efficiency in algorithms is evaluated qualitatively through time and space measures, with a focus on worst-case scenarios to guarantee performance bounds under challenging inputs. Time complexity assesses the growth in computational steps relative to input size, such as the number of comparisons or assignments, while space complexity examines auxiliary storage needs beyond the input itself. Wirth prioritizes these metrics to guide practical choices, noting that inefficient algorithms may succeed on small inputs but fail at scale, and he analyzes them by counting operations rather than using the asymptotic notation prevalent in later works.[12]
The development of sorting algorithms, such as straight insertion sort, exemplifies these principles through stepwise refinement. At the highest level, the task is to rearrange an array a[1..n] into non-decreasing order by successively inserting each element into a growing sorted prefix.
Refinement Step 1: Outline the incremental build. Treat the first element as a sorted prefix of length one. For each subsequent element, insert it into the correct position in the prefix by shifting larger elements rightward.
Refinement Step 2: Specify the loop and search. Use a forward loop over the array indices starting from the second element. For each new element x = a[i], search backward from position i - 1 to find the insertion point, shifting elements as needed.
Refinement Step 3: Detail the inner loop with its invariant. The invariant states: after the iteration for index i (for i running from 2 to n), the prefix a[1..i] is sorted and contains the same elements as before, with x = a[i] temporarily held outside the array during the shift. The inner while loop maintains a sub-invariant that the values already shifted rightward are all greater than x, terminating when the correct spot is found or the beginning of the array is reached. The resulting pseudocode in Wirth's notation (adapted to modern readability) is:
procedure straightinsertion(a, n);
var i, j: integer; x: real;
begin
  for i := 2 to n do begin
    x := a[i];
    j := i - 1;
    while (j >= 1) and (a[j] > x) do begin
      a[j + 1] := a[j];
      j := j - 1
    end;
    a[j + 1] := x
  end
end
Role of Data Structures
In the context of Niklaus Wirth's framework, data structures serve as mechanisms to store and organize data in ways that facilitate efficient access, modification, and manipulation, extending beyond primitive arrays to encompass abstract data types that hide implementation details from users. These structures are essential for supporting algorithmic operations by providing the foundational organization of information, allowing programs to handle complexity through modular design. The book delineates key categories of data structures, including linear types such as arrays (contiguous, fixed-size sequences), lists (dynamic sequences via pointers), stacks (last-in, first-out access), and queues (first-in, first-out access), alongside non-linear types like trees (hierarchical relations) and graphs (arbitrary connections). A central emphasis is placed on the trade-off between linked storage, which uses pointers for flexible, non-contiguous allocation, and contiguous storage, which relies on sequential memory blocks for simplicity but limits dynamism.
Core principles guiding data structure selection include encapsulation, which promotes modularity by bundling data representation with operations to hide internal details and enhance reusability. Structures are chosen based on the dominant operations; for instance, stacks enable the simulation of recursion by managing activation records in a disciplined manner, avoiding the need for explicit recursive calls in some contexts. An illustrative example is the linked list, a dynamic linear structure defined recursively in Pascal-like pseudocode, where each element points to the next:
type
  List = ^Node;
  Node = record
    data: integer; (* or an appropriate element type *)
    next: List;
  end;
procedure traverse(L: List);
var p: List;
begin
  p := L;
  while p <> nil do begin
    (* process p^.data *)
    p := p^.next
  end
end
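Complementing traversal, insertion into such a list is a constant-time pointer splice. The following fragment is an illustrative sketch added here (not from the book's text), assuming p points at the node after which the new value v should appear:

```pascal
procedure insertafter(p: List; v: integer);
var q: List;
begin
  New(q);              (* allocate the new node *)
  q^.data := v;
  q^.next := p^.next;  (* new node inherits p's successor *)
  p^.next := q         (* splice the new node in after p *)
end
```

Because only two pointer assignments are involved, the operation's cost is independent of the list's length, which is the O(1) behavior the text attributes to linked insertion at a known position.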
Integration in Programming
In Wirth's methodology, programs are synthesized by combining algorithms and data structures into cohesive modules, where data structures primarily define the interfaces and representational invariants, while algorithms implement the operational behaviors on those structures. This integration is facilitated through the concept of abstract data types (ADTs), which encapsulate both the data representation and the associated operations, promoting reusability and clarity in program design. Wirth emphasizes that effective programming requires viewing ADTs as black boxes, with external interfaces specifying only the necessary inputs, outputs, and behaviors, thereby decoupling the internal implementation details from the rest of the program.
A key technique in this integration is modular decomposition coupled with information hiding, which involves breaking down programs into independent modules that conceal internal data structures and algorithms from other parts. This approach, inspired by principles of structured programming, allows developers to refine modules incrementally without affecting the overall system. A representative example is the symbol table ADT, commonly used in compilers, which combines a hash table as the underlying data structure for efficient storage and retrieval with search and insertion algorithms for managing symbols. In this ADT, operations such as insert, delete, and search are provided via a well-defined interface, hiding the hash table's collision resolution and the linear search fallback for unresolved hashes; for instance, each declaration adds a symbol to the table, and scope exits trigger deletions, ensuring efficient O(1) average-case lookups while abstracting the details.[14]
Verification of these integrated modules focuses on testing at boundaries using preconditions and postconditions, which specify the assumptions and guarantees for each operation without delving into formal proof systems.
Preconditions define the required state for an algorithm to execute correctly on a given data structure (e.g., a non-null pointer for list traversal), while postconditions assert the expected outcomes (e.g., the list remains sorted after insertion). This boundary testing ensures modular reliability by validating interactions independently, allowing systematic debugging and maintenance in larger programs.
A specific illustration of this integration is the polynomial arithmetic program, where linked lists serve as the dynamic data structure to represent sparse polynomials efficiently—each node holds a coefficient and exponent, linked in descending order of degree—paired with algorithms for operations like addition. This combination avoids the inefficiencies of dense array representations for sparse cases, as linked lists permit variable-length storage and easy term merging. The following pseudocode, in a Pascal-like style, demonstrates the addition procedure, which traverses both input lists, adds corresponding coefficients for matching exponents, and constructs a result list while skipping zero coefficients:
type
  Term = record
    Coef, Exp: integer;
    Next: ^Term
  end;
  Polynomial = ^Term;

procedure AddPoly(P, Q: Polynomial; var R: Polynomial);
var
  TermP, TermQ, NewTerm: ^Term;
  SumCoef: integer;
begin
  New(R); { header node; new terms are prepended after it }
  R^.Next := nil;
  TermP := P;
  TermQ := Q;
  while (TermP <> nil) and (TermQ <> nil) do
    begin
      if TermP^.Exp > TermQ^.Exp then
        begin
          New(NewTerm);
          NewTerm^.Coef := TermP^.Coef;
          NewTerm^.Exp := TermP^.Exp;
          NewTerm^.Next := R^.Next;
          R^.Next := NewTerm;
          TermP := TermP^.Next
        end
      else if TermP^.Exp < TermQ^.Exp then
        begin
          New(NewTerm);
          NewTerm^.Coef := TermQ^.Coef;
          NewTerm^.Exp := TermQ^.Exp;
          NewTerm^.Next := R^.Next;
          R^.Next := NewTerm;
          TermQ := TermQ^.Next
        end
      else
        begin
          SumCoef := TermP^.Coef + TermQ^.Coef;
          if SumCoef <> 0 then
            begin
              New(NewTerm);
              NewTerm^.Coef := SumCoef;
              NewTerm^.Exp := TermP^.Exp;
              NewTerm^.Next := R^.Next;
              R^.Next := NewTerm
            end;
          TermP := TermP^.Next;
          TermQ := TermQ^.Next
        end
    end;
  { Handle remaining terms from P or Q }
  while TermP <> nil do
    begin
      New(NewTerm);
      NewTerm^.Coef := TermP^.Coef;
      NewTerm^.Exp := TermP^.Exp;
      NewTerm^.Next := R^.Next;
      R^.Next := NewTerm;
      TermP := TermP^.Next
    end;
  while TermQ <> nil do
    begin
      New(NewTerm);
      NewTerm^.Coef := TermQ^.Coef;
      NewTerm^.Exp := TermQ^.Exp;
      NewTerm^.Next := R^.Next;
      R^.Next := NewTerm;
      TermQ := TermQ^.Next
    end;
  { Reverse the result list to maintain descending order }
  { (Implementation of reversal omitted for brevity) }
end;
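The reversal step that the closing comment leaves out can be realized with the usual in-place pointer reversal. The following sketch is an illustration added here (not Wirth's text), operating on the header node R introduced in AddPoly:

```pascal
procedure ReverseTerms(R: Polynomial);
var prev, cur, nxt: ^Term;
begin
  prev := nil;
  cur := R^.Next;
  while cur <> nil do
    begin
      nxt := cur^.Next;   { remember the rest of the chain }
      cur^.Next := prev;  { flip this node's link backward }
      prev := cur;
      cur := nxt
    end;
  R^.Next := prev  { header now points at the reversed chain }
end;
```

Each node is visited once and re-linked with a constant number of assignments, so the pass adds only O(n) time and no extra storage beyond three pointers.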
Book Structure
Early Chapters on Fundamentals
The early chapters of Algorithms + Data Structures = Programs establish a foundational approach to programming by framing it as a disciplined process of problem-solving, emphasizing clarity and modularity from the outset.[15] Chapter 1 introduces programming not merely as writing code but as systematically breaking down problems into manageable parts, using Pascal as the illustrative language due to its structured syntax that enforces good practices.[16] This chapter covers essential elements such as variables for storing data, control structures including conditional statements like if-then-else for decision-making, and loops such as while-do and for-do for repetition, all presented through simple programs to demonstrate how these constructs enable algorithmic expression.[15]
A key pedagogical tool in these initial sections is the use of simple arithmetic programs to illustrate basic syntax and execution flow. For instance, Wirth presents a program to compute the greatest common divisor (GCD) of two integers, employing the Euclidean algorithm within a loop to repeatedly apply division until the remainder is zero, highlighting how variables track intermediate results and control structures manage the iteration.[16] These examples underscore the importance of precise variable declaration in Pascal, where types like integer or real must be specified to ensure type safety and prevent runtime errors.
Building on this, Chapter 1 delves into stepwise refinement, a method for developing programs incrementally by starting with a high-level outline and successively adding detail while verifying correctness at each step. Chapter 1 spans pages 1-55 in the 1976 edition, laying this groundwork while introducing notation for time and space complexity that foreshadows later discussions.[15] The number-guessing game serves as the central example for stepwise refinement, providing a full walkthrough of transforming an abstract problem into a working program.
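Before the step-by-step refinement, it helps to see the end product. The following compact Pascal-style sketch of the finished game is an illustration added here (the random function and the exact messages are assumptions, not Wirth's text):

```pascal
program guess;
var secret, g, attempts: integer;
begin
  secret := random(100) + 1;  { secret number between 1 and 100 }
  attempts := 0;
  repeat
    write('Your guess: '); read(g);
    attempts := attempts + 1;
    if g > secret then writeln('too high')
    else if g < secret then writeln('too low')
  until g = secret;
  writeln('correct after ', attempts, ' attempts')
end.
```

Each construct in this sketch corresponds to a refinement decision: the repeat loop realizes "guess until correct", and the if-then-else chain realizes the feedback sub-task.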
The process begins with the top-level task: generate a random number between 1 and 100, prompt the user for guesses, and provide feedback until correct. In the first refinement, this is broken into sub-tasks such as initializing the secret number, reading user input, and comparing it via an if-then-else structure to output hints like "too high" or "too low." Subsequent steps refine input validation (ensuring guesses are within range using a while loop), count attempts with a counter variable, and terminate with a success message. This iterative approach demonstrates how refinement maintains program structure, avoiding unstructured jumps.
Throughout these chapters, Wirth emphasizes program correctness through assertions—formal statements inserted at key points to specify expected conditions, such as "the secret number is between 1 and 100" before the guessing loop begins. These assertions aid in debugging and proving program behavior, aligning with structured programming principles. He explicitly advocates avoiding goto statements, arguing that they lead to spaghetti code and obscure control flow; instead, Pascal's block-structured design with begin-end and conditional/loop constructs ensures readability and maintainability.[16]
Middle Chapters on Techniques
The middle chapters of Algorithms + Data Structures = Programs shift from foundational concepts to intermediate techniques, illustrating how algorithms and data structures interlock in practical Pascal implementations to achieve efficiency and modularity. These sections emphasize design trade-offs, such as choosing between recursive and iterative approaches, and demonstrate dynamic memory management via pointers, all while prioritizing structures that minimize time and space complexity for common operations. Wirth uses concrete examples to show how poor structure selection can degrade performance, reinforcing the book's central equation through code that is both executable and analyzable.[17]
Sorting and searching form a cornerstone of these techniques, with Chapter 2 detailing multiple array-based sorting methods, including a quicksort variant called partition sort. In partition sort, an array is recursively divided around a pivot element, swapping items to position smaller values to the left and larger to the right, yielding an average O(n log n) performance through reduced comparisons in subsequent partitions. This contrasts with simpler methods like insertion sort, which achieves O(n²) in the worst case but requires minimal auxiliary space. Searching builds on sorted outputs, employing binary search on arrays to repeatedly halve the interval until the target is found or absent, delivering O(log n) steps and highlighting the necessity of prior sorting for logarithmic efficiency.[17]
Chapter 3 explores recursion as a technique for algorithms with inherent hierarchical decomposition, weighing it against iteration for practicality. Recursion simplifies code for problems like the eight queens puzzle, where a backtracking function calls itself to place queens row by row, pruning invalid configurations via depth-first exploration.
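The backtracking scheme just described can be outlined in Pascal-like pseudocode. This is a condensed sketch along the lines of Wirth's formulation; the procedure and helper names (safe, placequeen, removequeen, printsolution) are illustrative, not the book's identifiers:

```pascal
procedure try(row: integer);
var col: integer;
begin
  for col := 1 to 8 do
    if safe(row, col) then  { no earlier queen attacks (row, col) }
      begin
        placequeen(row, col);
        if row = 8 then printsolution
        else try(row + 1);     { recurse into the next row }
        removequeen(row, col)  { backtrack and try the next column }
      end
end
```

The recursion depth is bounded by the board size, and invalid branches are abandoned as soon as safe fails, which is the pruning behavior the text attributes to depth-first exploration.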
However, Wirth cautions against overuse, noting recursion's stack overhead can lead to exhaustion in deep calls, advocating iteration for tail-recursive cases—such as linear scans—where loops avoid function call costs while preserving clarity. Examples include converting recursive tree traversals to iterative versions with explicit stacks, balancing readability with runtime efficiency in resource-constrained environments.[17]
Dynamic allocation emerges prominently in Chapter 4, leveraging Pascal's pointer types to construct flexible structures at runtime, bypassing fixed-size arrays for scalable implementations. Pointers enable linking nodes in linear lists, supporting insertions and deletions in O(1) time at known positions, with applications like topological sorting on directed acyclic graphs by maintaining predecessor lists. Binary search trees (BSTs) exemplify pointer-based trees, where each node holds a key and references to left (smaller keys) and right (larger keys) subtrees, facilitating ordered storage. Search and insertion traverse from the root, averaging O(log n) comparisons in balanced trees, though Wirth analyzes worst-case O(n) degeneration from sequential insertions and introduces rotations for AVL-like balancing to restore logarithmic bounds without external storage. Deletion handles cases like leaf nodes or nodes with children by successor replacement, integrating pointers to re-link subtrees seamlessly.[17]
Throughout these chapters, Wirth stresses qualitative efficiency gains from structure-aligned algorithms, such as logarithmic operations in trees versus linear scans in arrays, urging programmers to prototype in Pascal to validate trade-offs empirically.[17]
Later Chapters on Applications
The later chapters of the book shift focus from foundational techniques to their practical application in constructing complex software systems, particularly compilers and interpreters, demonstrating how algorithms and data structures integrate to form robust programs. Chapter 5, titled "Language Structures and Compilers," serves as the primary vehicle for these applications, building on earlier discussions of data types, sorting, recursion, and dynamic structures to address the challenges of processing programming languages. Wirth emphasizes that compiler design exemplifies the book's central thesis, where data structures such as graphs and trees encode language syntax, while algorithms perform analysis, transformation, and generation tasks. This chapter illustrates scalability by applying modular designs to real-world problems, underscoring the need for efficiency in handling large-scale inputs like source code files. A pivotal integration of these techniques appears in the chapter's expression parsing, where stacks orchestrate operator precedence and associativity for infix to postfix conversion, embodying the LIFO principle to resolve parsing ambiguities efficiently. The algorithm initializes two stacks: one for operators (with precedence levels) and one for output (postfix operands). As tokens are scanned left-to-right, operands are directly output or pushed; operators are compared by precedence—if the incoming operator has lower or equal priority to the top of the operator stack, the top is popped, applied to the prior two operands (fetched from output or computed), and reduced to a single result pushed back. Parentheses enforce grouping by clearing the operator stack upon matching close, ensuring hierarchical evaluation. 
This yields O(n) time, linear in expression length, with stack sizes bounded by the nesting depth of the expression.[17] Error handling enhances robustness: during scanning, unmatched parentheses trigger stack underflow detection, reporting "missing )" or "excess )" depending on whether the stack is empty after popping; invalid operators (e.g., undefined symbols) halt with syntax violation messages, while precedence mismatches (such as adjacent high-priority operators without operands) are flagged when too few stack elements remain before a reduction. In Pascal, this can be realized through a modular procedure along these lines:
procedure Expression;
var
  opstack: array[1..maxdepth] of symbol;
  outstack: array[1..maxlen] of item;
  optop, outtop: integer;
  opsym, dummy: symbol;
begin
  optop := 0; outtop := 0;
  while not eos do
    begin
      getsym;
      case sym of
        number, ident: push(outstack, outtop, sym);
        '+', '-', '*', '/', '^':
          begin
            while (optop > 0) and (precedence[sym] <= precedence[opstack[optop]]) do
              begin
                pop(opstack, optop, opsym);
                if outtop < 2 then error('operand missing')
                else reduce(outstack, outtop, opsym)
              end;
            push(opstack, optop, sym)
          end;
        '(', '[': push(opstack, optop, sym);
        ')', ']':
          begin
            while (optop > 0) and (opstack[optop] <> matching_open) do
              begin
                pop(opstack, optop, opsym);
                reduce(outstack, outtop, opsym)
              end;
            if optop = 0 then error('unmatched close')
            else pop(opstack, optop, dummy)
          end
      end
    end;
  while optop > 0 do
    begin
      pop(opstack, optop, opsym);
      if outtop < 2 then error('incomplete expression')
      else reduce(outstack, outtop, opsym)
    end;
  if outtop <> 1 then error('invalid expression')
end;
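As a worked illustration of the precedence rule (an example added here, not from the book), consider scanning 2 + 3 * 4. The '+' is pushed on an empty operator stack; '*' has higher precedence than '+', so it is pushed without reducing; at end of input the cleanup loop pops '*' first, reducing 3 * 4 to 12, then pops '+', reducing 2 + 12 to 14, leaving exactly one result as required:

```
token   opstack   outstack
2       (empty)   2
+       +         2
3       +         2 3
*       + *       2 3
4       + *       2 3 4
(end)   +         2 12      pop '*', reduce 3 * 4
(end)   (empty)   14        pop '+', reduce 2 + 12
```

Had the input been 2 * 3 + 4 instead, the incoming '+' would have lower precedence than the stacked '*', forcing an immediate reduction of 2 * 3 before '+' is pushed, which is how left-to-right evaluation under precedence is preserved.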

