Planner (programming language)
from Wikipedia
Planner
Paradigm: multi-paradigm (logic, procedural)
Designed by: Carl Hewitt
First appeared: 1969
Major implementations: Micro-Planner, Pico-Planner, Popler
Dialects: QA4, Conniver, QLISP, Ether
Influenced: Prolog, Smalltalk

Planner (often seen in publications as "PLANNER" although it is not an acronym) is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language.[1] Derivations such as QA4, Conniver, QLISP and Ether (see scientific community metaphor) were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).

Procedural approach versus logical approach


The two major paradigms for constructing semantic software systems were procedural and logical. The procedural paradigm was epitomized by Lisp[2] which featured recursive procedures that operated on list structures.

The logical paradigm was epitomized by uniform proof procedure resolution-based derivation (proof) finders.[3] According to the logical paradigm it was “cheating” to incorporate procedural knowledge.[4]

Procedural embedding of knowledge


Planner was invented for the purposes of the procedural embedding of knowledge[5] and was a rejection of the resolution uniform proof procedure paradigm,[6] which

  1. Converted everything to clausal form. Converting all information to clausal form is problematic because it hides the underlying structure of the information.
  2. Then used resolution to attempt to obtain a proof by contradiction by adding the clausal form of the negation of the theorem to be proved. Using only resolution as the rule of inference is problematic because it hides the underlying structure of proofs. Also, using proof by contradiction is problematic because the axiomatizations of all practical domains of knowledge are inconsistent in practice.
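
The two steps above can be made concrete with a minimal propositional resolution refutation in Python. This is an illustrative sketch of the uniform proof procedure being criticized, not Planner: knowledge is flattened into clauses (sets of literals, with a "~" prefix for negation, an encoding chosen here purely for illustration), and Q is proved from {P, P implies Q} by adding the negation of Q and deriving the empty clause.

```python
# Illustrative sketch of resolution refutation (the paradigm Planner rejected).
# Literals are strings; "~p" is the negation of "p"; clauses are frozensets.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refute(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # no new resolvents: not refutable
            return False
        clauses |= new

# Knowledge: p, and (p implies q) in clausal form {~p, q}.
kb = [frozenset({"p"}), frozenset({"~p", "q"})]
# To prove q, add its negation and search for a contradiction.
print(refute(kb + [frozenset({"~q"})]))   # True: q follows
print(refute(kb + [frozenset({"q"})]))    # False: ~q does not follow
```

As the surrounding text notes, nothing in the clause sets or the derivation reflects the structure of the original information or of the proof.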

Planner was a kind of hybrid between the procedural and logical paradigms because it combined programmability with logical reasoning. Planner featured a procedural interpretation of logical sentences where an implication of the form (P implies Q) can be procedurally interpreted in the following ways using pattern-directed invocation:

  1. Forward chaining (antecedently):
    If assert P, assert Q
    If assert not Q, assert not P
  2. Backward chaining (consequently)
    If goal Q, goal P
    If goal not P, goal not Q
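
These four readings can be sketched in a few lines of Python; the tokens ("p", "not-q") and the two toy databases are illustrative, not Planner syntax. The antecedent reading fires when a fact is asserted, while the consequent reading reduces a goal to a subgoal.

```python
# Toy sketch of the procedural readings of (P implies Q).

# Forward chaining (antecedent reading): asserting a fact triggers more asserts.
fwd = set()
def assert_fwd(fact):
    fwd.add(fact)
    if fact == "p":
        assert_fwd("q")        # if assert P, assert Q
    if fact == "not-q":
        assert_fwd("not-p")    # if assert not Q, assert not P

assert_fwd("p")
print("q" in fwd)              # True: q was asserted when p was

# Backward chaining (consequent reading): a goal is reduced to a subgoal.
bwd = {"p"}                    # database of known facts
def goal(g):
    if g in bwd:
        return True
    if g == "q":
        return goal("p")       # if goal Q, goal P
    if g == "not-p":
        return goal("not-q")   # if goal not P, goal not Q
    return False

print(goal("q"))               # True: reduced to subgoal p, found in database
```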

In this respect, the development of Planner was influenced by natural deductive logical systems (especially the one by Frederic Fitch [1952]).

Micro-planner implementation


A subset called Micro-Planner was implemented by Gerry Sussman, Eugene Charniak and Terry Winograd[7] and was used in Winograd's natural-language understanding program SHRDLU, Eugene Charniak's story understanding work, Thorne McCarty's work on legal reasoning, and some other projects. This generated a great deal of excitement in the field of AI. It also generated controversy because it proposed an alternative to the logic approach that had been one of the mainstay paradigms for AI.

At SRI International, Jeff Rulifson, Jan Derksen, and Richard Waldinger developed QA4 which built on the constructs in Planner and introduced a context mechanism to provide modularity for expressions in the database. Earl Sacerdoti and Rene Reboh developed QLISP, an extension of QA4 embedded in INTERLISP, providing Planner-like reasoning embedded in a procedural language and developed in its rich programming environment. QLISP was used by Richard Waldinger and Karl Levitt for program verification, by Earl Sacerdoti for planning and execution monitoring, by Jean-Claude Latombe for computer-aided design, by Nachum Dershowitz for program synthesis, by Richard Fikes for deductive retrieval, and by Steven Coles for an early expert system that guided use of an econometric model.

Computers of the era were expensive: they had a single slow processor, and their memories were very small by today's standards. Planner therefore adopted some efficiency expedients, including the following:

  • Backtracking[8] was adopted to economize on the use of time and storage by working on and storing only one possibility at a time in exploring alternatives.
  • A unique name assumption was adopted to save space and time by assuming that different names referred to different objects. For example, the names Peking (an older romanization of the name of the PRC capital) and Beijing (the current romanization) were assumed to refer to different objects.
  • A closed-world assumption could be implemented by conditionally testing whether an attempt to prove a goal exhaustively failed. Later this capability was given the misleading name "negation as failure" because for a goal G it was possible to say: "if attempting to achieve G exhaustively fails then assert (Not G)."
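
The closed-world expedient can be sketched as follows. The bird/penguin rule base is a hypothetical stand-in for a Planner database, and prove treats a (not g) subgoal as succeeding exactly when every attempt to prove g fails.

```python
# Sketch of "negation as failure" over a closed database. Goals are ground
# tuples; a ("not", g) subgoal succeeds iff proving g exhaustively fails.

facts = {("bird", "tweety")}
# flies(tweety) if bird(tweety) and not penguin(tweety) -- illustrative rule.
rules = {("flies", "tweety"): [[("bird", "tweety"),
                                ("not", ("penguin", "tweety"))]]}

def prove(g):
    if g[0] == "not":                        # negation as failure:
        return not prove(g[1])               # succeed iff the subgoal fails
    if g in facts:
        return True
    for subgoals in rules.get(g, []):        # try each alternative body
        if all(prove(s) for s in subgoals):
            return True
    return False                             # exhausted every way to prove g

print(prove(("flies", "tweety")))   # True: (penguin tweety) exhaustively fails
facts.add(("penguin", "tweety"))
print(prove(("flies", "tweety")))   # False once penguin(tweety) is asserted
```

As the text says, this is only an implementation of "if attempting G exhaustively fails, assert (not G)" over a finite database, not logical negation.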

The genesis of Prolog


Gerry Sussman, Eugene Charniak, Seymour Papert and Terry Winograd visited the University of Edinburgh in 1971, spreading the news about Micro-Planner and SHRDLU and casting doubt on the resolution uniform proof procedure approach that had been the mainstay of the Edinburgh Logicists. At the University of Edinburgh, Bruce Anderson implemented a subset of Micro-Planner called PICO-PLANNER,[9] and Julian Davies (1973) implemented essentially all of Planner.

According to Donald MacKenzie, Pat Hayes recalled the impact of a visit from Papert to Edinburgh, which had become the "heart of artificial intelligence's Logicland," according to Papert's MIT colleague, Carl Hewitt. Papert eloquently voiced his critique of the resolution approach dominant at Edinburgh "…and at least one person upped sticks and left because of Papert."[10]

The above developments generated tension among the Logicists at Edinburgh. These tensions were exacerbated when the UK Science Research Council commissioned Sir James Lighthill to write a report on the AI research situation in the UK. The resulting report [Lighthill 1973; McCarthy 1973] was highly critical although SHRDLU was favorably mentioned.

Pat Hayes visited Stanford where he learned about Planner. When he returned to Edinburgh, he tried to influence his friend Bob Kowalski to take Planner into account in their joint work on automated theorem proving. "Resolution theorem-proving was demoted from a hot topic to a relic of the misguided past. Bob Kowalski doggedly stuck to his faith in the potential of resolution theorem proving. He carefully studied Planner."[11] Kowalski [1988] states "I can recall trying to convince Hewitt that Planner was similar to SL-resolution." But Planner was invented for the purposes of the procedural embedding of knowledge and was a rejection of the resolution uniform proof procedure paradigm. Colmerauer and Roussel recalled their reaction to learning about Planner in the following way:

"While attending an IJCAI convention in September ‘71 with Jean Trudel, we met Robert Kowalski again and heard a lecture by Terry Winograd on natural language processing. The fact that he did not use a unified formalism left us puzzled. It was at this time that we learned of the existence of Carl Hewitt’s programming language, Planner. The lack of formalization of this language, our ignorance of Lisp and, above all, the fact that we were absolutely devoted to logic meant that this work had little influence on our later research."[12]

In the fall of 1972, Philippe Roussel implemented a language called Prolog (an abbreviation for PROgrammation en LOGique – French for "programming in logic"). Prolog programs are generically of the following form (which is a special case of the backward-chaining in Planner):

When goal Q, goal P1 and ... and goal Pn

Prolog duplicated the following aspects of Micro-Planner:

  • Pattern directed invocation of procedures from goals (i.e. backward chaining)
  • An indexed data base of pattern-directed procedures and ground sentences.
  • Giving up on the completeness paradigm that had characterized previous work on theorem proving and replacing it with the programming language procedural embedding of knowledge paradigm.

Prolog also duplicated the following capabilities of Micro-Planner which were pragmatically useful for the computers of the era because they saved space and time:

  • Backtracking control structure
  • Unique Name Assumption by which different names are assumed to refer to distinct entities, e.g., Peking and Beijing are assumed to be different.
  • Reification of Failure. Planner established that something was provable by successfully attempting it as a goal, and established that something was unprovable by attempting it as a goal and explicitly failing. (The other possibility, of course, is that the attempt to prove the goal runs forever and never returns any value.) Planner also had a (not expression) construct, which succeeded if expression failed, and which gave rise to the "negation as failure" terminology in Planner.

Use of the Unique Name Assumption and Negation as Failure became more questionable when attention turned to Open Systems.[13]

The following capabilities of Micro-Planner were omitted from Prolog:

  • Pattern-directed invocation of procedural plans from assertions (i.e., forward chaining)
  • Logical negation, e.g., (not (human Socrates)).

Prolog did not include negation in part because it raises implementation issues. Consider for example if negation were included in the following Prolog program:

not Q.
Q  :- P.

The above program would be unable to prove not P even though it follows by the rules of mathematical logic. This is an illustration of the fact that Prolog (like Planner) is intended to be a programming language and so does not (by itself) prove many of the logical consequences that follow from a declarative reading of its programs.
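
The gap can be checked mechanically. The sketch below (our own encoding, not Prolog) first enumerates truth assignments to confirm that {not Q, P implies Q} classically entails not P, then runs a Prolog-style backward chainer that matches goals only against clause heads, so the query "not p" fails.

```python
# Classical entailment vs. backward chaining for the program {not Q, Q :- P}.
from itertools import product

def models():
    """All (p, q) assignments satisfying (not Q) and (P implies Q)."""
    for p, q in product([False, True], repeat=2):
        if (not q) and ((not p) or q):
            yield p, q

print(all(not p for p, q in models()))   # True: not P holds in every model

# Backward chaining matches goals only against clause heads, never against
# contrapositives. Clause bodies are lists of subgoals; a fact has an empty body.
clauses = {"q": [["p"]], "not q": [[]]}

def solve(goal):
    return any(all(solve(g) for g in body)
               for body in clauses.get(goal, []))

print(solve("not p"))   # False: no clause concludes "not p"
```

This mirrors the point in the text: the declarative reading entails not P, but the procedural reading never derives it.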

The work on Prolog was valuable in that it was much simpler than Planner. However, as the need arose for greater expressive power in the language, Prolog began to include many of the capabilities of Planner that were left out of the original version of Prolog.

from Grokipedia
Planner is a programming language designed by Carl Hewitt at MIT for proving theorems and manipulating models in robotic systems, featuring a hierarchical control structure and problem-solving primitives that enable efficient pattern-directed invocation of procedural plans from assertions and goals. First described in a 1969 paper presented at the International Joint Conference on Artificial Intelligence, Planner combines imperative and declarative elements to support dynamic state changes, goal establishment, and theorem application through a subordinate deductive system optimized for long inference chains.

Subsets of Planner, such as Micro-Planner and Pico-Planner, were implemented in the early 1970s to make the language more accessible for AI applications. Micro-Planner, developed by Gerry Sussman, Eugene Charniak, and Terry Winograd, served as a simplified interpreter that facilitated practical use in projects like Winograd's SHRDLU natural-language understanding system and Charniak's story comprehension research. These implementations embedded procedural knowledge representation within a pattern-matching framework, allowing programs to adapt to new information and handle complex problem-solving tasks in robotics and artificial intelligence.

Planner significantly influenced the development of logic programming languages, particularly Prolog, by introducing concepts like procedural attachment to logical assertions and goal-directed computation. Although early critics sometimes viewed Prolog as a mere reinvention of Planner, the latter's emphasis on flexible pattern matching and control structures paved the way for Prolog's advancements in unification and clausal logic as a general-purpose paradigm. Hewitt's work on Planner also laid foundational ideas for concurrent computation, later expanded in his actor model, underscoring its lasting impact on AI and programming language design.

Overview and Design Principles

Language Overview

Planner is a procedural programming language designed for artificial intelligence applications, with a focus on theorem proving and problem-solving in robotic systems. Developed by Carl Hewitt at the MIT Artificial Intelligence Laboratory, it enables the manipulation of models and the drawing of conclusions as the state of the world changes through assertions and withdrawals of statements. Key characteristics of Planner include pattern-directed invocation of procedures, which allows indirect function calls based on the form of data rather than explicit names, goal-directed execution for establishing and satisfying objectives, and support for non-deterministic computation via backtracking mechanisms. These features facilitate a hierarchical control structure that enhances deductive efficiency in complex problem-solving scenarios.

First described in 1967 in A.I. Memo 137 as part of Hewitt's work at MIT's Project MAC, Planner marked a paradigm shift from traditional sequential programming paradigms, such as those in pure LISP, toward reactive and knowledge-based processing that integrates procedural and deductive elements. The language's first implementations appeared around 1971 in the form of subsets like Micro-Planner, providing practical tools for AI research. Planner's innovative approach to procedural knowledge representation influenced subsequent languages, including Prolog.

Core Design Goals

Planner was designed to address the limitations of traditional procedural programming languages in artificial intelligence, which often imposed rigid control flows that hindered the flexible representation and manipulation of knowledge in complex problem-solving scenarios. By integrating declarative and imperative elements, Planner aimed to enable modular knowledge structures that could be easily manipulated and extended, allowing AI systems to handle diverse tasks such as theorem proving and robotic manipulation more intuitively. This motivation stemmed from the need for a language that could support both data and control aspects in a unified framework, moving beyond the constraints of purely sequential execution.

Carl Hewitt's vision for Planner emphasized decentralized control mechanisms, laying groundwork for actor-based computation where autonomous components could interact dynamically without centralized orchestration. This approach anticipated key ideas in concurrent programming by treating knowledge and processes as distributed entities capable of independent operation, influencing later developments in the actor model. Developed amid the 1960s AI research boom at MIT's Project MAC, Planner sought to create systems resilient to the complexities of real-world inference.

Among its core goals, Planner facilitated theorem proving through procedural attachments, where antecedents and consequents could invoke executable code tied directly to logical statements, rather than relying solely on formal logic systems. It also prioritized the easy extension of knowledge bases by allowing incremental addition of declarative facts and imperative procedures without disrupting existing structures. Furthermore, to manage uncertainty in search processes, the language incorporated backtracking as a fundamental mechanism, enabling systematic exploration of alternative paths upon failure.
In contrast to earlier AI methods that separated data from procedures, Planner embedded procedural knowledge directly into data structures via pattern-directed mechanisms, fostering a more natural and intuitive paradigm for AI programming that blurred the lines between representation and computation. This design choice aimed to make knowledge bases not just passive stores but active, executable components of the system.

Historical Development

Origins at MIT AI Lab

Planner was developed by Carl Hewitt in the late 1960s as a PhD project at the MIT Artificial Intelligence Laboratory, where he sought to develop a framework for automated reasoning and problem-solving in robotic systems. The initial concepts were outlined in Hewitt's AI Memo 137 from July 1967. This work built upon Hewitt's doctoral research, culminating in his PhD awarded in 1971. Hewitt's seminal presentation of the language at the First International Joint Conference on Artificial Intelligence (IJCAI) in 1969 marked its formal introduction to the AI community.

The development occurred amid the vibrant AI research environment of MIT's Project MAC in the late 1960s, a pioneering initiative launched in 1963 to advance machine-aided cognition and multi-user computing. This institutional context fostered innovations like MacLISP, a dialect of Lisp tailored for AI applications and emerging around the same period, which provided the foundational implementation substrate for Planner as an embedded extension. Concurrent efforts, such as Terry Winograd's early work on natural language understanding that later evolved into the SHRDLU system, contributed to the lab's emphasis on integrating procedural knowledge representation—ideas that indirectly shaped the motivational backdrop for Planner's design.

Hewitt led the effort as the primary architect, drawing on his background in computer science under mentors like Marvin Minsky and Seymour Papert. Early collaborations involved key MIT researchers, including Terry Winograd, who co-implemented subsets of the language and tested its primitives in practical AI scenarios. Initial prototypes of Planner consisted of hand-coded experiments focused on theorem proving and planning tasks, demonstrating the language's primitives for goal-directed search and pattern matching in simple robotic model manipulations around 1970.
These early tests highlighted the need for procedural embedding of knowledge to handle non-monotonic reasoning, inspiring further refinements in subsequent work.

Evolution and Key Milestones

The development of Planner began with Carl Hewitt's seminal paper introducing the language as a tool for theorem proving and model manipulation in robotic systems, published in the proceedings of the first International Joint Conference on Artificial Intelligence in 1969. This initial formulation established core primitives for goal-directed computation and pattern-directed invocation, enabling early applications in AI problem-solving at the MIT AI Lab. Subsequent refinements emerged from practical use in theorem proving and planning tasks, where users identified needs for more flexible knowledge integration to handle dynamic environments beyond basic goal satisfaction.

By 1971, Hewitt addressed these limitations through advancements in procedural embedding of knowledge, allowing for more sophisticated representation and manipulation of complex structures within Planner programs. This evolution incorporated feedback from AI demonstrations, such as robotic path planning and model updating, shifting the language from simple goal achievement to robust support for hierarchical control and extensible procedures in real-world simulations. The changes enhanced Planner's applicability in MIT's experimental setups, where it facilitated interactive theorem application and backtracking in robotic contexts.

Hewitt's 1972 technical report (AI Memo 258) provided a comprehensive description of Planner's syntax and semantics, formalizing its theoretical foundations using schemata for analysis. This dissemination solidified the language's framework, promoting its adoption in MIT demos for robotic planning tasks like object manipulation and environmental modeling. However, challenges with efficiency in managing large knowledge bases prompted explorations into streamlined variants, optimizing performance for broader AI applications without altering core concepts.

Fundamental Concepts

Procedural Versus Logical Approaches

In the late 1960s and early 1970s, artificial intelligence research featured prominent logical approaches to knowledge representation and problem-solving, exemplified by resolution-based theorem proving systems. These systems, such as QA3, emphasized declarative rules expressed in first-order logic, where knowledge was represented as static facts and inference rules, and solutions were derived through uniform deduction processes like resolution to achieve proofs by contradiction or model generation. This paradigm prioritized mathematical rigor and compositionality, treating all knowledge uniformly without procedural distinctions, but it often struggled with scalability for complex, real-world problems due to exhaustive search spaces.

In contrast, Planner adopted a procedural approach, embedding knowledge directly into executable procedures attached to pattern-matching structures, which allowed for dynamic control flow and side effects during computation. Rather than relying solely on declarative inference, Planner's design subordinated its deductive mechanisms to a hierarchical control structure, enabling procedures to manipulate models, generate subgoals, and integrate new information on the fly. This procedural embedding treated intellectual structures—like descriptions or proofs—as active processes, such as recognition or deduction procedures, fostering a more imperative style suited to theorem proving in robotic contexts.

The procedural paradigm in Planner offered key advantages over purely logical methods, particularly in handling incomplete or fragmentary information through mechanisms like backtracking, which supported exploratory search without rigid uniformity. It excelled in modeling real-world actions requiring side effects, such as database modifications during planning, and improved efficiency for lengthy computations by avoiding the "blowing up" of search spaces common in resolution systems.
However, drawbacks included reduced compositionality, as procedures could introduce unintended side effects, complicating verification and debugging compared to the declarative purity of logical approaches. This choice reflected the broader 1970s debate in the AI community between logic-based and procedural paradigms, where Planner advocated for a hybrid embedding of knowledge to balance deductive power with practical flexibility, shifting focus from toy problems to intellectually capable systems.

Pattern-Directed Invocation

Pattern-directed invocation is a central mechanism in the Planner programming language, where procedures, referred to as actors, are selected and executed reactively based on pattern matching against incoming goals or assertions. In this system, actors are stored in an indexed database and associated with pattern expressions that serve as triggers; when a goal is processed, the system scans the database to find matching patterns, binds variables through a unification-like matching algorithm, and invokes the corresponding actor to generate subgoals or assertions. This approach embeds procedural knowledge directly into logical structures, allowing the control flow to emerge from data patterns rather than explicit sequencing.

The key components of pattern-directed invocation include pattern expressions composed of literals (specific constants) and variables (denoted by question marks or similar placeholders), which define the conditions for matching. The matching algorithm performs substitution to unify the goal's structure with the pattern, ensuring consistent bindings across variables; for instance, a pattern like (THGOAL ?P) would match any goal of the form (THGOAL <some-proposition>), binding ?P to that proposition. This unification process is analogous to that in logic programming but oriented toward procedural activation rather than purely declarative inference.

A representative example of pattern-directed invocation appears in theorem proving tasks, where a goal such as (THGOAL (MORTAL SOCRATES)) triggers an actor with the pattern (THGOAL (MORTAL ?X)). This actor might then invoke subgoals like (THGOAL (HUMAN ?X)) and apply rules such as "all humans are mortal," propagating bindings to assert or generate further subgoals as needed. Such invocation enables systematic exploration of proof spaces by chaining pattern matches.
The benefits of this mechanism include the modular addition of knowledge, as new actors can be defined and indexed without altering the existing control structure, facilitating incremental system extension.
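
A minimal sketch of such matching in Python follows; the tuple encoding and ?-prefixed variable names are illustrative stand-ins, not Planner's actual syntax. Matching a pattern against a goal either fails or returns a binding environment, which is what would then drive actor invocation.

```python
# Planner-style pattern matching: patterns are nested tuples, "?name" strings
# are variables, and a successful match returns a binding environment.

def match(pattern, datum, env=None):
    env = dict(env or {})                        # copy so failures leave no trace
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in env:                       # already bound: must agree
            return env if env[pattern] == datum else None
        env[pattern] = datum                     # fresh binding
        return env
    if isinstance(pattern, tuple) and isinstance(datum, tuple):
        if len(pattern) != len(datum):
            return None
        for p, d in zip(pattern, datum):
            env = match(p, d, env)
            if env is None:
                return None
        return env
    return env if pattern == datum else None

# The goal (MORTAL SOCRATES) triggers any actor indexed under a matching
# pattern, binding ?X in the process.
print(match(("MORTAL", "?X"), ("MORTAL", "SOCRATES")))  # {'?X': 'SOCRATES'}
print(match(("HUMAN", "?X"), ("MORTAL", "SOCRATES")))   # None
```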

Goal-Directed Computation and Backtracking

In Planner, goals are formalized as expressions that the system attempts to satisfy, typically through the primitive THGOAL (or GOAL in the original formulation), which initiates a search for supporting facts or derivations. A goal such as (THGOAL (PART FINGER PERSON)) queries the database for matching assertions or invokes applicable theorems to establish the relation, often by reducing the primary goal to a set of subgoals. Satisfaction occurs either by primitive actions—direct database matches or assertions—or by recursively generating subgoals, as in theorem consequents that propose further THGOAL invocations, such as (THGOAL (PART $?Y PERSON)) where $?Y binds to "FINGER" to prove the hierarchy. For specialized cases like theorem proving, THGOAL extends to deductive goal forms, ensuring hierarchical decomposition until base cases are reached.

The backtracking mechanism in Planner employs a depth-first search strategy to explore the goal tree, automatically retrying alternatives upon failure to recover from dead ends. When a subgoal fails—due to no matching database entry or unsuccessful derivation—the system retracts to the most recent choice point, unbinds variables, and restores the state as if the failed path never occurred, using a trail to record and undo variable bindings and side effects like assertions. This trail maintains a stack of bindings and modifications (e.g., via explicit markers like THSETQ or event times for actions), enabling efficient restoration without manual intervention, though data changes require programmer-specified undo primitives in some cases. Failure propagation continues up the goal hierarchy until success or exhaustion of options, promoting exhaustive yet orderly exploration.

Planner's computation model is inherently non-deterministic, with choice points arising whenever multiple procedures match a goal pattern, creating branches in the search tree that are explored by sequential trials.
These points, set implicitly by THGOAL or explicit disjunctions, allow the system to select and commit to one alternative temporarily, deferring others until backtracking demands retry. This relies on pattern matching to identify candidate procedures from a theorem base, directing invocation toward goal satisfaction. The model supports and-or goal structures, where conjunctions sequence subgoals and disjunctions branch on alternatives, fostering flexible problem-solving without explicit recursion. The basic algorithm for goal reduction and invocation in Planner proceeds as follows:

To satisfy a goal G:

  1. Attempt to match G against the database; if a match exists, succeed and return bindings.
  2. If no match, scan theorems for consequent patterns that match G; select the first (or recommended) matching theorem T.
  3. Bind variables from the match and invoke T's antecedent actions, which may generate subgoals S1, S2, ..., Sn.
  4. Recursively satisfy each Si in sequence (conjunction); on success of all, assert T's consequent and succeed.
  5. On failure of any Si, backtrack: unbind variables via the trail, restore state, and try the next matching theorem or alternative in T.
  6. If no alternatives remain, propagate failure to the parent goal.


This step-by-step reduction emphasizes automatic search control, with recommendations (e.g., FIRST or ONLY) guiding theorem selection to optimize depth-first traversal.
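
The reduction loop above can be sketched as a small Python backward chainer. The facts, the GRANDPARENT theorem, and the flat-tuple encoding are illustrative stand-ins rather than Planner syntax, and here backtracking emerges from trying the next alternative when one fails: because each failed unification works on its own copy of the environment, undoing bindings is implicit, whereas Planner recorded them on a trail and unwound them.

```python
# Depth-first goal reduction with backtracking: step 1 tries the database,
# steps 2-4 try matching theorem consequents and their subgoal conjunctions,
# steps 5-6 fall out of exhausting one alternative before trying the next.

def walk(t, env):                      # dereference a chain of variable bindings
    while isinstance(t, str) and t.startswith("?") and t in env:
        t = env[t]
    return t

def unify(a, b, env):
    if len(a) != len(b):
        return None
    env = dict(env)                    # copy: failed branches leave no trace
    for x, y in zip(a, b):
        x, y = walk(x, env), walk(y, env)
        if x == y:
            continue
        if isinstance(x, str) and x.startswith("?"):
            env[x] = y
        elif isinstance(y, str) and y.startswith("?"):
            env[y] = x
        else:
            return None                # clash: this alternative fails
    return env

facts = {("PARENT", "ADAM", "BETH"), ("PARENT", "BETH", "CARL")}
theorems = [(("GRANDPARENT", "?x", "?z"),
             [("PARENT", "?x", "?y"), ("PARENT", "?y", "?z")])]

fresh = [0]
def rename(term, tag):                 # rename theorem variables per invocation
    return tuple(t + tag if isinstance(t, str) and t.startswith("?") else t
                 for t in term)

def solve(goal, env):
    for fact in facts:                 # step 1: direct database match
        e = unify(goal, fact, env)
        if e is not None:
            yield e
    for head, body in theorems:        # step 2: matching consequent theorems
        fresh[0] += 1
        tag = "#" + str(fresh[0])
        e = unify(rename(head, tag), goal, env)
        if e is not None:              # steps 3-6: conjunction with backtracking
            yield from conj([rename(g, tag) for g in body], e)

def conj(goals, env):
    if not goals:
        yield env
        return
    for e in solve(goals[0], env):     # each alternative is a choice point
        yield from conj(goals[1:], e)

env = next(solve(("GRANDPARENT", "ADAM", "?who"), {}))
print(walk("?who", env))               # CARL
```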

Implementations and Variants

Micro-Planner Implementation

Micro-Planner represents a practical subset of the original Planner language, implemented in MacLISP by Gerald Jay Sussman, Terry Winograd, and Eugene Charniak at the MIT Artificial Intelligence Laboratory between 1970 and 1971. This implementation provided an accessible system for exploring Planner's goal-directed paradigm within the Lisp environment, serving as the primary reference for early applications in theorem proving and problem solving. Unlike the more ambitious full Planner design, Micro-Planner focused on core primitives to ensure efficiency and ease of use on contemporary hardware.

The syntax of Micro-Planner emphasizes pattern-directed procedures and goal invocation, building directly on Lisp's list-processing capabilities. Theorem procedures, which define how to achieve specific goals, are specified using the form (DEFTHEO <pattern> <body>), where <pattern> is a template that matches incoming goals via unification, and <body> contains the sequence of actions or subgoals to execute upon a match. For instance, goals are pursued through the primitive (THGOAL <expr>), which attempts to satisfy the expression <expr> by searching a database of assertions or invoking matching theorem procedures; success binds variables in the pattern and executes the body, while failure triggers backtracking. Other forms include (THASSERT <expr>) for adding facts to the database and (THPROG <vars> <body>) for defining procedural blocks with local variable bindings.

Key features of Micro-Planner include a simplified unification mechanism that uses Lisp-like patterns with variable indicators such as $?var for matching lists and $var for atoms, enabling flexible pattern matching without full first-order logic support. Built-in backtracking is managed implicitly by the goal solver, which retries alternative theorem applications or database entries upon failure, promoting exploratory search in planning tasks.
The system integrates tightly with MacLISP, allowing seamless calls to Lisp functions for arithmetic, I/O, or other primitives, which extends Planner's expressiveness while embedding it within a robust host language. These elements made Micro-Planner suitable for rapid prototyping of AI applications, though it omitted advanced features like coroutining from the full Planner specification. A representative usage example appears in blocks world planning, where the system models stacking and moving blocks through goals and theorems. Consider defining a theorem to achieve an "on" relation:

(DEFTHEO (ON ?X ?Y)
  (THGOAL (CLEAR ?X))
  (THGOAL (CLEAR ?Y))
  (THGOAL (IN-HAND ?X))
  (DROP ?X ON ?Y)
  (THASSERT (ON ?X ?Y))
  (THASSERT (CLEAR ?X)))

(DEFTHEO (ON ?X ?Y) (THGOAL (CLEAR ?X)) (THGOAL (CLEAR ?Y)) (THGOAL (IN-HAND ?X)) (DROP ?X ON ?Y) (THASSERT (ON ?X ?Y)) (THASSERT (CLEAR ?X)))

To solve a goal like (THGOAL (ON A B)), Micro-Planner would unify the pattern, pursue subgoals such as picking up block A (via other theorems like (DEFTHEO (IN-HAND ?BLOCK) (UNSTACK ?BLOCK FROM TABLE))), and backtrack if a subgoal fails, ultimately updating the world state upon success. This demonstrates the language's procedural attachment to goals, as seen in early AI experiments. Conniver was developed by Drew McDermott in 1972 at the MIT Artificial Intelligence Laboratory, in collaboration with Gerald J. Sussman, as an extension of Micro-Planner aimed at overcoming its rigidities in control and search mechanisms. This language introduced more sophisticated programming constructs to handle complex reasoning tasks in artificial intelligence, emphasizing programmer-directed control over automated processes. Key enhancements in Conniver included refined pattern matching capabilities that allowed for more expressive and efficient invocation of procedures from databases, surpassing the basic unification in Micro-Planner. It supported chronological backtracking but shifted it from fully automatic operation to explicit user management via a "hairy control structure," enabling selective exploration of alternatives and avoiding unnecessary recomputation. Additionally, Conniver integrated features for dependency-directed search, where programmers could track and manipulate dependencies between goals and assertions to guide backtracking more intelligently and reduce search space explosion. These improvements addressed the high overhead of Micro-Planner's indiscriminate backtracking in large-scale applications, such as theorem proving or planning systems, by permitting tailored strategies that minimized redundant efforts. 
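The implicit, chronological backtracking described above can be sketched compactly. The following is a minimal Python illustration, not historical Micro-Planner code: goals and assertions are flat tuples, variables begin with "?", and a failed branch simply falls through to the next matching assertion.

```python
# Minimal sketch of a Micro-Planner-style goal solver (illustrative, not the
# historical implementation). A goal pattern is matched against a database of
# assertions; variables (strings starting with "?") are bound on a match, and
# the solver backtracks to the next assertion when a later subgoal fails.

def match(pattern, fact, bindings):
    """Match a flat pattern against a fact; return extended bindings or None."""
    if len(pattern) != len(fact):
        return None
    env = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if p in env and env[p] != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def solve(goals, database, bindings=None):
    """Yield every binding set satisfying all goals, depth-first with backtracking."""
    env0 = bindings or {}
    if not goals:
        yield env0
        return
    first, rest = goals[0], goals[1:]
    for fact in database:              # on failure, fall through to the next fact
        env = match(first, fact, env0)
        if env is not None:
            yield from solve(rest, database, env)

db = [("ON", "A", "B"), ("ON", "B", "TABLE"), ("CLEAR", "A")]
# Analogue of (THGOAL (ON $?X $?Y)) followed by (THGOAL (ON $?Y TABLE)):
solutions = list(solve([("ON", "?X", "?Y"), ("ON", "?Y", "TABLE")], db))
# solutions == [{"?X": "A", "?Y": "B"}]
```

A fuller sketch would also consult a table of consequent theorems when no stored assertion matches, which is how THGOAL invokes procedures rather than only retrieving facts.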
During the 1970s, related systems built on or adapted Conniver's ideas in other Lisp environments, including experimental work in dialects such as Interlisp, where pattern-directed invocation and controlled backtracking influenced languages like QLISP at SRI. Work in SAIL, Stanford's ALGOL-based AI language, incorporated similar procedural-attachment and search-control paradigms, though not as direct ports of Conniver itself, to support AI research in robotics and vision. These variants carried Conniver's efficiency gains into diverse hardware and software contexts, easing adoption across academic AI laboratories.

Other implementations

Pico-Planner, another subset of Planner, was a minimal implementation from the early 1970s that provided basic goal-directed capabilities for simple AI experiments. Popler, implemented by Julian Davies at the University of Edinburgh around 1972, realized much of the full Planner language within the POP-2 programming environment, supporting more advanced features such as hierarchical planning and theorem proving.

Influence and legacy

Connection to Prolog's development

The development of Prolog was significantly shaped by ideas originating from Planner, particularly through the work of Robert Kowalski, who sought to recast Planner's hybrid procedural-logical approach in a purely logical framework. In 1972 Kowalski presented the abstract "The Predicate Calculus as a Programming Language" at the First Symposium on Mathematical Foundations of Computer Science in Jablonna, Poland, proposing that Horn clauses be interpreted procedurally via backward chaining, so that logical inference could subsume procedural control. Perspectives differ on the extent of Planner's direct influence: Carl Hewitt has described Prolog as essentially a subset of Planner, while Kowalski and his collaborators emphasized its logical foundations over its procedural aspects.

A pivotal step was Kowalski's collaboration with Alain Colmerauer at the University of Marseille in the spring of 1972, where they applied SL-resolution (linear resolution with a selection function) to natural language processing. Colmerauer, together with Philippe Roussel, implemented an initial version of what became Prolog in the fall of 1972 and formally defined it in 1973 as a language centered on Horn clauses and resolution, incorporating backtracking to handle nondeterminism and a depth-first strategy to explore proof trees efficiently.

Prolog shares several mechanisms with Planner: goal reduction by backward chaining, formalized as SLD resolution (SL-resolution restricted to definite clauses); unification for variable binding, related to Planner's pattern matching; and depth-first search with backtracking to manage failure without exhaustive exploration. These features allowed Prolog to unify automated theorem proving and programming in a single logical paradigm, retaining efficient goal-directed computation while eliminating the reliance on a Lisp host. Prolog nevertheless diverged from Planner by adopting a purely declarative, logic-based style, eschewing Planner's procedural attachments, such as its imperative theorem-proving primitives and its reliance on Lisp evaluation, which intertwined code with data in a non-logical manner. This shift reflected Kowalski's slogan "Algorithm = Logic + Control", with control fixed and implicit, in contrast to Planner's flexible but ad hoc procedural extensions.
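The mechanisms shared by Planner and Prolog, goal reduction over Horn clauses with unification and depth-first backtracking, can be seen in a small interpreter. The following Python miniature is illustrative only (the parent/ancestor rules are an invented example), not a reconstruction of Colmerauer and Roussel's system.

```python
# Miniature SLD resolution: backward chaining over Horn clauses with
# unification and depth-first backtracking, in the style Prolog adopted.
import itertools

_fresh = itertools.count()

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, env):
    while is_var(t) and t in env:
        t = env[t]
    return t

def unify(a, b, env):
    """Return an extended binding environment, or None if a and b clash."""
    a, b = walk(a, env), walk(b, env)
    if a == b:
        return env
    if is_var(a):
        return {**env, a: b}
    if is_var(b):
        return {**env, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            env = unify(x, y, env)
            if env is None:
                return None
        return env
    return None

def rename(term, suffix):
    """Give each rule use fresh variable names to avoid clashes."""
    if is_var(term):
        return term + suffix
    if isinstance(term, tuple):
        return tuple(rename(t, suffix) for t in term)
    return term

def solve(goals, rules, env):
    """Yield one environment per SLD refutation of the goal list."""
    if not goals:
        yield env
        return
    goal, rest = goals[0], list(goals[1:])
    for head, body in rules:
        suffix = "_%d" % next(_fresh)
        env2 = unify(goal, rename(head, suffix), env)
        if env2 is not None:          # goal reduction: replace the goal by the body
            body2 = [rename(g, suffix) for g in body]
            yield from solve(body2 + rest, rules, env2)   # depth-first, backtracks

# Facts are rules with empty bodies; ancestor/2 is defined recursively.
rules = [
    (("parent", "alice", "bob"), []),
    (("parent", "bob", "carol"), []),
    (("ancestor", "?x", "?y"), [("parent", "?x", "?y")]),
    (("ancestor", "?x", "?y"), [("parent", "?x", "?z"), ("ancestor", "?z", "?y")]),
]

proved = next(solve([("ancestor", "alice", "carol")], rules, {}), None) is not None
# proved == True
```

Prolog's actual search additionally fixes the selection rule (leftmost goal first) and clause order, which is what makes its control implicit where Planner's was programmable.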

Impact on AI and programming languages

Planner's pioneering use of pattern-directed invocation and procedural attachment in knowledge representation laid foundational techniques for the expert systems and planning algorithms that proliferated in the 1980s. These mechanisms allowed procedures to be triggered dynamically by matching patterns in assertions and goals, enabling more efficient manipulation of symbolic knowledge than purely declarative approaches. This influenced early systems such as QA4, which adopted Planner's deductive and pattern-matching capabilities to advance theorem proving and question answering in AI research.

In programming languages, Planner's pattern-directed paradigm resonated in Lisp dialects such as QLISP, where goal-directed processing and backtracking were integrated to support complex AI tasks like automatic programming and problem reduction. The emphasis on rule-like invocation also echoed in rule-based production systems, exemplified by OPS5 and its descendant CLIPS, which employ similar pattern matching for forward-chaining inference in expert-system applications. These adaptations extended Planner's ideas to practical, declarative rule engines widely used in knowledge-intensive domains.

Over the longer term, Planner contributed to a paradigm shift in AI from rigid symbolic logic toward hybrid procedural-logical models, blending nondeterministic search with procedural control to better handle real-world uncertainty in areas such as robotics. This evolution is reflected in Carl Hewitt's extension of Planner concepts into the actor model, in which actor semantics formalized PLANNER-73's handling of parallelism, side effects, and synchronization, influencing concurrent programming paradigms. Planner also retains historical value as an early study of nondeterministic, backtracking computation.
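A toy rule engine makes the forward-chaining, pattern-matching style of OPS5 and CLIPS concrete. This Python sketch is an assumed example, not code for either system: working memory is a set of fact tuples, and each rule pairs a condition pattern with a function that builds a new fact from the match bindings.

```python
# Toy forward-chaining production system (illustrative only; not OPS5/CLIPS).
# Rules fire whenever their condition pattern matches a fact in working
# memory, adding the rule's conclusion until no new facts appear.

def match_fact(pattern, fact):
    """Match a flat pattern ("?"-prefixed items are variables) against a fact."""
    if len(pattern) != len(fact):
        return None
    env = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if env.get(p, f) != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def forward_chain(facts, rules):
    """rules: list of (condition_pattern, conclude) pairs; returns the closure."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pattern, conclude in rules:
            for fact in list(facts):
                env = match_fact(pattern, fact)
                if env is not None and conclude(env) not in facts:
                    facts.add(conclude(env))
                    changed = True
    return facts

rules = [
    (("bird", "?x"), lambda env: ("can-fly", env["?x"])),
    (("penguin", "?x"), lambda env: ("bird", env["?x"])),
]
memory = forward_chain({("penguin", "opus")}, rules)
# memory == {("penguin", "opus"), ("bird", "opus"), ("can-fly", "opus")}
```

Production systems like OPS5 add conflict resolution and the Rete algorithm to make this matching efficient; the fixed-point loop above captures only the basic fire-until-quiescent behavior.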

References
