Computational cognition
Computational cognition (sometimes referred to as computational cognitive science, computational psychology, or cognitive simulation) is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach that develops computational models based on experimental results. It seeks to understand how humans process information. Early computational cognitive scientists sought to bring back and create a scientific form of Brentano's psychology.[1]
Artificial intelligence
There are two main purposes in producing artificial intelligence: to produce intelligent behaviors regardless of the quality of the results, and to model intelligent behaviors found in nature.[2][3] At the beginning of its existence, there was no need for artificial intelligence to emulate the same behavior as human cognition. During the 1950s and 1960s, economist Herbert Simon and Allen Newell attempted to formalize human problem-solving skills by using the results of psychological studies to develop programs that implement the same problem-solving techniques as people would. Their work laid the foundation for symbolic AI and computational cognition, and even some advancements for cognitive science and cognitive psychology.[4]
The field of symbolic AI is based on the physical symbol systems hypothesis of Simon and Newell, which states that expressing aspects of cognitive intelligence can be achieved through the manipulation of symbols.[5] John McCarthy, however, focused more on the initial purpose of artificial intelligence: to capture the essence of logical and abstract reasoning regardless of whether or not humans employ the same mechanisms.[3]
Over the following decades, progress in artificial intelligence came to focus on developing logic-based and knowledge-based programs, veering away from the original purpose of symbolic AI. Researchers began to believe that symbolic artificial intelligence might never be able to imitate some intricate processes of human cognition, such as perception or learning. The then-perceived impossibility (since refuted[6]) of implementing emotion in AI was seen as a stumbling block on the path to achieving human-like cognition with computers.[7] Researchers began to take a "sub-symbolic" approach to creating intelligence without explicitly representing that knowledge. This movement led to the emerging disciplines of computational modeling, connectionism, and computational intelligence.[5]
Computational modeling
Contributing more to the understanding of human cognition than artificial intelligence does, computational cognitive modeling emerged from the need to define various cognitive functionalities (such as motivation, emotion, or perception) by representing them in computational models of mechanisms and processes.[8] Computational models study complex systems by applying algorithms over many variables, using extensive computational resources to produce computer simulations.[9] Simulation is achieved by adjusting the variables, changing one alone or combining them, and observing the effect on the outcomes. The results help experimenters make predictions about what would happen in the real system if similar changes were to occur.[10]
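As a minimal sketch of this simulate-and-vary procedure, consider a toy exponential forgetting model; the model form, parameter values, and delays below are invented for illustration and are not drawn from the sources cited here:

```python
import math

# Toy model (illustrative only): probability of recall decays
# exponentially with delay, P(recall) = exp(-decay * delay).
def recall_probability(decay, delay):
    return math.exp(-decay * delay)

# Simulation in the spirit described above: adjust one variable
# (the decay rate) while holding the rest fixed, and observe the
# effect on the outcomes at several retention delays.
for decay in (0.1, 0.2, 0.4):
    outcomes = [round(recall_probability(decay, d), 2) for d in (1, 5, 10)]
    print(decay, outcomes)
```

Each run of the loop plays the role of one simulated experiment; comparing the rows shows how the predicted outcome changes as the single manipulated variable changes.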
When computational models attempt to mimic human cognitive functioning, all the details of the function must be known for them to transfer and display properly through the models; this allows researchers to thoroughly understand and test an existing theory, because no variables are vague and all variables are modifiable. Consider the model of memory built by Atkinson and Shiffrin in 1968: it showed how rehearsal leads to long-term memory, where the rehearsed information would be stored. Despite the advances it made in revealing the function of memory, this model fails to answer crucial questions such as: how much information can be rehearsed at a time? How long does it take for information to transfer from rehearsal to long-term memory? Similarly, other computational models raise more questions about cognition than they answer, making their contributions much less significant for the understanding of human cognition than other cognitive approaches.[11] An additional shortcoming of computational modeling is its reported lack of objectivity.[12]
John Anderson's Adaptive Control of Thought-Rational (ACT-R) model uses the functions of computational models together with the findings of cognitive science. The ACT-R model is based on the theory that the brain consists of several modules that perform specialized functions separately from one another.[11] The ACT-R model is classified as a symbolic approach to cognitive science.[13]
Connectionist networks
Another approach, which deals more with the semantic content of cognitive science, is connectionism or neural network modeling. Connectionism relies on the idea that the brain consists of simple units or nodes, and that the behavioral response comes primarily from the layers of connections between the nodes and not from the environmental stimulus itself.[11]
Connectionist networks differ from computational modeling specifically through two functions: neural back-propagation and parallel processing. Neural back-propagation is a method connectionist networks use to show evidence of learning. After a connectionist network produces a response, the simulated results are compared to real-life situational results. The feedback provided by the backward propagation of errors is used to improve the accuracy of the network's subsequent responses.[14] The second function, parallel processing, stemmed from the belief that knowledge and perception are not limited to specific modules but rather are distributed throughout the cognitive networks. The presence of parallel distributed processing has been shown in psychological demonstrations like the Stroop effect, where the brain seems to analyze the perception of color and the meaning of language at the same time.[15] However, this theoretical approach has been repeatedly challenged on the grounds that the two cognitive functions for color perception and word formation operate separately and simultaneously rather than in parallel with each other.[16]
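The backward propagation of errors described above can be sketched in a few lines. The following is a minimal illustration, assuming a tiny 2-2-1 feedforward network trained on the XOR task; the network size, learning rate, and task are chosen for the example and are not taken from the models cited here:

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-2-1 network: each unit holds [w1, w2, bias].
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden units
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # output unit
data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]  # XOR

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    return h, sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

before = total_error()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        delta_o = (t - o) * o * (1 - o)                  # error signal at the output
        delta_h = [delta_o * w_o[i] * h[i] * (1 - h[i])  # error propagated backward
                   for i in range(2)]                    # to each hidden unit
        for i in range(2):                               # weight updates, rate 0.5
            w_o[i] += 0.5 * delta_o * h[i]
            for j, xj in enumerate((x[0], x[1], 1.0)):
                w_h[i][j] += 0.5 * delta_h[i] * xj
        w_o[2] += 0.5 * delta_o

print(total_error() < before)  # training reduces the network's squared error
```

The comparison of simulated output against the target, followed by the backward pass that adjusts every connection weight, is exactly the feedback loop the paragraph describes.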
The field of cognition may have benefitted from the use of connectionist networks, but setting up neural network models can be quite a tedious task, and the results may be less interpretable than the system they are trying to model. Therefore, the results may be used as evidence for a broad theory of cognition without explaining the particular process happening within the cognitive function. Other disadvantages of connectionism lie in the research methods it employs and the hypotheses it tests, which have often proven inaccurate or ineffective, taking connectionist models away from an accurate representation of how the brain functions. These issues make neural network models ineffective for studying higher forms of information processing, and hinder connectionism from advancing the general understanding of human cognition.[17]
References
- ^ Green, C.; Sokal, Michael M. (2000). "Dispelling the "Mystery" of Computational Cognitive Science". History of Psychology. 3 (1): 62–66. doi:10.1037/1093-4510.3.1.62. PMID 11624164.
- ^ Lieto, Antonio (2021). Cognitive Design for Artificial Minds. London, UK: Routledge, Taylor & Francis. ISBN 9781138207929.
- ^ a b McCorduck, Pamela (2004). Machines Who Think (2 ed.). Natick, MA: A. K. Peters, Ltd. pp. 100–101. ISBN 978-1-56881-205-2. Archived from the original on 2020-03-01. Retrieved 2016-12-25.
- ^ Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press. ISBN 978-0-262-08153-5.
- ^ a b Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. pp. 145–215. ISBN 978-0-465-02997-6.
- ^ Megill, J. (2014). "Emotion, cognition and artificial intelligence". Minds and Machines. 24 (2): 189–199. doi:10.1007/s11023-013-9320-8. S2CID 17907148.
- ^ Dreyfus, Hubert L. (1972). What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press. ISBN 9780262540674.
- ^ Sun, Ron (2008). "Introduction to computational cognitive modeling". In The Cambridge Handbook of Computational Psychology. New York: Cambridge University Press. ISBN 978-0521674102.
- ^ "Computer Simulations in Science". Stanford Encyclopedia of Philosophy, Computer Simulations in Science. Metaphysics Research Lab, Stanford University. 2018.
- ^ Sun, R. (2008). The Cambridge Handbook of Computational Psychology. New York: Cambridge University Press.
- ^ a b c Eysenck, Michael (2012). Fundamentals of Cognition. New York, NY: Psychology Press. ISBN 978-1848720718.
- ^ Restrepo Echavarria, R. (2009). "Russell's Structuralism and the Supposed Death of Computational Cognitive Science". Minds and Machines. 19 (2): 181–197. doi:10.1007/s11023-009-9155-5. S2CID 195233608.
- ^ Polk, Thad; Seifert, Colleen (2002). Cognitive Modeling. Cambridge, MA: MIT Press. ISBN 978-0-262-66116-4.
- ^ Anderson, James; Pellionisz, Andras; Rosenfeld, Edward (1993). Neurocomputing 2: Directions for Research. Cambridge, MA: MIT Press. ISBN 978-0262510752.
- ^ Rumelhart, David; McClelland, James (1986). Parallel distributed processing, Vol. 1: Foundations. Cambridge, MA: MIT Press. ASIN B008Q6LHXE.
- ^ Cohen, Jonathan; Dunbar, Kevin; McClelland, James (1990). "On The Control Of Automatic Processes: A Parallel Distributed Processing Account Of The Stroop Effect". Psychological Review. 97 (3): 332–361. CiteSeerX 10.1.1.321.3453. doi:10.1037/0033-295x.97.3.332. PMID 2200075.
- ^ Garson, James; Zalta, Edward (Spring 2015). "Connectionism". The Stanford Encyclopedia of Philosophy. Stanford University.
Further reading
[edit]- Busemeyer, Jerome R.; Wang, Zheng; Townsend, James T.; Eidels, Ami, eds. (2015). The Oxford handbook of computational and mathematical psychology. Oxford library of psychology. Vol. 1. Oxford; New York: Oxford University Press. doi:10.1093/oxfordhb/9780199957996.001.0001. ISBN 9780199957996. OCLC 894139948.
- Chipman, Susan F., ed. (2017). "Part I. The new computational psychology: cognitive architectures and the computational modeling of cognition". The Oxford handbook of cognitive science. Oxford library of psychology. Vol. 1. Oxford; New York: Oxford University Press. doi:10.1093/oxfordhb/9780199842193.001.0001. ISBN 9780199842193. OCLC 953823360.
- Sun, Ron, ed. (2008). The Cambridge handbook of computational psychology. Cambridge, UK; New York: Cambridge University Press. doi:10.1017/CBO9780511816772. ISBN 9780521857413. OCLC 153772906.
Overview
Definition and Scope
Computational cognition refers to the interdisciplinary field that employs computational models to simulate, explain, and predict human cognitive processes, including perception, memory, reasoning, and decision-making. These models represent mental activities as algorithmic procedures operating on internal representations, drawing from the broader computational theory of mind, which posits the mind as a system performing computations akin to a Turing machine.[12][3] The scope of computational cognition encompasses the development of formal models that abstractly capture mental representations and processes, aiming to mimic aspects of human-like intelligence without focusing on biological substrates or practical engineering implementations. It distinguishes itself from neuroscience, which emphasizes neural mechanisms and physiological details, by prioritizing abstract, medium-independent computations that can be multiply realized in various systems, such as biological brains or digital hardware.[12][11] Unlike artificial intelligence, which often prioritizes building functional systems, computational cognition focuses on theoretical explanations grounded in empirical cognitive data.[12] Core principles of the field include the testability of models through computer simulations that generate observable predictions comparable to human behavior, the falsifiability of hypotheses via empirical validation, and the integration of experimental data from psychology and related disciplines to refine algorithmic representations. These principles ensure that models serve as mechanistic explanations of cognition, emphasizing precision and replicability over mere descriptive accounts.[3][11] Illustrative examples within this scope include finite state machines, which model basic decision tasks by representing cognitive states as discrete nodes connected by transition rules based on inputs, thereby simulating simple sequential behaviors like pattern recognition in perception. 
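A finite state machine of the kind just mentioned can be sketched directly. In this minimal illustration, discrete states and transition rules detect a simple sequential pattern; the state names, symbols, and task are invented for the example:

```python
# Finite state machine for a simple sequential decision task: detecting
# the pattern "ab" in a stream of symbols. Cognitive states are discrete
# nodes; transition rules map (state, input) pairs to the next state.
TRANSITIONS = {
    ("start", "a"): "saw_a",
    ("start", "b"): "start",
    ("saw_a", "a"): "saw_a",
    ("saw_a", "b"): "matched",
    ("matched", "a"): "matched",  # absorbing state: pattern already found
    ("matched", "b"): "matched",
}

def run_fsm(symbols, state="start"):
    for s in symbols:
        state = TRANSITIONS[(state, s)]
    return state

print(run_fsm("bbaab"))  # → matched: the stream contains "ab"
print(run_fsm("bba"))    # → saw_a: pattern not yet completed
```

The machine's behavior is fully determined by its transition table, which is what makes such models easy to analyze as simulations of simple sequential behavior.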
Connectionist models, such as artificial neural networks, represent one paradigm for capturing distributed representations in cognition. Emerging from the cognitive revolution of the 1950s, computational cognition continues to evolve as a foundational approach in cognitive science.[3][12]

Interdisciplinary Foundations
Computational cognition has roots in the cognitive revolution of the mid-1950s and emerged as a distinct subfield of cognitive science in the 1970s and 1980s, drawing on the interdisciplinary momentum of the era to integrate computational methods with the study of mental processes.[13] This development was influenced by foundational efforts in quantitative psychology, such as the establishment of the Journal of Mathematical Psychology in 1964, which formalized mathematical modeling in psychological research and paved the way for the Society for Mathematical Psychology's formal incorporation in 1977.[14] As a distinct area, computational cognition focuses on using algorithms and simulations to model cognitive phenomena, building directly on the cognitive revolution's emphasis on internal mental representations and processes.[12] The discipline rests on contributions from several key fields. Cognitive psychology supplies behavioral data and experimental paradigms that identify cognitive tasks and performance metrics, providing empirical constraints for model validation.[13] Computer science contributes algorithms, data structures, and implementation techniques essential for simulating cognitive functions on digital systems.[13] Philosophy, particularly in epistemology and philosophy of mind, addresses foundational questions about mental representation, intentionality, and the nature of computation in cognition.[13] Linguistics offers models of language structure and acquisition, informing how computational systems process syntax, semantics, and pragmatics.[13] Neuroscience provides insights into biological substrates, imposing constraints on computational models through observations of neural activity and brain organization.[13] These fields intersect to form a cohesive framework for computational cognition. 
Cognitive science as a whole offers empirical grounding through interdisciplinary experiments that test hypotheses across human and machine performance, ensuring models align with observable behaviors.[13] Artificial intelligence, a branch of computer science, supplies practical tools like search algorithms and machine learning techniques that enable the realization of cognitive models in software.[13] Philosophy debates representationalism (the idea that mental states are symbolically encoded, much like programs), providing a theoretical pillar that underpins the computational theory of mind, which posits cognition as information processing akin to computation.[12] These intersections motivate applications in psychology, such as simulating decision-making under uncertainty to predict human errors.[13]

A unifying concept across these disciplines is David Marr's three levels of analysis, which provide a structured approach to understanding cognitive systems without presupposing specific implementations. The computational level specifies the problem's nature and goals: what is the input-output mapping, and why is this computation performed in the context of the system's function? It focuses on the abstract task, independent of how it is achieved. The algorithmic level describes the representation and procedures: how is the computation organized as a sequence of steps, including the choice of data structures and algorithms that transform inputs to outputs? This level bridges theory and practice by outlining feasible methods. The implementational level examines the physical realization: how are the algorithms embodied in hardware or biology, considering the physical constraints and efficiency of the substrate, such as neural circuits or silicon processors? Marr's framework ensures that analyses at each level inform the others, promoting rigorous, hierarchical explanations of cognition that integrate insights from all contributing disciplines.

Historical Development
Early Origins
The foundations of computational cognition trace back to early 20th-century developments in logic, mathematics, and philosophy that began to conceptualize mental processes as mechanistic and computable. Alan Turing's seminal 1936 paper, "On Computable Numbers, with an Application to the Entscheidungsproblem," introduced the concept of a universal computing machine capable of simulating any algorithmic process, providing a theoretical basis for understanding cognition as a form of computation by demonstrating the limits of what can be mechanically calculated.[15] This work shifted perspectives from purely philosophical inquiries about the mind toward formal models of information processing, influencing later ideas in cognitive modeling. In 1950, Turing extended these ideas in "Computing Machinery and Intelligence," proposing the imitation game—later known as the Turing Test—as a criterion for machine intelligence, framing intelligent behavior as indistinguishable from human responses through computational means.[16] A pivotal biological abstraction emerged in 1943 with Warren McCulloch and Walter Pitts' model of the artificial neuron, which represented neural activity as a logical calculus using threshold logic gates to simulate binary decision-making in networks, laying the groundwork for computational simulations of brain-like processing.[17] This model demonstrated that complex logical functions could be realized through interconnected simple units, inspiring early connectionist approaches to cognition without relying on empirical neural data. 
Concurrently, mid-1940s advancements in systems theory provided further precursors: Norbert Wiener's 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine formalized feedback loops as universal principles governing both mechanical and biological systems, enabling the modeling of adaptive cognitive behaviors like learning through circular causation.[18] Complementing this, Claude Shannon's 1948 paper "A Mathematical Theory of Communication" quantified information as entropy in bits, offering a metric for cognitive processes involving uncertainty and transmission, which later underpinned probabilistic models of perception and decision-making.[19]

Philosophical currents also contributed to the mechanistic framing of the mind. Logical positivism, emerging from the Vienna Circle in the 1920s and 1930s, emphasized verifiable propositions and empirical analysis, promoting a view of mental states as reducible to observable, logical structures that aligned with emerging computational paradigms.[20] Behaviorism, dominant in psychology from the 1910s through John B. Watson and B.F. Skinner, rejected introspective mentalism in favor of stimulus-response mechanisms, portraying cognition as predictable chains of observable actions amenable to mathematical modeling.[20] Meanwhile, Gestalt psychology, pioneered by Max Wertheimer, Wolfgang Köhler, and Kurt Koffka in the 1910s–1930s, highlighted holistic pattern perception over atomistic elements, influencing early computational efforts in pattern recognition by stressing emergent structures in sensory data.[21] These intellectual strands collectively primed the field by reconceptualizing the mind as an information-processing system rather than an inscrutable entity.

Key Milestones and Figures
The 1956 Dartmouth Summer Research Project on Artificial Intelligence marked the formal inception of AI as a distinct field, where researchers including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon proposed studying intelligence as the computational simulation of human cognitive processes, emphasizing problem-solving through search algorithms.[22] This conference laid the groundwork for computational cognition by framing mental activities as programmable operations on digital computers.[23] Pioneering figures Herbert Simon and Allen Newell advanced this vision in 1956 with the Logic Theorist program, the first AI software designed to mimic human reasoning by automatically proving theorems from Principia Mathematica using heuristic search methods.[24] Their work demonstrated that complex cognitive tasks, such as theorem proving, could be decomposed into searchable problem spaces, influencing early symbolic approaches to cognition.[25] Building on this, Newell, Simon, and J.C. 
Shaw introduced the General Problem Solver (GPS) in 1959, a cognitive architecture intended to handle diverse tasks via means-ends analysis, further solidifying the idea of general-purpose computational models for human problem-solving.[26] The 1970s saw the expansion of cognitive architectures and expert systems, which applied rule-based reasoning to domain-specific problems, but enthusiasm waned amid the first "AI winter" from 1974 to 1980, triggered by funding cuts following overoptimistic projections and critiques like the 1973 Lighthill Report in the UK, which highlighted limitations in achieving general intelligence.[27] In parallel, John Anderson initiated the ACT cognitive architecture in 1976, rooted in associative memory models, which evolved to simulate human learning and performance through production rules and declarative knowledge.[28] The 1980s brought revival through connectionist paradigms, exemplified by the 1986 publication of Parallel Distributed Processing by David Rumelhart, James McClelland, and the PDP Research Group, which promoted neural network models as alternatives to symbolic systems for capturing emergent cognitive behaviors.[29] A key contribution was Rumelhart, Geoffrey Hinton, and Ronald Williams' 1986 introduction of backpropagation, an efficient algorithm for training multilayer neural networks by propagating errors backward, enabling practical learning in connectionist models.[30]

Theoretical Foundations
Computational Theory of Mind
The computational theory of mind (CTM) posits that mental states and processes are fundamentally computational, involving the manipulation of symbolic representations according to formal rules, much like a digital computer processes information. This core thesis was first articulated by Hilary Putnam in his 1967 paper "Psychological Predicates," where he argued that psychological states, such as beliefs or desires, can be understood as functional states defined by their causal roles rather than specific physical realizations, and that these states are realized through computational procedures on internal representations.[31] Jerry Fodor further developed this idea in his 1975 book The Language of Thought, proposing that the mind operates via a "language of thought" (Mentalese), where cognitive processes consist of syntactic operations on mental symbols that encode semantic content, analogous to computations in a Turing machine.[32] Central to CTM are several key arguments supporting its framework. Functionalism maintains that the mind is akin to software, independent of its hardware substrate, allowing the same cognitive functions to be implemented across diverse physical systems as long as they perform the requisite computations.[31] This leads to multiple realizability, the notion that cognitive states can be realized in non-biological substrates like silicon-based computers or even alien biology, underscoring that cognition is substrate-neutral and computable in principle.[31] Additionally, the theory assumes the Turing completeness of the brain, implying that neural processes are sufficiently powerful to simulate any Turing machine, thereby encompassing all effectively computable functions relevant to cognition.[33] A prominent criticism of CTM is John Searle's Chinese Room argument, introduced in his 1980 paper "Minds, Brains, and Programs," which challenges the sufficiency of syntactic computation for genuine understanding. 
In the thought experiment, a person who understands no Chinese manipulates symbols according to a rulebook to produce coherent Chinese responses, yet lacks comprehension of the meaning; Searle contends this illustrates how computational systems, operating solely on formal syntax, fail to achieve semantic understanding or intentionality, thus undermining strong claims of CTM about replicating minds.[34] CTM also extends the Church-Turing thesis to cognition, asserting that every effective cognitive procedure (any systematic mental operation that can be described algorithmically) is computable by a Turing machine, thereby delimiting the scope of what counts as a viable model of the mind to digital computation.[35] This extension reinforces the theory's claim that all aspects of intelligent behavior, from reasoning to perception, can be captured by algorithmic processes, provided they adhere to the limits of computability.[33]

Levels of Analysis in Cognition
The levels of analysis framework, proposed by David Marr in his seminal work on vision, provides a hierarchical structure for dissecting cognitive processes into distinct but interdependent layers of description. This approach posits that understanding any cognitive computation requires examining it at three mutually supportive levels: the computational level, which specifies the abstract task or problem being solved; the algorithmic level, which details the representations and procedures used to perform the computation; and the implementational level, which concerns the physical mechanisms realizing the algorithm.[36] Marr emphasized that these levels allow researchers to analyze information-processing systems without conflating the "what" of a task with the "how" of its execution or realization, thereby facilitating rigorous theorizing in cognitive science.[36] At the computational level, the focus is on defining the goal of the computation and the logical constraints governing it, independent of specific implementations—for instance, in visual recognition, this involves specifying the problem of recovering three-dimensional structure from two-dimensional retinal images under varying lighting conditions.[36] The algorithmic level then addresses how the computation is achieved, including the choice of data structures and step-by-step procedures, such as algorithms for edge detection using gradient operators to identify boundaries in an image.[36] Finally, the implementational level examines how these algorithms are embodied in hardware, accounting for physical constraints like neural circuitry efficiency, though without delving into low-level biology unless directly relevant.[36] This tripartite division ensures that analyses remain modular, allowing progress at one level to inform but not dictate the others. 
In cognitive science, Marr's levels serve as a bridge between abstract theoretical models and biological substrates, enabling researchers to test hypotheses about mental functions against empirical data from behavior and neuroscience.[37] For example, Bayesian inference has been applied at the computational level to model perception, where the visual system is viewed as performing optimal statistical inference by combining sensory evidence with prior knowledge to estimate scene properties, such as depth or object identity, under uncertainty.[38] This approach highlights how high-level goals, like minimizing perceptual error, can guide the development of corresponding algorithms without presupposing neural details. Zenon Pylyshyn extended and critiqued Marr's framework in his analysis of cognitive processes, arguing that it insufficiently accounts for real-time constraints in embedded systems, where cognition must operate continuously in dynamic environments rather than in isolated, offline computations.[39] Pylyshyn refined the levels by emphasizing the need to incorporate temporal and attentional factors at the algorithmic stage, particularly through concepts like immediate perceptual indexing that provide direct, non-symbolic access to the world, challenging full algorithmic capture of certain cognitive aspects.[39] The framework has been particularly influential in vision science, where it dissects complex tasks like object recognition into these levels without relying on biological specifics. 
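The computational-level Bayesian account of perception mentioned above can be made concrete with a minimal Gaussian cue-combination calculation; the quantities below (a depth prior and one noisy observation) are invented for illustration:

```python
# Computational-level sketch of Bayesian perception: estimate a depth
# value by combining a noisy sensory measurement with prior knowledge.
# With Gaussian prior and likelihood, the posterior mean is a
# precision-weighted average of the two. (All numbers illustrative.)
prior_mean, prior_var = 10.0, 4.0   # prior belief about depth (e.g., metres)
obs, obs_var = 14.0, 2.0            # sensory evidence and its noise variance

w_obs = (1 / obs_var) / (1 / obs_var + 1 / prior_var)  # weight on the evidence
posterior_mean = w_obs * obs + (1 - w_obs) * prior_mean
posterior_var = 1 / (1 / obs_var + 1 / prior_var)      # posterior uncertainty

print(round(posterior_mean, 3), round(posterior_var, 3))  # → 12.667 1.333
```

Note how the estimate lands between prior and evidence, pulled toward whichever source is more reliable; this specifies *what* the perceptual system computes without committing to any particular algorithm or neural implementation.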
At the computational level, object recognition is framed as assigning labels to image regions based on shape and context invariants; algorithmically, this involves hierarchical feature extraction, such as from primitive edges to volumetric descriptions in Marr's 2.5D sketch; and implementationally, it considers efficiency in parallel processing, though the focus remains on functional adequacy rather than neural wiring.[36] This dissection has shaped decades of research by providing a structured methodology for evaluating computational models of visual cognition.[40]

Computational Approaches
Symbolic Methods
Symbolic methods in computational cognition represent knowledge through discrete, explicit symbols—such as propositions in formal logic—and manipulate these symbols using algorithmic rules to simulate cognitive processes like reasoning and problem-solving. This approach posits that cognition can be modeled as the transformation of symbolic structures, enabling systematic inference and decision-making without reliance on statistical patterns.[41] Central to this paradigm is the Physical Symbol System Hypothesis, proposed by Allen Newell and Herbert A. Simon in 1976, which asserts that intelligence arises from the manipulation of symbols in a physical system, providing both necessary and sufficient conditions for general intelligent action. Knowledge in symbolic systems is typically encoded as structured representations, including logical expressions (e.g., predicates like "Parent(John, Mary)") or factual assertions, which are then processed via inference mechanisms.[42] Manipulation occurs through algorithms such as production rules—conditional statements of the form "if condition then action"—that fire based on matching patterns in working memory, or theorem provers that derive new facts from axioms using deduction rules like modus ponens.[43] These methods emphasize transparency and verifiability, as each step in the computation is traceable to explicit rules, contrasting with approaches that use distributed, sub-symbolic representations.[44] A prominent example of symbolic methods is the MYCIN expert system, developed by Edward Shortliffe in 1976, which diagnosed bacterial infections and recommended antibiotic therapies using approximately 450 if-then production rules derived from medical expertise. 
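The rule-firing machinery described above can be sketched as a small forward-chaining production system. The facts and rules below are invented for illustration and do not reproduce MYCIN's actual knowledge base (which also weighted its conclusions with certainty factors):

```python
# A tiny forward-chaining production system: each rule is an
# "if conditions then conclusion" pair over a working memory of facts.
# Rules fire repeatedly (modus ponens) until no new facts are derived.
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "positive_culture"}, "bacterial_infection"),
    ({"bacterial_infection"}, "recommend_antibiotics"),
]

facts = {"fever", "cough", "positive_culture"}  # initial working memory
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # conditions hold: assert the conclusion
            changed = True

print("recommend_antibiotics" in facts)  # → True
```

Every derived fact is traceable to an explicit rule firing, which is the transparency property the paragraph attributes to symbolic methods.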
MYCIN's knowledge base encoded domain-specific facts, such as organism characteristics and drug interactions, and applied certainty factors to handle incomplete information through rule chaining, achieving performance comparable to human experts in controlled evaluations.[45] This system demonstrated how symbolic rule-based reasoning could operationalize specialized cognitive tasks, influencing subsequent expert system development in fields like diagnostics.[46]

Another key example is the SOAR cognitive architecture, introduced by John Laird, Allen Newell, and Paul Rosenbloom in 1983, which integrates symbolic processing to achieve general problem-solving across domains.[47] SOAR represents knowledge as production rules operating on a centralized working memory of symbolic objects and relations, employing a problem-space computational model where goals are pursued through operators selected via means-ends analysis.[48] A distinctive feature is chunking, a learning mechanism that compiles sequences of rule firings into new production rules, enabling SOAR to generalize solutions from experience and reduce redundant computation in future tasks, as seen in its application to planning and game-playing scenarios.[49]

Symbolic methods often rely on search algorithms to navigate large spaces of possible symbol configurations, particularly in planning tasks where the goal is to find a sequence of actions leading from an initial state to a desired outcome.[50] The A* algorithm, developed by Peter Hart, Nils Nilsson, and Bertram Raphael in 1968, exemplifies this by combining uniform-cost search with heuristic guidance to efficiently compute optimal paths in graph-based problem spaces.[50] A* maintains an open list of nodes to expand, prioritized by an evaluation function f(n) = g(n) + h(n), where g(n) is the exact cost from the start to node n, and h(n) is an admissible heuristic estimate of the cost from n to the goal (ensuring h(n) ≤ the true cost, to guarantee optimality).[51] The process unfolds as follows:
Initialize the open list with the start node and a closed list as empty; while the open list is not empty, select the node n with the lowest f(n); if n is the goal, reconstruct the path backward via parent pointers; otherwise, add n to the closed list and generate successors, updating each successor m with g(m) = g(n) + cost(n, m) and f(m) = g(m) + h(m), inserting it into the open list if it improves prior estimates (using a priority queue for efficiency).[50] For pathfinding, this might model a state space where nodes represent positions and edges denote moves, with h(n) as the Euclidean distance to the goal. Pseudocode for A* in a planning context:

function A*(start, goal):
    open_list = priority_queue()      // prioritized by f(n)
    open_list.insert(start, g=0, f=h(start))
    closed_list = empty set
    parent = dictionary()
    while open_list not empty:
        n = open_list.extract_min()   // lowest f(n)
        if n == goal:
            return reconstruct_path(parent, goal)
        closed_list.add(n)
        for each successor m of n:
            if m in closed_list: continue
            tentative_g = g(n) + cost(n, m)
            if m not in open_list or tentative_g < g(m):
                parent[m] = n
                g(m) = tentative_g
                f(m) = g(m) + h(m)
                if m not in open_list:
                    open_list.insert(m)
    return failure                    // no path found
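The pseudocode above can be realized as a short runnable sketch. The following Python version uses a binary heap for the open list and, purely for illustration, a 5×5 obstacle-free grid with unit move costs and a Manhattan-distance heuristic (these task details are assumptions, not from the source):

```python
import heapq

def astar(start, goal, neighbors, cost, h):
    """A* search: returns a list of nodes from start to goal, or None.

    neighbors(n) yields successor nodes; cost(n, m) is the edge cost;
    h(n) is an admissible heuristic estimate of the cost from n to goal.
    """
    open_heap = [(h(start), start)]          # priority queue ordered by f(n)
    g = {start: 0}                           # exact cost from start to each node
    parent = {}
    closed = set()
    while open_heap:
        _, n = heapq.heappop(open_heap)      # node with the lowest f(n)
        if n == goal:
            path = [goal]                    # reconstruct path via parent pointers
            while path[-1] != start:
                path.append(parent[path[-1]])
            return path[::-1]
        if n in closed:
            continue                         # stale heap entry; already expanded
        closed.add(n)
        for m in neighbors(n):
            tentative_g = g[n] + cost(n, m)
            if m not in g or tentative_g < g[m]:
                parent[m] = n
                g[m] = tentative_g
                heapq.heappush(open_heap, (tentative_g + h(m), m))
    return None                              # no path found

# Usage: 4-connected 5x5 grid, unit step costs, Manhattan-distance heuristic.
def grid_neighbors(n):
    x, y = n
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

path = astar((0, 0), (4, 4), grid_neighbors,
             cost=lambda a, b: 1,
             h=lambda n: abs(n[0] - 4) + abs(n[1] - 4))
print(len(path) - 1)  # → 8, the shortest path length on this grid
```

Because the Manhattan heuristic is consistent on a 4-connected unit-cost grid, the first time the goal is popped from the heap the path found is optimal; the stale-entry check replaces the explicit decrease-key step of the pseudocode.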
