Knowledge base
In computer science, a knowledge base (KB) is a set of sentences, each sentence given in a knowledge representation language, with interfaces to tell new sentences and to ask questions about what is known, where either of these interfaces might use inference.[1] It is a technology used to store complex structured data for use by a computer system. The initial use of the term was in connection with expert systems, which were the first knowledge-based systems.
Original usage of the term
The original use of the term knowledge base was to describe one of the two sub-systems of an expert system. A knowledge-based system consists of a knowledge base representing facts about the world and ways of reasoning about those facts to deduce new facts or highlight inconsistencies.[2]
Properties
[edit]The term knowledge base was coined to distinguish this form of knowledge store from the more common and widely used term database. During the 1970s, virtually all large management information systems stored their data in some type of hierarchical or relational database. At this point in the history of information technology, the distinction between a database and a knowledge-base was clear and unambiguous.
A database had the following properties:
- Flat data: Data was usually represented in a tabular format with strings or numbers in each field.
- Multiple users: A conventional database needed to support more than one user or system logged into the same data at the same time.
- Transactions: An essential requirement for a database was to maintain integrity and consistency among data accessed by concurrent users. These are the so-called ACID properties: Atomicity, Consistency, Isolation, and Durability.
- Large, long-lived data: A corporate database needed to support not just thousands but hundreds of thousands or more rows of data. Such a database usually needed to persist past the specific uses of any individual program; it needed to store data for years and decades rather than for the life of a program.
The first knowledge-based systems had data needs that were the opposite of these database requirements. An expert system requires structured data: not just tables with numbers and strings, but pointers to other objects that in turn have additional pointers. The ideal representation for a knowledge base is an object model (often called an ontology in artificial intelligence literature) with classes, subclasses, and instances.
Early expert systems also had little need for multiple users or the complexity that comes with requiring transactional properties on data. The data in early expert systems was used to arrive at a specific answer, such as a medical diagnosis, the design of a molecule, or a response to an emergency.[2] Once the solution to the problem was known, there was not a critical demand to store large amounts of data back to a permanent memory store. A more precise statement would be that given the technologies available, researchers compromised and did without these capabilities because they realized they were beyond what could be expected, and they could develop useful solutions to non-trivial problems without them. Even from the beginning, the more astute researchers realized the potential benefits of being able to store, analyze, and reuse knowledge. For example, see the discussion of Corporate Memory in the earliest work of the Knowledge-Based Software Assistant program by Cordell Green et al.[3]
The volume requirements were also different for a knowledge-base compared to a conventional database. The knowledge-base needed to know facts about the world. For example, to represent the statement that "All humans are mortal", a database typically could not represent this general knowledge but instead would need to store that information in thousands of rows representing specific humans. Representing that all humans are mortal, and being able to reason about any given human that they are mortal, is the work of a knowledge-base. Representing that George, Mary, Sam, Jenna, Mike,... and hundreds of thousands of other customers are all humans with specific ages, sexes, addresses, etc. is the work for a database.[4][5]
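The contrast can be illustrated with a minimal Python sketch (the customer names, fields, and rule encoding are hypothetical, shown only to make the division of labor concrete):

# Database style: every fact about every individual is stored explicitly as a row.
customer_rows = [
    {"name": "George", "age": 42, "address": "123 Main St"},
    {"name": "Mary", "age": 37, "address": "456 Oak Ave"},
    # ...hundreds of thousands of further rows
]

# Knowledge-base style: per-individual facts plus one general rule,
# with the conclusion derived by inference rather than stored.
humans = {row["name"] for row in customer_rows}   # asserted facts: Human(x)

def is_mortal(x):
    # rule: for all x, Human(x) implies Mortal(x)
    return x in humans

print(is_mortal("Mary"))   # True, derived from the rule, never stored as data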
As expert systems moved from being prototypes to systems deployed in corporate environments, the requirements for their data storage rapidly started to overlap with the standard database requirements for multiple, distributed users with support for transactions. Initially, the demand could be seen in two different but competitive markets. From the AI and Object-Oriented communities, object-oriented databases such as Versant emerged. These were systems designed from the ground up to have support for object-oriented capabilities but also to support standard database services as well. On the other hand, the large database vendors such as Oracle added capabilities to their products that provided support for knowledge-base requirements such as class-subclass relations and rules.
Types of Knowledge Base Systems
Like any informational hub, a knowledge base can store various content types that serve different audiences and have contrasting purposes. So, to better understand knowledge base types, let's discuss them from two different angles: purpose and content.
Internal vs. external knowledge bases
Here, we can divide our informational hubs into two main purposes – external and internal.
- Internal knowledge base: This type of knowledge hub is designed for employees within the organization. It acts like a corporate wiki and can be created for different reasons, but mainly for onboarding new hires, documenting internal policies, and answering employees' questions quickly.
- External knowledge base: This is the direct opposite of the internal hub and is created for clients, prospects, and, sometimes, the general public. The main goal of this base is to reduce customer support workload, offer easy access to effective tips, and enhance the overall user experience.[6]
Internet as a knowledge base
The next evolution for the term knowledge base was the Internet. With the rise of the Internet, documents, hypertext, and multimedia support were now critical for any corporate database. It was no longer enough to support large tables of data or relatively small objects that lived primarily in computer memory. Support for corporate web sites required persistence and transactions for documents. This created a whole new discipline known as Web Content Management.
The other driver for document support was the rise of knowledge management products such as HCL Notes (formerly Lotus Notes). Knowledge management actually predated the Internet, but with the Internet there was great synergy between the two areas. Knowledge management products adopted the term knowledge base to describe their repositories, but with a significantly different meaning. In the case of previous knowledge-based systems, the knowledge was primarily for the use of an automated system, to reason about and draw conclusions about the world. With knowledge management products, the knowledge was primarily meant for humans, for example to serve as a repository of manuals, procedures, policies, best practices, reusable designs and code, etc. In both cases the distinctions between the uses and kinds of systems were ill-defined. As the technology scaled up, it was rare to find a system that could be cleanly classified as either knowledge-based in the sense of an expert system that performed automated reasoning or knowledge-based in the sense of knowledge management that provided knowledge in the form of documents and media to be leveraged by humans.[7]
Examples
See also
References
1. Russell, Stuart J. (2021). "Knowledge-based agents". Artificial Intelligence: A Modern Approach. With Peter Norvig, Ming-Wei Chang, Jacob Devlin, Anca Dragan, David Forsyth, Ian Goodfellow, Jitendra Malik, Vikash Mansinghka, Judea Pearl, and Michael J. Wooldridge (4th ed.). Hoboken, NJ: Pearson. ISBN 978-0-13-461099-3. OCLC 1124776132.
2. Hayes-Roth, Frederick; Waterman, Donald; Lenat, Douglas (1983). Building Expert Systems. Addison-Wesley. ISBN 0-201-10686-8.
3. Green, Cordell; Luckham, D.; Balzer, R.; Cheatham, T.; Rich, C. (1986). "Report on a knowledge-based software assistant". Readings in Artificial Intelligence and Software Engineering. Morgan Kaufmann: 377–428. doi:10.1016/B978-0-934613-12-5.50034-3. ISBN 9780934613125. Retrieved 1 December 2013.
4. Feigenbaum, Edward (1983). The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World. Reading, MA: Addison-Wesley. p. 77. ISBN 0-201-11519-0. "Your database is that patient's record, including history... vital signs, drugs given,... The knowledge base... is what you learned in medical school... it consists of facts, predicates, and beliefs..."
5. Jarke, Mathias (1978). "KBMS Requirements for Knowledge-Based Systems" (PDF). Logic, Databases, and Artificial Intelligence. Berlin: Springer. Archived (PDF) from the original on 22 June 2013. Retrieved 1 December 2013.
6. Knowledge Base: How to Keep Information That Took Years to Build.
7. Krishna, S. (1992). Introduction to Database and Knowledge-base Systems. Singapore: World Scientific Publishing. ISBN 981-02-0619-4.
External links
Knowledge base
Definition and History
Definition
A knowledge base (KB) is a structured repository consisting of a set of sentences expressed in a formal knowledge representation language, which collectively represent facts, rules, heuristics, and relationships about a domain to enable logical inference and querying.[1] These sentences form declarative assertions that capture an agent's or system's understanding of the world, allowing for automated reasoning beyond mere storage.[4] In artificial intelligence, the KB serves as the core component of knowledge-based agents, where it stores domain-specific knowledge to support decision-making and problem-solving.[1]

Unlike traditional databases, which primarily manage structured data for efficient storage, retrieval, and manipulation without inherent reasoning capabilities, knowledge bases emphasize declarative knowledge that facilitates inference over incomplete or uncertain information.[1] Databases focus on querying factual records, often using procedural operations, whereas KBs employ symbolic representations with epistemic operators (e.g., for belief or obligation) to handle entailments, defaults, and subjective knowledge, enabling derivation of new insights from existing content.[4] This distinction positions KBs at the knowledge level of representation, prioritizing semantic understanding and logical consistency over raw data handling.[5]

Key components of a knowledge base include interfaces for knowledge acquisition, storage, retrieval, and maintenance. Acquisition occurs through mechanisms like the "TELL" operation, which incorporates new sentences from percepts, human input, or learning processes into the KB.[1] Storage maintains these sentences in a consistent epistemic state, often as a set of possible worlds or symbolic structures to represent both known facts and unknowns.[4] Retrieval is handled via the "ASK" function, which uses inference algorithms to query and derive answers, such as through forward or backward chaining.[1] Maintenance ensures ongoing updates, resolving inconsistencies and adapting the KB via operations like stable expansions to reflect evolving information.[4]

Over time, the scope of knowledge bases has evolved from static repositories of fixed facts and rules to dynamic systems that integrate AI-driven inference for real-time adaptation in changing environments.[4] Early formulations treated KBs as immutable collections, but advancements in logical frameworks, such as situation calculus, have enabled them to model actions, sensing, and belief updates, supporting applications in autonomous agents and expert systems.[1]
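A minimal Python sketch of the TELL/ASK interface described above (the class, the propositional fact and rule encoding, and the naive closure computation are illustrative assumptions rather than a standard implementation):

class KnowledgeBase:
    """Toy KB: facts are strings; rules are (premises, conclusion) pairs."""

    def __init__(self):
        self.facts = set()
        self.rules = []          # list of (frozenset of premises, conclusion)

    def tell(self, sentence):
        """Add a fact such as 'Human(Socrates)', or a rule given as a tuple."""
        if isinstance(sentence, tuple):
            premises, conclusion = sentence
            self.rules.append((frozenset(premises), conclusion))
        else:
            self.facts.add(sentence)

    def ask(self, query):
        """Answer by computing a naive closure of the facts under the rules."""
        known = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return query in known

kb = KnowledgeBase()
kb.tell("Human(Socrates)")
kb.tell(({"Human(Socrates)"}, "Mortal(Socrates)"))   # ground rule, for simplicity
print(kb.ask("Mortal(Socrates)"))                     # True

The ask step here is a simple forward closure; the pseudocode later in the article sketches the forward- and backward-chaining algorithms in more detail.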
Historical Development
The concept of a knowledge base emerged in the 1970s within artificial intelligence research, particularly in the development of expert systems designed to emulate human expertise in specialized domains. One of the earliest and most influential examples was MYCIN, a system created at Stanford University in 1976 to assist in diagnosing and treating bacterial infections. MYCIN utilized a knowledge base comprising approximately 450 production rules derived from medical experts, enabling backward-chaining inference to recommend therapies based on patient data and clinical guidelines. This approach formalized the separation of domain-specific knowledge from inference mechanisms, marking a foundational shift toward modular, knowledge-driven AI systems.

The 1980s saw significant expansion in knowledge base development, driven by ambitious projects aiming to encode broader commonsense reasoning. A pivotal milestone was the launch of the Cyc project in 1984 by Douglas Lenat at Microelectronics and Computer Technology Corporation (MCC), which sought to construct a massive, hand-curated knowledge base of everyday human knowledge to support general-purpose inference. By the end of the decade, Cyc had amassed tens of thousands of axioms and concepts, influencing subsequent efforts in knowledge acquisition and representation.[6] Concurrently, the integration of semantic networks—graph-based structures for modeling relationships between concepts—gained traction in the 1990s, enhancing knowledge bases with more flexible, associative reasoning capabilities beyond rigid rule sets. NASA projects in the 1990s, such as those presented at the Goddard Conference on Space Applications of Artificial Intelligence, utilized semantic networks to organize knowledge for complex problem-solving in aerospace engineering.[7][8]

By the early 2000s, knowledge bases transitioned from the predominantly rule-based architectures of the 20th century to ontology-driven models, emphasizing structured vocabularies and formal semantics for interoperability. This shift was propelled by the Semantic Web initiative, proposed by Tim Berners-Lee and colleagues in a 2001 Scientific American article, which envisioned the Web as a global knowledge base using ontologies to enable machine-readable data and automated reasoning.[9] Technologies like OWL (Web Ontology Language), standardized by the W3C in 2004, facilitated the creation of scalable, ontology-based knowledge bases such as those in the Gene Ontology project, allowing for richer knowledge integration across distributed sources.[10]

In the 2020s, knowledge bases have increasingly been incorporated into large language models through retrieval-augmented generation (RAG), a technique introduced in a 2020 paper that combines neural generation with external knowledge retrieval to mitigate hallucinations and enhance factual accuracy.[11] RAG enables LLMs to query dynamic knowledge bases—such as vectorized document stores or structured ontologies—during inference, as demonstrated in applications like enterprise search and question-answering systems. By 2025, this integration has become a cornerstone of hybrid AI architectures, bridging symbolic knowledge representation with probabilistic machine learning for more robust, context-aware performance.

Core Properties and Design
Key Properties
Effective knowledge bases in artificial intelligence are characterized by several fundamental properties that ensure their utility in supporting reasoning and decision-making. Modularity allows for the independent development and modification of knowledge components, such as separating the knowledge base from the inference engine, which facilitates collaboration among experts and enables testing different reasoning strategies on the same facts.[12] Consistency is essential to prevent contradictions within the stored knowledge, maintaining the integrity of the system through validation techniques that detect and resolve conflicts in rules and facts.[13] Completeness ensures that the knowledge base covers the relevant domain sufficiently to derive all necessary conclusions, with checks for unreferenced attributes or dead-end conditions to identify gaps.[14] Inferential capability supports logical deductions by integrating inference mechanisms that apply rules to generate new insights from existing knowledge, often using logic-based representations to ensure sound reasoning.[3]

Scalability and maintainability are critical for knowledge bases to accommodate expanding volumes of information without compromising performance. Scalable designs leverage structured data sources, such as online repositories, to handle growth while preserving query efficiency and response times.[13] Maintainability involves ongoing processes to update and validate knowledge, ensuring long-term reliability through modular structures that simplify revisions and automated integrity checks.[15]

Interoperability enables knowledge bases to integrate with diverse systems, facilitated by standards like RDF for representing data as triples and OWL for defining ontologies with rich semantics. These standards support semantic mapping—using constructs such as owl:equivalentClass—to align terms across different knowledge sources, promoting seamless data exchange and reuse.[16]
To address inherent incompleteness, effective knowledge bases incorporate verifiability through traceable sources and precision metrics, alongside dynamic update mechanisms like incremental revisions in multi-agent systems to incorporate new information without full rebuilds.
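As a small illustration of the interoperability standards mentioned above, the following Python sketch uses the rdflib library (assumed to be installed; the two namespaces and class names are hypothetical) to assert an owl:equivalentClass mapping between terms from different vocabularies:

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

HR = Namespace("http://example.org/hr/")      # hypothetical vocabulary of one system
CRM = Namespace("http://example.org/crm/")    # hypothetical vocabulary of another

g = Graph()
g.add((HR.Employee, RDF.type, OWL.Class))
g.add((CRM.StaffMember, RDF.type, OWL.Class))
# Semantic mapping: the two classes denote the same concept, so data described
# with either term can be exchanged and merged across the two knowledge sources.
g.add((HR.Employee, OWL.equivalentClass, CRM.StaffMember))

print(g.serialize(format="turtle"))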
Knowledge Representation Techniques
Knowledge representation techniques are essential methods for encoding, organizing, and retrieving information in a knowledge base to enable efficient reasoning and inference. These techniques transform abstract knowledge into structured formats that computational systems can process, supporting tasks such as query answering and decision-making. Primary approaches include logic-based representations, which use formal deductive systems; graph-based structures like semantic networks; and object-oriented schemas such as frames. More advanced formalisms incorporate ontologies for conceptual hierarchies and probabilistic models to handle uncertainty, while emerging hybrid methods blend symbolic and neural paradigms.

Logic-based techniques form the foundation of many knowledge bases by expressing knowledge as logical statements that allow for precise inference. First-order logic (FOL), a key logic-based method, represents knowledge using predicates, functions, variables, and quantifiers to model relations and objects in a domain. For example, FOL can encode rules like "All humans are mortal" as ∀x (Human(x) → Mortal(x)). Seminal work established FOL as a cornerstone for AI knowledge representation by addressing epistemological challenges in formalizing commonsense reasoning. Inference in logic-based systems often relies on rules like modus ponens, which derives a conclusion from an implication and its antecedent: from P → Q and P, infer Q. This rule exemplifies how knowledge bases apply deduction to expand facts from existing premises.[17][1]

Semantic networks represent knowledge as directed graphs where nodes denote concepts or entities and edges capture relationships, facilitating intuitive modeling of associations like inheritance or part-whole hierarchies. Introduced as a model of human semantic memory, these networks enable spreading activation for retrieval and support inferences based on path traversals in the graph. For instance, a network might link "bird" to "flies" via an "is-a" relation to "animal," allowing generalization of properties.[18]

Frames extend semantic networks by organizing knowledge into structured templates with slots for attributes, defaults, and procedures, mimicking object-oriented programming for stereotypical situations. Each frame represents a concept with fillable properties and attached methods for handling incomplete information, such as procedural attachments for dynamic updates. This approach was proposed to address the need for context-sensitive knowledge invocation in AI systems.[19]

Ontologies provide formalisms for defining hierarchical concepts, relations, and axioms in knowledge bases, often using languages like OWL (Web Ontology Language). OWL enables the specification of classes, properties, and restrictions with description logic semantics, supporting automated reasoning over domain knowledge. For example, OWL ontologies can express subsumption relations like "Elephant is-a Mammal" with cardinality constraints.

Probabilistic representations, such as Bayesian networks, address uncertainty by modeling dependencies among variables as directed acyclic graphs with conditional probability tables. These networks compute posterior probabilities via inference algorithms like belief propagation, integrating uncertain evidence in knowledge bases. Pioneered in AI for causal and diagnostic reasoning, Bayesian networks quantify joint distributions compactly.[20][21]

Hybrid techniques, particularly neuro-symbolic representations, combine symbolic logic with neural networks to leverage both rule-based reasoning and data-driven learning. These methods embed logical constraints into neural architectures or use differentiable reasoning to approximate symbolic inference, improving generalization in knowledge bases with sparse or noisy data. Recent advancements in 2024-2025 have focused on integrating knowledge graphs with transformers for enhanced explainability and robustness in AI systems, including applications in knowledge base completion and uncertainty handling as of mid-2025.[22]
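A minimal Python sketch of the semantic-network and frame ideas above, using a plain dictionary with "is-a" links and inherited slot defaults (the node names and slots are illustrative assumptions):

# Each frame has an optional "is_a" parent and local slots (attribute defaults).
frames = {
    "animal":  {"is_a": None,     "slots": {"alive": True}},
    "bird":    {"is_a": "animal", "slots": {"flies": True}},
    "penguin": {"is_a": "bird",   "slots": {"flies": False}},  # local override of a default
}

def lookup(frame, slot):
    """Resolve a slot by walking up the is-a chain (inheritance with defaults)."""
    while frame is not None:
        slots = frames[frame]["slots"]
        if slot in slots:
            return slots[slot]
        frame = frames[frame]["is_a"]
    return None

print(lookup("bird", "alive"))     # True, inherited from "animal"
print(lookup("penguin", "flies"))  # False, the local value overrides the inherited default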
Types of Knowledge Bases
Traditional Types
Traditional knowledge bases emerged in the early days of artificial intelligence as structured repositories for encoding domain-specific expertise, primarily through rule-based, frame-based, and case-based paradigms that facilitated automated reasoning in expert systems.

Rule-based knowledge bases rely on production rules, which are conditional statements in the form of "if-then" constructs that represent heuristic knowledge for decision-making. These rules form the core of production systems, a model introduced by Allen Newell and Herbert A. Simon in their 1972 work on human problem-solving, where rules act as condition-action pairs to simulate cognitive processes.[23] In expert systems, the knowledge base consists of a collection of such rules, paired with an inference engine that applies forward or backward chaining to derive conclusions from facts. A key example is CLIPS (C Language Integrated Production System), developed by NASA in the 1980s, which serves as a forward-chaining, rule-based programming language for building and deploying expert systems in domains like diagnostics and planning. This approach enabled modular knowledge encoding but required explicit rule elicitation from domain experts.

Frame-based knowledge bases organize knowledge into frames, which are data structures resembling objects with named slots for attributes, values, and procedures, allowing for inheritance, defaults, and procedural attachments to handle stereotypical scenarios. Marvin Minsky proposed frames in 1974 as a mechanism to represent situated knowledge, such as visual perspectives or room layouts, by linking frames into networks that activate relevant expectations during reasoning.[24] Frames support semantic networks and object-oriented features, making them suitable for modeling complex hierarchies in knowledge-intensive tasks. The Knowledge Engineering Environment (KEE), released by IntelliCorp in the early 1980s, implemented frame-based representation in a commercial toolset, combining frames with rules and graphics for developing expert systems in engineering and medicine, though it demanded significant computational resources for large-scale applications.[25]

Case-based knowledge bases store libraries of past cases—each comprising a problem description, solution, and outcome—for solving new problems through retrieval of similar cases, adaptation, and storage of results, emphasizing experiential rather than declarative knowledge. This paradigm, rooted in Roger Schank's memory models, enables similarity-based indexing and reasoning without exhaustive rule sets. Agnar Aamodt and Enric Plaza's 1994 survey delineated the CBR cycle—retrieval, reuse, revision, and retention—as foundational, highlighting variations like exemplar-based and knowledge-intensive approaches in systems for legal reasoning and design.[26] Case-based systems, such as those in early medical diagnostics, promoted incremental learning but relied on robust similarity metrics to avoid irrelevant matches.

These traditional types shared limitations, including their static nature, which made updating knowledge labor-intensive and prone to the "knowledge acquisition bottleneck," as well as brittleness in addressing uncertainty or incomplete data, leading to failures in real-world variability during the 1980s and 1990s.[27] Expert systems built on these foundations often scaled poorly beyond narrow domains, exacerbating maintenance challenges and limiting broader adoption.[28]
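Returning to the case-based paradigm described above, the following Python sketch shows only the retrieval and reuse steps of the CBR cycle: past cases are stored as attribute vectors and the solution of the most similar case is returned (the case attributes and the similarity measure are illustrative assumptions):

# Each stored case: (problem attributes, recorded solution).
cases = [
    ({"fever": 1, "cough": 1, "rash": 0}, "treatment A"),
    ({"fever": 0, "cough": 0, "rash": 1}, "treatment B"),
]

def similarity(a, b):
    """Fraction of attributes on which two problem descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(new_problem):
    """Return the solution of the most similar past case (retrieve/reuse steps)."""
    best_case = max(cases, key=lambda c: similarity(c[0], new_problem))
    return best_case[1]

print(retrieve({"fever": 1, "cough": 0, "rash": 0}))  # "treatment A"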
Modern and Emerging Types
Knowledge graphs constitute a pivotal modern type of knowledge base, organizing information into graph structures comprising entities (such as people, places, or concepts) connected by explicit relationships to support semantic search and contextual inference. Google's Knowledge Graph, launched in 2012, exemplifies this approach by encompassing over 500 million objects and 3.5 billion facts as of its launch, derived from sources including Freebase and Wikipedia, enabling search engines to disambiguate queries and deliver interconnected insights rather than isolated results.[29] These systems enhance query understanding by modeling real-world semantics, as seen in their use for entity resolution and relationship traversal in applications like recommendation engines.[30]

Vector databases represent an emerging paradigm for knowledge bases tailored to AI workflows, particularly those involving large language models (LLMs), by indexing high-dimensional vector embeddings generated from text or multimodal data to enable efficient similarity searches. In Retrieval-Augmented Generation (RAG) systems, these databases store embeddings of documents or knowledge chunks, allowing LLMs to retrieve semantically relevant context based on query vectors, thereby reducing hallucinations and improving factual accuracy without full model retraining.[31] Prominent implementations include Pinecone, a managed vector database optimized for scalable indexing and metadata filtering, and FAISS, an open-source library from Meta for approximate nearest-neighbor searches that supports billion-scale datasets in RAG pipelines.[32]
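A minimal Python sketch of the retrieval step in such a RAG pipeline, using NumPy and a toy bag-of-words embedding in place of a learned embedding model (the documents, embedding function, and top-k choice are illustrative assumptions):

import numpy as np

documents = [
    "knowledge bases store facts and rules for inference",
    "vector databases index embeddings for similarity search",
    "expert systems pair a knowledge base with an inference engine",
]

# Toy embedding: bag-of-words over the corpus vocabulary.
# A production RAG system would use a learned embedding model instead.
vocab = sorted({w for d in documents for w in d.split()})

def embed(text):
    words = text.split()
    return np.array([float(words.count(w)) for w in vocab])

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query, k=1):
    """Return the k documents whose embeddings are most cosine-similar to the query."""
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("how do embeddings support similarity search"))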
Hybrid knowledge bases integrate machine learning with traditional structures to form dynamic systems capable of self-updating through distributed processes like federated learning, which aggregates model updates from decentralized nodes while preserving data privacy. Emerging trends in 2024-2025 emphasize frameworks such as FedMDKGE, which facilitate multi-granularity dynamic knowledge graph embeddings in federated environments, enabling real-time adaptation to evolving data across multiple parties without raw data exchange.[33] This approach contrasts with static bases by incorporating continuous learning mechanisms, such as knowledge distillation in federated settings, to personalize and evolve representations over time.[34]

By 2025, AI knowledge bases have advanced in customer service domains through integrations like Zendesk's generative AI tools, which automate content generation, topic clustering, and search optimization to deliver instant self-service answers and reduce agent workload.[35] Concurrently, neuro-symbolic systems emerge as a hybrid type merging symbolic knowledge representations—such as ontologies and rules—with neural networks' pattern recognition, creating knowledge bases that combine logical reasoning for interpretability with data-driven inference to mitigate issues like LLM inconsistencies.[36] These systems, as explored in recent frameworks for enterprise knowledge graphs, employ confidence-based fusion to integrate neural embeddings with symbolic queries, enhancing reliability in complex reasoning tasks.[37][38]

Another 2025 development relevant to AI-associated knowledge infrastructures was the creation of an ORCID author record (0009-0002-6030-5730) for Angela Bogdanova, a non-human Digital Author Persona used in academic-style publications.[39][40] While not altering AI model architectures, this case reflects how AI-related entities began to appear within authorship and metadata systems linked to knowledge bases.[41]

A documented development in knowledge-base architectures emerging in 2025 was the use of large-scale AI systems to generate, maintain, and update knowledge repositories. On 27 October 2025, xAI launched Grokipedia, an online encyclopedia in which content creation, fact-checking, updating, and editorial tasks are performed by the Grok AI system in real time.[42][43][44] This represents an AI-managed knowledge base designed for continuous, automated curation beyond static or manually updated systems. These examples illustrate how AI-driven systems expanded into new forms of knowledge-base construction, maintenance, and metadata integration, complementing other modern approaches such as vector databases and hybrid learning frameworks.

Applications and Implementations
In Expert Systems and AI
In expert systems, the knowledge base serves as the central repository of domain-specific facts, rules, and heuristics, functioning as the system's "brain" to enable automated reasoning and inference akin to human expertise. This component encodes expert-level knowledge in a structured format, allowing the system to draw conclusions from input data without relying on general algorithmic search alone. For instance, the DENDRAL system, developed starting in 1965, utilized a knowledge base of chemical structure rules and mass spectrometry data to hypothesize molecular compositions from spectral evidence, marking one of the earliest demonstrations of knowledge-driven hypothesis formation in scientific domains.[45]

The inference engine, paired with the knowledge base, applies logical rules to derive new knowledge or decisions, typically through forward or backward chaining algorithms. Forward chaining is a data-driven process that begins with known facts in the knowledge base and iteratively applies applicable rules to generate new conclusions until no further inferences are possible or a goal is reached. This approach suits scenarios where multiple outcomes emerge from initial observations, such as diagnostic systems monitoring evolving conditions. Pseudocode for forward chaining can be outlined as follows:

function forward_chaining(KB, facts):
    agenda = queue(facts)                 // initialize with the known facts
    inferred = set()                      // facts established so far
    while agenda not empty:
        fact = agenda.pop()
        if fact in inferred: continue     // skip facts that were already processed
        inferred.add(fact)
        for rule in KB.rules where rule.premises satisfied by inferred:
            new_fact = rule.conclusion
            if new_fact not in inferred:
                agenda.push(new_fact)     // queue the newly derived conclusion
    return inferred
Backward chaining, by contrast, is goal-driven: it starts from a query or hypothesis and works backwards, attempting to prove each premise of any rule whose conclusion matches the goal. Pseudocode for backward chaining can be outlined as follows:

function backward_chaining(KB, goal):
    if goal in KB.facts: return true      // the goal is already a known fact
    for rule in KB.rules where rule.conclusion == goal:
        if all backward_chaining(KB, premise) for premise in rule.premises:
            return true                   // every premise of this rule was proven
    return false
