Semantic network
from Wikipedia

Example of a semantic network

A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. It is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts,[1] mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples.

Semantic networks are used in natural language processing applications such as semantic parsing[2] and word-sense disambiguation.[3] Semantic networks can also be used as a method to analyze large texts and identify the main themes and topics (e.g., of social media posts), to reveal biases (e.g., in news coverage), or even to map an entire research field.

History


The use of semantic networks in logic and of directed acyclic graphs as mnemonic tools dates back centuries, with the earliest documented example being the Greek philosopher Porphyry's commentary on Aristotle's categories in the third century AD.

In computing history, "Semantic Nets" for the propositional calculus were first implemented for computers by Richard H. Richens of the Cambridge Language Research Unit in 1956 as an "interlingua" for machine translation of natural languages,[4] although the importance of this work and of the CLRU was only belatedly recognized.

Semantic networks were also independently implemented by Robert F. Simmons[5] and Sheldon Klein, using first-order predicate calculus as a base, after being inspired by a demonstration of Victor Yngve. The "line of research was originated by the first President of the Association [Association for Computational Linguistics], Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences. Sheldon Klein and I about 1962-1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text."[6] Other researchers, most notably M. Ross Quillian[7] and others at System Development Corporation, contributed to this work in the early 1960s as part of the SYNTHEX project. Most modern derivatives of the term "semantic network" cite these SDC publications as their background. Later prominent works were done by Allan M. Collins and Quillian (e.g., Collins and Quillian;[8][9] Collins and Loftus;[10] Quillian[11][12][13][14]). Still later, in 2006, Hermann Helbig fully described MultiNet.[15]

In the late 1980s, two universities in the Netherlands, Groningen and Twente, jointly began a project called Knowledge Graphs, which are semantic networks with the added constraint that edges are restricted to a limited set of possible relations, in order to facilitate algebras on the graph.[16] In the subsequent decades, the distinction between semantic networks and knowledge graphs blurred.[17][18] In 2012, Google named its own knowledge base the Knowledge Graph.

The Semantic Link Network was systematically studied as a social semantic networking method. Its basic model consists of semantic nodes, semantic links between nodes, and a semantic space that defines the semantics of nodes and links and the reasoning rules on semantic links. The systematic theory and model were published in 2004.[19] This research direction can be traced to the definition of inheritance rules for efficient model retrieval in 1998[20] and to the Active Document Framework (ADF).[21] Since 2003, research has developed toward social semantic networking.[22] This work is a systematic innovation in the age of the World Wide Web and global social networking, rather than an application or simple extension of the Semantic Net (Network); its purpose and scope are different from those of the Semantic Net.[23] The rules for reasoning and evolution and the automatic discovery of implicit links play an important role in the Semantic Link Network.[24][25] More recently it has been developed to support Cyber-Physical-Social Intelligence.[26] It was used to create a general summarization method.[27] The self-organised Semantic Link Network was integrated with a multi-dimensional category space to form a semantic space that supports advanced applications with multi-dimensional abstractions and self-organised semantic links.[28][29] It has been verified that Semantic Link Networks play an important role in understanding and representation through text summarisation applications.[30][31] The Semantic Link Network has also been extended from cyberspace to cyber-physical-social space. Competition and symbiosis relations, as well as their roles in an evolving society, were studied under the emerging topic of Cyber-Physical-Social Intelligence.[32]

More specialized forms of semantic networks have been created for specific uses. For example, in 2008, Fawsy Bendeck's PhD thesis formalized the Semantic Similarity Network (SSN), which contains specialized relationships and propagation algorithms to simplify the representation and calculation of semantic similarity.[33]

Basics of semantic networks


A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another.

Most semantic networks are cognitively based. They consist of arcs and nodes, which can be organized into a taxonomic hierarchy. Semantic networks contributed the ideas of spreading activation, inheritance, and nodes as proto-objects.

Examples


In Lisp


The following code shows an example of a semantic network in the Lisp programming language using an association list.

(setq *database*
'((canary  (is-a bird)
           (color yellow)
           (size small))
  (penguin (is-a bird)
           (movement swim))
  (bird    (is-a vertebrate)
           (has-part wings)
           (reproduction egg-laying))))

To extract all the information about the "canary" type, one would use the assoc function with a key of "canary".[34]
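Building on that example, the following sketch shows how such a lookup, combined with following is-a links, yields inherited properties as well. The helper names properties and all-properties are invented here for illustration and are not part of the original example.

;; Look up the direct properties of a concept in *database*.
(defun properties (concept)
  (cdr (assoc concept *database*)))

;; (properties 'canary) => ((IS-A BIRD) (COLOR YELLOW) (SIZE SMALL))

;; A minimal sketch of inheritance: collect the properties of a concept
;; together with those of everything reachable through IS-A links.
(defun all-properties (concept)
  (let ((direct (properties concept)))
    (if (null direct)
        nil
        (let ((parent (second (assoc 'is-a direct))))
          (append direct (all-properties parent))))))

;; (all-properties 'canary)
;; => ((IS-A BIRD) (COLOR YELLOW) (SIZE SMALL)
;;     (IS-A VERTEBRATE) (HAS-PART WINGS) (REPRODUCTION EGG-LAYING))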

WordNet


An example of a semantic network is WordNet, a lexical database of English. It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. Some of the most common semantic relations defined are meronymy (A is a meronym of B if A is part of B), holonymy (B is a holonym of A if B contains A), hyponymy (or troponymy) (A is subordinate of B; A is kind of B), hypernymy (A is superordinate of B), synonymy (A denotes the same as B) and antonymy (A denotes the opposite of B).
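As a hedged illustration, the relation types listed above can be encoded as labeled word-relation-word triples and queried directly. The words, relation names, and list format below are invented for this sketch and do not reflect WordNet's actual storage format.

;; Illustrative lexical relations as (word relation word) triples.
(defparameter *lexical-relations*
  '((wheel  part-of   car)        ; meronymy: a wheel is part of a car
    (car    has-part  wheel)      ; holonymy: a car contains a wheel
    (oak    is-a      tree)       ; hyponymy: oak is a kind of tree
    (tree   subsumes  oak)        ; hypernymy: tree is superordinate to oak
    (car    synonym   automobile) ; synonymy
    (hot    antonym   cold)))     ; antonymy

;; All words standing in a given relation to WORD.
(defun related (word relation)
  (loop for (w r target) in *lexical-relations*
        when (and (eq w word) (eq r relation))
          collect target))

;; (related 'oak 'is-a) => (TREE)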

WordNet's properties have been studied from a network-theory perspective and compared to other semantic networks created from Roget's Thesaurus and from word-association tasks. From this perspective, all three have a small-world structure.[35]

Other examples


It is also possible to represent logical descriptions using semantic networks such as the existential graphs of Charles Sanders Peirce or the related conceptual graphs of John F. Sowa.[1] These have expressive power equal to or exceeding standard first-order predicate logic. Unlike WordNet or other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction. Some automated reasoners exploit the graph-theoretic features of the networks during processing.

Other examples of semantic networks are Gellish models. Gellish English, with its Gellish English dictionary, is a formal language that is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch, whereas the various language variants share the same concepts. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type. Each relation type is itself a concept that is defined in the Gellish language dictionary. Each related thing is either a concept or an individual thing that is classified by a concept. The definitions of concepts are created in the form of definition models (definition networks) that together form a Gellish dictionary. A Gellish network can be documented in a Gellish database and is computer interpretable.

SciCrunch is a collaboratively edited knowledge base for scientific resources. It provides unambiguous identifiers (Research Resource Identifiers, or RRIDs) for software, lab tools, and other resources, and it also provides options to create links between RRIDs and from communities.

Another example of a semantic network based on category theory is the olog. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function. Commutative diagrams are also prescribed to constrain the semantics.

In the social sciences people sometimes use the term semantic network to refer to co-occurrence networks.[36]

Software tools


There are also elaborate types of semantic networks connected with corresponding sets of software tools used for lexical knowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro[37] or the MultiNet paradigm of Hermann Helbig,[38] especially suited for the semantic representation of natural language expressions and used in several NLP applications.

Semantic networks are used in specialized information retrieval tasks such as plagiarism detection. They provide information on hierarchical relations in order to employ semantic compression, which reduces language diversity and enables the system to match word meanings independently of the particular sets of words used.
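A minimal sketch of that semantic-compression idea, under the assumption of a toy hypernym table (the table, words, and function names are illustrative, not taken from any plagiarism-detection system): each word is replaced by a more general term, so texts using different but related words map to the same compressed form.

;; Toy hypernym table: each word maps to a more general concept.
(defparameter *hypernyms*
  '((canary . bird) (sparrow . bird) (penguin . bird)
    (oak . tree) (pine . tree)))

;; Semantic compression: replace each word by its hypernym when one is
;; known, reducing vocabulary diversity before comparing texts.
(defun compress (words)
  (mapcar (lambda (w)
            (or (cdr (assoc w *hypernyms*)) w))
          words))

;; (compress '(the canary sat in the oak))
;; => (THE BIRD SAT IN THE TREE)
;; (compress '(the sparrow sat in the pine))
;; => (THE BIRD SAT IN THE TREE)   ; the two sentences now match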

The Knowledge Graph introduced by Google in 2012 is an application of semantic networks to a search engine.

Modeling multi-relational data such as semantic networks in low-dimensional spaces through forms of embedding has benefits both for expressing entity relationships and for extracting relations from media such as text. There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently TransE[39] (NIPS 2013). Applications of embedding knowledge-base data include social network analysis and relationship extraction.
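For illustration, TransE scores a triple (head, relation, tail) by how close head + relation lies to tail in the embedding space. The sketch below implements only that scoring function on hand-picked toy vectors; the vectors and function names are assumptions, and no training procedure is shown.

;; TransE-style scoring sketch: a triple (head, relation, tail) is
;; plausible when head + relation is close to tail in embedding space.
(defun vec+ (a b) (mapcar #'+ a b))
(defun vec- (a b) (mapcar #'- a b))

(defun l2-norm (v)
  (sqrt (reduce #'+ (mapcar (lambda (x) (* x x)) v))))

;; Lower score = more plausible triple.
(defun transe-score (head relation tail)
  (l2-norm (vec- (vec+ head relation) tail)))

;; Toy 2-d embeddings (illustrative values, not learned ones):
;; (transe-score '(1.0 0.0) '(0.0 1.0) '(1.0 1.0)) => 0.0
;; (transe-score '(1.0 0.0) '(0.0 1.0) '(0.0 0.0)) => ~1.41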

from Grokipedia
A semantic network is a directed graph structure used in artificial intelligence and knowledge representation to model knowledge as nodes representing concepts, objects, or entities connected by labeled edges that denote semantic relations, such as "is-a" for inheritance or "has" for attributes, enabling inference and reasoning through traversal and pattern matching. This approach organizes information hierarchically and associatively, mimicking aspects of human semantic memory by storing general facts and word meanings in interconnected patterns rather than isolated entries.

The concept of semantic networks traces its roots to early philosophical diagrams like Porphyry's Tree from the 3rd century AD, which illustrated categorical hierarchies, but it was formalized in computer science during the 1950s and 1960s amid efforts in machine translation and natural language processing. M. Ross Quillian introduced the term in his 1966 doctoral dissertation on semantic memory, proposing a computational model where knowledge is encoded as a network of "type" nodes (unique concepts) and "token" nodes (instances pointing to types), linked by associative paths to support tasks like word disambiguation and text comprehension through intersection-finding algorithms. Subsequent developments in the 1970s and 1980s refined these into formal systems, incorporating logic-based semantics to address issues like ambiguity and inheritance in knowledge bases.

Semantic networks feature several key types and capabilities that distinguish them from other representation methods: definitional networks for taxonomic hierarchies, assertional networks for factual propositions, and implicational networks for rule-based inference, often supporting monotonic or nonmonotonic reasoning. They facilitate efficient storage and retrieval by minimizing redundancy, e.g., through inheritance where subclasses automatically acquire properties of superclasses, and they enable hybrid extensions combining them with procedural code or learning mechanisms. In practice, these networks power applications in expert systems, ontology engineering, and the Semantic Web, where standards like OWL (Web Ontology Language) build on their principles to enable machine-readable knowledge sharing across the internet.

Fundamentals

Definition and Core Concepts

A semantic network is a structure for representing knowledge, consisting of nodes that denote concepts such as objects, events, or entities, and labeled edges or arcs that specify semantic relations between them, such as "is-a" for hierarchical subclass relationships, "part-of" for componential links, or "has-property" for attribute assignments. This representation, pioneered in early models of semantic memory, organizes information in a way that captures the associative and relational nature of human cognition, where concepts are not stored in isolation but as part of an interconnected web. The primary purpose of semantic networks is to model domain-specific knowledge in a manner that facilitates automated inference, detection of conceptual similarities, and relational understanding, particularly in artificial intelligence, knowledge representation, and natural language processing. By encoding relationships explicitly, these networks enable systems to derive new information from existing facts, such as inferring properties through transitive relations, thereby supporting such inference-driven tasks without exhaustive rule-based programming.

At their core, semantic networks embody the principle that knowledge emerges from patterns of interconnections among nodes rather than discrete, isolated propositions, allowing for flexible querying and dynamic knowledge exploration. They incorporate mechanisms like inheritance, where properties defined at higher-level nodes (e.g., supertypes) propagate to subordinate nodes (e.g., subtypes) via "is-a" links, promoting economy of storage and consistency in representation. Additionally, spreading activation supports reasoning by propagating signals or markers from an activated node through the network along relational paths, simulating associative recall and enabling the retrieval of related concepts based on proximity and strength of connections. Unlike general graphs, where edges merely indicate connectivity without inherent meaning, semantic networks distinguish themselves through the semantic labels on edges, which encode specific relational semantics to underpin knowledge-based reasoning, such as deduction, beyond simple topological analysis.
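As a hedged sketch of the spreading-activation mechanism just described, activation can be propagated outward from a node with decreasing strength. The edge list, decay factor, and threshold below are illustrative assumptions rather than a standard parameterisation.

;; Spreading-activation sketch over a labeled edge list.
;; Edge format: (from relation to).
(defparameter *edges*
  '((canary is-a bird) (bird is-a animal)
    (bird has-part wings) (canary has-property yellow)))

(defun neighbors (node)
  (loop for (from rel to) in *edges*
        when (eq from node) collect to))

;; Propagate activation outward from START, decaying by DECAY per hop,
;; until it falls below THRESHOLD; returns (node . activation) pairs.
(defun spread-activation (start &key (decay 0.5) (threshold 0.1))
  (let ((activation (list (cons start 1.0))))
    (labels ((visit (node level)
               (when (>= level threshold)
                 (dolist (next (neighbors node))
                   (unless (assoc next activation)
                     (push (cons next level) activation)
                     (visit next (* level decay)))))))
      (visit start (* 1.0 decay))
      activation)))

;; (spread-activation 'canary)
;; => ((YELLOW . 0.5) (WINGS . 0.25) (ANIMAL . 0.25) (BIRD . 0.5) (CANARY . 1.0))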

Components and Representation

Semantic networks are structured as directed graphs where nodes represent the key elements of knowledge and edges denote the relationships between them. Nodes typically fall into several types: concepts, which capture general ideas or classes such as "animal"; instances, which refer to specific entities like "a particular bird"; categories, serving as hierarchical classes such as "vertebrate"; and properties, which describe attributes like "color" or "weight." These node types allow for a modular organization of knowledge, enabling distinctions between different kinds of elements. Edges in semantic networks encode the semantic relations between nodes, with labels specifying the nature of the connection. Taxonomic edges, such as "is-a," indicate class or subclass relationships, allowing properties to propagate from superclasses to subclasses. Associative edges, like "similar-to," link related but non-hierarchical concepts, while functional edges, such as "causes," express causal or procedural dependencies. For instance, a simple network might connect "bird" to "animal" via an "is-a" edge, as illustrated below:

Bird | is-a | Animal

This structure supports intuitive representation of taxonomic knowledge. Semantic networks can be represented in multiple formats to suit different storage and processing needs. Visually, they appear as graphs with nodes as vertices and labeled edges as arcs, facilitating human interpretation. Computationally, adjacency lists store connections by listing, for each node, its outgoing edges with relation types and target nodes. A prominent format is the triple store, using subject-predicate-object tuples where the subject and object are nodes (often uniquely identified resources) and the predicate is the edge label; this approach, foundational to RDF, explicitly encodes meaning through predicates to enable machine-readable semantics. To manage complexity in larger networks, techniques focus on maintaining clarity and efficiency. Unique identifiers, such as URIs in RDF-based representations, are assigned to nodes to avoid redundancy and ensure unambiguous references across the graph. Cycles in edges, particularly in taxonomic structures, are typically prohibited to prevent reasoning paradoxes, though general networks may include them with safeguards; redundancy is further reduced by leveraging inheritance to implicitly define properties rather than repeating them explicitly.
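The two computational formats mentioned above can be contrasted on a toy network: the same edges stored as subject-predicate-object triples and then regrouped into an adjacency list. The data and function name below are illustrative assumptions, not a standard library interface.

;; 1. Triple store: subject-predicate-object tuples.
(defparameter *triples*
  '((bird   is-a     animal)
    (canary is-a     bird)
    (bird   has-part wings)))

;; 2. Adjacency list derived from the triples: for each node, its
;;    outgoing edges as (relation . target) pairs.
(defun adjacency-list (triples)
  (let ((table '()))
    (dolist (triple triples (nreverse table))
      (destructuring-bind (s p o) triple
        (let ((entry (assoc s table)))
          (if entry
              (push (cons p o) (cdr entry))
              (push (list s (cons p o)) table)))))))

;; (adjacency-list *triples*)
;; => ((BIRD (HAS-PART . WINGS) (IS-A . ANIMAL)) (CANARY (IS-A . BIRD)))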

Historical Development

Early Origins

The computational origins of semantic networks trace back to mid-20th-century cognitive psychology, where associative memory models sought to explain how humans organize and retrieve knowledge, building on earlier philosophical concepts such as Porphyry's Tree from the 3rd century AD, which illustrated categorical hierarchies. A key influence was Frederic Bartlett's schema theory, outlined in his 1932 book Remembering: A Study in Experimental and Social Psychology, which conceptualized memory not as passive storage but as an active reconstruction shaped by schemas, integrated frameworks of prior experiences that guide interpretation of new information. Bartlett's ideas emphasized the interconnected, reconstructive nature of memory, laying groundwork for later network representations of conceptual associations. Building on these psychological foundations, early work on human memory in the 1950s and 1960s explored associative structures to model conceptual knowledge. Researchers viewed memory as a web of linked concepts rather than isolated facts, which anticipated computational implementations. These models contrasted with earlier behaviorist approaches by focusing on internal mental structures, influencing the shift toward cognitive science.

Semantic networks also connected to broader pre-1970 developments in cybernetics and information theory, where simple hand-drawn diagrams illustrated concept interconnections and information flow in complex systems. Cybernetic thinkers such as Norbert Wiener, in his 1948 foundational text Cybernetics: Or Control and Communication in the Animal and the Machine, used network-like diagrams to depict feedback and associative processes in biological and mechanical systems, providing an interdisciplinary bridge to AI. Information theory, meanwhile, contributed ideas of encoded relationships, as seen in early sketches of associative hierarchies that prefigured digital representations.

A pivotal advancement came from Ross Quillian's PhD research in 1966–1968, which introduced the first computational semantic network. In his 1967 paper "Word Concepts: A Theory and Simulation of Some Basic Semantic Capabilities," Quillian described a graph-based structure where nodes represented concepts and edges denoted semantic relations, enabling storage and inference of word meanings. The work was motivated by the goal of modeling human-like knowledge storage and retrieval to simulate semantic memory, offering a flexible alternative to the rigid rule-based systems prevalent in early AI. Quillian's Teachable Language Comprehender (TLC), detailed in a 1969 publication but developed during his doctoral period, applied this network to natural language understanding by allowing the system to "learn" associations from text inputs. TLC demonstrated how semantic networks could process queries through spreading activation along links, simulating human comprehension and highlighting the approach's potential for AI applications in the late 1960s.

Key Advancements and Milestones

In the 1970s, semantic networks advanced through structured representations that addressed limitations in earlier associative models. Marvin Minsky introduced frames in 1975 as a representation for organizing stereotypical situations, extending semantic networks by incorporating default values, procedural attachments, and hierarchical inheritance to facilitate efficient, expectation-driven processing in AI systems. Similarly, Roger Schank developed conceptual dependency theory between 1972 and 1975, proposing a primitive-based framework for representing events and actions in natural language understanding, which formalized causal relations and actor roles within network-like structures to enable deeper semantic parsing beyond simple links.

During the 1980s and 1990s, semantic networks were integrated into practical AI applications, particularly expert systems, where they supported diagnostic reasoning through interconnected knowledge bases. Concurrently, William Woods' 1975 work on procedural attachment enhanced semantic networks by embedding executable procedures in links, allowing dynamic interpretation of static representations and addressing ambiguities in meaning through context-dependent computation. These developments shifted focus toward applied systems, though the 1980s AI winter severely curtailed funding for knowledge representation research, including semantic networks, due to overhyped expectations and limited computational scalability, prompting a reevaluation of pure symbolic approaches.

By the 2000s, semantic networks influenced the standardization of web-scale knowledge representation, transitioning from isolated AI tools to interoperable frameworks. The Resource Description Framework (RDF), proposed by the W3C in 1999, formalized graph-based semantics inspired by network models, enabling machine-readable data exchange on the web through triples that capture entities, properties, and relations. This paved the way for the Web Ontology Language (OWL) in 2004, which extended RDF with formal axioms for richer inference, drawing on semantic network hierarchies to support complex ontologies in distributed environments. Key challenges in this era included the scalability limitations of large semantic networks, which led to hybrid models combining graphs with statistical or rule-based methods to manage complexity and improve performance in real-world applications. These advancements marked a progression from theoretical constructs to foundational elements of modern knowledge systems, emphasizing standardization and practicality.

Formal Models

Mathematical Foundations

A semantic network is formally modeled as a directed labeled graph G = (V, E, L), where V is the set of vertices representing concepts or entities, E ⊆ V × V is the set of directed edges denoting relations between concepts, and L: E → Σ is a label function that assigns semantic labels from an alphabet Σ to each edge (e.g., L(e) = "is-a" for taxonomic relations or L(e) = "has-part" for meronymy). This graph-theoretic foundation enables the representation of knowledge as interconnected structures, drawing from standard definitions in knowledge representation systems like SNePS. Path semantics in semantic networks rely on the transitive closure of relations to infer inherited properties, allowing complex inferences from simple edge traversals. For instance, if there exists an "is-a" edge from A to B and another from B to C, the transitive closure of the "is-a" relation implies that A is also a subtype of C, so properties attached to C are inherited by A.
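A small sketch of that transitive-closure idea over the labeled-graph model G = (V, E, L): collecting every supertype reachable through chains of "is-a" edges gives the set from which properties are inherited. The edge data and function names are illustrative assumptions.

;; Transitive closure of the "is-a" relation over a labeled edge set;
;; edge format (from label to).
(defparameter *labeled-edges*
  '((canary is-a bird) (bird is-a vertebrate) (vertebrate is-a animal)
    (bird has-part wings)))

;; Direct supertypes of NODE via "is-a" edges.
(defun direct-supertypes (node)
  (loop for (from label to) in *labeled-edges*
        when (and (eq from node) (eq label 'is-a)) collect to))

;; All supertypes reachable through chains of "is-a" edges,
;; i.e. the transitive closure used for property inheritance.
(defun all-supertypes (node)
  (let ((result '()))
    (labels ((walk (n)
               (dolist (parent (direct-supertypes n))
                 (unless (member parent result)
                   (push parent result)
                   (walk parent)))))
      (walk node))
    (nreverse result)))

;; (all-supertypes 'canary) => (BIRD VERTEBRATE ANIMAL)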