Graph database
A graph database (GDB) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data.[1] A key concept of the system is the graph (or edge or relationship), which relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. Graph databases treat the relationships between data as a priority. Querying relationships is fast because they are persistently stored in the database, and relationships can be intuitively visualized, making graph databases useful for heavily inter-connected data.[2]
Graph databases are commonly classified as NoSQL databases. They are similar to 1970s network model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction[3] and lack easy traversal over a chain of edges.[4]
The underlying storage mechanism of graph databases can vary. Relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. Some depend on a relational engine and store the graph data in a table (although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices). Others use a key–value store or document-oriented database for storage, making them inherently NoSQL structures.
As of 2021, no graph query language has been universally adopted in the same way as SQL was for relational databases, and there are a wide variety of systems, many of which are tightly tied to one product. Some early standardization efforts led to multi-vendor query languages like Gremlin, SPARQL, and Cypher. In September 2019 a proposal for a project to create a new standard graph query language (ISO/IEC 39075 Information Technology — Database Languages — GQL) was approved by members of ISO/IEC Joint Technical Committee 1 (ISO/IEC JTC 1). GQL is intended to be a declarative database query language, like SQL. In addition to having query language interfaces, some graph databases are accessed through application programming interfaces (APIs).
Graph databases differ from graph compute engines. Graph databases are technologies aimed at transactional workloads, analogous to relational online transaction processing (OLTP) databases. Graph compute engines, on the other hand, are used in online analytical processing (OLAP) for bulk analysis.[5] Graph databases attracted considerable attention in the 2000s, due to the successes of major technology corporations in using proprietary graph databases,[6] along with the introduction of open-source graph databases.
One study concluded that an RDBMS was "comparable" in performance to existing graph analysis engines at executing graph queries.[7]
History
In the mid-1960s, navigational databases such as IBM's IMS supported tree-like structures in its hierarchical model, but the strict tree structure could be circumvented with virtual records.[8][9]
Graph structures could be represented in network model databases from the late 1960s. CODASYL, which had defined COBOL in 1959, defined the Network Database Language in 1969.
Labeled graphs could be represented in graph databases from the mid-1980s, such as the Logical Data Model.[10][11]
Commercial object databases (ODBMSs) emerged in the early 1990s. The Object Data Management Group's ODMG standard, first published as ODMG-93 and finalized as ODMG 3.0 in 2000, defined a language for specifying object and relationship (graph) structures.
Several improvements to graph databases appeared in the early 1990s, accelerating in the late 1990s with endeavors to index web pages.
In the mid-to-late 2000s, commercial graph databases with ACID guarantees such as Neo4j and Oracle Spatial and Graph became available.
In the 2010s, commercial ACID graph databases that could be scaled horizontally became available. Further, SAP HANA brought in-memory and columnar technologies to graph databases.[12] Also in the 2010s, multi-model databases that supported graph models (and other models such as relational or document-oriented databases) became available, such as OrientDB, ArangoDB, and MarkLogic (starting with its 7.0 version). During this time, graph databases of various types became especially popular for social network analysis with the rise of social media companies. Also during the decade, cloud-based graph databases such as Amazon Neptune and Neo4j AuraDB became available.
Background
Graph databases portray the data as it is viewed conceptually. This is accomplished by transferring the data into nodes and its relationships into edges.
A graph database is a database that is based on graph theory. It consists of a set of objects, which can be a node or an edge.
- Nodes represent entities or instances such as people, businesses, accounts, or any other item to be tracked. They are roughly the equivalent of a record, relation, or row in a relational database, or a document in a document-store database.
- Edges, also termed relationships, are the lines that connect nodes to other nodes, representing the relationships between them. Meaningful patterns emerge when examining the connections and interconnections of nodes, properties, and edges. Edges can be either directed or undirected. In an undirected graph, an edge connecting two nodes has a single meaning. In a directed graph, the edges connecting two different nodes have different meanings, depending on their direction. Edges are the key concept in graph databases, representing an abstraction that is not directly implemented in a relational model or a document-store model.
- Properties are information associated with nodes. For example, if Wikipedia were one of the nodes, it might be tied to properties such as website, reference material, or words that start with the letter w, depending on which aspects of Wikipedia are germane to a given database. (A brief Cypher sketch of nodes, edges, and properties follows this list.)
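As a minimal sketch of these three elements, the following Cypher statement creates two nodes with properties and one directed, property-bearing edge between them; the Person label, the KNOWS relationship type, and the property names are hypothetical rather than taken from any particular dataset.

// Two nodes (entities) with a label and key–value properties
CREATE (a:Person {name: 'Alice', city: 'Berlin'})
CREATE (b:Person {name: 'Bob', city: 'Paris'})
// One directed edge (relationship) carrying its own property
CREATE (a)-[:KNOWS {since: 2020}]->(b)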
Graph models
Labeled-property graph
A labeled-property graph model is represented by a set of nodes, relationships, properties, and labels. Both nodes and their relationships are named and can store properties represented by key–value pairs. Nodes can be labelled so they can be grouped. The edges representing the relationships have two qualities: they always have a start node and an end node, and they are directed,[13] making the graph a directed graph. Relationships can also have properties, which is useful for providing additional metadata and semantics about the relationships between nodes.[14] Direct storage of relationships allows constant-time traversal.[15]
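As an illustrative sketch of these qualities (the Employee and Company labels, the WORKS_AT relationship type, and its properties are assumed for the example), the following Cypher creates a labelled start node, a directed relationship carrying its own key–value properties, and a labelled end node, then traverses the graph while filtering on the relationship's metadata.

// Create a directed, typed relationship whose properties add metadata
CREATE (:Employee {name: 'Dana'})-[:WORKS_AT {since: 2019, role: 'Engineer'}]->(:Company {name: 'Acme'});
// Traverse the relationship and filter on its properties as well as the nodes'
MATCH (e:Employee)-[w:WORKS_AT]->(c:Company)
WHERE w.since < 2021
RETURN e.name, c.name, w.role;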
Resource Description Framework (RDF)
In an RDF graph model, each addition of information is represented with a separate node. For example, imagine a scenario where a user has to add a name property for a person represented as a distinct node in the graph. In a labeled-property graph model, this would be done by adding a name property to the person's node. In RDF, however, the user has to add a separate node holding the name value, connected to the original person node by a hasName arc. Specifically, an RDF graph model is composed of nodes and arcs. An RDF graph notation, or statement, is represented by a node for the subject, a node for the object, and an arc for the predicate. A node may be a blank node, a literal, or be identified by a URI; an arc may also be identified by a URI. A literal for a node may be of two types: plain (untyped) and typed. A plain literal has a lexical form and optionally a language tag. A typed literal is made up of a string with a URI that identifies a particular datatype. A blank node may be used to accurately illustrate the state of the data when the data does not have a URI.[16]
Properties
Graph databases are a powerful tool for graph-like queries, for example computing the shortest path between two nodes in the graph. Other graph-like queries can be performed over a graph database in a natural way (for example, computing a graph's diameter or detecting communities).
Graphs are flexible, meaning they allow the user to insert new data into the existing graph without loss of application functionality. There is no need for the designer of the database to plan out extensive details of the database's future use cases.
Storage
The underlying storage mechanism of graph databases can vary. Some depend on a relational engine and "store" the graph data in a table (although a table is a logical element, so this approach imposes another level of abstraction between the graph database, the graph database management system, and the physical devices where the data is actually stored). Others use a key–value store or document-oriented database for storage, making them inherently NoSQL structures. In a document store, a node is represented like any other document, but edges that link two different nodes hold special attributes inside their documents, such as _from and _to attributes.
Index-free adjacency
Data lookup performance depends on the access speed from one particular node to another. Because index-free adjacency requires nodes to hold direct physical pointers (RAM addresses) to their adjacent nodes, retrieval is fast. A native graph system with index-free adjacency does not have to move through any other type of data structure to find links between the nodes. Directly related nodes in a graph are stored in the cache once one of the nodes is retrieved, making the data lookup even faster than the first time a user fetches a node. However, this advantage comes at a cost: index-free adjacency sacrifices the efficiency of queries that do not use graph traversals. Native graph databases use index-free adjacency to process CRUD operations on the stored data.
Applications
Multiple categories of graphs by kind of data have been recognised. Gartner suggests five broad categories of graphs:[17]
- Social graph: this is about the connections between people; examples include Facebook, Twitter, and the idea of six degrees of separation
- Intent graph: this deals with reasoning and motivation.
- Consumption graph: also known as the "payment graph", the consumption graph is heavily used in the retail industry. E-commerce companies such as Amazon, eBay and Walmart use consumption graphs to track the consumption of individual customers.
- Interest graph: this maps a person's interests and is often complemented by a social graph. It has the potential to follow the previous revolution of web organization by mapping the web by interest rather than indexing webpages.
- Mobile graph: this is built from mobile data. Mobile data in the future may include data from the web, applications, digital wallets, GPS, and Internet of Things (IoT) devices.
Comparison with relational databases
Since Edgar F. Codd's 1970 paper on the relational model,[18] relational databases have been the de facto industry standard for large-scale data storage systems. Relational models require a strict schema and data normalization, which separates data into many tables and removes any duplicate data within the database. Data is normalized in order to preserve data consistency and support ACID transactions. However, this imposes limitations on how relationships can be queried.
One of the relational model's design motivations was to achieve fast row-by-row access.[18] Problems arise when there is a need to form complex relationships between the stored data. Although relationships can be analyzed with the relational model, complex queries performing many join operations on many different attributes over several tables are required. In working with relational models, foreign key constraints must also be considered when retrieving relationships, causing additional overhead.
Compared with relational databases, graph databases are often faster for associative data sets[19] and map more directly to the structure of object-oriented applications. They can scale more naturally[20] to large datasets as they do not typically need join operations, which can often be expensive. As they depend less on a rigid schema, they are marketed as more suitable to manage ad hoc and changing data with evolving schemas.
Conversely, relational database management systems are typically faster at performing the same operation on large numbers of data elements, permitting the manipulation of the data in its natural structure. Despite graph databases' advantages and recent popularity over relational databases,[21] it is recommended that the graph model itself not be the sole reason to replace an existing relational database. A graph database may become relevant if there is evidence of a performance improvement by orders of magnitude and lower latency.[22]
Examples
The relational model gathers data together using information in the data. For example, one might look for all the "users" whose phone number contains the area code "311". This would be done by searching selected datastores, or tables, looking in the selected phone number fields for the string "311". This can be a time-consuming process in large tables, so relational databases offer indexes, which allow data to be stored in a smaller sub-table, containing only the selected data and a unique key (or primary key) of the record. If the phone numbers are indexed, the same search would occur in the smaller index table, gathering the keys of matching records, and then looking in the main data table for the records with those keys. Usually, a table is stored in a way that allows a lookup via a key to be very fast.[23]
Relational databases do not inherently contain the idea of fixed relationships between records. Instead, related data is linked by storing one record's unique key in another record's data. For example, a table containing email addresses for users might hold a data item called userpk, which contains the primary key of the user record it is associated with. In order to link users and their email addresses, the system first looks up the selected user records' primary keys, looks for those keys in the userpk column in the email table (or, more likely, an index of them), extracts the email data, and then links the user and email records to make composite records containing all the selected data. This operation, termed a join, can be computationally expensive. Depending on the complexity of the query, the number of joins, and the indexing of the various keys, the system may have to search through multiple tables and indexes and then sort it all to match it together.[23]
In contrast, graph databases directly store the relationships between records. Instead of an email address being found by looking up its user's key in the userpk column, the user record contains a pointer that directly refers to the email address record. That is, having selected a user, the pointer can be followed directly to the email records; there is no need to search the email table to find the matching records. This can eliminate the costly join operations. For example, if one searches for all of the email addresses for users in area code "311", the engine would first perform a conventional search to find the users in "311", but then retrieve the email addresses by following the links found in those records. A relational database would first find all the users in "311", extract a list of the primary keys, perform another search for any records in the email table with those primary keys, and link the matching records together. For these types of common operations, graph databases would theoretically be faster.[23]
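A sketch of this lookup as a Cypher query, assuming hypothetical User and Email node labels, a HAS_EMAIL relationship type, and an areaCode property on users:

// Find users in area code 311, then follow their HAS_EMAIL links directly
MATCH (u:User {areaCode: '311'})-[:HAS_EMAIL]->(e:Email)
RETURN u.name, e.address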
The true value of the graph approach becomes evident when one performs searches that are more than one level deep. For example, consider a search for users who have "subscribers" (a table linking users to other users) in the "311" area code. In this case a relational database has to first search for all the users with an area code in "311", then search the subscribers table for any of those users, and then finally search the users table to retrieve the matching users. In contrast, a graph database would search for all the users in "311", then follow the backlinks through the subscriber relationship to find the subscriber users. This avoids several searches, look-ups, and the memory usage involved in holding all of the temporary data from multiple records needed to construct the output. In terms of big O notation, this query would be O(log n) time – i.e., proportional to the logarithm of the size of the data. In contrast, the relational version would require multiple O(log n) lookups, plus the time needed to join all of the data records.[23]
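Under the same assumed schema, with a hypothetical SUBSCRIBED_TO relationship standing in for the subscribers table, the deeper search becomes a single two-node pattern rather than a chain of table lookups:

// Users in area code 311, followed one hop along the subscriber relationship
MATCH (s:User {areaCode: '311'})-[:SUBSCRIBED_TO]->(u:User)
RETURN DISTINCT u.name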
The relative advantage of graph retrieval grows with the complexity of a query. For example, one might want to know "that movie about submarines with the actor who was in that movie with that other actor that played the lead in Gone With the Wind". This first requires the system to find the actors in Gone With the Wind, find all the movies they were in, find all the actors in all of those movies who were not the lead in Gone With the Wind, and then find all of the movies they were in, finally filtering that list to those with descriptions containing "submarine". In a relational database, this would require several separate searches through the movies and actors tables, doing another search on submarine movies, finding all the actors in those movies, and then comparing the (large) collected results. In contrast, the graph database would walk from Gone With the Wind to Clark Gable, gather the links to the movies he has been in, gather the links out of those movies to other actors, and then follow the links out of those actors back to the list of movies. The resulting list of movies can then be searched for "submarine". All of this can be done via one search.[24]
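A sketch of that walk in Cypher, with assumed Movie and Person labels, an ACTED_IN relationship type, and title/description properties (real film datasets will differ in naming):

// Walk out from Gone with the Wind to its cast, to their other films,
// to those films' casts, and finally to those actors' movies about submarines
MATCH (gwtw:Movie {title: 'Gone with the Wind'})<-[:ACTED_IN]-(a1:Person)-[:ACTED_IN]->
      (m1:Movie)<-[:ACTED_IN]-(a2:Person)-[:ACTED_IN]->(m2:Movie)
WHERE m2.description CONTAINS 'submarine'
RETURN DISTINCT m2.title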
Properties add another layer of abstraction to this structure that also improves many common queries. Properties are essentially labels that can be applied to any record, or in some cases, edges as well. For example, one might label Clark Gable as "actor", which would then allow the system to quickly find all the records that are actors, as opposed to director or camera operator. If labels on edges are allowed, one could also label the relationship between Gone With the Wind and Clark Gable as "lead", and by performing a search on people that are "lead" "actor" in the movie Gone With the Wind, the database would produce Vivien Leigh, Olivia de Havilland and Clark Gable. The equivalent SQL query would have to rely on added data in the table linking people and movies, adding more complexity to the query syntax. These sorts of labels may improve search performance under certain circumstances, but are generally more useful in providing added semantic data for end users.[24]
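If the relationship carries such a label (modelled here, by assumption, as a role property on an ACTED_IN relationship), the lead actors can be matched directly:

// The relationship property narrows the match to lead roles only
MATCH (p:Person)-[:ACTED_IN {role: 'lead'}]->(m:Movie {title: 'Gone with the Wind'})
RETURN p.name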
Relational databases are very well suited to flat data layouts, where relationships between data are only one or two levels deep. For example, an accounting database might need to look up all the line items for all the invoices for a given customer, a three-join query. Graph databases are aimed at datasets that contain many more links. They are especially well suited to social networking systems, where the "friends" relationship is essentially unbounded. These properties make graph databases naturally suited to types of searches that are increasingly common in online systems, and in big data environments. For this reason, graph databases are becoming very popular for large online systems like Facebook, Google, Twitter, and similar systems with deep links between records.
To further illustrate, imagine a relational model with two tables: a people table (which has a person_id and person_name column) and a friend table (with friend_id and person_id, which is a foreign key from the people table). In this case, searching for all of Jack's friends would result in the following SQL query.
SELECT p2.person_name
FROM people p1
JOIN friend ON (p1.person_id = friend.person_id)
JOIN people p2 ON (p2.person_id = friend.friend_id)
WHERE p1.person_name = 'Jack';
The same query may be translated into the following:
- Cypher, a graph database query language
MATCH (p1:person {name: 'Jack'})-[:FRIEND_WITH]-(p2:person) RETURN p2.name
- SPARQL, an RDF graph database query language standardized by W3C and used in multiple RDF Triple and Quad stores
- Long form
PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?name WHERE { ?s a foaf:Person . ?s foaf:name "Jack" . ?s foaf:knows ?o . ?o foaf:name ?name . }
- Short form
PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?name WHERE { ?s foaf:name "Jack" ; foaf:knows ?o . ?o foaf:name ?name . }
- SPASQL, a hybrid database query language that extends SQL with SPARQL
SELECT people.name FROM ( SPARQL PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?name WHERE { ?s foaf:name "Jack" ; foaf:knows ?o . ?o foaf:name ?name . } ) AS people ;
The above examples are a simple illustration of a basic relationship query. They illustrate how the relational model's query complexity increases with the total amount of data. In comparison, a graph database query can traverse the relationship graph directly to produce the results.
There are also results indicating that simple, condensed, and declarative queries of graph databases do not necessarily provide good performance in comparison with relational databases. While graph databases offer an intuitive representation of data, relational databases offer better results when set operations are needed.[15]
List of graph databases
The following is a list of notable graph databases:
| name | current version | latest release date (YYYY-MM-DD) | software license | programming language | description |
|---|---|---|---|---|---|
| Aerospike | 7.0 | 2024-05-15 | Proprietary | C | Aerospike Graph is a scalable, low-latency property graph database built on the Aerospike real-time data platform. It combines the Aerospike Database with the property graph data model via the Apache TinkerPop graph compute engine and provides native support for the Gremlin query language. |
| AgensGraph[25] | 2.14.1 | 2025-01[26] | Apache 2 Community version, proprietary Enterprise Edition | C | AgensGraph is a multi-model graph database for complex data environments. By supporting both relational and graph data models simultaneously, it allows developers to integrate legacy relational data with the flexible graph data model within a single database. AgensGraph is built on the PostgreSQL RDBMS. |
| AllegroGraph | 7.0.0 | 2022-12-20 | Proprietary, clients: Eclipse Public License v1 | C#, C, Common Lisp, Java, Python | Resource Description Framework (RDF) and graph database. |
| Amazon Neptune | 1.4.0.0 | 2024-11-06[27] | Proprietary | Not disclosed | Amazon Neptune is a fully managed graph database by Amazon.com. It is used as a web service and is part of Amazon Web Services. It supports the popular property graph and W3C RDF graph models, and their respective query languages Apache TinkerPop Gremlin, SPARQL, and openCypher. |
| Altair Graph Studio | 2.1 | 2020-02 | Proprietary | C, C++ | AnzoGraph DB is a massively parallel native Graph Online Analytics Processing (GOLAP) style database built to support SPARQL and Cypher Query Language to analyze trillions of relationships. AnzoGraph DB is designed for interactive analysis of large sets of semantic triple data, but also supports labeled properties under proposed W3C standards.[28][29][30][31] |
| ArangoDB | 3.12.4.2 | 2025-04-09 | Free Apache 2, Proprietary | C++, JavaScript, .NET, Java, Python, Node.js, PHP, Scala, Go, Ruby, Elixir | NoSQL native graph database system developed by ArangoDB Inc, supporting multiple data models (key/value, document, graph, and vector), with one database core and a unified query language called AQL (ArangoDB Query Language). Provides scalability and high availability via datacenter-to-datacenter replication, auto-sharding, automatic failover, and other capabilities. |
| Azure Cosmos DB | | 2017 | Proprietary | Not disclosed | Multi-model database which supports graph concepts using the Apache Gremlin query language |
| DataStax Enterprise Graph | v6.0.1 | 2018-06 | Proprietary | Java | Distributed, real-time, scalable database; supports TinkerPop, and integrates with Cassandra[32] |
| GUN (Graph Universe Node) | 0.2020.1240 | 2024 | Open source, MIT License, Apache 2.0, zlib License | JavaScript | An open source, offline-first, real-time, decentralized graph database written in JavaScript for the web browser.[33][34] It is implemented as a peer-to-peer network featuring multi-master replication with a custom commutative replicated data type (CRDT). |
| InfiniteGraph | 2021.2 | 2021-05 | Proprietary, commercial, free 50GB version | Java, C++, 'DO' query language | A distributed, cloud-enabled and massively scalable graph database for complex, real-time queries and operations. Its Vertex and Edge objects have unique 64-bit object identifiers that considerably speed up graph navigation and pathfinding operations. It supports batch or streaming updates to the graph alongside concurrent, parallel queries. InfiniteGraph's 'DO' query language enables both value-based queries and complex graph queries. InfiniteGraph goes beyond graph databases to also support complex object queries. |
| JanusGraph | 1.1.0 | 2024-11-07[35] | Apache 2 | Java | Open source, scalable, distributed across a multi-machine cluster graph database under The Linux Foundation; supports various storage backends (Apache Cassandra, Apache HBase, Google Cloud Bigtable, Oracle Berkeley DB);[36] supports global graph data analytics, reporting, and extract, transform, load (ETL) through integration with big data platforms (Apache Spark, Apache Giraph, Apache Hadoop); supports geo, numeric range, and full-text search via external index storages (Elasticsearch, Apache Solr, Apache Lucene).[37] |
| MarkLogic | 8.0.4 | 2015 | Proprietary, freeware developer version | Java | Multi-model NoSQL database that stores documents (JSON and XML) and semantic graph data (RDF triples); also has a built-in search engine. |
| Microsoft SQL Server 2017 | RC1 | | Proprietary | SQL/T-SQL, R, Python | Offers graph database abilities to model many-to-many relationships. The graph relationships are integrated into Transact-SQL, and use SQL Server as the foundational database management system.[38] |
| NebulaGraph | 3.8.0 | 2024-05 | Open Source Edition is under Apache 2.0, Common Clause 1.0 | C++, Go, Java, Python | A scalable open-source distributed graph database for storing and handling billions of vertices and trillions of edges with milliseconds of latency. It is designed based on a shared-nothing distributed architecture for linear scalability.[39] |
| Neo4j | 2025.10.1 | 2025-10-30[40] | GPLv3 Community Edition, commercial and AGPLv3 options for enterprise and advanced editions | Java, .NET, JavaScript, Python, Go, Ruby, PHP, R, Erlang/Elixir, C/C++, Clojure, Perl, Haskell | Open-source, supports ACID, has high-availability clustering for enterprise deployments, and comes with a web-based administration that includes full transaction support and visual node-link graph explorer; accessible from most programming languages using its built-in REST web API interface, and a proprietary Bolt protocol with official drivers. |
| Ontotext GraphDB | 10.7.6 | 2024-10-15[41] | Proprietary, Standard and Enterprise Editions are commercial, Free Edition is freeware | Java | Highly efficient and robust semantic graph database with RDF and SPARQL support, also available as a high-availability cluster. Integrates OpenRefine for ingestion and reconciliation of tabular data and ontop for Ontology-Based Data Access. Connects to Lucene, SOLR and Elasticsearch for Full text and Faceted search, and Kafka for event and stream processing. Supports OGC GeoSPARQL. Provides JDBC access to Knowledge Graphs.[42] |
| OpenLink Virtuoso | 8.2 | 2018-10 | Open Source Edition is GPLv2, Enterprise Edition is proprietary | C, C++ | Multi-model (Hybrid) relational database management system (RDBMS) that supports both SQL and SPARQL for declarative (Data Definition and Data Manipulation) operations on data modelled as SQL tables and/or RDF Graphs. Also supports indexing of RDF-Turtle, RDF-N-Triples, RDF-XML, JSON-LD, and mapping and generation of relations (SQL tables or RDF graphs) from numerous document types including CSV, XML, and JSON. May be deployed as a local or embedded instance (as used in the NEPOMUK Semantic Desktop), a one-instance network server, or a shared-nothing elastic-cluster multiple-instance networked server[43] |
| Oracle RDF Graph; part of Oracle Database | 21c | 2020 | Proprietary | SPARQL, SQL | RDF graph capabilities as features of the multi-model Oracle Database: comprehensive W3C RDF graph management with native reasoning and triple-level label security. ACID, high availability, enterprise scale. Includes visualization, RDF4J support, and a native SPARQL endpoint. |
| Oracle Property Graph; part of Oracle Database | 21c | 2020 | Proprietary; Open Source language specification | PGQL, Java, Python | Property graph consisting of a set of objects (vertices) and a set of arrows (edges) connecting the objects. Vertices and edges can have multiple properties, represented as key–value pairs. Includes PGQL, an SQL-like graph query language, and an in-memory analytic engine (PGX) with nearly 60 prebuilt parallel graph algorithms. Includes REST APIs and graph visualization. |
| OrientDB | 3.2.28 | 2024-02 | Community Edition is Apache 2, Enterprise Edition is commercial | Java | Second-generation[44] distributed graph database with the flexibility of documents in one product (i.e., it is both a graph database and a document NoSQL database); licensed under the open-source Apache 2 license; has full ACID support and multi-master replication; supports schema-less, schema-full, and schema-mixed modes; has security profiling based on users and roles; supports an SQL-like query language; and has an HTTP REST and JSON API. |
| RedisGraph | 2.0.20 | 2020-09 | Redis Source Available License | C | In-memory, queryable Property Graph database which uses sparse matrices to represent the adjacency matrix in graphs and linear algebra to query the graph.[45] |
| SAP HANA | 2.0 SPS 05 | 2020-06[46] | Proprietary | C, C++, Java, JavaScript and SQL-like language | In-memory property graph with ACID transaction support[47] |
| Sparksee | 5.2.0 | 2015 | Proprietary, commercial, freeware for evaluation, research, development | C++ | High-performance scalable database management system from Sparsity Technologies; main trait is its query performance for retrieving and exploring large networks; has bindings for Java, C++, C#, Python, and Objective-C; version 5 is the first mobile graph database. |
| Teradata Aster | 7 | 2016 | Proprietary | Java, SQL, Python, C++, R | Massive parallel processing (MPP) database incorporating patented engines supporting native SQL, MapReduce, and graph data storage and manipulation; provides a set of analytic function libraries and data visualization[48] |
| TerminusDB | 11.0.6 | 2023-05-03[49] | Apache 2 | Prolog, Rust, Python, JSON-LD | Document-oriented knowledge graph database that combines knowledge-graph features with a document interface. |
| TigerGraph | 4.1.2 | 2024-12-20[50] | Proprietary | C++ | Massive parallel processing (MPP) native graph database management system[51] |
| TypeDB | 2.14.0 | 2022-11[52] | Free, GNU AGPLv3, Proprietary | Java, Python, JavaScript | TypeDB is a strongly-typed database with a rich, logical type system, queried with TypeQL. It models domains using entity, relationship, and attribute types, together with type hierarchies, roles, and rules, rather than join tables, columns, documents, vertices, edges, and properties. |
| Tarantool Graph DB | 1.2.0 | 2024-01-01[53] | Proprietary | Lua, C | Tarantool Graph DB is a graph-vector database for analyzing data connections in real time using high-speed graph and vector storage. |
Graph query-programming languages
- AQL (ArangoDB Query Language): a SQL-like query language used in ArangoDB for both documents and graphs
- Cypher Query Language (Cypher): a declarative graph query language for Neo4j that enables ad hoc and programmatic (SQL-like) access to the graph.[54]
- GQL: proposed ISO standard graph query language
- GraphQL: an open-source data query and manipulation language for APIs. Dgraph implements a modified GraphQL language called DQL (formerly GraphQL+-)
- Gremlin: a graph programming language that is a part of Apache TinkerPop open-source project[55]
- SPARQL: a query language for RDF databases that can retrieve and manipulate data stored in RDF format
- Regular path queries: a theoretical query language for graph databases
See also
- Graph transformation – Creating a new graph from an existing graph
- Hierarchical database model – Tree-like structure for data
- Datalog – Declarative logic programming language
- Vadalog – Type of Knowledge Graph Management System
- Object database – Database presenting data as objects
- RDF Database – Database for storage and retrieval of triples
- Structured storage – Database class for storage and retrieval of modeled data
- Text graph
- Vector database – Type of database that uses vectors to represent other data
- Wikidata – Free knowledge database project — Wikidata is a Wikipedia sister project that stores data in a graph database. Ordinary web browsing allows for viewing nodes, following edges, and running SPARQL queries.
References
- ^ Bourbakis, Nikolaos G. (1998). Artificial Intelligence and Automation. World Scientific. p. 381. ISBN 9789810226374. Retrieved 2018-04-20.
- ^ Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young (March 2017). "Use of Graph Database for the Integration of Heterogeneous Biological Data". Genomics & Informatics. 15 (1): 19–27. doi:10.5808/GI.2017.15.1.19. ISSN 1598-866X. PMC 5389944. PMID 28416946.
- ^ Angles, Renzo; Gutierrez, Claudio (1 Feb 2008). "Survey of graph database models" (PDF). ACM Computing Surveys. 40 (1): 1–39. CiteSeerX 10.1.1.110.1072. doi:10.1145/1322432.1322433. S2CID 207166126. Archived from the original (PDF) on 15 August 2017. Retrieved 28 May 2016.
network models [...] lack a good abstraction level: it is difficult to separate the db-model from the actual implementation
- ^ Silberschatz, Avi (28 January 2010). Database System Concepts, Sixth Edition (PDF). McGraw-Hill. p. D-29. ISBN 978-0-07-352332-3.
- ^ Robinson, Ian (2015-06-10). Graph Databases: New Opportunities for Connected Data. O'Reilly Media, Inc. p. 4. ISBN 9781491930861.
- ^ "Graph Databases Burst into the Mainstream". www.kdnuggets.com. Retrieved 2018-10-23.
- ^ Fan, Jing; Gerald, Adalbert (2014-12-25). The case against specialized graph analytics engines (PDF). Conference on Innovative Data Systems Research (CIDR).
- ^ Silberschatz, Avi (28 January 2010). Database System Concepts, Sixth Edition (PDF). McGraw-Hill. p. E-20. ISBN 978-0-07-352332-3.
- ^ Parker, Lorraine. "IMS Notes". vcu.edu. Retrieved 31 May 2016.
- ^ Angles, Renzo; Gutierrez, Claudio (1 Feb 2008). "Survey of graph database models" (PDF). ACM Computing Surveys. 40 (1): 1–39. CiteSeerX 10.1.1.110.1072. doi:10.1145/1322432.1322433. S2CID 207166126. Archived from the original (PDF) on 15 August 2017. Retrieved 28 May 2016.
network models [...] lack a good abstraction level: it is difficult to separate the db-model from the actual implementation
- ^ Kuper, Gabriel M. (1985). The Logical Data Model: A New Approach to Database Logic (PDF) (Ph.D.). Docket STAN-CS-85-1069. Archived (PDF) from the original on June 30, 2016. Retrieved 31 May 2016.
- ^ "SAP Announces New Capabilities in the Cloud with HANA". 2014-10-22. Retrieved 2016-07-07.
- ^ Frisendal, Thomas (2017-09-22). "Property Graphs". graphdatamodeling.com. Retrieved 2018-10-23.
- ^ Das, S; Srinivasan, J; Perry, Matthew; Chong, Eugene; Banerjee, Jay (2014-03-24). "A Tale of Two Graphs: Property Graphs as RDF in Oracle".
- ^ a b Have, Christian Theil; Jensen, Lars Juhl (2013-10-17). "Are graph databases ready for bioinformatics?". Bioinformatics. 29 (24): 3107–3108. doi:10.1093/bioinformatics/btt549. ISSN 1460-2059. PMC 3842757. PMID 24135261.
- ^ "Resource Description Framework (RDF): Concepts and Abstract Syntax". www.w3.org. Retrieved 2018-10-24.
- ^ "The Competitive Dynamics of the Consumer Web: Five Graphs Deliver a Sustainable Advantage". www.gartner.com. Retrieved 2018-10-23.
- ^ a b Codd, E. F. (1970-06-01). "A relational model of data for large shared data banks". Communications of the ACM. 13 (6): 377–387. doi:10.1145/362384.362685. ISSN 0001-0782. S2CID 207549016.
- ^ Kallistrate, N. (2022-06-15). "Transition from relational to graph database". Neo4j Docs. Retrieved 2025-08-13.
- ^ Averbuch, A. (2013-01-22). "Partitioning Graph Databases – A Quantitative Evaluation". arXiv:1301.5121 [cs.DB].
- ^ Dravenloch, E. M. (2019-03-16). "Graph Database vs Relational Database: Which Is Best for Your Needs?". InterSystems. Retrieved 2025-08-13.
- ^ "Graph Databases, 2nd Edition". O’Reilly | Safari. Retrieved 2018-10-23.
- ^ a b c d "From Relational to Graph Databases". Neo4j.
- ^ a b "Examples where Graph databases shine: Neo4j edition", ZeroTurnaround
- ^ "AgensGraph". bitnine.net. Retrieved 2025-02-19.
- ^ "Release AgensGraph v2.14.1 · bitnine-oss/agensgraph". github.com. SKAI Worldwide. 2025-01-16. Retrieved 2025-02-17.
- ^ "Amazon Neptune Engine version 1.4.0.0 (2024-11-06)". Docs.AWS.Amazon.com. Amazon Web Services. Retrieved 9 November 2024.
- ^ "In-memory massively parallel distributed graph database purpose-built for analytics". CambridgeSemantics.com. Retrieved 2018-02-20.
- ^ Rueter, John (15 February 2018). "Cambridge Semantics announces AnzoGraph graph-based analytics support for Amazon Neptune and graph databases". BusinessWire.com. Retrieved 20 February 2018.
- ^ Zane, Barry (2 November 2016). "Semantic graph databases: a worthy successor to relational databases". DBTA.com. Database Trends and Applications. Retrieved 20 February 2018.
- ^ "Cambridge Semantics announces AnzoGraph support for Amazon Neptune and graph databases". DBTA.com. Database Trends and Applications. 2018-02-15. Retrieved 2018-03-08.
- ^ Woodie, Alex (21 June 2016). "Beyond Titan: the evolution of DataStax's new graph database". Datanami.com. Retrieved 9 May 2017.
- ^ Fireship (2021-06-07). "GUN Decentralized Graph DB in 100 Seconds". YouTube. Retrieved 2024-08-02.
- ^ Smith, Noah (2019-07-21). "These technologists think the internet is broken. So they're building another one". NBC News.
- ^ "Release 1.1.0 · JanusGraph/Janusgraph". GitHub. 7 November 2024.
- ^ "JanusGraph storage backends". docs.JanusGraph.org. Archived from the original on 2018-10-02. Retrieved 2018-10-01.
- ^ "JanusGraph index storages". docs.JanusGraph.org. Archived from the original on 2018-10-02. Retrieved 2018-10-01.
- ^ "What's new in SQL Server 2017". Docs.Microsoft.com. Microsoft Corp. 19 April 2017. Retrieved 9 May 2017.
- ^ "Nebula Graph debuts for big data analytics discovery". Datanami.com. 29 June 2020. Retrieved 2 December 2020.
- ^ "Release Notes: Neo4j 2025.10.1". Neo4j.com. Neo4j Graph Database Platform. Retrieved 2025-10-30.
- ^ "Release Notes". Ontotext GraphDB. 9 November 2024. Retrieved 9 November 2024.
- ^ Sa, Wang (2025-02-07). "What is Knowledge Graph? A Comprehensive Guide". PuppyGraph. Retrieved 2025-08-13.
- ^ "Clustering deployment architecture diagrams for Virtuoso". Virtuoso.OpenLinkSW.com. OpenLink Software. Retrieved 9 May 2017.
- ^ Vorontsev (2021-11-04). "OrientDB Official Documentation". OrientDB Docs. Retrieved 2025-08-13.
- ^ Ewbank, Key. "RedisGraph reaches general availability". I-Programmer.info.
- ^ "What's new in SAP HANA 2.0 SPS 05". blogs.SAP.com. 2020-06-26. Retrieved 2020-06-26.
- ^ Rudolf, Michael; Paradies, Marcus; Bornhövd, Christof; Lehner, Wolfgang. The graph story of the SAP HANA database (PDF). Lecture Notes in Informatics.
- ^ Woodie, Alex (23 October 2015). "The art of analytics, or what the green-haired people can teach us". Datanami.com. Retrieved 9 May 2017.
- ^ "GitHub Releases". GitHub. Retrieved 2023-07-03.
- ^ "Release notes : TigerGraph : Docs". Docs.TigerGraph.com. TigerGraph. Retrieved 4 July 2024.
- ^ "The Forrester Wave™: graph data platforms, Q4 2020". AWS.Amazon.com. Amazon Web Services. 16 November 2020. Retrieved 16 November 2020.
- ^ "Release TypeDB 2.14.0 · vaticle/typedb". GitHub. Retrieved 2022-11-25.
- ^ "Key releases of Tarantool flagship products from 01.01.2024". tarantool.io (in Russian). Retrieved 2025-01-16.
- ^ Svensson, Johan (5 July 2016). "Guest View: Relational vs. graph databases: Which to use and when?". SD Times. BZ Media. Retrieved 30 August 2016.
- ^ TinkerPop, Apache. "Apache TinkerPop". Apache TinkerPop. Retrieved 2016-11-02.
External links
[edit]- "Graph Data Modeling: All You Need To Know". PuppyGraph. Retrieved 2025-08-21.
Fundamentals
Definition and Overview
A graph database is a database management system designed for storing, managing, and querying data using graph structures, where entities are represented as nodes and relationships as edges connecting nodes, with attributes modeled as properties, which may be attached to nodes and, in some models like property graphs, to edges as well.[12][13] This approach models data as a network of interconnected elements, prioritizing the explicit representation of relationships over hierarchical or tabular arrangements. The terminology derives from graph theory, with nodes denoting discrete entities such as people, products, or concepts, edges indicating directed or undirected connections like "friend of" or "purchased," and properties providing key-value pairs for additional descriptive data on nodes or edges.[14]

Graph databases serve the core purpose of efficiently managing complex, interconnected datasets where relationships are as critical as the entities themselves, enabling rapid traversals and analytical queries on networks of data.[15] They are particularly suited for semi-structured data with variable connections, distinguishing them from relational databases that use tables, rows, and foreign key joins to indirectly model relationships, often leading to performance overhead in highly linked scenarios.[16] In contrast to hierarchical models, graph databases natively support flexible, many-to-many associations without predefined schemas, accommodating evolving data structures inherent in real-world networks.[17]

High-level advantages of graph databases include superior query performance for connected data, as edge traversals occur in constant time without the computational cost of multi-table joins common in relational systems.[18] This efficiency scales well for applications involving deep relationship chains, such as social networks or recommendation engines. Furthermore, their schema-optional nature allows for agile data modeling, where new properties or relationships can be added dynamically without extensive refactoring.[14]

Key Concepts
Graph databases rely on foundational concepts from graph theory to model and query interconnected data. A graph in this context is a mathematical structure comprising a set of vertices, also known as nodes, and a set of edges connecting pairs of vertices. Graphs can be undirected, where edges represent symmetric relationships without inherent direction, or directed, where edges, often termed arcs, indicate a specific orientation from one vertex to another.[19][20]

Central to graph theory are notions of paths, cycles, and connectivity, which underpin efficient data traversal in graph databases. A path is a sequence of distinct edges linking two vertices, enabling the representation of step-by-step relationships. A cycle occurs when a path returns to its starting vertex, potentially indicating loops or redundancies in data connections. Connectivity measures how well vertices are linked; in undirected graphs, a graph is connected if there is a path between every pair of vertices, while in directed graphs, strong connectivity requires paths in both directions between any pair. These elements allow graph databases to handle complex, relational queries more intuitively than tabular structures.[21][22]

The core components of a graph database are nodes and edges, which directly map to graph theory's vertices and arcs. Nodes represent entities, such as people, products, or locations, serving as the primary data points. Edges capture relationships between nodes, incorporating directionality to denote flow or hierarchy (e.g., "follows" in a directed social graph) and labels to categorize the relationship type (e.g., "friend" or "purchased"). Nodes typically support properties as key-value pairs; edges may also support properties in certain models, such as property graphs, enabling rich, contextual data without rigid structures.[23][24][25]

These components facilitate modeling real-world scenarios with inherent interconnections, such as social networks, where individual users are nodes and friendships are undirected edges linking them, allowing queries to explore degrees of separation or influence propagation efficiently. In recommendation systems, products form nodes connected by "similar_to" edges with properties like similarity scores, capturing collaborative filtering patterns.[26][27]

Graph databases feature schema-optional designs, often described as schema-free or schema-flexible, which permit the dynamic addition of nodes, edges, and properties during runtime without requiring upfront schema definitions. This contrasts with relational models and supports evolving data requirements, such as adding new relationship types in a growing knowledge base.[28][29][30]

To ensure data integrity amid concurrent operations, many graph databases implement ACID properties—atomicity, consistency, isolation, and durability—tailored to graph-specific actions like multi-hop traversals and relationship updates, while others may use eventual consistency models for better scalability in distributed environments. Atomicity guarantees that complex graph modifications, such as creating interconnected nodes and edges, succeed entirely or not at all. Consistency preserves graph invariants, like edge directionality, across transactions. Isolation prevents interference during parallel queries, while durability ensures committed changes persist, often via native storage optimized for relational patterns.[31][32][28][33]

Historical Development
Origins and Early Innovations
The conceptual foundations of graph databases trace back to the origins of graph theory in the 18th century, with Leonhard Euler's seminal work on the Seven Bridges of Königsberg problem in 1736. Euler formalized the problem as a network of landmasses (vertices) connected by bridges (edges), proving that no Eulerian path existed to traverse each bridge exactly once and return to the starting point, thereby establishing key ideas in connectivity and traversal that underpin modern graph structures.[34] This mathematical abstraction laid the groundwork for representing relationships as graphs, influencing later developments in database design.

In the 20th century, mathematicians like Dénes Kőnig advanced graph theory through his 1936 treatise Theorie der endlichen und unendlichen Graphen, which systematized concepts such as matchings and bipartite graphs, providing tools for modeling complex interconnections essential to data relationships.[35] Similarly, Øystein Ore contributed foundational results in the 1950s and 1960s, including Ore's theorem on Hamiltonian paths, which explored conditions for traversable graphs and highlighted the challenges of navigating intricate networks.[36]

Early database systems in the 1960s and 1970s drew on these graph-theoretic principles to address the limitations of emerging relational models, which struggled with efficiently representing and querying many-to-many relationships without excessive joins. Navigational databases, exemplified by the CODASYL Data Base Task Group specifications from the late 1960s, used pointer-based structures to traverse data sets as linked networks, allowing direct navigation along relationships akin to graph edges.[37] A pioneering implementation was Charles Bachman's Integrated Data Store (IDS), developed in the early 1960s at General Electric as the first direct-access database management system; IDS employed record types connected by physical pointers, enabling graph-like querying for integrated business data across departments.[38] These systems addressed relational models' rigidity by prioritizing relationship traversal over tabular storage, though they required manual navigation and lacked declarative querying. Concurrently, Peter Chen's 1976 entity-relationship (ER) model formalized entities and their associations using diagrams that mirrored graph structures, providing a semantic foundation for database design that emphasized relationships over strict hierarchies.[39]

In the 1990s, precursors to the semantic web further propelled graph-based data representation, building on knowledge representation efforts to encode interconnected information for machine readability. Early work on ontologies and semantic networks, such as those explored in AI projects like Cyc, highlighted the need for flexible, relationship-centric models to capture domain knowledge beyond flat structures.[40] This culminated in the conceptualization of the Resource Description Framework (RDF) as a W3C recommendation in 1999, which defined a graph model using triples (subject-predicate-object) to represent resources and their interconnections on the web, addressing relational databases' shortcomings in handling distributed, schema-flexible relationships.[41] These innovations collectively tackled the pre-NoSQL era's challenges, where relational systems' join-heavy operations proved inefficient for deeply interconnected data, paving the way for graph-oriented persistence and querying.[38]

Evolution and Milestones
The rise of the NoSQL movement in the early 2000s was driven by the need to handle web-scale data volumes and complex relationships that relational databases struggled with, paving the way for graph databases as a key NoSQL category.[42] Neo4j, the first prominent property graph database, emerged from a project initiated in 1999 and saw its company, Neo Technology, founded in 2007, with the initial public release of Neo4j 1.0 that same year, marking a commercial breakthrough for graph storage and traversal.[43]

Parallel to these developments, the semantic web initiative advanced graph technologies through standardized RDF models, with the W3C publishing the RDF 1.0 specification in 2004 to enable linked data representation as directed graphs.[44] This was complemented by the release of the SPARQL query language as a W3C recommendation in January 2008, providing a declarative standard for querying RDF graphs across distributed sources.[45]

Key milestones in graph computing frameworks followed, including the launch of Apache TinkerPop in 2009, which introduced Gremlin as a graph traversal language and established a vendor-neutral stack for property graph processing.[46] The post-2010 period saw an explosion in big data integrations, exemplified by Apache Giraph's initial development in 2011 at Facebook as an open-source implementation of the Pregel model for scalable graph analytics on Hadoop.[47]

In recent years, graph databases have increasingly integrated with AI and machine learning, particularly through graph neural networks (GNNs) in the 2020s, which leverage graph structures for tasks like node classification and link prediction by propagating embeddings across connected data.[48] This evolution includes hybrid graph-vector databases that combine relational graph queries with vector embeddings for semantic search and recommendation systems, enhancing AI-driven applications such as knowledge graph reasoning.[49] Cloud-native solutions have further boosted scalability, with Amazon Neptune launching in general availability on May 30, 2018, as a managed service supporting both property graphs and RDF.[50] Standardization efforts culminated in the approval of the GQL project by ISO/IEC JTC 1 in 2019, leading to the publication of the ISO/IEC 39075 standard in April 2024 for property graph querying, which promotes portability across implementations.[51]

Graph Data Models
Property Graph Model
The labeled property graph (LPG) model, also known as the property graph model, is a flexible data structure for representing and querying interconnected data in graph databases. It consists of nodes representing entities, directed edges representing relationships between entities, and associated labels and properties for both nodes and edges. Formally, an LPG is defined as a directed labeled multigraph where each node and edge can carry a set of key-value pairs called properties, and labels categorize nodes and edge types to facilitate grouping and traversal.[52] This model was formally standardized in ISO/IEC 39075 (published April 2024), which specifies the property graph data structures and the Graph Query Language (GQL).[53]

Nodes in an LPG denote discrete entities such as people, products, or locations, each optionally assigned one or more labels (e.g., "Person" or "Employee") and a map of properties (e.g., {name: "Alice", age: 30}). Edges are directed connections between nodes, each with a type label (e.g., "KNOWS" or "OWNS") indicating the relationship semantics and their own properties (e.g., {since: 2020}). This structure supports multiple edges between the same pair of nodes, allowing representation of complex, multi-faceted relationships. The model enables efficient traversals for complex queries, such as pathfinding or pattern matching, by leveraging labels for indexing and filtering without requiring a rigid schema.[52][54]

A simple example illustrates the LPG structure in a JSON-like serialization: a node might be represented as {id: 1, labels: ["Person"], properties: {name: "Alice", born: 1990}}, connected via an edge {id: 101, type: "KNOWS", from: 1, to: 2, properties: {strength: "high"}} to another node {id: 2, labels: ["Person"], properties: {name: "Bob", born: 1985}}. This format captures entity attributes and relational details in a human-readable way, suitable for storage and exchange.[54][55]
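A short Cypher query over the nodes and edge serialized above (names follow the example rather than any particular dataset) would traverse the KNOWS relationship and read its property:

// Pattern match on the Person labels and the KNOWS edge from the example
MATCH (a:Person {name: 'Alice'})-[k:KNOWS]->(b:Person)
RETURN b.name, k.strength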
Key features of the LPG include its schema-optional nature, which allows dynamic addition of labels and properties without predefined constraints, promoting agility in evolving datasets. Label-based indexing enhances query performance by enabling rapid lookups on node types or edge directions, supporting operations like neighborhood exploration. These attributes make the model particularly intuitive for object-oriented modeling, where entities and relationships mirror real-world domains like social networks or recommendation systems.[56][52]
The LPG excels in online transaction processing (OLTP) workloads due to its native support for local traversals and updates on interconnected data, outperforming relational models in scenarios involving deep relationships. For instance, it handles millions of traversals per second in recommendation engines by avoiding costly joins.[57][58]
Common implementations include Neo4j, a leading graph database that adopts the LPG as its core model and pairs it with Cypher, a declarative query language optimized for pattern matching and traversals on labeled properties. Other systems like Amazon Neptune and JanusGraph also build on this model for scalable, enterprise-grade applications.[59][54]
RDF Model
The Resource Description Framework (RDF) serves as a foundational graph data model for representing and exchanging semantic information on the Web, structured as a collection of triples in the form subject-predicate-object. Each triple forms a directed edge in the graph, where the subject and object act as nodes representing resources, and the predicate defines the relationship between them, enabling the modeling of complex, interconnected data. This abstract syntax ensures that RDF data can be serialized in various formats, such as RDF/XML, Turtle, or JSON-LD, while maintaining a consistent underlying graph structure.[60]

A core feature of RDF is the use of Internationalized Resource Identifiers (IRIs) to globally and unambiguously identify resources, predicates, and literals, which promotes data integration across distributed systems without reliance on proprietary identifiers. RDF also incorporates reification, a mechanism to treat entire triples as resources themselves, allowing metadata—such as timestamps, sources, or certainty measures—to be attached to statements, thereby supporting advanced provenance tracking and meta-statements. Additionally, RDF extends its capabilities through integration with ontology languages like RDF Schema (RDFS), which defines basic vocabulary for classes and properties, and the Web Ontology Language (OWL), which enables more expressive descriptions including axioms for automated reasoning.[60][61]

For instance, the RDF triple <http://example.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob> . asserts a social relationship using the Friend of a Friend (FOAF) vocabulary, where "alice" and "bob" are resources linked by the "knows" predicate, illustrating how RDF builds directed graphs from standardized, reusable terms.[62]
The RDF model's advantages lie in its emphasis on interoperability, particularly within the Linked Open Data cloud, where datasets from disparate domains can be dereferenced and linked via shared URIs to form a vast, queryable knowledge graph. It further supports inference engines that derive implicit knowledge, such as subclass relationships or property transitivity, enhancing data discoverability and machine readability without altering the original triples.[63]
Prominent implementations include Apache Jena, an open-source Java framework that manages RDF graphs in memory or persistent stores like TDB, offering APIs for triple manipulation and integration with inference rules. RDF databases, often called triplestores, typically employ the SPARQL Protocol and RDF Query Language (SPARQL) for pattern matching and retrieval, making RDF suitable for semantic applications requiring flexible, schema-optional querying.[64]
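As a hedged illustration of triplestore-style querying, the small graph from the previous sketch can be queried with SPARQL using rdflib's in-memory engine; a dedicated triplestore would expose the same query language over persistent storage.
from rdflib import Graph, URIRef
from rdflib.namespace import FOAF

g = Graph()
g.add((URIRef("http://example.org/alice"), FOAF.knows, URIRef("http://example.org/bob")))

# SPARQL pattern matching: bind ?person to every resource that alice knows.
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person
    WHERE { <http://example.org/alice> foaf:knows ?person . }
""")
for row in results:
    print(row.person)  # http://example.org/bob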
Hybrid and Emerging Models
Hybrid graph models integrate traditional graph structures with vector embeddings to support both relational traversals and semantic similarity searches, enabling more versatile data retrieval in applications like recommendation systems and natural language processing. These models embed nodes or subgraphs as high-dimensional vectors, allowing approximate nearest-neighbor searches alongside exact graph queries, which addresses limitations in pure graph databases for handling unstructured data. For instance, post-2020 developments have incorporated vector indexes into graph frameworks to facilitate hybrid retrieval-augmented generation (RAG) pipelines, where vector similarity identifies relevant entities and graph traversals refine contextual relationships.[65]
Knowledge graphs represent an enhancement to the RDF model by incorporating entity linking, inference rules, and schema ontologies to create interconnected representations of real-world entities, facilitating semantic reasoning and disambiguation in large-scale information systems. Introduced prominently by Google's Knowledge Graph in 2012, this approach links entities across diverse sources using probabilistic matching and rule-based inference to infer implicit relationships, improving search accuracy and enabling question-answering capabilities. Unlike standard RDF triples, knowledge graphs emphasize completeness through ongoing entity resolution and temporal updates, supporting applications in web search and enterprise knowledge management.[66]
Other variants extend graph models to handle complex relational structures beyond binary edges. Hypergraphs generalize graphs by permitting n-ary relationships, where hyperedges connect multiple nodes simultaneously, which is particularly useful for modeling multifaceted interactions such as collaborative processes or biological pathways. Temporal graphs, on the other hand, incorporate time stamps on edges or nodes to capture evolving relationships, proving valuable in cybersecurity for analyzing dynamic threat networks and detecting anomalies in event logs over time.[67][68][69]
In the 2020s, emerging trends have pushed graph models toward multi-modality and decentralization. Multi-modal graphs fuse diverse data types, such as text, images, and audio, into unified structures by embedding non-textual elements as nodes or attributes, enabling cross-modal queries in domains like visual question answering and multimedia recommendation. Additionally, integrations with blockchain technology have led to decentralized graph databases that ensure data immutability and distributed querying, often using protocols to index blockchain transactions as graph entities for transparent auditing in Web3 applications.[70][71][72]
Despite these advances, hybrid and emerging models face significant challenges in balancing structural complexity with query efficiency. The addition of vector spaces or temporal dimensions increases storage overhead and computational demands during indexing and traversal, often requiring optimized algorithms to maintain sublinear query times on large datasets. Moreover, ensuring consistency in multi-modal or decentralized setups demands robust synchronization mechanisms to handle distributed updates without compromising relational integrity.[73][74]
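The hybrid retrieval pattern described above can be sketched in a few lines of Python; everything here (the embeddings, the adjacency data, and the brute-force similarity scan standing in for a real vector index) is an invented, simplified illustration of the vector-then-traversal idea rather than any particular system's implementation.
import numpy as np

# Toy node embeddings (in practice produced by a text or graph encoder).
embeddings = {
    "Alice":   np.array([0.9, 0.1, 0.0]),
    "Bob":     np.array([0.8, 0.2, 0.1]),
    "GraphDB": np.array([0.1, 0.9, 0.3]),
}
# Toy adjacency lists standing in for the stored graph.
neighbours = {"Alice": ["Bob", "GraphDB"], "Bob": ["Alice"], "GraphDB": ["Alice"]}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_retrieve(query_vec, k=1):
    # Step 1: vector similarity selects seed entities (brute-force scan here).
    ranked = sorted(embeddings, key=lambda n: cosine(query_vec, embeddings[n]), reverse=True)
    seeds = ranked[:k]
    # Step 2: graph traversal supplies relational context around each seed.
    return {s: neighbours[s] for s in seeds}

print(hybrid_retrieve(np.array([0.85, 0.15, 0.05])))  # {'Alice': ['Bob', 'GraphDB']}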
Architectural Properties
Storage and Persistence
Graph databases employ distinct storage schemas tailored to the interconnected nature of graph data, broadly categorized into native and non-native approaches. Native graph storage optimizes for graph structures by directly representing nodes, relationships, and properties using adjacency lists or matrices, enabling efficient traversals without intermediate mappings. For instance, systems like Neo4j utilize index-free adjacency, where pointers between nodes and relationships allow constant-time access to connected elements, preserving data integrity and supporting high-performance queries on dense graphs.[75] In contrast, non-native storage emulates graphs atop relational databases or key-value stores, typically modeling nodes and edges as tables or documents, which necessitates joins or lookups that introduce overhead and degrade performance for relationship-heavy operations.[76] This emulation, common in early or hybrid systems, suits simpler use cases but limits scalability in complex networks compared to native designs.[77]
Persistence mechanisms in graph databases balance durability with access speed through disk-based, in-memory, and hybrid strategies. Disk-based persistence, as in Neo4j, stores graph elements in a native format using fixed-size records for nodes and dynamic structures for relationships, augmented by B-trees for indexing properties and labels to facilitate rapid lookups.[78] In-memory approaches, exemplified by Memgraph, load the entire graph into RAM for sub-millisecond traversals while ensuring persistence via write-ahead logging (WAL) and periodic snapshots to disk, mitigating data loss during failures.[79] Hybrid models combine these by caching frequently accessed subgraphs in memory while sharding larger datasets across distributed storage backends like Cassandra in JanusGraph, allowing horizontal scaling without full in-memory residency.[80] These mechanisms often uphold ACID properties—atomicity, consistency, isolation, and durability—in single-node setups, while distributed environments may employ ACID with causal consistency or relaxed models like BASE for better scalability, ensuring transactional integrity where applicable.[33]
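The write-ahead-logging and snapshot pattern mentioned above can be reduced to a toy Python sketch; the file names, the JSON record format, and the adjacency-list layout below are all invented for illustration, and real systems such as Memgraph or Neo4j use binary formats and substantially more machinery.
import json, os

LOG = "graph.wal"        # append-only write-ahead log (illustrative file name)
SNAPSHOT = "graph.snap"  # periodic point-in-time snapshot

def apply_op(graph, op):
    # Apply one mutation to the in-memory adjacency-list graph.
    if op["kind"] == "add_node":
        graph.setdefault(op["node"], [])
    elif op["kind"] == "add_edge":
        graph.setdefault(op["src"], []).append(op["dst"])

def write(graph, op):
    # Durability first: append the operation to the log, then apply it in memory.
    with open(LOG, "a") as log:
        log.write(json.dumps(op) + "\n")
        log.flush()
        os.fsync(log.fileno())
    apply_op(graph, op)

def snapshot(graph):
    # Persist the whole graph, then reset the log (toy version of a checkpoint).
    with open(SNAPSHOT, "w") as f:
        json.dump(graph, f)
    open(LOG, "w").close()

def recover():
    # Crash recovery: load the latest snapshot, then replay any logged operations.
    graph = json.load(open(SNAPSHOT)) if os.path.exists(SNAPSHOT) else {}
    if os.path.exists(LOG):
        with open(LOG) as log:
            for line in log:
                apply_op(graph, json.loads(line))
    return graph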
Data serialization in graph databases focuses on compact, efficient representations of edges and properties to support storage and interchange. Edges are often serialized in binary formats using adjacency lists to minimize space and enable fast deserialization during traversals, while properties—key-value pairs on nodes and edges—are handled via columnar storage for analytical queries or document-oriented formats like JSON for flexibility in property graphs.[77] Standardized formats such as the Property Graph Data Format (PGDF) provide a tabular, text-based structure for exporting complete graphs, including labels and metadata, facilitating interoperability across systems without loss of relational semantics.[81] Similarly, YARS-PG extends RDF serialization principles to property graphs, using extensible XML or JSON schemas to encode heterogeneous properties while maintaining platform independence.[82]
Backup and recovery processes in graph databases emphasize preserving relational integrity alongside data durability. Graph-specific snapshots capture the full structure of nodes, edges, and properties atomically, as in Neo4j's online backup utility, which creates consistent point-in-time copies without downtime by leveraging transaction logs. Recovery relies on WAL replay to restore graphs to a valid state post-failure, ensuring ACID compliance in single-node setups and causal consistency in clusters via replicated logs.[79] In distributed systems like Amazon Neptune, backups export serialized graph data to S3 while maintaining relationship fidelity, with recovery procedures that reinstate partitions without orphaned edges.
Scalability in graph databases is achieved through horizontal partitioning, where graph partitioning algorithms divide the data across nodes to minimize communication overhead. These algorithms, such as JA-BE-JA, employ local search and simulated annealing to balance vertex loads while reducing edge cuts—the inter-partition relationships that incur cross-node traversals—thus optimizing for distributed query performance on billion-scale graphs.[83] Streaming variants like Sheep enable scalable partitioning of large graphs by embedding hierarchical structures via map-reduce operations on elimination trees, independent of input distribution.[84] By minimizing edge cuts to under 1% in power-law graphs, such techniques enable linear scaling in systems like Pregel-based frameworks, where partitioned subgraphs process traversals locally before synchronizing.[80]
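The edge-cut objective that such partitioners minimize is easy to state directly; the following short Python sketch (with invented sample data) counts how many edges cross partition boundaries for a given assignment of nodes to partitions.
# Each edge whose endpoints land in different partitions requires a cross-node
# hop during traversal; partitioning algorithms try to keep this count small.
def edge_cut(edges, partition_of):
    return sum(1 for u, v in edges if partition_of[u] != partition_of[v])

sample_edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
assignment = {"a": 0, "b": 0, "c": 1, "d": 1}
print(edge_cut(sample_edges, assignment))  # 2 of the 4 edges cross the boundary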
Traversal Mechanisms
Index-free adjacency is a fundamental property in graph databases, where each node directly stores pointers to its neighboring nodes, enabling traversal without the need for intermediate index lookups. This structure treats the node's adjacency list as its own index, facilitating rapid access to connected elements.[85] In contrast to relational databases, where traversing relationships involves costly join operations and repeated index scans across tables, index-free adjacency allows for constant-time neighbor access, significantly improving efficiency for connected data queries.[85]
Traversal in graph databases relies on algorithms that leverage this adjacency to navigate relationships systematically. Breadth-first search (BFS) is commonly used for discovering shortest paths between nodes, exploring all neighbors level by level from a starting vertex using a queue.[86] Depth-first search (DFS), on the other hand, delves deeply along branches before backtracking, making it suitable for tasks like connectivity checks or initial pattern exploration in recursive structures.[86] These algorithms exploit the direct links provided by index-free adjacency to iterate over edges efficiently.
For more intricate queries involving structural patterns, graph databases employ subgraph isomorphism to identify exact matches of a query subgraph within the larger graph. This process maps nodes and edges injectively while preserving labels and directions, enabling applications like fraud detection or recommendation systems.[87] Optimizations such as bidirectional search enhance performance by simultaneously expanding from both ends of the potential match, reducing the search space in large graphs.[88]
In distributed environments with massive graphs, traversal mechanisms scale via frameworks like Pregel, which model computation as iterative message passing between vertices across a cluster. Each superstep synchronizes updates, allowing vertices to compute based on incoming messages from neighbors, thus enabling parallel traversal without centralized coordination.[89] This bulk synchronous parallel approach handles billion-scale graphs by partitioning data and minimizing communication overhead.
The time complexity of basic traversals in graph databases is generally O(|E|), where |E| denotes the number of edges, as the process examines each edge at most once via adjacency lists.[90] This linear scaling underscores the efficiency of index-free structures compared to non-native stores, where relationship navigation incurs higher costs.
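The queue-based breadth-first traversal over adjacency lists can be written compactly; the graph data below is illustrative, and the sketch shows why each node and edge is examined at most once.
from collections import deque

# Adjacency lists: each node holds direct references to its neighbours,
# so expanding a node requires no separate index lookup or join.
adjacency = {
    "Alice": ["Bob", "Carol"],
    "Bob":   ["Dan"],
    "Carol": ["Dan"],
    "Dan":   [],
}

def shortest_path(start, goal):
    # Breadth-first search: explore neighbours level by level using a queue.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in adjacency.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path("Alice", "Dan"))  # ['Alice', 'Bob', 'Dan']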
Performance Characteristics
Graph databases demonstrate superior query performance for operations involving connected data, often achieving sub-millisecond latencies for short traversals due to their index-free adjacency model that enables direct pointer following between nodes. This efficiency stems from optimized storage of relationships as first-class citizens, allowing rapid exploration of graph neighborhoods without costly joins or self-joins typical in relational systems. However, performance can degrade in dense graphs where nodes have high degrees, as the exponential growth in candidate edges increases traversal time and memory footprint during pattern matching.[91][92]
Scalability in graph databases is achieved through both vertical approaches, leveraging increased RAM and CPU to handle larger in-memory graphs on single machines, and horizontal scaling via distributed architectures, though the latter introduces challenges from graph interconnectedness, where sharding data across nodes can lead to expensive cross-shard traversals if partitions are not carefully designed to minimize boundary crossings. Advanced systems mitigate this through techniques like vertex-centric partitioning or replication, but trade computation overhead for improved throughput in multi-node setups.[93][94]
Resource utilization in graph databases emphasizes high memory demands for in-memory variants, where entire graphs are loaded to facilitate constant-time edge access, potentially requiring terabytes for billion-scale datasets. CPU consumption rises with complex queries involving pattern matching or iterative traversals, as processors handle irregular access patterns and branching logic, contrasting with more predictable workloads in other database types. Optimization strategies, such as caching hot subgraphs or parallelizing traversals, help balance these demands but vary by implementation.[95][92]
Standard benchmarks like LDBC Graphalytics evaluate graph database performance across analytics workloads, including breadth-first search and community detection, underscoring their strengths in relationship-oriented queries by measuring execution time and scalability on large synthetic graphs up to trillions of edges. These tests reveal consistent advantages in traversal-heavy tasks, with runtimes scaling near-linearly on distributed systems for sparse graphs.[96]
Key trade-offs position graph databases as ideal for OLTP traversals, delivering low-latency responses for real-time relationship queries in scenarios like fraud detection, but less efficient for aggregation-intensive operations where columnar stores excel due to better compression and vectorized processing. Hybrid extensions or integration with analytical engines address this by offloading aggregations, though at the cost of added complexity.[14]
Querying and Standards
Graph Query Languages
Graph query languages enable users to retrieve, manipulate, and analyze data in graph databases by expressing patterns, traversals, and operations over nodes, edges, and properties. These languages generally fall into two paradigms: declarative and imperative. Declarative languages, such as Cypher and SPARQL, allow users to specify what data is desired through high-level patterns and conditions, leaving the how of execution to the database engine for optimization.[97] In contrast, imperative languages like Gremlin focus on how to traverse the graph step-by-step, providing explicit control over the sequence of operations in a functional, data-flow style.[98] This distinction influences usability, with declarative approaches often being more intuitive for pattern matching and imperative ones suited for complex, programmatic traversals.[97]
Cypher, developed by Neo4j, is a prominent declarative language for property graph models, featuring ASCII-art patterns to describe relationships and nodes.[99] It uses clauses like MATCH for pattern specification and RETURN for result projection, supporting variable-length path traversals (e.g., [:KNOWS*2] for paths of length 2) and graph-specific aggregations such as counting connected components.[99] For instance, to find friends-of-friends in a social network, a Cypher query might read:
MATCH (a:Person)-[:KNOWS*2]-(b:Person)
WHERE a.name = 'Alice' AND b <> a
RETURN b.name
The query returns the names of people reached by following two KNOWS edges from a starting person, excluding self-references.[100]
Gremlin, part of the Apache TinkerPop framework, exemplifies the imperative paradigm with its traversal-based scripting for both property graphs and RDF stores.[98] Users compose queries as chains of steps (e.g., g.V().has('name', 'Alice').out('KNOWS').out('KNOWS')), enabling precise control over iterations, filters, and transformations like grouping by degree or aggregating path lengths.[101] It supports variable-length traversals via methods such as repeat() and times(), making it versatile for exploratory analysis.[98]
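As a hedged example, a comparable friends-of-friends traversal can be issued from Python with the gremlinpython driver, assuming a TinkerPop-compatible Gremlin Server is reachable at ws://localhost:8182/gremlin and already contains suitable data; newer driver releases also offer snake_case aliases (e.g., to_list) for the steps used here.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to a remote TinkerPop graph (endpoint and data are assumptions).
conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Imperative, step-by-step traversal: start at Alice, follow KNOWS twice.
names = (g.V().has("name", "Alice")
           .out("KNOWS").out("KNOWS")
           .dedup().values("name")
           .toList())
print(names)
conn.close()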
SPARQL, standardized by the W3C for RDF graphs, is another declarative language that queries triples using SELECT for variable bindings and CONSTRUCT for graph output.[102] It includes property path expressions for traversals (e.g., foaf:knows+ for paths of one or more knows links) and aggregation functions like COUNT and SUM over result sets, facilitating federated queries across distributed RDF sources.[102]
Key features across these languages include path expressions for navigating relationships, support for variable-length traversals to handle arbitrary depths, and aggregation functions optimized for graph metrics such as centrality or connectivity.[99][102][98] To enhance interoperability between property graph and RDF models, efforts like the Property Graph Query Language (PGQL) integrate SQL-like syntax with graph patterns, allowing unified querying via extensions like MATCH clauses embedded in SQL.[103] PGQL supports features such as shortest-path finding and subgraph matching, bridging declarative paradigms across data models.[104]
