LargeTripleStores


This page collects references to signed reports of deployments of large triple stores, rather than predictions of what some software might scale to.

(Ordered by reported triple/quad counts in descending order)



Oracle Spatial and Graph with Oracle Database 12c (1.08 T)

LUBM 4400K: 1.08 Trillion triples were loaded, inferred, and queried while executing the Lehigh University Benchmark (LUBM) on an Oracle Exadata Database Machine X4-2 in September 2014. The industry-leading results, along with details of the configuration and best practices, are described in the white paper Oracle Spatial and Graph: Benchmarking a Trillion Edges RDF Graph.

LUBM 200K: 48+ Billion triples
A graph containing over 48 Billion triples about universities and their departments was created by expanding the triples into quads and organizing them into 200,000 named graphs, one per university. The overall graph included 26.6 Billion unique asserted quads, and inference produced another 21.4 Billion quads.

  • Data Loading Performance: 273K QLIPS (Quads Loaded* and Indexed Per Second) - 27.4 billion quads loaded* in 13 hrs 11 min. + two indexes created in 11 hrs 18 min. = 24 hrs 29 min. (*Loading included checking that the quads were well formed and removing 0.8B duplicates.)
  • Inference Performance: 327K TIIPS (Triples Inferred and Indexed Per Second) - 21.4 billion triples inferred in 12 hrs 56 min. + two indexes created in 5 hrs. = 17 hrs 56 min.
  • SPARQL Query Performance: 459K QRPS (Query Results Per Second) - 4.18 Billion answers in 2.53 hrs. (A worked example of how such a rate is computed follows this list.)
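
These throughput figures are simply the number of items processed divided by elapsed wall-clock time. A minimal worked example in Java, using the SPARQL query figure above (the class name is illustrative only):

```java
// Back-of-the-envelope reproduction of the QRPS figure reported above:
// 4.18 billion query answers returned in 2.53 hours of wall-clock time.
public class QueryRate {
    public static void main(String[] args) {
        double answers = 4.18e9;        // answers returned
        double seconds = 2.53 * 3600;   // 2.53 hours in seconds
        // Prints roughly 459,000 results per second, matching the reported 459K QRPS.
        System.out.printf("%,.0f results per second%n", answers / seconds);
    }
}
```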

Setup:
Hardware: One node of a Sun Server X2-4, 3-node Oracle Real Application Cluster (RAC)

  • The node was configured with 1TB RAM, and 4 CPUs (2.4GHz 10-Core Intel E7-4870) having 40 total Cores and 80 Parallel Threads.
  • Storage: Dual-node Sun ZFS Storage 7420, both heads configured with 4 CPUs (2.00GHz 8-Core Intel E7-4820), 256G memory, 4x 512G SATA2 SSD (READZ), and 2x 500G 10K SATA drives; plus 4 disk trays with 20 x 900GB disks @10Krpm and 4x 73GB SSD (WRITEZ).

Software: Oracle Database 11.2.0.3.0, SGA_TARGET=750G and PGA_AGGREGATE_TARGET=200G
The test was performed April 2013.

LUBM 25K: 6.1 Billion triples

  • Data Loading Performance: 539.7K TLIPS - 3.4 Billion triples loaded* and indexed in 105 min. (*Loading included checking the triples were well formed, removing duplicates and creating two indexes on the graph.)
  • Inference Performance: 281.3K TIIPS - 2.7 Billion triples inferred in 160 min. (Inference included creating two indexes on the inferred data in the graph.)
  • SPARQL Query Performance: 900.4K QRPS (470+ Million answers in 8.7 min.)

Setup:
Hardware: A Sun M8000 was configured with 512 GB RAM, 16 CPUs (SPARC64 VII+ 3.0 GHz) having 64 total Cores and 128 Parallel Threads, and Dual F5100 Flash Arrays having 160 total drives.
Software: Oracle Database 11.2.0.2.0 + Patch 9825019: SEMANTIC TECHNOLOGIES 11G R2 FIX BUNDLE 3, SGA_TARGET=256G and PGA_AGGREGATE_TARGET=206G

LUBM 8K: 1.969 Billion triples

  • Data Loading Performance: 650.5K TLIPS - 1.1 Billion triples loaded* and indexed in 28 min. 11 sec. (*Loading included checking the triples were well formed, removing duplicates and creating two indexes on the graph.)
  • Inference Performance: 233.6K TIIPS - 869 Million triples inferred in 62 min. (Inference included creating two indexes on the inferred data in the graph.)
  • SPARQL Query Performance: 577.5K QRPS (149+ Million answers in 4.3 min.)

Setup: See the LUBM 25K setup.

Please go to the Oracle Technology Network for more information about Oracle Spatial and Graph support for RDF graphs.

AnzoGraph DB by Cambridge Semantics (1.065T)

AnzoGraph DB TPC-H Benchmark:

  • TPC-H at scale factor 1000
  • Load and query over 100 billion triples with 22 TPC-H queries
  • 40 nodes
  • Completed in 3.5 minutes
  • [1] AnzoGraph Benchmark Study

LUBM benchmark:

  • 587 billion triples loaded in 29 minutes and 24 seconds (total load time)
  • 478 billion inferred triples created in 1 hour 16 minutes and 14 seconds (total inference time)
  • Query execution in 14 minutes (total query execution time)
  • Total triples (load and infer): 1.065 Trillion triples
  • Total time for loading, inference, and querying: 1.98 hours
  • [2] AnzoGraph Benchmark Study

Commercial License

AnzoGraph DB: http://anzograph.com/

AllegroGraph (1+T)

Franz announced at the June 2011 Semtech conference a load and query of 310 Billion triples as part of a joint project with Intel. In August 2011, with the help of Stillwater SC and Intel we achieved the industry's first load and query of 1 Trillion RDF Triples. Total load was 1,009,690,381,946 triples in just over 338 hours for an average rate of 829,556 triples per second.

The driving force has been Amdocs and their AIDA platform. Here are two video presentations: Semtech 2011 and Semtech 201.

We currently load LUBM 8000 in just over 36 minutes. Query times are also very fast. We do not preprocess the strings, and we do not need to apply the graph (or named context) to Universities in order to gain better performance (see Note 1, below). The benchmark section of our website is updated with each product release.

Franz is in late-stage development on a clustered version of AllegroGraph that will push storage into trillions of triples. We use hash-based partitioning for our triples so that the query engines don't have to engage in map/reduce operations. The two biggest challenges we are addressing right now are [1] to develop smarter query techniques to limit trips across machine boundaries, and [2] to keep the database ACID in a clustered environment.
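
As a rough illustration of the idea (not AllegroGraph's actual implementation; a minimal sketch that assumes a fixed shard count and routes each triple by a hash of its subject):

```java
// Minimal sketch of hash-based triple partitioning: each triple is routed to a
// shard by hashing (here) its subject, so lookups for that subject can go
// straight to one shard instead of fanning out across the whole cluster.
public class TriplePartitioner {
    private final int shardCount;

    public TriplePartitioner(int shardCount) {
        this.shardCount = shardCount;
    }

    // Route a triple to a shard; partitioning on subject keeps all triples
    // about one resource together (a common, but not the only, choice).
    public int shardFor(String subject, String predicate, String object) {
        return Math.floorMod(subject.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        TriplePartitioner p = new TriplePartitioner(16);
        System.out.println(p.shardFor(
                "http://example.org/University0/Student42",
                "http://example.org/takesCourse",
                "http://example.org/University0/Course7"));
    }
}
```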

Note 1: AllegroGraph provides dynamic reasoning and DOES NOT require materialization. AllegroGraph's RDFS++ engine dynamically maintains the ontological entailments required for reasoning; it has no explicit materialization phase. Materialization is the pre-computation and storage of inferred triples so that future queries run more efficiently. The central problem with materialization is its maintenance: changes to the triple-store's ontology or facts usually change the set of inferred triples. In static materialization, any change in the store requires complete re-processing before new queries can run. AllegroGraph's dynamic approach simplifies store maintenance and reduces the time required between data changes and querying. AllegroGraph also has RDFS++ reasoning with built-in Prolog.
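
To make the contrast concrete, here is a generic sketch (not AllegroGraph code) of answering a type query by backward-chaining over rdfs:subClassOf at query time, rather than materializing every inferred rdf:type triple up front:

```java
import java.util.*;

// Generic sketch: answer "is x an instance of C?" by chasing rdfs:subClassOf
// upward at query time (backward chaining), instead of storing every inferred
// rdf:type triple in advance (materialization).
public class BackwardChaining {
    // Asserted rdf:type facts: instance -> directly asserted classes.
    static Map<String, Set<String>> types = Map.of(
            "ex:fido", Set.of("ex:Dog"));
    // Asserted rdfs:subClassOf edges: class -> direct superclasses.
    static Map<String, Set<String>> subClassOf = Map.of(
            "ex:Dog", Set.of("ex:Mammal"),
            "ex:Mammal", Set.of("ex:Animal"));

    static boolean hasType(String instance, String queriedClass) {
        Deque<String> toVisit = new ArrayDeque<>(types.getOrDefault(instance, Set.of()));
        Set<String> seen = new HashSet<>(toVisit);
        while (!toVisit.isEmpty()) {
            String c = toVisit.pop();
            if (c.equals(queriedClass)) return true;
            for (String sup : subClassOf.getOrDefault(c, Set.of())) {
                if (seen.add(sup)) toVisit.push(sup);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasType("ex:fido", "ex:Animal")); // true, nothing materialized
        System.out.println(hasType("ex:fido", "ex:Plant"));  // false
    }
}
```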

OpenLink Virtuoso v7+ (94.2B+ explicit, uncounted virtual/inferred, in 1 instance on 1 machine)

The real-world, live-queryable UniProt instance of Virtuoso Open Source Edition 07.20.3233 (89b3ddb) now holds more than 94.2 Billion triples (94,205,080,849, to be exact; see the About box) in a single instance on a single machine, i.e., there is no clustering of any kind.

LOD Cloud Cache is a live instance of Virtuoso Enterprise Edition 7.20.3232 (d5c98e6454) now serving more than 35.5 Billion Triples (35,539,093,982 and counting) — including the entire data.gov Catalog — on a multi-host shared-nothing Virtuoso Elastic Cluster, with one (1) Virtuoso Server Process per host, and each host with two (2) quad-core processors, 16GB RAM, and four (4) 1TB SATA-II Disks, each disk on its own channel. Latest bulk load added ~3 Billion triples in ~3 hours — roughly 275Ktps (Kilotriples-per-second) — with partial parallelization of load.

As of August 11, 2009, the LUBM 8000 load speed was 160,739 triples-per-second on a single machine with 2 x Xeon 5520 and 72G RAM. Adding a second machine with 2 x Xeon 5410 and 16G RAM, and 1 x 1GigE interconnect, the load rate increased to 214,188 triples-per-second. The software is Virtuoso 6 Cluster, set up with 8 partitions per host. No inference was made. More run details are in the Virtuoso blog post discussing the original run on the smaller host, which delivered 110,532 triples-per-second load rate on its own.

The Towards Web-Scale RDF white paper discusses why triple scale is a function of cluster configuration. 100 Billion triples with sub-second response time can be achieved with the right cluster configuration (primarily the total memory pool delivered by the cluster).

Inferred triples are uncounted because they will vary with the query; backward-chaining is the preferred method. Virtuoso's built-in reasoning currently (v7.2.x; March 2018) includes support for owl:sameAs, rdfs:subClassOf, rdfs:subPropertyOf, owl:equivalentClass, owl:equivalentProperty, owl:InverseFunctionalProperty, owl:TransitiveProperty, owl:SymmetricProperty, and owl:inverseOf. Vastly enhanced reasoning is available in Enterprise Edition 8.x as the optional "Custom Reasoning & Inference Rules Module".
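
By way of illustration, Virtuoso's per-query backward-chaining is requested with a DEFINE pragma naming an inference rule set; the rule-set name and query below are hypothetical placeholders, and the exact setup of rule sets should be checked against the Virtuoso documentation:

```java
// Sketch only: building a SPARQL query string that asks Virtuoso to apply a
// named inference rule set via backward chaining. The rule set
// "urn:example:rules" is hypothetical and would be created from an ontology
// graph beforehand on the server.
public class VirtuosoInferenceQuery {
    public static void main(String[] args) {
        String ruleSet = "urn:example:rules";
        String query =
                "DEFINE input:inference '" + ruleSet + "'\n" +
                "PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n" +
                "SELECT DISTINCT ?agent WHERE { ?agent a foaf:Agent }";
        // Submit the string to the SPARQL endpoint (or over JDBC/ODBC) as usual.
        System.out.println(query);
    }
}
```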

Benchmark data sources

Older comments

New Bitmap Indexing white paper shows how OpenLink Virtuoso handles loading the 1 billion triple LUBM benchmark set with a sustained rate of 12,692 triples-per-second and the 47M triple Wikipedia data set at a rate of 20,800 triples-per-second. Kingsley Idehen, OpenLink Software.

"The single query stream rate with 100K triples is 14 qps at 100K triples and 11 qps at 1G triples" -- LUBM and Virtuoso

Available as Open Source Edition or Enterprise Edition.

Stardog (50B)

Clark & Parsia announced that the 2.1 release of Stardog can scale up to 50 Billion triples on a $10k server (32 cores, 256G of RAM) with load speeds of 500k triples/sec for 1B triples and over 300k for 20B triples.

Stardog is a pure Java RDF database that supports all of the OWL 2 profiles using a dynamic (backward-chaining) approach. It also includes unique features such as Integrity Constraint Validation and explanations (i.e., proofs) for inferences and integrity constraint violations. It also integrates a full-text search index based on Lucene.

RDFox (19.5B)

A highly scalable in-memory RDF triple store and semantic reasoning engine. It supports shared-memory parallel reasoning for RDF, RDFS, OWL 2 RL, and Datalog. It is cross-platform software written in C++ that comes with a Java wrapper allowing for easy integration with any Java-based solution. It is supported on Windows, macOS, and Linux.

RDFox is now developed by Oxford Semantic Technologies, an Oxford University spin-out. An evaluation license is available here: http://www.oxfordsemantic.tech/request-eval

As an in-memory store, its ultimate capacity depends on available RAM, but RDFox is economical with memory and can store between 1 and 1.5 billion triples in 50 GB. It is very fast and very effective at parallelisation: on a computer with two Xeon E5-2650 processors (16 physical cores), it materialised LUBM 5k in only 42s, a 10x speedup compared to using a single core.
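
As a quick sanity check of what "1 to 1.5 billion triples in 50 GB" implies (simple arithmetic, not an RDFox-published figure):

```java
// Rough implication of the memory figure above: 50 GB spread over
// 1.0-1.5 billion triples works out to roughly 33-50 bytes per stored triple.
public class MemoryPerTriple {
    public static void main(String[] args) {
        double bytes = 50e9;  // 50 GB (decimal)
        System.out.printf("%.0f bytes/triple at 1.0B triples%n", bytes / 1.0e9); // ~50
        System.out.printf("%.0f bytes/triple at 1.5B triples%n", bytes / 1.5e9); // ~33
    }
}
```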

RDFox also loaded 19.47B triples (WatDiv benchmark) in 11041s on 64 threads, using 1.5TB of RAM.

RDFox also has many advanced features, including: native support for owl:sameAs; incremental update and aggregation; explainability; extensions to Datalog; and extensive SPARQL support, including named graphs.

GraphDB™ by Ontotext (17B)

http://www.ontotext.com/products/ontotext-graphdb/

GraphDB™ is the only triplestore that can perform OWL inference at scale, allowing users to create new semantic facts from existing ones. GraphDB™ handles massive loads, queries, and inferencing in real time.

Performance Benchmark Results

Adequate benchmarking of semantic repositories is a complex exercise involving many factors. Ontotext is involved in project LDBC – an outstanding initiative that aims to establish industry cooperation between vendors of RDF and graph database technologies in developing, endorsing, and publishing reliable and insightful benchmark results.

The benchmark results presented here aim to provide sufficient information on how GraphDB™ performs important tasks (such as loading, inference and querying) with variations in size and nature of the data, inference, query types and other relevant factors. It also presents the improvement of speed in GraphDB™ 6.1 in comparison to OWLIM 5.4.

Detailed Benchmark Study


Highlights of the study include:

  • UNIPROT: close to 13 billion triples loaded in 57,240 seconds (just under 16 hours) at a rate of 225,297 st./sec. If data size is judged by the number of triples in the input files (17 billion), the loading speed is 295,000 st./sec. The hardware utilized was a dual-CPU server with Xeon E5-2690 CPUs, 512 GB of RAM and an SSD storage array.
  • DBpedia 2014: 566 million triples loaded in 1 hour, 10 minutes (from Turtle files) at 180,000 st./sec. The hardware utilized was a dual-CPU server with Xeon E5-2690 CPUs, 256 GB of RAM and an SSD storage array.

Notes

  • Data size refers to the number of explicit statements in the repository after the initial data loading. Inferred statements are excluded, because they are only relevant for forward-chaining engines. Some tests insert additional statements when update queries are part of the query mixes – these additional statements are ignored above. Some datasets include a substantial number of duplicate statements in their data dumps – for instance, the raw files of UNIPROT contain 17B statements, but only 12B of those are unique.
  • GraphDB™ can load datasets of more than 10 billion statements on a single commodity database server at speeds exceeding 200,000 statements per second. In specific loading scenarios GraphDB™ managed to load billion-statement-scale datasets at speeds of around 500,000 statements per second. GraphDB™ leverages its "Parallel Loader" for bulk loading of large datasets.
  • The loading speed of GraphDB™ does not degrade as the volume of the data grows – for both BSBM and LDBC, the loading speeds for the 50-100 million statement datasets were the same as for the 1 billion statement datasets.
  • On the LDBC Semantic Publishing Benchmark (SPB) 50-million statement dataset, GraphDB™ Standard Edition can execute 30 read queries per second while handling more than 20 updates per second in a consistent and transactionally safe manner. This is also the case on an Amazon AWS instance with 30GB of RAM. LDBC SPB is a benchmark derived from the BBC's Dynamic Semantic Publishing projects; it simulates a load similar to the one experienced by GraphDB™ serving web page generation for the BBC Sport website. Read query performance can be scaled up linearly through the cluster architecture of GraphDB™ Enterprise.
  • GraphDB's loading tool (Parallel Loader) is much faster than any loading mechanism in prior versions. For large datasets the speed-up can be more than 5x.
  • GraphDB™ is faster on update queries than prior versions – the speed-up varies between 2x (on SPB 50M) and 15x (on SPB 1B).

Apache Jena (16.7B)

The persistent storage layer for Jena is the TDB component. TDB works with the Jena SPARQL query engine (ARQ) to provide complete SPARQL support together with a number of extensions (e.g. property functions, aggregates, arbitrary-length property paths). It is pure Java, employing memory-mapped I/O, a custom implementation of B+Trees, and optimized range filters for XSD value spaces (integers, decimals, dates, dateTimes).
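
For readers who want to try TDB, a minimal sketch of using the current TDB2/ARQ Java API (the dataset directory and data file are placeholders; large bulk loads are typically done with the command-line loader rather than this API):

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.system.Txn;
import org.apache.jena.tdb2.TDB2Factory;

// Minimal sketch: open (or create) a TDB2 dataset on disk, load a file into it,
// and run a SPARQL query through ARQ. Paths and data are placeholders.
public class Tdb2Example {
    public static void main(String[] args) {
        Dataset ds = TDB2Factory.connectDataset("/tmp/tdb2-demo");

        // All TDB2 access is transactional.
        Txn.executeWrite(ds, () -> RDFDataMgr.read(ds, "data.ttl"));

        String query = "SELECT (COUNT(*) AS ?triples) WHERE { ?s ?p ?o }";
        Txn.executeRead(ds, () -> {
            try (QueryExecution qe = QueryExecutionFactory.create(query, ds)) {
                ResultSetFormatter.out(qe.execSelect());
            }
        });
    }
}
```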

TDB2 has been used to load Wikidata (20211222_latest-all.nt.gz) (16.7 billion triples at 44.8k triples/second in 103h 45m 15s).

TDB2 has been used to load Wikidata Truthy (2021-12) (6.6 billion triples in 40 hours, at 46k triples/second).

Previously, TDB1 has been used to load UniProt v13.4 (1.7B triples, 1.5B unique) on a single machine with 64 bit hardware (36 hours, 12k triples/s).

TDB 0.5 Results for the Berlin SPARQL Benchmark (August 2008).

Open Source: License: Apache Software License

Garlik 4store (15B)

The store is called 4store. Currently we have 4 KBs (knowledge bases) of 3-4GT (billion triples) each loaded in our production systems - a cluster of 9 low-end servers running CentOS Linux. Loading time for one 4GT KB is about 8 hours, but it is an interactive process that involves running lots of queries and doing small inserts.

As of 2009-10-21 it's running with 15B triples in a production cluster to power the DataPatrol application.

4store is now available under the GPLv3 license from 4store.org.

Bigdata(R) (12.7B)

As of 02/12/2015, the Bigdata system is released as Blazegraph (http://www.blazegraph.com/).

We are in a shakedown period on the scale-out system and will post results as we get them.

6/30/2009: 1B triples stable on disk in 50 minutes (333k tps). 12.7B triples loaded. The issue with clients dying off has been resolved, as has the high client CPU utilization issue.

5/25/2009: 10.4 billion LUBM triples loaded in 47 hours (61k tps) on a 15 blade cluster (this run used 9 data servers, 5 clients, and 1 service manager). Max throughput was just above 241k triples per second. 1 billion triples was reached in 71 minutes, 2 billion in 161 minutes, 5 billion in 508 minutes. The clients are still the bottleneck and started failing one by one after 7.8B triples (throughput at that point was 141k tps).

5/22/2009: 9 billion LUBM triples loaded in 31 hours using the same hardware. The bottleneck was the clients, which were not able to put out enough load. By the end of the trial the clients were at 100% utilization while the data services were at less than 10% utilization.

5/21/2009: 5 billion LUBM triples loaded in 10 hours (135k tps) on a 15 blade cluster (10 data servers, 4 clients, 1 service manager).

Bigdata is an open-source general-purpose scale-out storage and computing fabric for ordered data (B+Trees). Scale-out is achieved via dynamic key-range partitioning of the B+Tree indices. Index partitions are split (or joined) based on partition size on disk and moved across data services on a cluster based on server load. The entire system is designed to run on commodity hardware, and additional scale can be achieved by simply plugging in more data services dynamically at runtime, which will self-register with the centralized service manager and start managing data automatically. Much like Google's BigTable, there is no theoretical maximum scale.
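
A generic sketch of the key-range idea (not Bigdata's code): each partition owns a contiguous range of the sorted key space, a router maps keys to partitions, and an oversized partition is split by handing off the upper part of its range to a new partition:

```java
import java.util.TreeMap;

// Generic sketch of dynamic key-range partitioning: a router maps each key to
// the partition owning the containing range; oversized partitions are split.
public class KeyRangeRouter {
    // Start key of each contiguous range -> id of the partition that owns it.
    private final TreeMap<String, Integer> ranges = new TreeMap<>();
    private int nextPartitionId = 0;

    public KeyRangeRouter() {
        ranges.put("", nextPartitionId++); // one partition initially owns all keys
    }

    public int partitionFor(String key) {
        return ranges.floorEntry(key).getValue();
    }

    // Called when a partition grows past its size threshold: the keys from
    // splitKey upward (within that partition's range) move to a new partition.
    public void split(String splitKey) {
        ranges.put(splitKey, nextPartitionId++);
    }

    public static void main(String[] args) {
        KeyRangeRouter router = new KeyRangeRouter();
        router.split("m");                                   // keys >= "m" -> partition 1
        System.out.println(router.partitionFor("dbpedia"));  // 0
        System.out.println(router.partitionFor("wikidata")); // 1
    }
}
```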

The Bigdata RDF store is an application written on top of the Bigdata core. It is fully persistent, Sesame 2 compliant, supports SPARQL, and supports RDFS and limited OWL inference. The single-host RDF database is stable and is used at the core of an open-source harvesting system for the intelligence community. We are working towards a release of the scale-out architecture.

Please come see our presentation at the Semantic Technologies Conference in San Jose on June 18th.

More information on bigdata can be found here:

http://www.bigdata.com/blog/

And in this presentation at OSCON 2008:

http://bigdata.sourceforge.net/pubs/bigdata-oscon-7-23-08.pdf

Open source

YARS2 (7B)

The paper YARS2: A Federated Repository for Querying Graph Structured Data from the Web describes the distributed architecture of the YARS2 quad store, with scalability experiments up to 7 billion synthetically generated statements - LUBM(50000).

Proprietary, not distributed.

Jena with SDB (650M)

SDB was a SQL-backed storage layer for Jena (later Apache Jena). It is no longer supported.

SDB provided a SPARQL database over graphs/named graphs for Jena and could load UniProt (650M triples). It used PostgreSQL, MySQL, Oracle, or MS SQL Server as the backing database; HSQLDB and Apache Derby were also supported.

Open Source: License: Apache Software License

https://jena.apache.org/

Mulgara (500M)

"The Mulgara triple store is scalable up to 0.5billion triples (with 64-bit Java)" -- Norman Gray

Open source

http://www.mulgara.org/

RDF Gateway (262M)

"the UniProt protein database (262 million triples) and RDF Gateway." -- Geoff Chappell, Intellidimension

Commercial

http://www.intellidimension.com/

Jena with PostgreSQL (200M)

"Our store is pretty big -- its about 200M triples.

We're currently using Jena on Postgres. For our needs this worked out better than Jena/MySQL, Sesame, and Kowari." -- Leigh Dodds, Ingenta

Open source

http://jena.apache.org/

Kowari (160M)

"My own testing has been in the 10-20M triple range." -- Chris Wilper

Addendum from Chris on Nov 7th, 2005: Since this was written, we have successfully loaded over 160M triples into Kowari on a 64-bit machine with 6GB physical memory. A 64-bit machine is really required to bring Kowari up to this level because it uses mapped files and needs a lot of address space. In our experience in this environment, simple queries still perform fairly well (a few seconds) and complex queries involving 8-10 triple patterns perform worse (a few minutes to an hour).

Open source, unmaintained (See Mulgara fork).

http://www.kowari.org/

3store with MySQL 3 (100M)

"The store my consortium produces (3store) is used successfully up to 100M triples or so. Beyond that it gets a bit sketchy. I'm currently looking at ways to make it scale to 10^9+ without specialising the store to a particular schema."

More specifically, one user is running it with 120M triples in MySQL 4.1. At that size query works fine, but assertion time is down to about 300 triples/second, which makes growing it any bigger too painful. I should note that 3store is an RDFS store; in version 3 it is possible to disable the inference, which should make it scale to much larger sizes, but there are plenty of other stores that handle vanilla RDF storage well. -- Steve Harris, AKT

Open source (GNU GPL).

http://threestore.sourceforge.net/

Sesame (70M)

(10-20 million triples) " is a lot, but most serious triple stores can handle this I'd say. Sesame certainly can, ..." -- Jeen Broekstra, Aduna

Addendum from Jeen on Feb 10 2006: The above comment should be taken as a minimum of what the store can handle. We recently ran a few scalability tests on Sesame's Native Store (Sesame 2.0-alpha-3). Using the Lehigh University Benchmark we successfully added a LUBM-500 dataset (consisting of about 70 million RDF triples). The machine used was a 2.8GHz P4 (32-bit) with 1GB RAM, running SUSE Linux 10.0 (kernel 2.6), Sun J2SE 1.5.0_06. Upload took about 3 hours. Query performance on the LUBM test queries was adequate to good: unoptimized, the worst query (Q2) took 1.3 hours to complete, but most queries completed within tens of milliseconds (Q4,5,6,7,8,10,12,13) or 1-5 minutes (Q1,3,9,11,14) - though some of these queries are fast simply because they return no results (the native store does no RDFS/OWL inferencing). We have yet to explore larger datasets and performance using RDFS inferencing, but it seems that 70M is not the ceiling and that Sesame can easily cope with even larger sets, especially when we use bigger hardware. But that's prediction not fact so I'll leave it at that for now ;)

Open source.

http://www.openrdf.org/

Others who claim to go big

Claims without signatures or quotes. Please move them from this section when they can be linked to a signed specific capacity measurement.

Questions

I know storing 200M triples is cool. But which store can handle simultaneous queries of about 10,000 users using RDFS inferencing? -- Anonymous

200M-300M or so seems to be about the max that anybody has reported. It would be very helpful if people could state whether they tried to scale further, and if not able to, what the problems were -- i.e., does it become too slow to add, perform trivial queries, perform complex queries, all of the above, etc. Additionally, it would be extremely helpful if hardware specs were included. Anyway, this is a great resource. -- CS

It would be nice if the postings here commented on the level of inference supported. Loading with forward-chaining and materialization is *much* heavier than just loading the data. The more general question is what part of the semantics of the loaded ontology/dataset is supported by the system. There are subtle differences in what "loading" means for the four systems with the highest results above. RDF Gateway supports the semantics of UNIPROT through backward-chaining. OWLIM supports the semantics of LUBM through forward-chaining. The sort of reasoning required in the UNIPROT load of RDF Gateway is much more complex than the one necessary for passing LUBM. Finally, Virtuoso and AllegroGraph are fairly undetermined with respect to the reasoning involved in the experiments they report on. For instance, Virtuoso reports results on LUBM but says nothing about the completeness of the query evaluation. -- Atanas Kiryakov


Related