Benchmarking the Mainstream Open Source Distributed Graph Databases at Meituan: Nebula Graph vs Dgraph vs HugeGraph

Meituan NLP Team
2020-10-20

This article is written by Gao Chen and Zhao Dengchang from the NLP team at Meituan. It was originally published on the Nebula Graph forum: https://discuss.nebula-graph.io/t/benchmarking-the-mainstream-open-source-distributed-graph-databases-at-meituan-nebula-graph-vs-dgraph-vs-hugegraph/715

Deep learning and knowledge graph technologies have developed rapidly in recent years. Compared with the “black box” of deep learning, knowledge graphs are highly interpretable and are therefore widely adopted in scenarios such as search recommendation, intelligent customer support, and financial risk management. Over the past few years, Meituan has been mining the connections buried in its huge amount of business data and has gradually built knowledge graphs in nearly ten areas, including cuisine graphs, tourism graphs, and commodity graphs, with the ultimate goal of enhancing smart local life.

Compared with a traditional RDBMS, graph databases can store and query knowledge graphs more efficiently and have an obvious performance advantage in multi-hop queries, which makes them the natural choice of storage engine. Currently there are dozens of graph database solutions on the market, so it was imperative for the Meituan team to select one that meets the business requirements and can serve as the basis of Meituan’s graph storage and graph learning platform. The team outlined the basic requirements below based on its business status quo:

  1. It should be an open source project with a business-friendly license. By having control over the source code, the Meituan team can ensure data security and service availability.
  2. It should support clustering and scale horizontally in both storage and computation. The knowledge graph data at Meituan can reach hundreds of billions of vertices and edges in total, with throughput reaching tens of thousands of QPS, so a single-node deployment cannot meet the storage requirements.
  3. It should work under OLTP scenarios with millisecond-level multi-hop queries. To ensure the best search experience for Meituan users, the team strictly restricts the timeout value within all call chains, so responding to a query at the second level is unacceptable.
  4. It should be able to import data in batches. The knowledge graph data is usually stored in data warehouses like Hive, and the graph database should be able to quickly import data from such warehouses to keep the service effective.

The Meituan team tried the top 30 graph databases on DB-Engines and found that most well-known graph databases support only single-node deployment in their open source editions, for example Neo4j, ArangoDB, Virtuoso, TigerGraph, and RedisGraph. This means that their storage service cannot scale horizontally and the requirement to store large-scale knowledge graph data cannot be met. After thorough research and comparison, the team selected the following graph databases for the final round: Nebula Graph (developed by a startup team who originally came from Alibaba), Dgraph (developed by a startup team who originally came from Google), and HugeGraph (developed by Baidu).

A Summary of The Testing Process

Hardware Configuration

  1. Database instances: Docker containers running on different machines
  2. Single instance resources: 32 Cores, 64 GB Memory, 1 TB SSD (Intel(R) Xeon(R) Gold 5218 CPU @ 2.30 GHz)
  3. Number of instances: Three

Deployment Plan

Nebula Graph v1.0.1

The Meta Service is responsible for managing cluster metadata, the Query Service for query execution, and the Storage Service for storing sharded data. RocksDB acts as the storage backend. See the details below:

| Instance No.1 | Instance No.2 | Instance No.3 |
| --- | --- | --- |
| Metad | Metad | Metad |
| Graphd | Graphd | Graphd |
| Storaged[RocksDB] | Storaged[RocksDB] | Storaged[RocksDB] |

Dgraph v20.07.0

Zero is responsible for cluster metadata management; Alpha is responsible for query execution and data storage. The storage backend, BadgerDB, is developed by Dgraph itself. See the details below:

| Instance No.1 | Instance No.2 | Instance No.3 |
| --- | --- | --- |
| Zero | Zero | Zero |
| Alpha | Alpha | Alpha |

HugeGraph v0.10.4

HugeServer is responsible for cluster metadata management and query execution. Although HugeGraph supports RocksDB as a storage backend, it doesn’t support RocksDB in cluster mode, so the team chose HBase as the storage backend instead. See the details below:

| Instance No.1 | Instance No.2 | Instance No.3 |
| --- | --- | --- |
| HugeServer[HBase] | HugeServer[HBase] | HugeServer[HBase] |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
| NodeManager | NodeManager | NodeManager |
| RegionServer | RegionServer | RegionServer |
| ZooKeeper | ZooKeeper | ZooKeeper |
| NameNode | NameNode[Backup] | |
| ResourceManager | ResourceManager[Backup] | |
| HBase Master | HBase Master[Backup] | |

The Dataset Used for the Benchmarking Test

The team uses the LDBC dataset for the benchmarking test.

LDBC Dataset

Below is a brief introduction to the data within the dataset:

  1. Data generation parameters: branch=stable, version=0.3.3, scale=1000
  2. Entities: 2.6 billion entities of four types
  3. Relationships: 17.7 billion relationships of 19 types
  4. Data format: CSV
  5. Data size after compression (gzipped): 194 GB

Benchmarking Test Results

The benchmarking test has been conducted from three perspectives, i.e. data import in batch, real-time data write, and data query.

Data Import in Batch

The steps of data import in batch are as follows:

  1. Generate CSV files from the Hive tables
  2. Convert the CSV files into the intermediate formats supported by each graph database
  3. Import the intermediate files into the graph databases

The data import process in each graph database is described below:

  • To Nebula Graph. Execute a Spark task against Hive to generate SST files in RocksDB’s underlying format, then ingest the SST files into Nebula Graph.
  • To Dgraph. Execute a Spark task against Hive to generate RDF files, then run the bulk load operation to generate the persistent files for each node of the cluster.
  • To HugeGraph. Run the loader to insert the CSV files from Hive directly into HugeGraph.
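
To illustrate the intermediate-file step for Dgraph, here is a minimal sketch (with hypothetical column names, not the team’s actual Spark job) that turns CSV vertex rows into the kind of RDF triples the bulk loader consumes:

```python
import csv
import io

def csv_rows_to_rdf(csv_text, id_column="id"):
    """Turn CSV vertex rows into RDF N-Triples lines for a Dgraph bulk load.

    Every non-id column becomes a <subject> <predicate> "object" . triple,
    mirroring how Dgraph stores each vertex property as a predicate.
    """
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = f"<{row[id_column]}>"
        for column, value in row.items():
            if column != id_column and value:
                triples.append(f'{subject} <{column}> "{value}" .')
    return triples

sample = "id,first_name,gender\n1001,Rodrigo,female\n"
print("\n".join(csv_rows_to_rdf(sample)))
```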

Data Import Testing Results

| Database | Import Method | Time Consumed Each Step | Size Before Import (gzipped) | Size After Import | Space Amplification Ratio | Load Balance Among Nodes |
| --- | --- | --- | --- | --- | --- | --- |
| Nebula | 1. Hive -> SST file; 2. SST file -> DB | 1. 3.0 h; 2. 0.4 h | 194 GB | 518 GB | 2.67x | 176 GB / 171 GB / 171 GB, balanced among nodes |
| Dgraph | 1. Hive -> RDF file; 2. RDF file -> DB | 1. 4.0 h; 2. 8.7 h (OOM) | 4.2 GB (full dataset could not be imported) | 24 GB (user relationships only, for the space amplification test) | 5.71x | 1 GB / 1 GB / 22 GB, not balanced among nodes |
| HugeGraph | 1. Hive -> CSV; 2. CSV -> DB | 1. 2.0 h; 2. 9.1 h (out of disk space) | 4.2 GB (full dataset could not be imported) | 41 GB (user relationships only, for the space amplification test) | 9.76x | 11 GB / 5 GB / 25 GB, not balanced among nodes |

As seen from the above results, Nebula Graph performs the best: it has the lowest time consumption as well as the smallest space amplification ratio. Its data is distributed by primary key hash, so storage is balanced among the nodes.

In Dgraph, the original data size is 194 GB and the server memory is 392 GB; the bulk load operation was terminated after 8.7 hours due to OOM, so the import was only partially successful. Dgraph distributes data by the predicates in the RDF model, and one type of relationship can only be stored on a single node, which results in a severe storage and computation imbalance among nodes.

In HugeGraph, the original data size is 194 GB. During the import, the 1 TB disk of one node filled up, so the import was only partially successful. HugeGraph has the largest space amplification ratio, and its data distribution is severely unbalanced among nodes.

Real-Time Data Write

This test was to insert vertices and edges to graph databases in real-time and to test the write performance of each database. Below is a brief description of how the metrics are defined.

  1. Response time. Send 50,000 write requests at a fixed QPS and record the time consumed for sending all the requests successfully. Obtain the time consumed from when a request is sent to when a response is received on the client side in terms of avg, p99, and p999.
  2. The largest throughput. Send 1,000,000 write requests at a gradually increasing QPS and keep writing data. The peak QPS of successful requests within one minute is the largest throughput.
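
The response-time metric can be sketched as a simple fixed-QPS loop; `send_request` stands in for the actual client call, and the percentile uses the nearest-rank method (an assumption; the team’s harness is not published):

```python
import math
import time

def measure_latencies(send_request, n_requests, qps):
    """Send n_requests at a fixed QPS and record each latency in ms."""
    interval = 1.0 / qps
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        send_request()                            # the write/read under test
        elapsed = time.perf_counter() - start
        latencies.append(elapsed * 1000)
        time.sleep(max(0.0, interval - elapsed))  # pace to the target QPS
    return latencies

def percentile(latencies, p):
    """Nearest-rank percentile, e.g. p=99 for p99, p=99.9 for p999."""
    ranked = sorted(latencies)
    return ranked[min(len(ranked) - 1, math.ceil(len(ranked) * p / 100) - 1)]

lat = measure_latencies(lambda: None, n_requests=100, qps=1000)
print(sum(lat) / len(lat), percentile(lat, 99), percentile(lat, 99.9))
```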

How to Insert Nodes to Graph Databases

Nebula Graph

INSERT VERTEX t_rich_node (creation_date, first_name, last_name, gender, birthday, location_ip, browser_used) VALUES ${mid}:('2012-07-18T01:16:17.119+0000', 'Rodrigo', 'Silva', 'female', '1984-10-11', '84.194.222.86', 'Firefox')

Dgraph

{
    set {
        <${mid}> <creation_date> "2012-07-18T01:16:17.119+0000" .
        <${mid}> <first_name> "Rodrigo" .
        <${mid}> <last_name> "Silva" .
        <${mid}> <gender> "female" .
        <${mid}> <birthday> "1984-10-11" .
        <${mid}> <location_ip> "84.194.222.86" .
        <${mid}> <browser_used> "Firefox" .
    }
}

HugeGraph

g.addVertex(T.label, "t_rich_node", T.id, ${mid}, "creation_date", "2012-07-18T01:16:17.119+0000", "first_name", "Rodrigo", "last_name", "Silva", "gender", "female", "birthday", "1984-10-11", "location_ip", "84.194.222.86", "browser_used", "Firefox")

How to Insert Edges to Graph Databases

Nebula Graph

INSERT EDGE t_edge () VALUES ${mid1}->${mid2}:();

Dgraph

{
    set {
        <${mid1}> <link> <${mid2}> .
    }
}

HugeGraph

g.V(${mid1}).as('src').V(${mid2}).addE('t_edge').from('src')

Real-Time Write Testing Results

Response Time

| Real-Time Write, Response Time (ms) | Insert Vertices at QPS=100 (avg / p99 / p999) | Insert Edges at QPS=100 (avg / p99 / p999) |
| --- | --- | --- |
| Nebula | 3 / 10 / 38 | 3 / 10 / 34 |
| Dgraph | 5 / 13 / 54 | 5 / 9 / 43 |
| HugeGraph | 37 / 159 / 1590 | 28 / 93 / 1896 |

The Largest Throughput

| Real-Time Write, the Largest Throughput (QPS) | Insert Vertices | Insert Edges |
| --- | --- | --- |
| Nebula | 84000 | 76000 |
| Dgraph | 10138 | 9600 |
| HugeGraph | 410 | 457 |

Real-Time Write Testing Results Analysis

As seen from the above results, the response time and throughput of Nebula Graph lead the pack in this test, because its architecture distributes write requests across multiple storage nodes.

In Dgraph, the response time and throughput are not as good as Nebula Graph’s because, by design, all edges of one type are stored on the same node.

HugeGraph performs the worst in terms of response time and throughput because its storage backend, HBase, has lower concurrent read and write capability than RocksDB (adopted by Nebula Graph) and BadgerDB (adopted by Dgraph).

Data Query

The data query benchmarking was to test the read performance of the graph database candidates and it was based on the following common queries: n-hop queries with ID returned, n-hop queries with properties returned, and shared friends query. Below is a brief description of how the metrics are defined.

  1. Response time. Send 50,000 read requests at a fixed QPS and record the time consumed for sending all the requests. Obtain the time consumed from when a request is sent to when a response is received on the client side in terms of avg, p99, and p999. If no response is received within 60s after a read request is sent, then a timeout error is returned.
  2. The largest throughput. Send 1,000,000 read requests at a gradually increasing QPS and keep querying the data. The peak QPS (successful requests) within one minute is the largest throughput.
  3. Cache configuration. All graph databases in this test support read from cache and the feature is on by default. The team had cleaned server cache every time before a test was conducted.
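
The largest-throughput metric can likewise be sketched as a QPS ramp; `run_for_one_minute` is a hypothetical driver that applies the given load for a one-minute window and returns the count of successful requests:

```python
def find_largest_throughput(run_for_one_minute, start_qps=100, step=100,
                            limit_qps=20_000):
    """Ramp the offered QPS and report the peak successful QPS sustained
    within a one-minute window, i.e. the largest throughput."""
    best = 0.0
    qps = start_qps
    while qps <= limit_qps:
        successful = run_for_one_minute(qps)
        achieved = successful / 60.0   # successful requests per second
        if achieved < best:            # saturated: pushing harder only hurts
            break
        best = achieved
        qps += step
    return best

# Toy driver: the system under test saturates at 500 successful QPS.
fake_driver = lambda qps: min(qps, 500) * 60
print(find_largest_throughput(fake_driver, limit_qps=2000))  # prints 500.0
```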

Sample Code of N-Hop Queries with ID Returned

Nebula Graph

GO ${n} STEPS FROM ${mid} OVER person_knows_person

Dgraph

{
 q(func:uid(${mid})) {
   uid
   person_knows_person { # ${n} hops = nesting depth of this block
     uid
   }
 }
}

HugeGraph

g.V(${mid}).out().id() # ${n} hops = number of chained out() steps

Sample Code of N-Hop Queries with Properties Returned

Nebula Graph

GO ${n} STEPS FROM ${mid} OVER person_knows_person YIELD person_knows_person.creation_date, $$.person.first_name, $$.person.last_name, $$.person.gender, $$.person.birthday, $$.person.location_ip, $$.person.browser_used

Dgraph

{
  q(func:uid(${mid})) {
    uid first_name last_name gender birthday location_ip browser_used
    person_knows_person { # ${n} hops = nesting depth of this block
      uid first_name last_name gender birthday location_ip browser_used
    }
  }
}

HugeGraph

g.V(${mid}).out()  # ${n} hops = number of chained out() steps

Sample Code of Shared Friends Queries

Nebula Graph

GO FROM ${mid1} OVER person_knows_person INTERSECT GO FROM ${mid2} OVER person_knows_person

Dgraph

{
  var(func: uid(${mid1})) {
    person_knows_person {
      M1 as uid
    }
  }
  var(func: uid(${mid2})) {
    person_knows_person {
      M2 as uid
    }
  }
  in_common(func: uid(M1)) @filter(uid(M2)){
    uid
  }
}

HugeGraph

g.V(${mid1}).out().id().aggregate('x').V(${mid2}).out().id().where(within('x')).dedup()
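
All three queries above compute the same thing: the intersection of the two vertices’ one-hop neighbor sets. In plain terms, over a toy adjacency map (hypothetical data):

```python
def shared_friends(adjacency, mid1, mid2):
    """Common person_knows_person neighbors of two vertices:
    the intersection of their one-hop neighbor sets."""
    return adjacency.get(mid1, set()) & adjacency.get(mid2, set())

graph = {
    "A": {"B", "C", "D"},
    "E": {"C", "D", "F"},
}
print(sorted(shared_friends(graph, "A", "E")))  # prints ['C', 'D']
```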

Testing Results of N-Hop Queries with ID Returned

| N-Hop Queries with ID Returned, Response Time (ms) | N=1, QPS=100 (avg / p99 / p999) | N=2, QPS=100 (avg / p99 / p999) | N=3, QPS=10 (avg / p99 / p999) |
| --- | --- | --- | --- |
| Nebula | 4 / 13 / 45 | 24 / 160 / 268 | 1908 / 11304 / 11304 |
| Dgraph | 4 / 12 / 30 | 93 / 524 / 781 | Timeout |
| HugeGraph | 28 / 274 / 1710 | Timeout | Timeout |

| N-Hop Queries with ID Returned, the Largest Throughput (QPS) | N=1 (avg neighbors returned = 62) | N=2 (avg neighbors returned = 3844) | N=3 (avg neighbors returned = 238328) |
| --- | --- | --- | --- |
| Nebula | 80830 | 6950 | 32 |
| Dgraph | 8558 | 100 | Timeout |
| HugeGraph | 804 | Timeout | Timeout |

Testing Results of N-Hop Queries with Properties Returned

The average size of the properties on a single vertex is 200 Bytes.

| N-Hop Queries with Properties Returned, Response Time (ms) | N=1, QPS=100 (avg / p99 / p999) | N=2, QPS=100 (avg / p99 / p999) | N=3, QPS=10 (avg / p99 / p999) |
| --- | --- | --- | --- |
| Nebula | 5 / 20 / 49 | 99 / 475 / 1645 | 48052 / >60s / >60s |
| Dgraph | 50 / 210 / 274 | Timeout | Timeout |
| HugeGraph | 29 / 219 / 1555 | Timeout | Timeout |

| N-Hop Queries with Properties Returned, the Largest Throughput (QPS) | N=1 (avg neighbors returned = 62) | N=2 (avg neighbors returned = 3844) | N=3 (avg neighbors returned = 238328) |
| --- | --- | --- | --- |
| Nebula | 36400 | 730 | 2 |
| Dgraph | 80 | Timeout | Timeout |
| HugeGraph | 802 | Timeout | Timeout |

Testing Results of Shared Friends Queries

This test didn’t include the largest throughput.

| Shared Friends Queries, Response Time (ms) | QPS=100 (avg / p99 / p999) |
| --- | --- |
| Nebula | 4 / 20 / 42 |
| Dgraph | 4 / 13 / 36 |
| HugeGraph | 39 / 366 / 1630 |

Data Query Testing Results Analysis

In the test of response time (latency) of one-hop queries with ID returned, both Nebula Graph and Dgraph need to search the outgoing edges only once. Due to Dgraph’s storage architecture, edges of the same type are stored on the same node, so a one-hop query in Dgraph incurs no network cost. In Nebula Graph, edges are distributed across multiple nodes, which is why Dgraph performed slightly better than Nebula Graph in this test.

In the test of throughput (QPS) of one-hop queries with ID returned, the CPU load of the Dgraph cluster was bounded by the single node where the edges were stored, which resulted in a low CPU occupation ratio across the cluster. Therefore, the largest throughput of Dgraph is only 11% of that of Nebula Graph.

In the test of response time (latency) of two-hop queries with ID returned, due to the same reason stated above, the actual load in Dgraph nearly reached the upper limit of the cluster when QPS was set to 100. Therefore, the latency in Dgraph was lengthened significantly to 3.9x of that in Nebula Graph.

In the test of one-hop queries with properties returned, Nebula Graph stores all the properties of a vertex as one data structure on a single node, so the number of lookups equals the number of outgoing edges, Y. In Dgraph, each property of a vertex is treated as an outgoing edge, and properties are distributed across different nodes, so the number of lookups equals X (the number of properties) times Y (the number of outgoing edges). For this reason, Dgraph performs worse than Nebula Graph. The same rule applies to multi-hop queries.
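
The argument can be made concrete with this test’s own numbers: the sample vertex above carries 7 properties, and the average one-hop neighbor count is 62. As a rough model (ignoring batching and caching):

```python
def property_lookups(neighbors, properties, colocated):
    """Rough lookup count for a one-hop query with properties returned:
    Y when a vertex's properties live together on one node (Nebula Graph),
    X * Y when each property is a separate predicate edge (Dgraph)."""
    return neighbors if colocated else neighbors * properties

Y, X = 62, 7  # avg one-hop neighbors in this test; properties per vertex
print(property_lookups(Y, X, colocated=True))   # prints 62
print(property_lookups(Y, X, colocated=False))  # prints 434
```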

The test of shared friends queries was nearly the same as the test of two one-hop queries with ID returned. Therefore, the results of the two tests were quite similar.

HugeGraph’s storage backend is HBase, which has lower concurrent read and write capability than RocksDB (adopted by Nebula Graph) and BadgerDB (adopted by Dgraph). Therefore, HugeGraph performed worse than the other two in terms of response time and throughput.

Conclusion

The Meituan team has finally selected Nebula Graph as our graph storage engine because it has outperformed the other two candidates in terms of batch data import speed, real-time write performance, and n-hop query performance.

References

This benchmarking test was conducted by Zhao Dengchang and Gao Chen from the NLP team at Meituan. If you have any questions regarding the test or the results, please leave your comments in this thread: https://discuss.nebula-graph.io/t/benchmarking-the-mainstream-open-source-distributed-graph-databases-at-meituan-nebula-graph-vs-dgraph-vs-hugegraph/715

You might also like:

  1. Nebula Graph Architecture — A Bird’s Eye View
  2. An Introduction to Nebula Graph’s Storage Engine
  3. An Introduction to Nebula Graph’s Query Engine

Like what we do? Star us on GitHub: https://github.com/vesoft-inc/nebula