Validating Import Performance of Nebula Importer



Machine Specifications for Testing

| Host Name | OS         | CPU Architecture | CPU Cores | Memory | Disk   |
|-----------|------------|------------------|-----------|--------|--------|
| hadoop10  | CentOS 7.6 | x86_64           | 32        | 128 GB | 1.8 TB |
| hadoop11  | CentOS 7.6 | x86_64           | 32        | 64 GB  | 1 TB   |
| hadoop12  | CentOS 7.6 | x86_64           | 16        | 64 GB  | 1 TB   |

Environment of Nebula Graph Cluster

  • Operating System: CentOS 7.5 +
  • Software required by the Nebula Graph cluster, including gcc 7.1.0+, cmake 3.5.0, glibc 2.12+, and other necessary dependencies, which can be installed as follows:
yum update
yum install -y make \
                 m4 \
                 git \
                 wget \
                 unzip \
                 xz \
                 readline-devel \
                 ncurses-devel \
                 zlib-devel \
                 gcc \
                 gcc-c++ \
                 cmake \
                 gettext \
                 curl
  • Nebula Graph version: V2.0.0
  • Back-end storage: Three nodes, RocksDB
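A quick way to confirm the toolchain meets the minimums above is a small version-comparison helper. This is only a sketch using GNU `sort -V`; the concrete version numbers passed in are illustrative examples, not readings from the test machines.

```shell
# Sketch: check "installed version >= required minimum" with sort -V.
version_ge() {
  # True when $1 >= $2 in version-number ordering.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

version_ge "7.5.0" "7.1.0" && echo "gcc ok"      # e.g. from gcc -dumpversion
version_ge "3.5.2" "3.5.0" && echo "cmake ok"    # e.g. from cmake --version
version_ge "2.17"  "2.12"  && echo "glibc ok"    # e.g. from ldd --version
```

In practice, feed it the versions reported by the installed tools instead of the literals above.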
| Process \ Host Name     | hadoop10 | hadoop11 | hadoop12 |
|-------------------------|----------|----------|----------|
| # of metad processes    | 1        | 1        | 1        |
| # of storaged processes | 1        | 1        | 1        |
| # of graphd processes   | 1        | 1        | 1        |

Preparing Data and Introducing Data Format

| # of Vertices / File Size | # of Edges / File Size | # of Vertices and Edges / File Size |
|---------------------------|------------------------|-------------------------------------|
| 74,314,635 / 4.6 GB       | 139,951,301 / 6.6 GB   | 214,265,936 / 11.2 GB               |

More details about the data:

  • edge.csv: 139,951,301 records in total, 6.6 GB

  • vertex.csv: 74,314,635 records in total, 4.6 GB

  • 214,265,936 vertices and edges in total, 11.2 GB


[root@hadoop10 datas]# wc -l edge.csv 
139951301 edge.csv
[root@hadoop10 datas]# head -10 vertex.csv 
-1861609733419239066,别名: 饴糖、畅糖、畅、软糖。
5842706712819643509,词条(拼音:cí tiáo)也叫词目,是辞书学用语,指收列的词语及其释文。
[root@hadoop10 datas]# wc -l vertex.csv 
74314635 vertex.csv
[root@hadoop10 datas]# head -10 edge.csv 
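Before a long-running import, it can be worth sanity-checking that every row parses into the expected columns. A minimal sketch (the sample file `sample_vertex.csv` and the two-column assumption are illustrative, not part of the test environment):

```shell
# Sketch: count rows that lack at least a VID column and a value column.
# sample_vertex.csv stands in for the real vertex.csv.
printf '%s\n' '-123,alias example' '456,term example' > sample_vertex.csv
bad=$(awk -F',' 'NF < 2' sample_vertex.csv | wc -l)
echo "rows with missing columns: $bad"
```

On the real data, run the same awk filter against vertex.csv and edge.csv; any row it flags would otherwise end up in the importer's failDataPath file.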

Validating Solution

Solution: use Nebula Importer to import the data in batches.

Edit a YAML configuration file for importing the data:

version: v1rc1
description: example
clientSettings:
  concurrency: 10 # number of graph clients
  channelBufferSize: 128
  space: test
  connection:
    user: user
    password: password
logPath: ./err/test.log
files:
  - path: ./vertex.csv
    failDataPath: ./err/vertex.csv
    batchSize: 100
    type: csv
    csv:
      withHeader: false
      withLabel: false
    schema:
      type: vertex
      vertex:
        tags:
          - name: entity
            props:
              - name: name
                type: string
  - path: ./edge.csv
    failDataPath: ./err/edge.csv
    batchSize: 100
    type: csv
    csv:
      withHeader: false
      withLabel: false
    schema:
      type: edge
      edge:
        name: relation
        withRanking: false
        props:
          - name: name
            type: string
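With batchSize set to 100, the importer groups every 100 rows into one write request, so the row counts above translate into a predictable number of requests. A small arithmetic sketch (ceiling division):

```shell
# Sketch: number of batches per file at batchSize 100.
awk 'BEGIN {
  batch = 100
  printf "vertex batches: %d\n", int((74314635  + batch - 1) / batch)
  printf "edge batches:   %d\n", int((139951301 + batch - 1) / batch)
}'
```

batchSize trades request count against statement size, and is typically tuned together with concurrency.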

Create schema

On Nebula Console, create a graph space, and then tags and edge types in the graph space.

# 1. Create a graph space.
 (admin@nebula) [(none)]> create space test2(vid_type = FIXED_STRING(64));
# 2. Switch to the specified graph space.
 (admin@nebula) [(none)]> use test2;
# 3. Create a tag.
(admin@nebula) [test2]> create tag entity(name string);
# 4. Create an edge type.
(admin@nebula) [test2]> create edge relation(name string);
# 5. View the definition of the tag.
 (admin@nebula) [test2]> describe tag entity;
+--------+----------+-------+---------+
| Field  | Type     | Null  | Default |
+--------+----------+-------+---------+
| "name" | "string" | "YES" |         |
+--------+----------+-------+---------+
Got 1 rows (time spent 703/1002 us)
# 6. View the definition of the edge type.
 (admin@nebula) [test2]> describe edge relation;
+--------+----------+-------+---------+
| Field  | Type     | Null  | Default |
+--------+----------+-------+---------+
| "name" | "string" | "YES" |         |
+--------+----------+-------+---------+
Got 1 rows (time spent 703/1041 us)


Compile Nebula Importer and run shell commands.

# Compile Nebula Importer.
make build
# Run the shell command where a YAML configuration file is specified.
/opt/software/nebulagraph/nebula-importer/nebula-importer --config /opt/software/datas/rdf-import2.yaml

View the output

Import results

# View part of logs.
2021/04/19 19:05:55 [INFO] statsmgr.go:61: Tick: Time(2400.00s), Finished(210207018), Failed(0), Latency AVG(32441us), Batches Req AVG(33824us), Rows AVG(87586.25/s)
2021/04/19 19:06:00 [INFO] statsmgr.go:61: Tick: Time(2405.00s), Finished(210541418), Failed(0), Latency AVG(32461us), Batches Req AVG(33844us), Rows AVG(87543.20/s)
2021/04/19 19:06:05 [INFO] statsmgr.go:61: Tick: Time(2410.00s), Finished(210901218), Failed(0), Latency AVG(32475us), Batches Req AVG(33857us), Rows AVG(87510.88/s)
2021/04/19 19:06:10 [INFO] statsmgr.go:61: Tick: Time(2415.00s), Finished(211270318), Failed(0), Latency AVG(32486us), Batches Req AVG(33869us), Rows AVG(87482.50/s)
2021/04/19 19:06:15 [INFO] statsmgr.go:61: Tick: Time(2420.00s), Finished(211685318), Failed(0), Latency AVG(32490us), Batches Req AVG(33873us), Rows AVG(87473.27/s)
2021/04/19 19:06:20 [INFO] statsmgr.go:61: Tick: Time(2425.00s), Finished(211959718), Failed(0), Latency AVG(32517us), Batches Req AVG(33900us), Rows AVG(87406.07/s)
2021/04/19 19:06:25 [INFO] statsmgr.go:61: Tick: Time(2430.00s), Finished(212220818), Failed(0), Latency AVG(32545us), Batches Req AVG(33928us), Rows AVG(87333.67/s)
2021/04/19 19:06:30 [INFO] statsmgr.go:61: Tick: Time(2435.00s), Finished(212433518), Failed(0), Latency AVG(32579us), Batches Req AVG(33963us), Rows AVG(87241.69/s)
2021/04/19 19:06:35 [INFO] statsmgr.go:61: Tick: Time(2440.00s), Finished(212780818), Failed(0), Latency AVG(32593us), Batches Req AVG(33977us), Rows AVG(87205.25/s)
2021/04/19 19:06:40 [INFO] statsmgr.go:61: Tick: Time(2445.01s), Finished(213240518), Failed(0), Latency AVG(32589us), Batches Req AVG(33973us), Rows AVG(87214.69/s)
2021/04/19 19:06:40 [INFO] reader.go:180: Total lines of file(/opt/software/datas/edge.csv) is: 139951301, error lines: 0
2021/04/19 19:06:42 [INFO] statsmgr.go:61: Done(/opt/software/datas/edge.csv): Time(2446.70s), Finished(213307919), Failed(0), Latency AVG(32585us), Batches Req AVG(33968us), Rows AVG(87181.95/s)
2021/04/19 19:06:42 Finish import data, consume time: 2447.20s
2021/04/19 19:06:43 --- END OF NEBULA IMPORTER ---

Pay special attention to the statistics in the results:

Time(2446.70s), Finished(213307919), Failed(0), Latency AVG(32585us), Batches Req AVG(33968us), Rows AVG(87181.95/s)
2021/04/19 19:06:42 Finish import data, consume time: 2447.20s
2021/04/19 19:06:43 --- END OF NEBULA IMPORTER ---
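The reported averages are internally consistent: 213,307,919 finished rows over 2,446.70 seconds works out to roughly the Rows AVG figure in the log.

```shell
# Cross-check the importer's reported throughput: finished rows / elapsed time.
awk 'BEGIN { printf "rows per second: %.0f\n", 213307919 / 2446.70 }'
```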
Resource Requirements

Importing data at this scale places high demands on the machine specifications, including the number of CPU cores, memory size, and disk size.

  • hadoop10

(screenshots: resource usage on hadoop10)

  • hadoop11

(screenshots: resource usage on hadoop11)

  • hadoop12

(screenshots: resource usage on hadoop12)

Recommendations on the machine specifications:

  1. By comparing the memory consumption of the three machines, we found that importing more than 200 million records consumes a large amount of memory, so we recommend configuring as much memory as possible.
  2. For information about the required CPU cores and disk size, see the documentation.

nGQL Statements Test

The native graph query language of Nebula Graph is nGQL, which is compatible with openCypher. For now, nGQL does not support traversing all vertices and edges without a condition; for example, MATCH (v) RETURN v is not supported yet. Make sure that at least one index is available for a MATCH statement to use. If you create an index when related vertices, edges, or properties already exist, you must rebuild the index after creating it for it to take effect.
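As a sketch of the index step described above, the snippet below writes a small .ngql file that creates and rebuilds a tag index. The index name entity_name_index and the string length 64 are assumed examples, not taken from the test.

```shell
# Sketch: an .ngql file that creates and rebuilds a tag index so that MATCH
# statements over entity(name) can run. Index name and length are examples.
cat > create_index.ngql <<'EOF'
USE test2;
CREATE TAG INDEX IF NOT EXISTS entity_name_index ON entity(name(64));
REBUILD TAG INDEX entity_name_index;
EOF
# Then execute it through the console, e.g.:
# ./nebula-console -addr <graphd-ip> -port 9669 -u user -p password -f create_index.ngql
```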

To test whether nGQL is compatible with openCypher, run the following commands:

# Test OpenCypher statements.
# Import an nGQL file.
./nebula-console -addr -port 9669 -u user -p password -t 120  -f /opt/software/datas/basketballplayer-2.X.ngql

(screenshots: execution results)


This test validated the performance of importing a large amount of data into a three-node Nebula Graph cluster. The batch-writing performance of Nebula Importer can meet the performance requirements of production scenarios. However, when the data is imported as CSV files, the files must be stored in HDFS, and a YAML configuration file is needed to specify the configuration of the tags and edge types for the tool to process.

Would you like to know more about Nebula Graph? Join the Slack channel!