GlusterFS Benchmarking

Anurag Jain
Dec 29, 2020 · 24 min read


Workloads benchmarked: file I/O, Postgres, Redis, InfluxDB, ELK (Elasticsearch), RabbitMQ, and Mnesia.

File Performance

LocalDisk

dd if=/dev/urandom of=/home/file1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.60998 s, 191 MB/s

dd if=/dev/urandom of=/home/file1 bs=4K count=1

1+0 records in

1+0 records out

4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000119666 s, 34.2 MB/s
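
Two caveats on these dd numbers: /dev/urandom itself can be a bottleneck at a few hundred MB/s, and writes of this size land in the page cache, so the 4K figure largely measures memory speed. A variant worth running alongside (conv=fdatasync is a standard dd flag that flushes data to stable storage before the timing is reported):

dd if=/dev/urandom of=/home/file1 bs=1M count=1024 conv=fdatasync
dd if=/dev/urandom of=/home/file1 bs=4K count=1 conv=fdatasync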

Gluster

dd if=/dev/urandom of=/mnt/file1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.49356 s, 126 MB/s

dd if=/dev/urandom of=/mnt/file1 bs=4K count=1

1+0 records in

1+0 records out

4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00168779 s, 2.4 MB/s

For large sequential writes, Gluster performance is reasonably close to local disk (126 MB/s vs 191 MB/s for the 1 GiB write), but it degrades sharply as the I/O size shrinks: the single 4 KiB write drops from 34.2 MB/s locally to 2.4 MB/s on Gluster.
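
For reference, /mnt here is a GlusterFS FUSE mount; a typical mount command looks like the following (the server hostname and volume name gv0 are placeholders, not taken from this setup):

mount -t glusterfs gluster-server:/gv0 /mnt

Every operation on such a mount passes through the FUSE client and over the network to the bricks, so fixed per-operation latency dominates once the I/O size shrinks, which matches the 4K result above.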

Postgres

LocalDisk

pgbench -i -s 50 bench_test -h localhost -U postgres

pgbench -c 10 -j 2 -t 10000 bench_test -h localhost -U postgres
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average = 0.959 ms
tps = 10428.963066 (including connections establishing)
tps = 10433.306051 (excluding connections establishing)

Gluster

pgbench -c 10 -j 2 -t 10000 bench_test -h localhost -U postgres

starting vacuum...end.
client 6 aborted in state 5: ERROR: unexpected data beyond EOF in block 82120 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
client 0 aborted in state 5: ERROR: unexpected data beyond EOF in block 82120 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
client 5 aborted in state 5: ERROR: unexpected data beyond EOF in block 82120 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
starting vacuum...end.
client 6 aborted in state 5: ERROR: unexpected data beyond EOF in block 82120 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
client 0 aborted in state 5: ERROR: unexpected data beyond EOF in block 82120 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
client 5 aborted in state 5: ERROR: unexpected data beyond EOF in block 82120 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
client 8 aborted in state 5: ERROR: unexpected data beyond EOF in block 82125 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
client 1 aborted in state 5: ERROR: unexpected data beyond EOF in block 82125 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
client 3 aborted in state 5: ERROR: unexpected data beyond EOF in block 82125 of relation base/78590/78603
HINT: This has been seen to occur with buggy kernels; consider updating your system.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 45521/100000
latency average = 49.122 ms
tps = 203.575041 (including connections establishing)
tps = 203.581029 (excluding connections establishing)

Result: Postgres does not work well on Gluster. The pgbench run aborted most of its clients with "unexpected data beyond EOF" errors, completed fewer than half of its transactions, and averaged ~49 ms latency versus ~1 ms on local disk. Running an SQL database on Gluster is generally not recommended.
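
The "unexpected data beyond EOF" failures are commonly attributed to Gluster's client-side caching translators returning stale file state under concurrent writers. If you must experiment with a database on Gluster anyway, the usual first step is to switch those translators off; a hedged sketch, with gv0 as a placeholder volume name:

gluster volume set gv0 performance.write-behind off
gluster volume set gv0 performance.stat-prefetch off
gluster volume set gv0 performance.quick-read off
gluster volume set gv0 performance.io-cache off

This trades away even more throughput for correctness, which only reinforces the conclusion above.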

Redis

NFSDisk

/data # redis-benchmark -q -n 100000
PING_INLINE: 77700.08 requests per second
PING_BULK: 95419.85 requests per second
SET: 96993.21 requests per second
GET: 77519.38 requests per second
INCR: 77821.02 requests per second
LPUSH: 92250.92 requests per second
RPUSH: 78064.01 requests per second
LPOP: 76687.12 requests per second
RPOP: 76745.97 requests per second
SADD: 77160.49 requests per second
HSET: 79365.08 requests per second
SPOP: 79176.56 requests per second
LPUSH (needed to benchmark LRANGE): 97370.98 requests per second
LRANGE_100 (first 100 elements): 40404.04 requests per second
LRANGE_300 (first 300 elements): 19227.07 requests per second
LRANGE_500 (first 450 elements): 13902.41 requests per second
LRANGE_600 (first 600 elements): 10610.08 requests per second
MSET (10 keys): 81037.28 requests per second

Gluster

/data # redis-benchmark -q -n 100000
PING_INLINE: 59136.61 requests per second
PING_BULK: 59988.00 requests per second
SET: 62893.08 requests per second
GET: 56850.48 requests per second
INCR: 62774.64 requests per second
LPUSH: 60422.96 requests per second
RPUSH: 55741.36 requests per second
LPOP: 55432.37 requests per second
RPOP: 56818.18 requests per second
SADD: 57273.77 requests per second
HSET: 55586.44 requests per second
SPOP: 56689.34 requests per second
LPUSH (needed to benchmark LRANGE): 57603.69 requests per second
LRANGE_100 (first 100 elements): 29832.94 requests per second
LRANGE_300 (first 300 elements): 14949.92 requests per second
LRANGE_500 (first 450 elements): 11005.94 requests per second
LRANGE_600 (first 600 elements): 8573.39 requests per second
MSET (10 keys): 58651.02 requests per second
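
Redis serves reads and writes from memory, so redis-benchmark mostly exercises CPU and networking; the backing mount only comes into play through persistence (RDB snapshots and the AOF). To make the filesystem the bottleneck deliberately, one could force an fsync on every write and re-run the benchmark; a hedged sketch using standard redis-server options, with /data as the mount point from above:

redis-server --dir /data --appendonly yes --appendfsync always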

InfluxDb

NFSDisk

./inch -v -c 20 -b 10000 -t 3000,2,1 -p 100000 -host http://admin:admin@influxdb:8086 -db stress

T=00000001 220000 points written (0%). Total throughput: 219873.1 pt/sec | 219873.1 val/sec. Current throughput: 220000 val/sec. Errors: 0

T=00000002 710000 points written (0%). Total throughput: 354916.2 pt/sec | 354916.2 val/sec. Current throughput: 490000 val/sec. Errors: 0

T=00000003 1050000 points written (0%). Total throughput: 349955.1 pt/sec | 349955.1 val/sec. Current throughput: 340000 val/sec. Errors: 0 | μ: 482.028065ms, 90%: 704.666347ms, 95%: 809.524952ms, 99%: 964.494143ms

T=00000004 1280000 points written (0%). Total throughput: 319969.2 pt/sec | 319969.2 val/sec. Current throughput: 230000 val/sec. Errors: 0 | μ: 552.358717ms, 90%: 964.494143ms, 95%: 1.008205848s, 99%: 1.203444716s

T=00000005 1340000 points written (0%). Total throughput: 267980.6 pt/sec | 267980.6 val/sec. Current throughput: 60000 val/sec. Errors: 0 | μ: 579.352428ms, 90%: 993.379523ms, 95%: 1.096542678s, 99%: 1.300832511s
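
For reference, the inch flags used above: -v verbose output, -c 20 concurrent writers, -b 10000 points per batch, -t 3000,2,1 the tag cardinality (3000x2x1 = 6000 series), -p 100000 points per series, and -db the target database. inch is InfluxData's stress tool; with a recent Go toolchain it can be installed from its published module path:

go install github.com/influxdata/inch/cmd/inch@latest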

Gluster (same inch command as above)

T=00000179 146610000 points written (24%). Total throughput: 819048.8 pt/sec | 819048.8 val/sec. Current throughput: 0 val/sec. Errors: 0 | μ: 80.593435ms, 90%: 105.821004ms, 95%: 153.451914ms, 99%: 1.233193194s

T=00000180 147060000 points written (24%). Total throughput: 816998.6 pt/sec | 816998.6 val/sec. Current throughput: 450000 val/sec. Errors: 0 | μ: 83.006666ms, 90%: 107.002054ms, 95%: 162.214635ms, 99%: 1.348759128s

T=00000181 148080000 points written (24%). Total throughput: 818120.2 pt/sec | 818120.2 val/sec. Current throughput: 1020000 val/sec. Errors: 0 | μ: 82.826417ms, 90%: 106.918635ms, 95%: 161.13164ms, 99%: 1.345174642s

T=00000182 148530000 points written (24%). Total throughput: 816096.9 pt/sec | 816096.9 val/sec. Current throughput: 450000 val/sec. Errors: 0 | μ: 82.745268ms, 90%: 106.918635ms, 95%: 161.00163ms, 99%: 1.345174642s

ELK

sudo docker run --rm elastic/rally --track=http_logs --test-mode --pipeline=benchmark-only --target-hosts=elk:9200
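
Note that --test-mode runs only a small sample of the http_logs corpus, so the numbers below are indicative rather than full-run results. Also, the "Cumulative ..." rows at the top of each table come from Elasticsearch's index stats and span the whole lifetime of the cluster's shards, not just this race, which is why they differ so much between the two environments. Rally can also be run without Docker; a hedged equivalent (newer Rally releases move this under the esrally race subcommand):

pip install esrally
esrally --track=http_logs --test-mode --pipeline=benchmark-only --target-hosts=elk:9200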

Gluster

------------------------------------------------------

| Metric | Task | Value | Unit |

|---------------------------------------------------------------:|-------------------------------------------------------:|----------:|--------:|

| Cumulative indexing time of primary shards | | 1256.4 | min |

| Min cumulative indexing time across primary shards | | 0 | min |

| Median cumulative indexing time across primary shards | | 0.125558 | min |

| Max cumulative indexing time across primary shards | | 339.033 | min |

| Cumulative indexing throttle time of primary shards | | 0 | min |

| Min cumulative indexing throttle time across primary shards | | 0 | min |

| Median cumulative indexing throttle time across primary shards | | 0 | min |

| Max cumulative indexing throttle time across primary shards | | 0 | min |

| Cumulative merge time of primary shards | | 1943.69 | min |

| Cumulative merge count of primary shards | | 811 | |

| Min cumulative merge time across primary shards | | 0 | min |

| Median cumulative merge time across primary shards | | 0 | min |

| Max cumulative merge time across primary shards | | 639.626 | min |

| Cumulative merge throttle time of primary shards | | 1034.42 | min |

| Min cumulative merge throttle time across primary shards | | 0 | min |

| Median cumulative merge throttle time across primary shards | | 0 | min |

| Max cumulative merge throttle time across primary shards | | 326.292 | min |

| Cumulative refresh time of primary shards | | 208.596 | min |

| Cumulative refresh count of primary shards | | 11474 | |

| Min cumulative refresh time across primary shards | | 0 | min |

| Median cumulative refresh time across primary shards | | 0.0718 | min |

| Max cumulative refresh time across primary shards | | 55.1991 | min |

| Cumulative flush time of primary shards | | 71.56 | min |

| Cumulative flush count of primary shards | | 1991 | |

| Median cumulative flush time across primary shards | | 0.0269333 | min |

| Max cumulative flush time across primary shards | | 19.1704 | min |

| Total Young Gen GC time | | 1.584 | s |

| Total Young Gen GC count | | 99 | |

| Total Old Gen GC time | | 0 | s |

| Total Old Gen GC count | | 0 | |

| Store size | | 154.944 | GB |

| Translog size | | 2.09728 | GB |

| Heap used for segments | | 182.864 | MB |

| Heap used for doc values | | 0.296513 | MB |

| Heap used for terms | | 108.954 | MB |

| Heap used for norms | | 1.12823 | MB |

| Heap used for points | | 6.07351 | MB |

| Heap used for stored fields | | 66.4113 | MB |

| Segment count | | 905 | |

| Min Throughput | index-append | 44.44 | docs/s |

| Median Throughput | index-append | 534.71 | docs/s |

| Max Throughput | index-append | 802.57 | docs/s |

| 50th percentile latency | index-append | 742.095 | ms |

| 90th percentile latency | index-append | 3044.36 | ms |

| 100th percentile latency | index-append | 3244.55 | ms |

| 50th percentile service time | index-append | 742.095 | ms |

| 90th percentile service time | index-append | 3044.36 | ms |

| 100th percentile service time | index-append | 3244.55 | ms |

| error rate | index-append | 0 | % |

| Min Throughput | default | 13.51 | ops/s |

| Median Throughput | default | 13.51 | ops/s |

| Max Throughput | default | 13.51 | ops/s |

| 100th percentile latency | default | 6.84186 | ms |

| 100th percentile service time | default | 6.84186 | ms |

| error rate | default | 0 | % |

| Min Throughput | term | 48.93 | ops/s |

| Median Throughput | term | 48.93 | ops/s |

| Max Throughput | term | 48.93 | ops/s |

| 100th percentile latency | term | 7.71598 | ms |

| 100th percentile service time | term | 7.71598 | ms |

| error rate | term | 0 | % |

| Min Throughput | range | 97.22 | ops/s |

| Median Throughput | range | 97.22 | ops/s |

| Max Throughput | range | 97.22 | ops/s |

| 100th percentile latency | range | 9.28728 | ms |

| 100th percentile service time | range | 9.28728 | ms |

| error rate | range | 0 | % |

| Min Throughput | 200s-in-range | 103.87 | ops/s |

| Median Throughput | 200s-in-range | 103.87 | ops/s |

| Max Throughput | 200s-in-range | 103.87 | ops/s |

| 100th percentile latency | 200s-in-range | 6.01426 | ms |

| 100th percentile service time | 200s-in-range | 6.01426 | ms |

| error rate | 200s-in-range | 0 | % |

| Min Throughput | 400s-in-range | 119.56 | ops/s |

| Median Throughput | 400s-in-range | 119.56 | ops/s |

| Max Throughput | 400s-in-range | 119.56 | ops/s |

| 100th percentile latency | 400s-in-range | 6.19773 | ms |

| 100th percentile service time | 400s-in-range | 6.19773 | ms |

| error rate | 400s-in-range | 0 | % |

| Min Throughput | hourly_agg | 26.81 | ops/s |

| Median Throughput | hourly_agg | 26.81 | ops/s |

| Max Throughput | hourly_agg | 26.81 | ops/s |

| 100th percentile latency | hourly_agg | 32.4813 | ms |

| 100th percentile service time | hourly_agg | 32.4813 | ms |

| error rate | hourly_agg | 0 | % |

| Min Throughput | scroll | 22.71 | pages/s |

| Median Throughput | scroll | 22.71 | pages/s |

| Max Throughput | scroll | 22.71 | pages/s |

| 100th percentile latency | scroll | 187.801 | ms |

| 100th percentile service time | scroll | 187.801 | ms |

| error rate | scroll | 0 | % |

| Min Throughput | desc_sort_timestamp | 81.65 | ops/s |

| Median Throughput | desc_sort_timestamp | 81.65 | ops/s |

| Max Throughput | desc_sort_timestamp | 81.65 | ops/s |

| 100th percentile latency | desc_sort_timestamp | 5.89178 | ms |

| 100th percentile service time | desc_sort_timestamp | 5.89178 | ms |

| error rate | desc_sort_timestamp | 0 | % |

| Min Throughput | asc_sort_timestamp | 124.32 | ops/s |

| Median Throughput | asc_sort_timestamp | 124.32 | ops/s |

| Max Throughput | asc_sort_timestamp | 124.32 | ops/s |

| 100th percentile latency | asc_sort_timestamp | 6.08914 | ms |

| 100th percentile service time | asc_sort_timestamp | 6.08914 | ms |

| error rate | asc_sort_timestamp | 0 | % |

| 100th percentile latency | desc_sort_with_after_timestamp | 39.7624 | ms |

| 100th percentile service time | desc_sort_with_after_timestamp | 39.7624 | ms |

| error rate | desc_sort_with_after_timestamp | 100 | % |

| 100th percentile latency | asc_sort_with_after_timestamp | 37.3931 | ms |

| 100th percentile service time | asc_sort_with_after_timestamp | 37.3931 | ms |

| error rate | asc_sort_with_after_timestamp | 100 | % |

| Min Throughput | desc-sort-timestamp-after-force-merge-1-seg | 132.25 | ops/s |

| Median Throughput | desc-sort-timestamp-after-force-merge-1-seg | 132.25 | ops/s |

| Max Throughput | desc-sort-timestamp-after-force-merge-1-seg | 132.25 | ops/s |

| 100th percentile latency | desc-sort-timestamp-after-force-merge-1-seg | 5.87846 | ms |

| 100th percentile service time | desc-sort-timestamp-after-force-merge-1-seg | 5.87846 | ms |

| error rate | desc-sort-timestamp-after-force-merge-1-seg | 0 | % |

| Min Throughput | asc-sort-timestamp-after-force-merge-1-seg | 121.7 | ops/s |

| Median Throughput | asc-sort-timestamp-after-force-merge-1-seg | 121.7 | ops/s |

| Max Throughput | asc-sort-timestamp-after-force-merge-1-seg | 121.7 | ops/s |

| 100th percentile latency | asc-sort-timestamp-after-force-merge-1-seg | 4.26312 | ms |

| 100th percentile service time | asc-sort-timestamp-after-force-merge-1-seg | 4.26312 | ms |

| error rate | asc-sort-timestamp-after-force-merge-1-seg | 0 | % |

| 100th percentile latency | desc-sort-with-after-timestamp-after-force-merge-1-seg | 26.6299 | ms |

| 100th percentile service time | desc-sort-with-after-timestamp-after-force-merge-1-seg | 26.6299 | ms |

| error rate | desc-sort-with-after-timestamp-after-force-merge-1-seg | 100 | % |

| 100th percentile latency | asc-sort-with-after-timestamp-after-force-merge-1-seg | 25.6465 | ms |

| 100th percentile service time | asc-sort-with-after-timestamp-after-force-merge-1-seg | 25.6465 | ms |

| error rate | asc-sort-with-after-timestamp-after-force-merge-1-seg | 100 | % |

[WARNING] Error rate is 100.0 for operation 'desc_sort_with_after_timestamp'. Please check the logs.

[WARNING] No throughput metrics available for [desc_sort_with_after_timestamp]. Likely cause: Error rate is 100.0%. Please check the logs.

[WARNING] Error rate is 100.0 for operation 'asc_sort_with_after_timestamp'. Please check the logs.

[WARNING] No throughput metrics available for [asc_sort_with_after_timestamp]. Likely cause: Error rate is 100.0%. Please check the logs.

[WARNING] Error rate is 100.0 for operation 'desc-sort-with-after-timestamp-after-force-merge-1-seg'. Please check the logs.

[WARNING] No throughput metrics available for [desc-sort-with-after-timestamp-after-force-merge-1-seg]. Likely cause: Error rate is 100.0%. Please check the logs.

[WARNING] Error rate is 100.0 for operation 'asc-sort-with-after-timestamp-after-force-merge-1-seg'. Please check the logs.

[WARNING] No throughput metrics available for [asc-sort-with-after-timestamp-after-force-merge-1-seg]. Likely cause: Error rate is 100.0%. Please check the logs.

--------------------------------

[INFO] SUCCESS (took 94 seconds)

NFS

------------------------------------------------------

| Metric | Task | Value | Unit |

|---------------------------------------------------------------:|-------------------------------------------------------:|----------:|--------:|

| Cumulative indexing time of primary shards | | 52.7786 | min |

| Min cumulative indexing time across primary shards | | 0 | min |

| Median cumulative indexing time across primary shards | | 0.0009 | min |

| Max cumulative indexing time across primary shards | | 7.22108 | min |

| Cumulative indexing throttle time of primary shards | | 0 | min |

| Min cumulative indexing throttle time across primary shards | | 0 | min |

| Median cumulative indexing throttle time across primary shards | | 0 | min |

| Max cumulative indexing throttle time across primary shards | | 0 | min |

| Cumulative merge time of primary shards | | 1.23798 | min |

| Cumulative merge count of primary shards | | 36 | |

| Min cumulative merge time across primary shards | | 0 | min |

| Median cumulative merge time across primary shards | | 0 | min |

| Max cumulative merge time across primary shards | | 0.870067 | min |

| Cumulative merge throttle time of primary shards | | 0.5754 | min |

| Min cumulative merge throttle time across primary shards | | 0 | min |

| Median cumulative merge throttle time across primary shards | | 0 | min |

| Max cumulative merge throttle time across primary shards | | 0.410033 | min |

| Cumulative refresh time of primary shards | | 3.99172 | min |

| Cumulative refresh count of primary shards | | 2422 | |

| Min cumulative refresh time across primary shards | | 0 | min |

| Median cumulative refresh time across primary shards | | 0.00131667 | min |

| Max cumulative refresh time across primary shards | | 0.811633 | min |

| Cumulative flush time of primary shards | | 0.809567 | min |

| Cumulative flush count of primary shards | | 234 | |

| Min cumulative flush time across primary shards | | 0 | min |

| Median cumulative flush time across primary shards | | 0.000383333 | min |

| Max cumulative flush time across primary shards | | 0.103233 | min |

| Total Young Gen GC time | | 0.411 | s |

| Total Young Gen GC count | | 19 | |

| Total Old Gen GC time | | 0 | s |

| Total Old Gen GC count | | 0 | |

| Store size | | 5.16269 | GB |

| Translog size | | 9.73232e-06 | GB |

| Heap used for segments | | 12.5911 | MB |

| Heap used for doc values | | 0.160484 | MB |

| Heap used for terms | | 8.96758 | MB |

| Heap used for norms | | 0.765686 | MB |

| Heap used for points | | 0.201168 | MB |

| Heap used for stored fields | | 2.49617 | MB |

| Segment count | | 634 | |

| Min Throughput | index-append | 6212.2 | docs/s |

| Median Throughput | index-append | 6212.2 | docs/s |

| Max Throughput | index-append | 6212.2 | docs/s |

| 50th percentile latency | index-append | 150.477 | ms |

| 90th percentile latency | index-append | 175.565 | ms |

| 100th percentile latency | index-append | 180.233 | ms |

| 50th percentile service time | index-append | 150.477 | ms |

| 90th percentile service time | index-append | 175.565 | ms |

| 100th percentile service time | index-append | 180.233 | ms |

| error rate | index-append | 0 | % |

| Min Throughput | default | 102.43 | ops/s |

| Median Throughput | default | 102.43 | ops/s |

| Max Throughput | default | 102.43 | ops/s |

| 100th percentile latency | default | 5.97758 | ms |

| 100th percentile service time | default | 5.97758 | ms |

| error rate | default | 0 | % |

| Min Throughput | term | 99.45 | ops/s |

| Median Throughput | term | 99.45 | ops/s |

| Max Throughput | term | 99.45 | ops/s |

| 100th percentile latency | term | 5.39673 | ms |

| 100th percentile service time | term | 5.39673 | ms |

| error rate | term | 0 | % |

| Min Throughput | range | 95.81 | ops/s |

| Median Throughput | range | 95.81 | ops/s |

| Max Throughput | range | 95.81 | ops/s |

| 100th percentile latency | range | 7.42692 | ms |

| 100th percentile service time | range | 7.42692 | ms |

| error rate | range | 0 | % |

| Min Throughput | 200s-in-range | 106.97 | ops/s |

| Median Throughput | 200s-in-range | 106.97 | ops/s |

| Max Throughput | 200s-in-range | 106.97 | ops/s |

| 100th percentile latency | 200s-in-range | 6.44685 | ms |

| 100th percentile service time | 200s-in-range | 6.44685 | ms |

| error rate | 200s-in-range | 0 | % |

| Min Throughput | 400s-in-range | 119.52 | ops/s |

| Median Throughput | 400s-in-range | 119.52 | ops/s |

| Max Throughput | 400s-in-range | 119.52 | ops/s |

| 100th percentile latency | 400s-in-range | 5.23283 | ms |

| 100th percentile service time | 400s-in-range | 5.23283 | ms |

| error rate | 400s-in-range | 0 | % |

| Min Throughput | hourly_agg | 53.53 | ops/s |

| Median Throughput | hourly_agg | 53.53 | ops/s |

| Max Throughput | hourly_agg | 53.53 | ops/s |

| 100th percentile latency | hourly_agg | 13.5936 | ms |

| 100th percentile service time | hourly_agg | 13.5936 | ms |

| error rate | hourly_agg | 0 | % |

| Min Throughput | scroll | 22.51 | pages/s |

| Median Throughput | scroll | 22.51 | pages/s |

| Max Throughput | scroll | 22.51 | pages/s |

| 100th percentile latency | scroll | 344.62 | ms |

| 100th percentile service time | scroll | 344.62 | ms |

| error rate | scroll | 0 | % |

| Min Throughput | desc_sort_timestamp | 76.77 | ops/s |

| Median Throughput | desc_sort_timestamp | 76.77 | ops/s |

| Max Throughput | desc_sort_timestamp | 76.77 | ops/s |

| 100th percentile latency | desc_sort_timestamp | 6.85042 | ms |

| 100th percentile service time | desc_sort_timestamp | 6.85042 | ms |

| error rate | desc_sort_timestamp | 0 | % |

| Min Throughput | asc_sort_timestamp | 148.81 | ops/s |

| Median Throughput | asc_sort_timestamp | 148.81 | ops/s |

| Max Throughput | asc_sort_timestamp | 148.81 | ops/s |

| 100th percentile latency | asc_sort_timestamp | 4.24383 | ms |

| 100th percentile service time | asc_sort_timestamp | 4.24383 | ms |

| error rate | asc_sort_timestamp | 0 | % |

| 100th percentile latency | desc_sort_with_after_timestamp | 48.5279 | ms |

| 100th percentile service time | desc_sort_with_after_timestamp | 48.5279 | ms |

| error rate | desc_sort_with_after_timestamp | 100 | % |

| 100th percentile latency | asc_sort_with_after_timestamp | 21.9533 | ms |

| 100th percentile service time | asc_sort_with_after_timestamp | 21.9533 | ms |

| error rate | asc_sort_with_after_timestamp | 100 | % |

| Min Throughput | desc-sort-timestamp-after-force-merge-1-seg | 110.65 | ops/s |

| Median Throughput | desc-sort-timestamp-after-force-merge-1-seg | 110.65 | ops/s |

| Max Throughput | desc-sort-timestamp-after-force-merge-1-seg | 110.65 | ops/s |

| 100th percentile latency | desc-sort-timestamp-after-force-merge-1-seg | 6.30281 | ms |

| 100th percentile service time | desc-sort-timestamp-after-force-merge-1-seg | 6.30281 | ms |

| error rate | desc-sort-timestamp-after-force-merge-1-seg | 0 | % |

| Min Throughput | asc-sort-timestamp-after-force-merge-1-seg | 148.29 | ops/s |

| Median Throughput | asc-sort-timestamp-after-force-merge-1-seg | 148.29 | ops/s |

| Max Throughput | asc-sort-timestamp-after-force-merge-1-seg | 148.29 | ops/s |

| 100th percentile latency | asc-sort-timestamp-after-force-merge-1-seg | 4.69742 | ms |

| 100th percentile service time | asc-sort-timestamp-after-force-merge-1-seg | 4.69742 | ms |

| error rate | asc-sort-timestamp-after-force-merge-1-seg | 0 | % |

| 100th percentile latency | desc-sort-with-after-timestamp-after-force-merge-1-seg | 22.0081 | ms |

| 100th percentile service time | desc-sort-with-after-timestamp-after-force-merge-1-seg | 22.0081 | ms |

| error rate | desc-sort-with-after-timestamp-after-force-merge-1-seg | 100 | % |

| 100th percentile latency | asc-sort-with-after-timestamp-after-force-merge-1-seg | 20.926 | ms |

| 100th percentile service time | asc-sort-with-after-timestamp-after-force-merge-1-seg | 20.926 | ms |

| error rate | asc-sort-with-after-timestamp-after-force-merge-1-seg | 100 | % |

[WARNING] Error rate is 100.0 for operation 'desc_sort_with_after_timestamp'. Please check the logs.

[WARNING] No throughput metrics available for [desc_sort_with_after_timestamp]. Likely cause: Error rate is 100.0%. Please check the logs.

[WARNING] Error rate is 100.0 for operation 'asc_sort_with_after_timestamp'. Please check the logs.

[WARNING] No throughput metrics available for [asc_sort_with_after_timestamp]. Likely cause: Error rate is 100.0%. Please check the logs.

[WARNING] Error rate is 100.0 for operation 'desc-sort-with-after-timestamp-after-force-merge-1-seg'. Please check the logs.

[WARNING] No throughput metrics available for [desc-sort-with-after-timestamp-after-force-merge-1-seg]. Likely cause: Error rate is 100.0%. Please check the logs.

[WARNING] Error rate is 100.0 for operation 'asc-sort-with-after-timestamp-after-force-merge-1-seg'. Please check the logs.

[WARNING] No throughput metrics available for [asc-sort-with-after-timestamp-after-force-merge-1-seg]. Likely cause: Error rate is 100.0%. Please check the logs.

--------------------------------

[INFO] SUCCESS (took 25 seconds)

RabbitMQ

Gluster

sudo docker run -it --rm pivotalrabbitmq/perf-test:latest -x 10 -y 20 -u "throughput-test-1" -a --id "test 1" --uri amqp://rabbitmq:5672
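
For reference, the perf-test flags: -x 10 producer threads, -y 20 consumer threads, -u the queue name, -a auto-ack, --id a label for the run, and --uri the broker address. A hedged variant that pins the message size and run length (both standard perf-test options) makes repeated runs easier to compare:

sudo docker run -it --rm pivotalrabbitmq/perf-test:latest -x 10 -y 20 -u "throughput-test-1" -a -s 1000 -z 30 --id "test 1" --uri amqp://rabbitmq:5672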

id: test 1, time: 1.000s, sent: 76243 msg/s, received: 7000 msg/s, min/median/75th/95th/99th consumer latency: 498/60266/192994/251385/268407 µs

id: test 1, time: 2.000s, sent: 180186 msg/s, received: 19576 msg/s, min/median/75th/95th/99th consumer latency: 310817/731189/932963/1034985/1049894 µs

id: test 1, time: 3.000s, sent: 18466 msg/s, received: 24110 msg/s, min/median/75th/95th/99th consumer latency: 1036519/1542644/1758902/1970694/2029945 µs

id: test 1, time: 4.000s, sent: 12734 msg/s, received: 23626 msg/s, min/median/75th/95th/99th consumer latency: 1936501/2502018/2745920/2923566/3004999 µs

id: test 1, time: 5.000s, sent: 45209 msg/s, received: 24676 msg/s, min/median/75th/95th/99th consumer latency: 2951362/3458023/3718003/3883463/4012019 µs

id: test 1, time: 6.000s, sent: 5729 msg/s, received: 23256 msg/s, min/median/75th/95th/99th consumer latency: 3775077/4338043/4586252/4838713/4916993 µs

id: test 1, time: 7.000s, sent: 38839 msg/s, received: 23894 msg/s, min/median/75th/95th/99th consumer latency: 4434396/5333349/5582654/5835415/5889108 µs

id: test 1, time: 8.000s, sent: 18753 msg/s, received: 26355 msg/s, min/median/75th/95th/99th consumer latency: 4760634/6110193/6369767/6637040/6690521 µs

id: test 1, time: 9.000s, sent: 27726 msg/s, received: 25557 msg/s, min/median/75th/95th/99th consumer latency: 4624412/6960685/7231004/7658002/7744146 µs

id: test 1, time: 10.000s, sent: 25469 msg/s, received: 23679 msg/s, min/median/75th/95th/99th consumer latency: 4363384/7629512/8003753/8465252/8740118 µs

id: test 1, time: 11.000s, sent: 15917 msg/s, received: 24195 msg/s, min/median/75th/95th/99th consumer latency: 4645585/7866564/8958414/9523951/9540846 µs

id: test 1, time: 12.000s, sent: 31492 msg/s, received: 23682 msg/s, min/median/75th/95th/99th consumer latency: 4394224/8446628/9999225/10514353/10608882 µs

id: test 1, time: 13.000s, sent: 33452 msg/s, received: 26040 msg/s, min/median/75th/95th/99th consumer latency: 4529155/8120994/10818429/11524441/11600629 µs

id: test 1, time: 14.000s, sent: 14644 msg/s, received: 25316 msg/s, min/median/75th/95th/99th consumer latency: 4451713/8514744/9419807/12379114/12588510 µs

id: test 1, time: 15.000s, sent: 20375 msg/s, received: 23401 msg/s, min/median/75th/95th/99th consumer latency: 4607275/7886511/10128804/13375824/13580056 µs

id: test 1, time: 16.000s, sent: 31199 msg/s, received: 23928 msg/s, min/median/75th/95th/99th consumer latency: 4607633/7535388/9035816/14144115/14361983 µs

id: test 1, time: 17.000s, sent: 18465 msg/s, received: 24353 msg/s, min/median/75th/95th/99th consumer latency: 4625853/8290031/9626372/14951237/15453669 µs

id: test 1, time: 18.000s, sent: 29926 msg/s, received: 25762 msg/s, min/median/75th/95th/99th consumer latency: 4689856/7740944/10507916/16035689/16440479 µs

id: test 1, time: 19.000s, sent: 30562 msg/s, received: 24401 msg/s, min/median/75th/95th/99th consumer latency: 4472630/8020740/9153181/17002319/17505806 µs

id: test 1, time: 20.000s, sent: 22518 msg/s, received: 24825 msg/s, min/median/75th/95th/99th consumer latency: 4683754/7761909/9906899/18018787/18511764 µs

id: test 1, time: 21.000s, sent: 10592 msg/s, received: 21575 msg/s, min/median/75th/95th/99th consumer latency: 4766616/7844374/10687646/19004030/19290629 µs

^Ctest stopped (Producer thread interrupted)

id: test 1, sending rate avg: 33107 msg/s

id: test 1, receiving rate avg: 23018 msg/s

Local

id: test 1, time: 2.000s, sent: 154470 msg/s, received: 51155 msg/s, min/median/75th/95th/99th consumer latency: 2279/346287/442955/549340/592245 µs

id: test 1, time: 3.000s, sent: 168347 msg/s, received: 64032 msg/s, min/median/75th/95th/99th consumer latency: 503536/879106/1047473/1328443/1523169 µs

id: test 1, time: 4.000s, sent: 157175 msg/s, received: 63479 msg/s, min/median/75th/95th/99th consumer latency: 975705/1523552/1709446/2209285/2400790 µs

id: test 1, time: 5.000s, sent: 111322 msg/s, received: 62275 msg/s, min/median/75th/95th/99th consumer latency: 1488501/2147657/2419804/2918864/3101340 µs

id: test 1, time: 6.000s, sent: 77435 msg/s, received: 65734 msg/s, min/median/75th/95th/99th consumer latency: 2262169/2744332/3069365/3528221/3738034 µs

id: test 1, time: 7.000s, sent: 79626 msg/s, received: 64663 msg/s, min/median/75th/95th/99th consumer latency: 2794900/3418015/3676567/4162415/4341308 µs

id: test 1, time: 8.000s, sent: 60071 msg/s, received: 66191 msg/s, min/median/75th/95th/99th consumer latency: 3283054/3911288/4222024/4850003/4950028 µs

id: test 1, time: 9.000s, sent: 85308 msg/s, received: 65203 msg/s, min/median/75th/95th/99th consumer latency: 3193992/4339996/4840053/5540378/5633734 µs

id: test 1, time: 10.000s, sent: 57633 msg/s, received: 67366 msg/s, min/median/75th/95th/99th consumer latency: 3323565/4459975/5042092/6178618/6399537 µs

id: test 1, time: 11.000s, sent: 81703 msg/s, received: 66996 msg/s, min/median/75th/95th/99th consumer latency: 3394810/5046687/5718449/6858576/7156866 µs

id: test 1, time: 12.000s, sent: 72124 msg/s, received: 67414 msg/s, min/median/75th/95th/99th consumer latency: 3416561/5406198/5920865/7648016/8038383 µs

id: test 1, time: 13.000s, sent: 62124 msg/s, received: 66736 msg/s, min/median/75th/95th/99th consumer latency: 3333046/4748484/6245422/8615134/8947641 µs

id: test 1, time: 14.000s, sent: 74673 msg/s, received: 67670 msg/s, min/median/75th/95th/99th consumer latency: 3361466/5147205/6321393/9042306/9225473 µs

id: test 1, time: 15.001s, sent: 61845 msg/s, received: 64076 msg/s, min/median/75th/95th/99th consumer latency: 3369109/5534290/6342544/9722436/10047704 µs

id: test 1, time: 16.001s, sent: 91256 msg/s, received: 64113 msg/s, min/median/75th/95th/99th consumer latency: 3616482/4776119/6345744/10445132/10872668 µs

id: test 1, time: 17.001s, sent: 52128 msg/s, received: 68183 msg/s, min/median/75th/95th/99th consumer latency: 3583798/4799449/6305382/10403371/10788774 µs

id: test 1, time: 18.001s, sent: 75479 msg/s, received: 66249 msg/s, min/median/75th/95th/99th consumer latency: 3654860/4731510/6480881/11311426/11667941 µs

id: test 1, time: 19.004s, sent: 66738 msg/s, received: 65738 msg/s, min/median/75th/95th/99th consumer latency: 3866136/5836031/6793271/11911644/12241352 µs

id: test 1, time: 20.004s, sent: 65481 msg/s, received: 68241 msg/s, min/median/75th/95th/99th consumer latency: 3703600/5448646/6583675/11292715/11607018 µs

^Ctest stopped (Producer thread interrupted)

id: test 1, sending rate avg: 84164 msg/s

id: test 1, receiving rate avg: 63249 msg/s

NFS

sudo docker run -it --rm pivotalrabbitmq/perf-test:latest -x 10 -y 20 -u "throughput-test-1" -a --id "test 1" --uri amqp://rabbitmq:5672

id: test 1, time: 1.000s, sent: 54036 msg/s, received: 11670 msg/s, min/median/75th/95th/99th consumer latency: 197/268165/302169/326686/346879 µs

id: test 1, time: 2.027s, sent: 127918 msg/s, received: 26920 msg/s, min/median/75th/95th/99th consumer latency: 372977/735639/952805/1084045/1120724 µs

id: test 1, time: 3.027s, sent: 86668 msg/s, received: 24443 msg/s, min/median/75th/95th/99th consumer latency: 1114191/1498374/1678451/1868015/1954625 µs

id: test 1, time: 4.029s, sent: 61140 msg/s, received: 25338 msg/s, min/median/75th/95th/99th consumer latency: 1838913/2259491/2366908/2544966/2616000 µs

id: test 1, time: 5.029s, sent: 56952 msg/s, received: 25281 msg/s, min/median/75th/95th/99th consumer latency: 2471240/3008502/3201076/3450805/3518016 µs

id: test 1, time: 6.029s, sent: 20797 msg/s, received: 23380 msg/s, min/median/75th/95th/99th consumer latency: 3372304/3936848/4080334/4272516/4340122 µs

id: test 1, time: 7.029s, sent: 26572 msg/s, received: 25547 msg/s, min/median/75th/95th/99th consumer latency: 4277577/4748720/4957822/5166385/5205671 µs

id: test 1, time: 8.029s, sent: 48393 msg/s, received: 23865 msg/s, min/median/75th/95th/99th consumer latency: 4728590/5488003/5709683/5929715/6073865 µs

id: test 1, time: 9.029s, sent: 28868 msg/s, received: 26340 msg/s, min/median/75th/95th/99th consumer latency: 5171583/6054516/6345349/6690546/6862718 µs

id: test 1, time: 10.033s, sent: 24827 msg/s, received: 23227 msg/s, min/median/75th/95th/99th consumer latency: 6088527/6753253/6993008/7294312/7497451 µs

id: test 1, time: 11.033s, sent: 41819 msg/s, received: 22545 msg/s, min/median/75th/95th/99th consumer latency: 6886623/7686984/7932470/8225370/8491612 µs

id: test 1, time: 12.033s, sent: 16511 msg/s, received: 22447 msg/s, min/median/75th/95th/99th consumer latency: 7832877/8493170/8747438/8995602/9147227 µs

id: test 1, time: 13.033s, sent: 20001 msg/s, received: 23622 msg/s, min/median/75th/95th/99th consumer latency: 7943092/9232899/9438714/9803144/9903630 µs

id: test 1, time: 14.033s, sent: 36543 msg/s, received: 25661 msg/s, min/median/75th/95th/99th consumer latency: 8472669/9977056/10235467/10617041/10662542 µs

id: test 1, time: 15.033s, sent: 19330 msg/s, received: 23000 msg/s, min/median/75th/95th/99th consumer latency: 8343120/10416108/10637015/11086300/11384645 µs

id: test 1, time: 16.033s, sent: 17390 msg/s, received: 24135 msg/s, min/median/75th/95th/99th consumer latency: 8766235/10905823/11179014/11474656/11536893 µs

^Ctest stopped (Producer thread interrupted)

id: test 1, sending rate avg: 42656 msg/s

id: test 1, receiving rate avg: 23513 msg/s

Mnesia

Benchmarked with Benchee, an Elixir benchmarking library.

Gluster

Benchmark suite executing with the following configuration:

warmup: 2 s

time: 1 min

memory time: 0 ns

parallel: 60

inputs: none specified

Estimated total run time: 4.13 min

Benchmarking multiple_writes_on_random_coordinates...

Benchmarking read...

Benchmarking read_and_write...

Benchmarking write...

Name                                         ips    average  deviation     median     99th %
read                                      131.84    7.59 ms    ±86.18%    5.77 ms   33.76 ms
write                                      90.25   11.08 ms   ±128.44%    6.10 ms   56.38 ms
read_and_write                             56.46   17.71 ms    ±78.07%   11.87 ms   53.91 ms
multiple_writes_on_random_coordinates      14.84   67.38 ms   ±103.99%   45.03 ms  296.27 ms

Comparison:
read                                      131.84
write                                      90.25 - 1.46x slower +3.50 ms
read_and_write                             56.46 - 2.34x slower +10.13 ms
multiple_writes_on_random_coordinates      14.84 - 8.88x slower +59.79 ms

Local

Benchmark suite executing with the following configuration:

warmup: 2 s

time: 1 min

memory time: 0 ns

parallel: 60

inputs: none specified

Estimated total run time: 4.13 min

Benchmarking multiple_writes_on_random_coordinates...

Benchmarking read...

Benchmarking read_and_write...

Benchmarking write...

Name                                         ips    average  deviation     median     99th %
read                                      208.28    4.80 ms    ±31.66%    4.61 ms    9.18 ms
write                                     190.73    5.24 ms    ±28.86%    5.11 ms    9.46 ms
read_and_write                            100.23    9.98 ms    ±21.89%    9.81 ms   15.82 ms
multiple_writes_on_random_coordinates      19.34   51.70 ms   ±115.14%   31.97 ms  241.50 ms

Comparison:
read                                      208.28
write                                     190.73 - 1.09x slower +0.44 ms
read_and_write                            100.23 - 2.08x slower +5.18 ms
multiple_writes_on_random_coordinates      19.34 - 10.77x slower +46.90 ms

NFS
