Blobfuse2 is an open source project developed to provide a virtual filesystem backed by Azure Storage. It uses the libfuse open source library (fuse3) to communicate with the Linux FUSE kernel module, and implements the filesystem operations using the Azure Storage REST APIs.
To track performance regressions introduced by any commit to the main branch, we run a continuous benchmark test suite. This suite is executed periodically and for each commit made to the main branch. The suite uses fio, an industry-standard storage benchmarking tool, along with a few custom applications to perform various tests. Results are then published to the gh-pages of this repository for visual representation.
X86_64 tests are performed on a Standard D96ds_v5 (96 vCPUs, 384 GiB memory) Azure VM running in the eastus2 region. Specifications of this VM can be found here.
ARM64 tests are performed on a Standard D96pds_v6 (96 vCPUs, 384 GiB memory) Azure VM running in the eastus2 region. Specifications of this VM can be found here.
| VM | Read (MB/s) | Write (MB/s) |
|---|---|---|
| Standard D96ds_v5 | Throughput | Throughput |
| Standard D96pds_v6 | Throughput | Throughput |
A Premium Blob Storage account and a Standard Blob Storage account in the eastus2 region were used to conduct all tests. Hierarchical namespace (HNS) was disabled on both accounts.
Blobfuse2 is configured with file-cache and block-cache for all tests.
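As a rough illustration of how a test mount might be prepared, the sketch below mounts Blobfuse2 with a config file and unmounts it afterwards. The mount path and config file name are assumptions for illustration; the configs actually used by the suite live in the repository.

```bash
# Minimal sketch (not the suite's actual setup): mount Blobfuse2 with a config
# file that selects either file-cache or block-cache, run the workload against
# the mount, then unmount. Paths and file names here are illustrative.
blobfuse2 mount /mnt/blobfuse --config-file=./config.yaml

# ... run fio workloads against /mnt/blobfuse ...

blobfuse2 unmount /mnt/blobfuse
```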
The master test script that simulates this benchmarking test suite is located here. To execute a specific test case, download the script and run the command below:
```bash
fio_bench.sh <mount-path> <test-name>
```
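For example, with a Blobfuse2 mount at `/mnt/blobfuse` (the mount path is an assumption, not prescribed by the script), the read suite would be invoked as:

```bash
# Illustrative invocation: run the read benchmark suite against a
# blobfuse2 mount at /mnt/blobfuse (path is an assumption).
./fio_bench.sh /mnt/blobfuse read
```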
Make sure a valid `config.yaml` is in place and that `fio` and `jq` are installed before you execute the script. Valid values for `<test-name>` are: `read` / `write`.

The table below provides latency/time and bandwidth results for the various tests on the respective account types. Each test has a linked section describing the details of that test case.
| Storage Account Type | Read Performance | Write Performance |
|---|---|---|
| Standard | Throughput | Throughput |
| Premium | Throughput | Throughput |
In this test, the fio command is used to run various write workflows. As part of the test, bandwidth and latency are measured. Each test is run for 30 seconds and the average of 3 such iterations is taken. Both bandwidth and latency are taken directly from the fio output and projected in the charts. To simulate different write workflows, the following cases are performed:
- Sequential write on a 100G file with direct-io
- Sequential write on a 100G file with kernel-cache enabled
- Sequential write by 4 parallel threads on 4 different files of 100G size
- Sequential write by 16 parallel threads on 16 different files of 100G size
All fio config files used during these tests are located here.
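The published job files are the source of truth; as a rough approximation, a command along the lines below exercises the first case (sequential write on a 100G file with direct-io). Block size, I/O engine, and queue depth here are assumptions, not the values used by the suite.

```bash
# Hedged sketch of a sequential-write case similar to the first bullet above.
# --direct=1 bypasses the kernel page cache (the direct-io case); dropping it
# corresponds to the kernel-cache case. Adding --numjobs=4 or --numjobs=16
# approximates the multi-threaded cases (one file per job in the directory).
fio --name=seq_write_direct \
    --directory=/mnt/blobfuse \
    --rw=write --bs=16M --size=100G \
    --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=30 --time_based \
    --group_reporting
```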
In this test, the fio command is used to run various read workflows. As part of the test, bandwidth and latency are measured. Each test is run for 30 seconds and the average of 3 such iterations is taken. Both bandwidth and latency are taken directly from the fio output and projected in the charts. To simulate different read workflows, the following cases are performed:
- Sequential read on a 100G file with direct-io
- Random read on a 100G file with direct-io
- Sequential read on a 100G file with kernel-cache enabled
- Random read on a 100G file with kernel-cache enabled
- Sequential read on a small 5M file
- Random read on a small 5M file
- Sequential read on a 100G file by 4 parallel threads
- Sequential read on a 100G file by 16 parallel threads
- Random read on a 100G file by 4 parallel threads
All fio config files used during these tests are located here.
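Again, the repository's job files are authoritative; the sketch below approximates the "random read on a 100G file by 4 parallel threads" case, with block size, I/O engine, and queue depth chosen as assumptions.

```bash
# Hedged sketch of a parallel random-read case: 4 jobs read the same 100G file.
# Swap --rw=randread for --rw=read to get the sequential variants, and drop
# --direct=1 for the kernel-cache cases. The file path is illustrative.
fio --name=rand_read_parallel \
    --filename=/mnt/blobfuse/testfile.100g \
    --rw=randread --bs=8M --size=100G \
    --numjobs=4 --direct=1 \
    --ioengine=libaio --iodepth=16 \
    --runtime=30 --time_based \
    --group_reporting
```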