fio: File System Performance Benchmarking
fio is a versatile tool for stress testing and benchmarking disks and filesystems. It can generate a wide variety of I/O patterns, report latency and throughput statistics, and help you compare hardware or configuration changes.
Installation
Debian/Ubuntu
sudo apt-get update
sudo apt-get install fio
CentOS/RHEL
sudo yum install fio
macOS (Homebrew)
brew install fio
To build from source, clone the repository and run make && sudo make install:
git clone https://github.com/axboe/fio.git
cd fio
make
sudo make install
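The libaio engine used in the examples below needs its development headers at build time; if your build comes out without it, installing them first usually fixes that (Debian/Ubuntu package names shown, adjust for your distribution):
sudo apt-get install build-essential libaio-dev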
Verify the installation with fio --version.
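The version reported depends on what you installed; the number below is only illustrative:
fio --version
fio-3.28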
Basic Benchmark
Create and run a mixed random read/write test (75% reads) on a 4 GiB file:
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --bs=4k --iodepth=64 \
--readwrite=randrw --rwmixread=75 --size=4G \
--filename=test_io_with_fio
After the run, remove the test file to reclaim space:
rm test_io_with_fio
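The job above stresses small-block random I/O, which is what IOPS figures reflect. To gauge sequential throughput instead, the usual approach is a large block size with a sequential pattern; a minimal sketch (block size, queue depth, and file name are arbitrary choices, and the file can be removed afterwards as before):
fio --name=seqread --ioengine=libaio --direct=1 \
--bs=1M --iodepth=16 --rw=read --size=4G \
--filename=test_io_with_fio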
Sample Output
The output varies with the fio version and your hardware. A sample run of the mixed random read/write job on a networked SSD (fio v2.2.10, with --size=1G) looks like:
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [131.4MB/44868KB/0KB /s] [33.7K/11.3K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=10287: Sat Feb 2 17:40:10 2019
read : io=784996KB, bw=133662KB/s, iops=33415, runt= 5873msec
write: io=263580KB, bw=44880KB/s, iops=11219, runt= 5873msec
cpu : usr=6.56%, sys=23.11%, ctx=266267, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=196249/w=65895/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=784996KB, aggrb=133661KB/s, minb=133661KB/s, maxb=133661KB/s, mint=5873msec, maxt=5873msec
WRITE: io=263580KB, aggrb=44879KB/s, minb=44879KB/s, maxb=44879KB/s, mint=5873msec, maxt=5873msec
Look at the IOPS figures to gauge device capability. In this example the SSD delivered ~33k read IOPS and ~11k write IOPS. A spinning disk typically manages only a few hundred (see the HDD numbers below).
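For scripting or regression tracking, the human-readable summary is awkward to parse; fio can emit JSON instead. A sketch of the same job with JSON output (the output file name is arbitrary):
fio --name=test --ioengine=libaio --direct=1 \
--bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 \
--size=1G --filename=test_io_with_fio \
--output-format=json --output=results.json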
Real-World Results
NVMe Drive
read: IOPS=107k, BW=418MiB/s
write: IOPS=35.7k, BW=140MiB/s
500 GB HDD
read: IOPS=210, BW=842KiB/s
write: IOPS=70, BW=281KiB/s
EC2 i3.large (NVMe SSD)
read: IOPS=62.9k, BW=251599KB/s
write: IOPS=21k, BW=84052KB/s
These numbers highlight the gap between modern NVMe storage and traditional hard drives.
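Once a benchmark settles, its parameters can be captured in a job file so runs stay repeatable across machines. A minimal sketch of the mixed test above in fio's INI-style job format (file and section names here are arbitrary):
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=64
size=4G

[randrw-test]
rw=randrw
rwmixread=75
filename=test_io_with_fio
Run it with fio randrw.fio.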
Tips
- Run benchmarks on an idle system to avoid interference.
- Use --size large enough to exceed caches.
- Explore different block sizes (--bs) and access patterns to match your workload.
- For network storage, consider using ioengine=net or ioengine=rbd.
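To reduce run-to-run noise beyond the tips above, a fixed-duration, time-based run with a warm-up period is common; a sketch (durations and size are illustrative):
fio --name=steady --ioengine=libaio --direct=1 \
--bs=4k --iodepth=64 --rw=randread --size=8G \
--filename=test_io_with_fio \
--time_based --runtime=60 --ramp_time=5
Here --ramp_time discards the first few seconds of the run so warm-up effects do not skew the averages.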
fio is a powerful utility that can validate performance claims, compare instances, or catch regressions after changes.
