Measuring Backup Performance

This page explains how to measure backup performance with T4K.

How can customers measure backup performance?

This is one of the most common questions we hear from prospects. Too often, we hand-wave around it, which does not give the end user much comfort. It is also a tricky question: Trilio is a software-only solution overlaid on the platform, so its performance depends heavily on the platform itself, such as its network bandwidth and storage performance. It is therefore important that we provide concrete tooling so customers can determine for themselves what performance to expect from Trilio on their platform.
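Before measuring Trilio, it can help to baseline the platform itself. The snippet below is a minimal sketch, not part of the Trilio tooling: dd gives a rough local storage write figure (adjust the path to the storage you care about, and drop oflag=direct if the filesystem does not support it), and iperf3 gives a network bandwidth figure, assuming iperf3 is installed and an iperf3 server is reachable (the hostname is a placeholder):

# rough baseline for local storage write throughput (10 GB test file)
[root@compute ~]# dd if=/dev/zero of=/tmp/ddtest bs=1M count=10240 oflag=direct
[root@compute ~]# rm /tmp/ddtest

# rough baseline for network bandwidth (placeholder server; requires iperf3 on both ends)
[root@compute ~]# iperf3 -c iperf-server.example.com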

S3 Backends:

Trilio does not upload a backup image to S3 as a single object. Instead, it breaks the backup image into 32 MB chunks and uploads each chunk to S3 as an individual object. To determine the performance of Trilio against their S3 backend, customers can run the following sequence of commands on the platform:

  • For k8s, run the following code segment on a pod

  • For OpenStack, run the following code segment on the compute node

  • For RHV, run the following code segment on the RHV node

# create a 100 GB file using the dd command
[root@compute ~]# dd if=/dev/urandom of=100gb bs=1M count=102400

# Split the file into 32MB chunks and move them to a folder called mysegments
[root@compute ~]# split -b 32m 100gb segment
[root@compute ~]# mkdir mysegments
[root@compute ~]# mv segment* mysegments/
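
# sanity check: 102400 MB / 32 MB = 3200 chunk files expected in mysegments/
[root@compute ~]# ls mysegments | wc -l
3200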

# upload the segments to the S3 bucket you might use as a backup target using the aws s3 command
[root@compute ~]# time aws s3 cp mysegments/ s3://trilio-qa/perftest/ --recursive
real    41m51.511s
user    13m12.434s
sys     10m38.640s

# Trilio uses the qemu-img command to convert disk images to qcow2 and upload them to S3
[root@compute ~]# time qemu-img convert -p -O qcow2 100gb /var/trilio/triliovault-mounts/perftest/100gb.qcow2
    (100.00/100%)
real    37m10.657s
user    0m14.999s
sys     2m49.130s

As you can see, the aws s3 command's performance mirrors that of the qemu-img convert command. This gives the user a rough estimate of how long it takes to back up a 100 GB workload. There will, of course, be an overhead of a few minutes for each backup.
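To turn these timings into throughput figures, divide the data size by the elapsed (real) time. A quick back-of-the-envelope calculation using the runs above (2511 s and 2231 s are the real times rounded to whole seconds):

# effective throughput = data size (MB) / elapsed (real) time (s)
[root@compute ~]# awk 'BEGIN { printf "%.1f MB/s\n", 102400/2511 }'   # aws s3 cp
40.8 MB/s
[root@compute ~]# awk 'BEGIN { printf "%.1f MB/s\n", 102400/2231 }'   # qemu-img convert
45.9 MB/s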

NFS Backend:

Similarly, run the following code segment on the platform. Make sure the NFS share is mounted on the platform; the example below assumes it is mounted at the /var/triliovault-mounts directory.

# Trilio uses the qemu-img command to convert disk images to qcow2 on the NFS backend
[root@compute ~]# time qemu-img convert -p -O qcow2 100gb /var/triliovault-mounts/100gb.qcow2
    (100.00/100%)
real    37m10.657s
user    0m14.999s
sys     2m49.130s
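
To separate raw NFS throughput from qemu-img conversion overhead, you can also write directly to the share with dd. A minimal sketch, assuming the share is still mounted at /var/triliovault-mounts (drop oflag=direct if the mount does not support O_DIRECT):

# raw sequential write of 100 GB to the NFS share, bypassing the client page cache
[root@compute ~]# time dd if=/dev/zero of=/var/triliovault-mounts/ddtest bs=1M count=102400 oflag=direct
[root@compute ~]# rm /var/triliovault-mounts/ddtest

If the dd time is close to the qemu-img time, the share itself is the bottleneck; a large gap points at conversion overhead instead.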