Measuring Backup Performance
This page provides insights into measuring backup performance with T4K
How can customers measure backup performance?
This is one of the most common questions we hear from prospects. Too often, the answer is hand-waved, which gives the end user little comfort. It is also a tricky question: Trilio is a software-only solution overlaid on the platform, so its performance depends heavily on the platform itself, including its network bandwidth and storage performance. It is therefore important to provide concrete tooling so users can determine for themselves what performance to expect from Trilio on their platform.
S3 Backends
Trilio does not upload a backup image to S3 as a single object. Instead, it breaks the backup image into 32 MB chunks and uploads each chunk to S3 as an individual object. To estimate Trilio's performance against an S3 backend, run the following sequence of commands on the platform:
For k8s, run the following code segment in a pod:
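A minimal sketch of such a sequence, assuming the pod has the aws CLI, dd, and split available; the bucket name and the 1 GiB test size are placeholders to adjust for your environment:

```bash
# Placeholder bucket/prefix for the test; replace with one you own.
BUCKET=s3://<your-test-bucket>/trilio-perf-test

# Create a 1 GiB test file and split it into 32 MB chunks,
# mirroring the chunk size Trilio uses for backup images.
mkdir -p /tmp/perf && cd /tmp/perf
dd if=/dev/urandom of=testfile bs=1M count=1024
split -b 32M testfile chunk_
rm testfile

# Time the upload of the chunks as individual objects.
time aws s3 cp . "$BUCKET/" --recursive

# Clean up local and remote test data.
cd / && rm -rf /tmp/perf
aws s3 rm "$BUCKET/" --recursive
```

The throughput reported by time for the upload step approximates what Trilio can achieve against the same S3 endpoint from that pod.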
For OpenStack, run the following code segment on the compute node:
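A sketch along the same lines for a compute node; the disk path and bucket are placeholders (the path shown is the usual Nova/libvirt location, but verify it on your deployment):

```bash
BUCKET=s3://<your-test-bucket>/trilio-perf-test
# Hypothetical instance disk path; substitute a real one.
DISK=/var/lib/nova/instances/<instance-uuid>/disk

mkdir -p /tmp/perf && cd /tmp/perf

# Time reading the instance disk through qemu-img, as a backup would.
time qemu-img convert -O qcow2 "$DISK" backup.qcow2

# Split the image into 32 MB chunks and time their upload as
# individual objects.
split -b 32M backup.qcow2 chunk_
rm backup.qcow2
time aws s3 cp . "$BUCKET/" --recursive

# Clean up local and remote test data.
cd / && rm -rf /tmp/perf
aws s3 rm "$BUCKET/" --recursive
```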
For RHV, run the following code segment on the RHV node:
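The same approach applies on an RHV node; only the disk image path differs. The path below is a placeholder (RHV keeps disk images under /rhev/data-center/, but the exact location depends on your storage domain):

```bash
BUCKET=s3://<your-test-bucket>/trilio-perf-test
# Placeholder RHV disk image path; substitute a real volume.
DISK=/rhev/data-center/<storage-domain>/<image-group>/<volume>

mkdir -p /tmp/perf && cd /tmp/perf
time qemu-img convert -O qcow2 "$DISK" backup.qcow2   # time the disk read
split -b 32M backup.qcow2 chunk_ && rm backup.qcow2   # chunk as Trilio does
time aws s3 cp . "$BUCKET/" --recursive               # time the chunk upload
cd / && rm -rf /tmp/perf
aws s3 rm "$BUCKET/" --recursive                      # remove test objects
```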
As you can see, the performance of the aws s3 upload mirrors that of the qemu-img convert command. Extrapolating from these numbers gives a rough estimate of how long it takes to back up a 100 GB workload. Each backup also incurs a fixed overhead of a few minutes.
NFS Backend
Similarly, run the following code segment on the platform. Make sure the NFS share is mounted on the platform; the example assumes it is mounted at the /var/triliovault-mounts directory.
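A minimal sketch, assuming the share is mounted at /var/triliovault-mounts; the 1 GiB test size is a placeholder:

```bash
TARGET=/var/triliovault-mounts/perf-test
mkdir -p "$TARGET"

# Time a 1 GiB sequential write to the share. conv=fsync flushes the data
# to the NFS server before dd exits, so the timing reflects real throughput.
time dd if=/dev/zero of="$TARGET/testfile" bs=1M count=1024 conv=fsync

# Clean up test data.
rm -rf "$TARGET"
```

The write throughput reported here approximates the backup transfer rate Trilio can sustain to that NFS share.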