compute2. A dedicated 1 GbE network connects the two servers to minimize interference from other traffic.
compute1 is the client and compute2 is the server. A server in our context runs an S3 endpoint and an NFS endpoint. MinIO, perhaps the easiest S3 implementation to use, can be launched with a single command, so we use minio as our object store.
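As a rough sketch, a single-node MinIO server backed by the data directory described below can be started like this (the credentials and port here are illustrative defaults, not taken from the post):

```shell
# Illustrative credentials, not from the post
export MINIO_ROOT_USER=minioadmin
export MINIO_ROOT_PASSWORD=minioadmin

# Launch MinIO, storing all objects under /mnt/miniodata
minio server /mnt/miniodata --address :9000
```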
/mnt/nfs_share is exported as the NFS share, and /mnt/miniodata is used by the minio service for storing all objects.
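The NFS side can be set up along these lines; the export options shown are common defaults and an assumption on my part, not something the post specifies:

```shell
# On compute2 (server): export the share to the client
echo "/mnt/nfs_share compute1(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# On compute1 (client): mount the exported share
mount -t nfs compute2:/mnt/nfs_share /mnt/nfs_share
```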
The client mounts /mnt/nfs_share and also runs the S3 FUSE plugin, which provides a mount point at /mnt/miniomnt. Our objective is to measure the overhead of our S3 FUSE layer with respect to NFS and the AWS S3 API.
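The post does not name the FUSE client used; as one possibility, an s3fs-fuse mount against the MinIO endpoint would look roughly like this (the bucket name, port, and credentials are assumptions):

```shell
# s3fs credential file in ACCESS_KEY:SECRET_KEY form (assumed values)
echo "minioadmin:minioadmin" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the assumed bucket "bench" from the MinIO endpoint on compute2
s3fs bench /mnt/miniomnt \
    -o passwd_file=~/.passwd-s3fs \
    -o url=http://compute2:9000 \
    -o use_path_request_style
```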
We split the test file with the split command and copy the individual segments into the segments directory. We then upload the segments directory using the aws s3 cp command. The throughput we achieved here is 1.42 GB/min, which means a 30% overhead compared to the NFS results.
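The post does not preserve the exact commands, but the split-and-upload step can be sketched as follows (the file name, segment size, bucket, and endpoint are assumptions):

```shell
# Split the source file into 1 GiB segments
mkdir -p segments
split -b 1G disk.img segments/seg_

# Upload the whole segments directory to the MinIO S3 endpoint
aws s3 cp segments/ s3://bench/segments/ --recursive \
    --endpoint-url http://compute2:9000
```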
aws s3 cp command. Hence we can confidently conclude that the S3 FUSE implementation adds little to no overhead compared to pure aws s3 cp.
Next, we measure qemu-img convert performance on the S3 FUSE mount. The throughput for generating a QCOW2 image is almost the same as that for copying a file to the S3 FUSE mount.
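A minimal sketch of the conversion step, writing the output image through the FUSE mount (the image names are assumptions):

```shell
# Convert a raw image to QCOW2, with the output landing on the S3 FUSE mount
qemu-img convert -f raw -O qcow2 disk.img /mnt/miniomnt/disk.qcow2
```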