Our test setup consists of two servers, `compute1` and `compute2`. A dedicated 1 GB network connects the two servers to minimize interference from other traffic. `compute1` is the client and `compute2` is the server; a server in our context runs an S3 endpoint and an NFS endpoint. MinIO, perhaps the easiest-to-use S3 implementation, can be launched with a single command, and hence we use `minio` as our object store. Both endpoints run on `compute2`: `/mnt/nfs_share` is exported as the NFS share, and `/mnt/miniodata` is used by the `minio` service for storing all objects.
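As a rough sketch, the server-side setup on `compute2` might look like the following; the NFS export options and the MinIO credentials are assumptions, not taken from our actual configuration:

```bash
# On compute2: export the NFS share to the client
echo "/mnt/nfs_share compute1(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On compute2: launch MinIO, storing all objects under /mnt/miniodata
# (placeholder credentials)
MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=password \
    minio server /mnt/miniodata
```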
`compute1` mounts `/mnt/nfs_share` and also runs the S3 FUSE plugin, which provides the mount point `/mnt/miniomnt`.
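For illustration, the client-side mounts could be set up as follows. The exact FUSE invocation depends on the plugin; here we show the widely used s3fs-fuse purely as a stand-in, with the bucket name and credentials file assumed:

```bash
# On compute1: mount the NFS export from compute2
sudo mount -t nfs compute2:/mnt/nfs_share /mnt/nfs_share

# On compute1: mount the MinIO bucket over FUSE (s3fs-fuse as a stand-in)
s3fs testbucket /mnt/miniomnt \
    -o url=http://compute2:9000 \
    -o use_path_request_style \
    -o passwd_file=${HOME}/.passwd-s3fs
```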
Our objective is to measure the overhead of our S3 FUSE mount with respect to NFS and the AWS S3 API. We segment a large file with the `split` command and copy the individual segments into the `segments` directory. We then upload the `segments` directory using the `aws s3 cp` command, as shown below.
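The two steps are sketched below; the test file name, segment size, bucket name, and endpoint URL are assumptions:

```bash
# Split a large test file into 1 GB segments inside the segments directory
mkdir -p segments
split -b 1G large_test_file segments/part_

# Upload the whole segments directory to the MinIO S3 endpoint
aws s3 cp segments/ s3://testbucket/segments/ --recursive \
    --endpoint-url http://compute2:9000
```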
The throughput we achieved here is 1.42 GB/min, which means a 30% overhead compared to the NFS results.
Copying the same file to the S3 FUSE mount gives nearly the same throughput as the `aws s3 cp` command. Hence we can confidently conclude that the S3 FUSE implementation adds little to no overhead compared to the pure AWS API.
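For reference, the FUSE-side measurement is just a timed copy into the mount point; a minimal sketch, with the file name assumed:

```bash
# Time a plain copy of the test file into the S3 FUSE mount point
time cp large_test_file /mnt/miniomnt/large_test_file
```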
Finally, we test `qemu-img convert` performance on the S3 FUSE mount. The throughput for generating a QCOW2 image is almost the same as copying a file to the S3 FUSE mount.
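A sketch of that conversion, writing the QCOW2 output directly onto the FUSE mount; the source image name and format are assumptions:

```bash
# Convert a raw source image to QCOW2, writing directly to the S3 FUSE mount
qemu-img convert -p -f raw -O qcow2 disk.raw /mnt/miniomnt/disk.qcow2
```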