Measuring Backup Performance

This page provides insights into measuring backup performance with T4K

How can customers measure backup performance?

This is one of the most common questions we hear from prospects. Too often the answer is hand-waved, which gives little comfort to the end user. It is also a tricky question: Trilio is a software-only solution overlaid on the platform, so its performance depends heavily on the platform itself, including its network bandwidth and storage performance. It is therefore important to provide concrete tooling so that users can determine for themselves what performance to expect from Trilio on their platform.

S3 Backends

Trilio does not upload a backup image to S3 as a single object. Instead, it breaks the backup image into 32 MB chunks and uploads each chunk to S3 as an individual object. To determine the performance of Trilio with a given S3 backend, run the following sequence of commands on the platform:

  • For Kubernetes, run the following code segment in a pod

  • For OpenStack, run the following code segment on the compute node

  • For RHV, run the following code segment on the RHV node
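As a quick sanity check of the chunking math: a 100 GB (102400 MB) backup image split into 32 MB chunks yields 3200 individual S3 objects, which is what the `split` step below produces.

```shell
# Number of 32 MB objects produced from a 100 GB (102400 MB) backup image.
awk 'BEGIN { print 102400 / 32 }'   # 3200
```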

# Create a 100 GB file using the dd command
[root@compute ~]# dd if=/dev/urandom of=100gb bs=1M count=102400

# Split the file into 32 MB chunks and move them to a folder called mysegments
[root@compute ~]# split -b 32m 100gb segment
[root@compute ~]# mkdir mysegments
[root@compute ~]# mv segment* mysegments/

# Upload the segments with the aws s3 command to the S3 bucket you intend to use as the backup target
[root@compute ~]# time aws s3 cp mysegments/ s3://trilio-qa/perftest/ --recursive
real    41m51.511s
user    13m12.434s
sys     10m38.640s

# Trilio uses the qemu-img command to convert disk images to qcow2 and upload them to S3
[root@compute ~]# time qemu-img convert -p -O qcow2 100gb /var/trilio/triliovault-mounts/perftest/100gb.qcow2
    (100.00/100%)
real    37m10.657s
user    0m14.999s
sys     2m49.130s

As you can see, the aws s3 command's performance closely mirrors that of the qemu-img convert command. This gives a rough estimate of how long it takes to back up a 100 GB workload. There is, of course, some overhead (a few minutes) for each backup.
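To turn the timings above into a throughput figure, divide the file size by the wall-clock (real) time. A minimal sketch, using the 100 GB (102400 MB) file and the real times measured above (41m51.511s = 2511.511 s for the S3 upload, 37m10.657s = 2230.657 s for qemu-img):

```shell
# Throughput derived from the measurements above (values are from this example run).
awk 'BEGIN { printf "aws s3 cp: %.1f MB/s\n", 102400 / 2511.511 }'   # ~40.8 MB/s
awk 'BEGIN { printf "qemu-img:  %.1f MB/s\n", 102400 / 2230.657 }'   # ~45.9 MB/s
```

Your own numbers will differ; the point is that the two rates should be in the same ballpark if the S3 backend, rather than local disk or CPU, is the bottleneck.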

NFS Backend

Similarly, run the following code segment on the platform. Make sure the NFS share is mounted on the platform; the example assumes the share is mounted at the /var/triliovault-mounts directory.

# Trilio uses the qemu-img command to convert disk images to qcow2 on the NFS backend
[root@compute ~]# time qemu-img convert -p -O qcow2 100gb /var/triliovault-mounts/100gb.qcow2
    (100.00/100%)
real    37m10.657s
user    0m14.999s
sys     2m49.130s
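Before running the qemu-img test, it can help to sanity-check the raw sequential write throughput of the NFS mount itself. This is a sketch, not part of the Trilio tooling; it assumes the mount path from the example above (override with the MOUNT variable), and uses conv=fsync so the reported rate reflects data actually flushed to the share.

```shell
# Sanity check: raw sequential write speed to the NFS share (assumed mount path).
MOUNT=${MOUNT:-/var/triliovault-mounts}
if [ -d "$MOUNT" ]; then
  # dd prints the achieved throughput on completion.
  dd if=/dev/zero of="$MOUNT/ddtest" bs=1M count=1024 conv=fsync
  rm -f "$MOUNT/ddtest"
else
  echo "NFS share not mounted at $MOUNT"
fi
```

If this raw rate is far above the qemu-img rate, the bottleneck is likely the source disk or the conversion itself rather than the NFS backend.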