T4K For Volumes with Generic Storage

This page describes how to configure T4K to work for volumes with non-CSI storage (e.g., local disks, NFS, EFS) or where T4K does not currently support the underlying storage provider.

Introduction

For non-CSI volumes, T4K historically failed to back up the data and presented snapshot errors. Because of this limitation, provisioners such as NFS and EFS were not supported. However, such provisioners can share storage; that is, the same storage can be consumed by multiple PersistentVolumes. For these provisioners, support can therefore be added by backing up the PV/PVC configuration without the data, then applying the backed-up PV/PVC configuration during restore. No snapshots are needed.

In other words, T4K supports non-CSI volumes by taking a metadata backup that includes the PersistentVolumeClaim and PersistentVolume configuration.

Prerequisites

Refer to Modifying Default T4K Configuration to understand how to edit the k8s-triliovault-config configuration map.
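
The configmap can also be inspected or edited directly with kubectl. A quick sketch, assuming the openshift-operators namespace used in the OLM example later on this page (substitute your installation's namespace):

# View the current T4K configuration.
kubectl get configmap k8s-triliovault-config -n openshift-operators -o yaml

# Open the configmap for in-place editing.
kubectl edit configmap k8s-triliovault-config -n openshift-operators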

Configuration

Add the provisioner to the k8s-triliovault-config configmap:

  • If you are using T4K with the upstream operator, update the Trilio Manager CR to add the provisioner (a sample kubectl command follows the example below).

  • If you are using the Operator Lifecycle Manager (OLM) operator, edit the configmap directly.

If you are installing or updating T4K and want to add a provisioner that does not support snapshots, create or edit the include list in the Trilio Manager CR. When installing T4K with the upstream operator, create the Trilio Manager CR following the example below; when editing an existing installation, edit the Trilio Manager CR and apply the changes. In both cases, use the following for reference:

spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
      - cluster.local/nfs-nfs-subdir-external-provisioner
      - kubernetes.io/gce-pd    
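
The include list takes effect once the CR is applied. As a minimal sketch of both paths, assuming the upstream operator's CRD is triliovaultmanagers.triliovault.trilio.io and an instance named tvk in the trilio-system namespace (instance name and namespace are assumptions; substitute your own):

# Edit the running Trilio Manager CR in place.
kubectl edit triliovaultmanagers.triliovault.trilio.io tvk -n trilio-system

# Or apply a saved manifest that contains the csiConfig shown above.
kubectl apply -f trilio-manager.yaml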

By default, T4K always attempts to take a snapshot if the provisioner is not specified in the include list.

Similarly, if you have provisioners for which snapshots are supported, you can add them to an exclude list, using the following for reference:

spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
      - cluster.local/nfs-nfs-subdir-external-provisioner
      - kubernetes.io/gce-pd
    exclude:
      - driver.longhorn.io
      - pd.csi.storage.gke.io

In addition to the include and exclude lists of provisioners, csiConfig also carries a default list of provisioners that do not support snapshots. Presently, the default list contains only the nfs.csi.k8s.io provisioner. Users should avoid modifying the default list in the configmap.

For the OLM operator, edit the configmap directly using the kubectl edit command. While editing, add the include section under csiConfig and populate it with the required provisioners (those for which snapshots are not supported). After editing, the configmap can look like the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: k8s-triliovault-config
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: k8s-triliovault
    app.kubernetes.io/part-of: k8s-triliovault
    operators.coreos.com/k8s-triliovault.openshift-operators: ""
  name: k8s-triliovault-config
  namespace: openshift-operators
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: ClusterServiceVersion
    name: k8s-triliovault-stable.2.9.2
    uid: 2685a060-8302-47c3-b4a2-44ddccafd75a
data:
  csiConfig: |-
    default:
      - nfs.csi.k8s.io
    include:
      - cluster.local/nfs-nfs-subdir-external-provisioner
      - kubernetes.io/gce-pd
    exclude:
      - driver.longhorn.io
      - pd.csi.storage.gke.io
  resources: |-
    default:
      metadataJobResources:
        limits:
          cpu: 500m
          memory: 512Mi
        requests:
          cpu: 10m
          memory: 10Mi
      dataJobResources:
        limits:
          cpu: 1500m
          memory: 5120Mi
        requests:
          cpu: 100m
          memory: 800Mi
    custom:
  tvkConfig: |-
    name: tvk-instance
    logLevel: Info
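
After saving, you can confirm the change by printing just the csiConfig key; a quick check, assuming the openshift-operators namespace from the example above:

kubectl get configmap k8s-triliovault-config -n openshift-operators \
  -o jsonpath='{.data.csiConfig}'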

Once the non-CSI provisioner is included in the configmap, T4K reads the configuration and skips the data backup for volumes provided by that provisioner. Only metadata is backed up, along with the PersistentVolume and PersistentVolumeClaim configuration. The assumption here is that the data is still available at the source when the PersistentVolumeClaim and PersistentVolume are recreated at restore time.

To illustrate, consider volumes provided by an NFS storage provisioner. With NFS, the same storage endpoint can be shared by multiple PersistentVolumes. The PersistentVolume configuration is backed up as-is and recreated at restore time with the same name (or a different name if a PersistentVolume with that name already exists). Because the recreated volume has the same configuration, it points at the same storage endpoint, and the restored application can access the data.
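
As a minimal sketch of why this works (the server address, export path, and object names below are hypothetical), a PersistentVolume and its PersistentVolumeClaim can be statically bound to an NFS export; recreating only these two objects is enough to regain access to the data:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-app1            # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5           # hypothetical NFS server
    path: /exports/shared      # the same export can back multiple PVs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-app1           # hypothetical name
  namespace: app1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # static binding to the pre-created PV
  volumeName: nfs-pv-app1
  resources:
    requests:
      storage: 5Gi

Restoring these objects points the application back at /exports/shared, where the data still lives, without any snapshot or data copy.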
