T4K For Volumes with Generic Storage

This page describes how to configure T4K to work for volumes with non-CSI storage (e.g., local disks, NFS, EFS) or where T4K does not currently support the underlying storage provider.

Deprecated Documentation

This document is deprecated and no longer supported. For accurate, up-to-date information, please refer to the documentation for the latest version of Trilio.

Introduction

Historically, T4K failed to back up non-CSI volumes because snapshot attempts returned errors, so provisioners such as NFS and EFS were not supported. However, these provisioners use shared storage; that is, the same underlying storage can be consumed by multiple persistent volumes. Support for them can therefore be added by backing up the PV/PVC configuration without the data and re-applying that configuration during restore. No snapshot is needed.

In other words, T4K adds non-CSI volume support by taking a metadata backup that includes the PersistentVolumeClaim and PersistentVolume configuration.
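
For illustration, the kind of object configuration captured in such a backup might look like the following minimal PersistentVolumeClaim. This is a hypothetical sketch; the name, namespace, and StorageClass are placeholders, and T4K records the actual PVC/PV objects present in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data             # hypothetical claim name
  namespace: demo            # hypothetical namespace
spec:
  accessModes:
    - ReadWriteMany          # typical for shared storage such as NFS or EFS
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-client   # placeholder StorageClass backed by a shared-storage provisioner

Because the data already resides on the shared storage, re-applying this configuration during restore is enough to reconnect workloads to it.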

Pre-requisites

Refer to Modifying Default T4K Configuration to understand how to edit the k8s-triliovault-config configmap.

Configuration

Add the provisioner to the k8s-triliovault-config configmap:

  • If you are using T4K with the upstream operator, update the Trilio Manager CR to add the provisioner.

  • If you are using the Operator Lifecycle Manager (OLM) operator, edit the configmap directly.

If you are installing or updating T4K and want to add a provisioner that does not support snapshots, create or edit the include list in the Trilio Manager CR. If you are installing T4K with the upstream operator, create the Trilio Manager CR following the example below. If T4K is already installed, edit the existing Trilio Manager CR and apply the changes. In both cases, use the following for reference:

spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
      - cluster.local/nfs-nfs-subdir-external-provisioner
      - kubernetes.io/gce-pd
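
For context, a complete manifest around this spec might look like the sketch below. It assumes the TrilioVaultManager kind and the triliovault.trilio.io/v1 apiVersion served by the upstream operator, plus a placeholder name and namespace; check the CRDs installed in your cluster for the exact values before applying:

apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  name: tvk                  # placeholder CR name
  namespace: triliovault     # placeholder install namespace
spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
      - cluster.local/nfs-nfs-subdir-external-provisioner
      - kubernetes.io/gce-pd

Apply the manifest as you would any other custom resource.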

By default, T4K always attempts to take a snapshot if the provisioner is not specified in the include list.
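
The provisioner strings in the include list typically match the provisioner field of the StorageClass backing your PersistentVolumeClaims. As a hypothetical reference, a StorageClass for the NFS subdir external provisioner might look like the following; its provisioner value is what would go into csiConfig.include:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client               # illustrative StorageClass name
provisioner: cluster.local/nfs-nfs-subdir-external-provisioner   # value referenced in csiConfig.include
reclaimPolicy: Delete
volumeBindingMode: Immediate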

Similarly, if you have provisioners for which snapshots are supported, you can add them to an exclude list, using the following for reference:

spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
      - cluster.local/nfs-nfs-subdir-external-provisioner
      - kubernetes.io/gce-pd
    exclude:
      - driver.longhorn.io
      - pd.csi.storage.gke.io

In addition to the user-defined include and exclude lists of provisioners, csiConfig carries a default list of provisioners that do not support snapshots. Presently, the default list contains only the nfs.csi.k8s.io provisioner. Users should avoid configuring or modifying this default list in the configmap.
