T4K For Volumes with Generic Storage
This page describes how to configure T4K to work with volumes backed by non-CSI storage (e.g., local disks, NFS, EFS), or where T4K does not currently support the underlying storage provider.
Historically, T4K failed to back up non-CSI volumes, and snapshot errors were presented. Due to this limitation, provisioners such as NFS and EFS were not supported. However, such provisioners can share storage; i.e., the same storage can be consumed by multiple persistent volumes. For these provisioners, support can therefore be added simply by backing up the PV/PVC configuration without data, and then applying the backed-up PV/PVC configuration during restore. There is no need for snapshots.
In other words, non-CSI volume support is added to T4K by taking a metadata backup along with the PV/PVC configuration, while skipping the volume data and snapshots.
Add the provisioner to the k8s-triliovault-config configmap:
- If you are using T4K with the upstream operator, update the Trilio Manager CR to add the provisioner.
- If you are using the Operator Lifecycle Manager (OLM) operator, edit the configmap directly.
If you are installing or updating T4K and want to add a provisioner that does not support snapshots, you can either create or edit the Trilio Manager CR includelist. If you are installing T4K using the upstream operator, create the Trilio Manager CR referring to the example below. If you are editing an already installed T4K, edit the Trilio Manager CR and apply the changes. In both cases, refer to the following:
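A minimal sketch of what such a Trilio Manager CR might look like. The apiVersion, kind, namespace, and the layout of the csiConfig includelist below are assumptions inferred from this page, not an authoritative schema; verify against the CRD installed in your cluster:

```yaml
# Hypothetical sketch of a Trilio Manager CR with a csiConfig includelist.
# Field names under spec are assumptions; verify against your installed CRD.
apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  name: triliovault-manager
  namespace: trilio-system
spec:
  csiConfig:
    # Provisioners listed here do not support snapshots; T4K backs up
    # only the PV/PVC configuration (metadata) for their volumes.
    includelist:
      - "example.nfs.provisioner.io"   # hypothetical provisioner name
```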
By default, an attempt is always made to take a snapshot if the provisioner is not specified in the includelist. If you have provisioners for which snapshots are supported, you can add these to an excludelist, using the following for reference:
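Conversely, a hedged sketch of the excludelist, which names provisioners whose snapshots are supported so that the normal snapshot-based backup is always attempted for them. As above, the field names and example provisioner are assumptions:

```yaml
# Hypothetical sketch: excludelist names provisioners that DO support
# snapshots, so snapshot-based backup is attempted for their volumes.
apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  name: triliovault-manager
  namespace: trilio-system
spec:
  csiConfig:
    excludelist:
      - "ebs.csi.aws.com"   # example of a CSI provisioner with snapshot support
```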
As well as the include and exclude lists of provisioners, csiConfig also contains a default list of provisioners that do not support snapshots. Presently, the default list contains only the nfs.csi.k8s.io provisioner. Users should avoid configuring or modifying the default list in the configmap.
You must edit the configmap using the kubectl edit command. While editing, add the include section under csiConfig and populate it with the required provisioners (for which snapshots are not supported). After editing, the configmap in the case of the OLM Operator can look like the following example:
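A sketch of how the edited configmap might look. The namespace placeholder and the exact key layout under csiConfig are assumptions; check the configmap shipped with your installation:

```yaml
# Edit with: kubectl edit configmap k8s-triliovault-config -n <t4k-namespace>
# Hypothetical layout of the csiConfig key; verify against your installation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-triliovault-config
data:
  csiConfig: |
    include:
      - "example.nfs.provisioner.io"   # hypothetical non-snapshot provisioner
    exclude: []
    default:
      - "nfs.csi.k8s.io"               # default list; do not modify
```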
Once the non-CSI provisioner is included in the configmap, T4K reads the configuration and skips the data backup of volumes provided by such provisioners. Only a metadata backup is taken, along with the PersistentVolume and PersistentVolumeClaim configuration. The assumption here is that the data is available at the source when the PersistentVolumeClaim and PersistentVolume are created at restore time.
To understand this point, consider volumes provided by NFS storage provisioners. With NFS, the storage point can be shared by multiple PersistentVolumes. The PersistentVolume configuration is backed up as is and recreated at restore time with the same name (or a different name if a PersistentVolume with the same name is already present). Because it is created with the same configuration, the storage point remains the same and the data can be accessed in the restored application.
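To illustrate, here is a sketch of an NFS-backed PersistentVolume for which a configuration-only backup is sufficient. The server address and export path are hypothetical:

```yaml
# Hypothetical NFS-backed PV: the data lives on the NFS server, so
# restoring just this configuration (same server/path) makes the same
# data visible to the restored application; no snapshot is required.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com   # hypothetical NFS server
    path: /exports/app-data   # hypothetical export path
```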