3.0.X

TVK For Volumes with Generic Storage

This page describes how to configure TVK to work with volumes backed by non-CSI storage (e.g., local disks, NFS, EFS), or storage whose underlying provider TVK does not currently support.

Introduction

Historically, TVK failed to back up non-CSI volumes and presented snapshot errors. Because of this limitation, provisioners such as NFS and EFS were not supported. However, such provisioners can share storage; that is, the same storage can be consumed by multiple PersistentVolumes. For these provisioners, support can therefore be added simply by backing up the PV/PVC configuration without the data, and then applying the backed-up PV/PVC configuration during restore. No snapshots are needed.
In other words, TVK supports non-CSI volumes by taking a metadata backup that includes the PersistentVolumeClaim and PersistentVolume configuration.

Pre-requisites

Refer to Modifying Default TVK Configuration to understand how to edit the k8s-triliovault-config configuration map.

Configuration

Add the provisioner to the k8s-triliovault-config configmap:
  • If you are using TVK with the upstream operator, update the TrilioVault Manager CR to add the provisioner.
  • If you are using the Operator Lifecycle Manager (OLM) operator, edit the configmap directly.
Upstream Operator
If you are installing or updating TVK and want to add a provisioner that does not support snapshots, you can either create or edit the include list in the TrilioVault Manager CR. If you are installing TVK using the upstream operator, create the TrilioVault Manager CR as in the example below. If TVK is already installed, edit the existing TrilioVault Manager CR and apply the changes. In both cases, refer to the following:
spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
    - cluster.local/nfs-nfs-subdir-external-provisioner
    - kubernetes.io/gce-pd
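For context, the spec fragment above sits inside a full TrilioVault Manager CR. A minimal sketch of the complete object might look like the following; the apiVersion, kind, name, and namespace here are assumptions for illustration, so verify them against the TrilioVault Manager CRD installed in your cluster:

```yaml
# Hypothetical full CR for illustration only; confirm apiVersion/kind
# against the TrilioVault Manager CRD in your cluster.
apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  name: tvk            # assumed instance name
  namespace: tvk-ns    # assumed namespace
spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
    - cluster.local/nfs-nfs-subdir-external-provisioner
    - kubernetes.io/gce-pd
```

Applying such a CR (for example with kubectl apply) lets the operator propagate the csiConfig include list into the k8s-triliovault-config configmap.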
By default, TVK always attempts to take a snapshot if the provisioner is not specified in the include list.
Similarly, if you have provisioners for which snapshots are supported, you can add them to an exclude list, using the following for reference:
spec:
  applicationScope: Cluster
  tvkInstanceName: tvk
  csiConfig:
    include:
    - cluster.local/nfs-nfs-subdir-external-provisioner
    - kubernetes.io/gce-pd
    exclude:
    - driver.longhorn.io
    - pd.csi.storage.gke.io
In addition to the include and exclude lists of provisioners, csiConfig also contains a default list of provisioners that do not support snapshots. Presently, the default list contains only the nfs.csi.k8s.io provisioner. Users should avoid configuring or modifying the default list in the configmap.
OLM Operator
You must edit the configmap using the kubectl edit command. While editing, add the include section under csiConfig and populate it with the required provisioners (those for which snapshots are not supported). After editing, the configmap for the OLM Operator can look like the following example:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: k8s-triliovault-config
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: k8s-triliovault
    app.kubernetes.io/part-of: k8s-triliovault
    operators.coreos.com/k8s-triliovault.openshift-operators: ""
  name: k8s-triliovault-config
  namespace: openshift-operators
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: ClusterServiceVersion
    name: k8s-triliovault-stable.2.9.2
    uid: 2685a060-8302-47c3-b4a2-44ddccafd75a
data:
  csiConfig: |-
    default:
    - nfs.csi.k8s.io
    include:
    - cluster.local/nfs-nfs-subdir-external-provisioner
    - kubernetes.io/gce-pd
    exclude:
    - driver.longhorn.io
    - pd.csi.storage.gke.io
  resources: |-
    default:
      metadataJobResources:
        limits:
          cpu: 500m
          memory: 512Mi
        requests:
          cpu: 10m
          memory: 10Mi
      dataJobResources:
        limits:
          cpu: 1500m
          memory: 5120Mi
        requests:
          cpu: 100m
          memory: 800Mi
    custom:
  tvkConfig: |-
    name: tvk-instance
    logLevel: Info
Once the non-CSI provisioner is included in the configmap, TVK reads the configuration and skips the data backup for volumes provided by such provisioners. Only metadata is backed up, along with the PersistentVolume and PersistentVolumeClaim configuration. The assumption here is that the data is still available at the source when the PersistentVolumeClaim and PersistentVolume are recreated at restore time.
To illustrate this point, consider volumes provided by an NFS storage provisioner. With NFS, the same storage point can be shared by multiple PersistentVolumes. The PersistentVolume configuration is backed up as-is and recreated at restore time with the same name (or a different name if a PersistentVolume with that name already exists). Because it is recreated with the same configuration, it points to the same storage location, so the restored application can access the data.
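A hedged sketch of such an NFS-backed PersistentVolume follows; the server, export path, and names are hypothetical examples, not values from TVK:

```yaml
# Hypothetical NFS-backed PV for illustration; server/path/names are
# examples only. Restoring this same configuration reconnects to the
# same NFS export, so no data copy or snapshot is required.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com   # shared storage point (assumed)
    path: /exports/data       # shared export path (assumed)
```

Because the restored PersistentVolume carries the same nfs.server and nfs.path, it mounts the same shared export, which is why metadata-only backup is sufficient for this class of provisioner.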