Getting Started

This section describes how to get up and running quickly with TrilioVault for Kubernetes in a customer environment.

Overview

To get started with TrilioVault for Kubernetes in your environment, the following steps are needed:

  1. Install Test CSI Driver - Leverage the test hostpath driver if your environment doesn't already have one with snapshot capability.

  2. Software Access and Installation - Access software and install it based on specific directions for your environment.

  3. License - Leverage the free or basic license (if not using an Enterprise license) following the instructions on the licensing page.

  4. Run Example

    1. Follow the steps to create a target and sample applications, and to back up and restore the applications.

  5. Replace CSI Driver - If leveraging the hostpath CSI driver, replace it with your Enterprise-grade CSI driver.

Install CSI Driver

Skip this step if your environment already has a CSI driver installed with snapshot capability.

Follow the instructions provided in Appendix HostPath for TVK to install the Hostpath CSI driver. The proper functioning of the software will be determined by running an example with the hostpath driver.

Software Access and Installation

Please refer to the Installing TrilioVault section to install TrilioVault for Kubernetes in your environment.

Run Example

The following section covers installing a test CSI driver, creating a sample application, and backing up and restoring that application via labels, Helm, and Operator.

If your environment already has a CSI driver (supporting snapshots) installed, with a storage class and a volumesnapshotclass configured, you can skip to the Create a Sample Application section below. Otherwise, follow the steps below to install the hostpath driver temporarily to test TrilioVault for Kubernetes. Once you have a CSI driver that supports snapshots, you can migrate to that driver.
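For reference, a volumesnapshotclass for the hostpath driver typically looks like the sketch below. The exact API version depends on the external-snapshotter version running in your cluster (v1beta1 is shown), and the driver name assumes the standard hostpath CSI deployment; adjust both to match your environment.

```
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io   ## CSI driver that will handle snapshots
deletionPolicy: Delete        ## remove snapshot content when the VolumeSnapshot is deleted
```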

More details around CRDs and their usage/explanation can be found in the Custom Resource Definition Section.

Create a Target

Create a Target containing the credentials for the data store in which backups will be stored.

$ kubectl create -f tv-backup-target.yaml

Please use one of the Target examples provided in the Custom Resource Definition section as a template for creating an NFS, Amazon S3, or other S3-compatible storage target. An Amazon S3 target example is provided below.

$ cat /root/tv-backup-target.yaml
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    url: "https://s3.amazonaws.com"
    accessKey: "AaBbCcDdEeFf"
    secretKey: "BogusKeyEntry"
    bucketName: "S3_Bucket_US_East"
    region: "us-east-1"

Create a Policy

Below is a backup cleanup policy. It lets the user define the amount of time after which backups should be deleted.

The following example defines a Policy that deletes backups older than 30 days.

apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-policy
spec:
  type: Cleanup
  default: false
  cleanupConfig:
    backupDays: 30

Policy is not available as part of the 1.0.0 release. Backups must be deleted manually.
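For illustration, once Policy support is available, a BackupPlan can reference the cleanup policy through its backupConfig. The sketch below reuses the demo-s3-target and demo-policy names from this guide, and the field layout follows the Operator example later in this section; the BackupPlan name is illustrative.

```
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-backupplan
spec:
  backupNamespace: default
  backupConfig:
    target:
      name: demo-s3-target
    retentionPolicy:
      name: demo-policy   ## references the Cleanup policy defined above
```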

Label Example

The following sections create a sample application (tagged with labels), back up the application via those labels, and then restore it.

The following steps will be performed.

  1. Create a sample MySQL application

  2. Create a BackupPlan CR that specifies the mysql application to protect via labels

  3. Create a Backup CR with a reference to the BackupPlan CR

  4. Create a Restore CR with a reference to the Backup CR.

Create a Sample Application

Create the following file as mysql.yaml. Note the labels used to tag the different components of the application.

## Secret for mysql password
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  labels:
    app: k8s-demo-app
    tier: frontend
type: Opaque
data:
  password: dHJpbGlvcGFzcwo=
  ## password base64 encoded, plain text: triliopass
---
## PVC for mysql PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  storageClassName: "csi-hostpath-sc"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
## Mysql app deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
## Service for mysql app
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: k8s-demo-app
    tier: mysql
  clusterIP: None
---
## Deployment for frontend webserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: frontend
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: frontend
    spec:
      containers:
      - name: demoapp-frontend
        image: docker.io/trilio/k8s-demo-app:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
## Service for frontend
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - name: web
    nodePort: 30900
    port: 80
  selector:
    app: k8s-demo-app
    tier: frontend
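The password value in the Secret above is simply the base64 encoding of the plain-text password; you can reproduce it locally:

```shell
# Secret data values must be base64-encoded.
# Note: plain `echo` appends a trailing newline, which becomes part of the
# encoded value (as in the manifest above); use `echo -n` to omit it.
echo "triliopass" | base64
# dHJpbGlvcGFzcwo=
```

Once the file is saved as mysql.yaml, apply it with kubectl (e.g. kubectl create -f mysql.yaml) and wait for the pods to become Ready.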

Create a BackupPlan

Create a BackupPlan CR that references the application created in the previous step via matching labels.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: mysql-label-backupplan
spec:
  backupNamespace: default
  backupConfig:
    target:
      name: demo-s3-target
  backupPlanComponents:
    custom:
    - matchLabels:
        app: k8s-demo-app

Create a Backup

Create a Backup CR to protect the BackupPlan. The type of the backup can be either full or incremental. Note: The first backup into a target location will always be a full backup.

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: mysql-label-backupplan
    namespace: default
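Since a Backup's type can also be incremental, a follow-up backup against the same BackupPlan might look like the sketch below. The name is illustrative, and the Incremental value assumes the same capitalization convention as Full; check the Custom Resource Definition section for the exact accepted values.

```
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-incr-backup
spec:
  type: Incremental   ## assumes the full backup above has already completed
  scheduleType: Periodic
  backupPlan:
    name: mysql-label-backupplan
    namespace: default
```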

Restore the Backup/Application

Finally, use a Restore CR to restore the Backup. In the example provided below, mysql-label-backup is restored into the "restore-ns" namespace.

Note: If restoring into the same namespace, ensure that the original application components have been removed.

Note: If restoring to another cluster (migration scenario), ensure that TrilioVault for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  backupPlan:
    name: mysql-label-backupplan
  source:
    type: Backup
    backup:
      name: mysql-label-backup
    target:
      name: demo-s3-target
  restoreNamespace: restore-ns
  skipIfAlreadyExists: true

Helm Example

The following sections create a sample application via Helm, back up the application via Helm selector fields, and then restore it.

The following steps will be performed.

  1. Create a cockroachdb instance using Helm

  2. Create a BackupPlan CR that specifies the cockroachdb application to protect.

  3. Create a Backup CR with a reference to the BackupPlan CR

  4. Create a Restore CR with a reference to the Backup CR.

Create a sample application via Helm

In this example, we will use Helm tooling to create a "cockroachdb" application.

Run the following commands against a Kubernetes cluster. Use the attached cockroachdb-values.yaml file for the installation by copying it to your local directory.

helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
## If user has helm2
helm install --name cockroachdb-app --values cockroachdb-values.yaml stable/cockroachdb
## If user has helm3
helm install cockroachdb-app --values cockroachdb-values.yaml stable/cockroachdb

After running helm install, confirm the installation was successful by running helm ls.

$ helm ls
NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART              APP VERSION
cockroachdb-app  default    1         2020-07-08 05:04:17.498739741 +0000 UTC  deployed  cockroachdb-3.0.1  19.2.5

Create a BackupPlan

Use the following to create a BackupPlan. Ensure the name of the release you specify matches the output from the helm ls command in the previous step.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: cockroachdb-backup-plan
spec:
  backupNamespace: default
  backupConfig:
    target:
      name: demo-s3-target
  backupPlanComponents:
    helmReleases:
    - cockroachdb-app

Create a Backup

Use the following content to create a Backup CR.

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: cockroachdb-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: cockroachdb-backup-plan

Restore Backup/Application

After the backup has completed successfully, create a Restore CR to restore the application.

Before restoring the app, we need to remove the existing app. This is required because cluster-level resources of the app can cause conflicts during the restore operation.

## If user has helm2
helm delete --purge cockroachdb-app
## If user has helm3
helm delete cockroachdb-app

Similar to the Label example above:

Note: If restoring into the same namespace, ensure that the original application components have been removed. In particular, the application's PVCs must be deleted.

Note: If restoring to another cluster (migration scenario), ensure that TrilioVault for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: cockroachdb-restore
spec:
  backupPlan:
    name: cockroachdb-backup-plan
  source:
    type: Backup
    backup:
      name: cockroachdb-full-backup
    target:
      name: demo-s3-target
  restoreNamespace: default
  skipIfAlreadyExists: true

Operator Example

The following steps will be performed.

  1. Install a sample etcd Operator

  2. Create an etcd cluster

  3. Create a BackupPlan CR that specifies the etcd application to protect.

  4. Create a Backup CR with a reference to the BackupPlan CR

  5. Create a Restore CR with a reference to the Backup CR.

We demonstrate the standard 'etcd-operator' here. First, deploy the operator using its Helm chart; then deploy an etcd cluster. The commands for both follow.

Install etcd-operator

helm install demo-etcd-operator stable/etcd-operator

Create an etcd cluster using following yaml definition

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: demo-etcd-cluster
  namespace: default
  labels:
    app: demo-app
spec:
  size: 3
  version: 3.2.13
  pod:
    persistentVolumeClaimSpec:
      storageClassName: csi-hostpath-sc
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

Create a BackupPlan

  • Create a 'BackupPlan' resource to protect 'etcd-operator' and its clusters, using the following yaml definition.

  • 'operatorId': This field holds the identifier of the operator instance whose backup we are going to take. In this case it is "demo-etcd-cluster".

  • 'operatorResourceSelector': This field selects the operator's resources (whose backup we are going to take) using labels. In this case, the operator is 'etcd-operator', and all of its resources, such as pods, services, and deployments, carry the unique label "release: demo-etcd-operator".

  • 'applicationResourceSelector': This field selects the resources of the application launched using the operator. In this case, the etcd cluster is the application launched using etcd-operator, and all of its resources carry the unique label "app: etcd".

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: backup-job-k8s-demo-app
spec:
  backupNamespace: default
  backupConfig:
    target:
      name: demo-s3-target
    retentionPolicy:
      name: demo-policy
  backupPlanComponents:
    operators:
    - operatorId: demo-etcd-cluster
      customResources:
      - groupVersionKind:
          group: "etcd.database.coreos.com"
          version: "v1beta2"
          kind: "EtcdCluster"
        objects:
        - demo-etcd-cluster
      operatorResourceSelector:
      - matchLabels:
          release: demo-etcd-operator
      applicationResourceSelector:
      - matchLabels:
          app: etcd

Create a Backup

  • Take a backup of the above 'BackupPlan'. Use the following YAML definition to create a 'Backup' resource.

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: demo-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: backup-job-k8s-demo-app

Restore the Backup/Application

  • After the backup completes successfully, you can perform a restore from it.

  • To restore the etcd-operator and its clusters from the backup taken above, use the following yaml definition.

Note: If restoring into the same namespace, ensure that the original application components have been removed.

Note: If restoring to another cluster (migration scenario), ensure that TrilioVault for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  backupPlan:
    name: backup-job-k8s-demo-app
  source:
    type: Backup
    backup:
      name: demo-full-backup
    target:
      name: demo-s3-target
  restoreNamespace: restore-ns
  skipIfAlreadyExists: true

Replace Hostpath CSI Driver with Enterprise CSI Driver

If you have an Enterprise CSI driver available, you can replace the driver in the StorageClass and VolumeSnapshotClass to point to your Enterprise CSI driver. Please review the CSI Driver Appendix page to view the different CSI drivers that are available today with quick links to install popular CSI drivers.

If you do not have an Enterprise CSI driver with Snapshot support, continue using the Hostpath CSI Driver for evaluating TrilioVault for Kubernetes.

When replacing drivers, ensure that the correct schema is used, as per the API version that the Enterprise driver supports.
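As an illustration, switching drivers amounts to pointing the StorageClass's provisioner (and the VolumeSnapshotClass's driver) at the Enterprise CSI driver. The sketch below uses standard Kubernetes fields; the class name and the driver name (ebs.csi.aws.com) are examples only, so substitute the values your Enterprise driver documents.

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: enterprise-sc
provisioner: ebs.csi.aws.com   ## example Enterprise CSI driver; replace with yours
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```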