Getting Started with Trilio APIs
This section describes how to get started with TrilioVault for Kubernetes (TVK) in a customer environment.

Overview

To get started with TrilioVault for Kubernetes in your environment, perform the following steps:

Operating the Product

    1. Install Test CSI Driver - Leverage the test hostpath driver if your environment doesn't already have a CSI driver with Snapshot capability.
    2. Create a TVK Target - The location where backups will be stored.
    3. Create a Retention Policy (Optional) - Specifies how long to keep the backups.
    4. Run Example
       1. Label Example
       2. Helm Example
       3. Operator Example
       4. Namespace Example

Step 1: Install Test CSI Driver

Skip this step if your environment already has a CSI driver installed with snapshot capability.
Follow the instructions provided in Appendix HostPath for TVK to install the Hostpath CSI driver. Running an example against the hostpath driver verifies that the software is functioning properly.

Step 2: Create a Target

Create a secret containing the credentials for the data store where backups will be stored. An example is provided below.
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVD
Please use one of the Target examples provided in the Custom Resource Definition section as a template for creating an NFS, Amazon S3, or any S3-compatible storage target.
Supported values for S3 vendors include:
"AWS", "RedhatCeph", "Ceph", "IBMCleversafe", "Cloudian", "Scality", "NetApp", "Cohesity", "SwiftStack", "Wassabi", "MinIO", "DellEMC", "Other"
An Amazon S3 target example is provided below:
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    url: "https://s3.amazonaws.com"
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 1000Gi
kubectl create -f tv-backup-target.yaml
Note: With the above configuration, the target is created in the current user's namespace unless a namespace is specified. Additional information on bucket permissions can be found in the Appendix section: AWS S3 Target Permissions.
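Once the target is created, TVK validates access to the object store. A quick check, assuming the target was created in the default namespace (the exact status columns depend on your TVK version, but the target should eventually be reported as available):

kubectl get target demo-s3-target -n default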

Step 3: Create a Retention Policy (Optional)

While the example backup custom resources created by following this Getting Started page can be deleted manually via kubectl commands, Trilio also provides a backup retention capability to automatically delete backups based on defined time boundaries.
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-policy
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
More information on the Retention Policy spec can be found in the application CRD reference section. A retention policy is referenced in the backupPlan CR.
Note: With the above configuration, the policy is created in the default namespace unless a namespace is specified.
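As with the target, the policy manifest must be applied to the cluster. A minimal sketch, assuming the YAML above was saved to a local file named demo-policy.yaml (a hypothetical filename):

kubectl create -f demo-policy.yaml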

Step 4: Run Example

This section covers backup and restore examples based on Labels, Helm charts, Operators, and Namespaces.
More details around CRDs and their usage/explanation can be found in the Custom Resource Definition Section.
Note:
    1. Backup and BackupPlan must be in the same namespace.
    2. For a restore operation, the resources are restored in the namespace where the Restore CR is created.
    3. Specifying backupPlan information in the restore manifest automatically selects the latest successful backup for that backupPlan.

Step 4.1: Label Example

The following sections create a sample application (tagged with labels), back up the application via labels, and then restore it.
The following steps will be performed:
    1. Create a sample MySQL application
    2. Create a BackupPlan CR that specifies the mysql application to protect via labels
    3. Create a Backup CR with a reference to the BackupPlan CR
    4. Create a Restore CR with a reference to the Backup CR.

Create a Sample Application

Create the following file as mysql.yaml. Note the labels used to tag the different components of the application.
## Secret for mysql password
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  labels:
    app: k8s-demo-app
    tier: frontend
type: Opaque
data:
  password: dHJpbGlvcGFzcwo=
  ## password base64 encoded, plain text: triliopass
---
## PVC for mysql PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  storageClassName: "csi-hostpath-sc"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
## Mysql app deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
## Service for mysql app
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: k8s-demo-app
    tier: mysql
  clusterIP: None
---
## Deployment for frontend webserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: frontend
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: frontend
    spec:
      containers:
      - name: demoapp-frontend
        image: docker.io/trilio/k8s-demo-app:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
## Service for frontend
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  ports:
  - name: web
    port: 80
  selector:
    app: k8s-demo-app
    tier: frontend
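The manifest above is not applied automatically. A minimal sketch of creating the application and waiting for its pods, assuming the file was saved as mysql.yaml per the instructions above:

kubectl create -f mysql.yaml
# Wait until the mysql and frontend pods report Running
kubectl get pods -l app=k8s-demo-app -w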

Create a BackupPlan

Create a BackupPlan CR that references the application created in the previous step via matching labels, in the same namespace where the application resides.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: mysql-label-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
    retentionPolicy:
      namespace: default
      name: demo-policy
  backupPlanComponents:
    custom:
    - matchLabels:
        app: k8s-demo-app
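Apply the BackupPlan in the application's namespace. A sketch, assuming the manifest was saved as mysql-backupplan.yaml (a hypothetical filename):

kubectl create -f mysql-backupplan.yaml
# TVK validates the plan; check its status before taking a backup
kubectl get backupplan mysql-label-backupplan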

Create a Backup

Create a Backup CR referencing the BackupPlan. The type of the backup can be either Full or Incremental.
Note: The first backup into a target location will always be a Full backup.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup
spec:
  type: Full
  backupPlan:
    name: mysql-label-backupplan
    namespace: default
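Creating the Backup CR starts the backup job. A sketch of kicking it off and watching progress, assuming the manifest was saved as mysql-backup.yaml (a hypothetical filename):

kubectl create -f mysql-backup.yaml
# Watch the backup until it reaches a successful terminal status
kubectl get backup mysql-label-backup -w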

Restore the Backup/Application

Finally, create the Restore CR to restore the Backup in the same namespace where the Backup CR was created. In the example below, mysql-label-backup is restored into the "default" namespace.
Note: If restoring into the same namespace, ensure that the original application components have been removed.
Note: If restoring to another cluster (migration scenario), ensure that TrilioVault for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  source:
    type: Backup
    backup:
      name: mysql-label-backup
      namespace: default
  skipIfAlreadyExists: true
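Apply the Restore CR and monitor it the same way. A sketch, assuming the manifest was saved as mysql-restore.yaml (a hypothetical filename):

kubectl create -f mysql-restore.yaml
kubectl get restore demo-restore -w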
Note: If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use a transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.

Step 4.2: Helm Example

The following sections create a sample application via Helm, back up the application via Helm selector fields, and then restore it.
The following steps will be performed:
    1. Create a cockroachdb instance using Helm
    2. Create a BackupPlan CR that specifies the cockroachdb application to protect.
    3. Create a Backup CR with a reference to the BackupPlan CR
    4. Create a Restore CR with a reference to the Backup CR.

Create a sample application via Helm

In this example, we will use Helm tooling to create a "cockroachdb" application.
Run the following commands against a Kubernetes cluster. Copy the attached cockroachdb-values.yaml file to your local directory and use it for the installation.
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install cockroachdb-app --values cockroachdb-values.yaml stable/cockroachdb
After running helm install, confirm the installation was successful by running helm ls:
helm ls
NAME             NAMESPACE   REVISION   UPDATED                                   STATUS     CHART               APP VERSION
cockroachdb-app  default     1          2020-07-08 05:04:17.498739741 +0000 UTC   deployed   cockroachdb-3.0.1   19.2.5

Create a BackupPlan

Use the following to create a BackupPlan. Ensure the name of the release you specify matches the output from the helm ls command in the previous step.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: cockroachdb-backup-plan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
  backupPlanComponents:
    helmReleases:
    - cockroachdb-app

Create a Backup

Use the following content to create a backup CR.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: cockroachdb-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: cockroachdb-backup-plan
    namespace: default
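As in the Label example, both manifests must be applied before anything happens. A sketch, assuming hypothetical filenames for the two CRs above:

kubectl create -f cockroachdb-backup-plan.yaml
kubectl create -f cockroachdb-backup.yaml
# Check backup progress
kubectl get backup cockroachdb-full-backup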

Restore Backup/Application

After the backup has completed successfully, create a Restore CR to restore the application in the same namespace where BackupPlan and Backup CRs are created.
Before restoring the app, we need to remove the existing application, because its cluster-level resources can otherwise conflict with the restore operation.
helm delete cockroachdb-app
Similar to the Label example above:
Note: If restoring into the same namespace, ensure that the original application components have been removed; in particular, make sure the application's PVCs are deleted.
Note: If restoring to another cluster (migration scenario), ensure that TrilioVault for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: cockroachdb-restore
spec:
  source:
    type: Backup
    backup:
      name: cockroachdb-full-backup
      namespace: default
  skipIfAlreadyExists: true
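Because TVK restores the Helm release along with the application resources, the release should reappear once the restore completes. A quick check, assuming the restore was created in the default namespace:

kubectl get restore cockroachdb-restore
helm ls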

Step 4.3: Operator Example

The following steps will be performed.
    1. Install a sample etcd Operator
    2. Create an etcd cluster
    3. Create a BackupPlan CR that specifies the etcd application to protect.
    4. Create a Backup CR with a reference to the BackupPlan CR
    5. Create a Restore CR with a reference to the Backup CR.
This example demonstrates the standard 'etcd-operator'. First, deploy the operator using its Helm chart, then deploy an etcd cluster, using the following commands.

Install etcd-operator

helm install demo-etcd-operator stable/etcd-operator

Create an etcd cluster using the following YAML definition

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: demo-etcd-cluster
  namespace: default
  labels:
    app: demo-app
spec:
  size: 3
  version: 3.2.13
  pod:
    persistentVolumeClaimSpec:
      storageClassName: csi-hostpath-sc
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
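Apply the EtcdCluster manifest and confirm the operator brings the cluster up. A sketch, assuming the YAML above was saved as demo-etcd-cluster.yaml (a hypothetical filename); etcd-operator labels the cluster pods with app=etcd, the same label the BackupPlan below selects on:

kubectl create -f demo-etcd-cluster.yaml
# The three etcd members should reach Running before taking a backup
kubectl get pods -l app=etcd -n default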

Create a BackupPlan

    Create a 'BackupPlan' resource to protect the 'etcd-operator' and its clusters, using the following YAML definition.
    'operatorId': This field holds the name of the operator to back up. In this case, the operator ID is "demo-etcd-cluster".
    'operatorResourceSelector': This field selects the operator's own resources (which will be backed up) using labels. In this case, the operator is 'etcd-operator', and all of its resources, such as pods, services, and deployments, carry a unique label: "release: demo-etcd-operator".
    'applicationResourceSelector': This field selects the resources of the application launched using the operator. In this case, the etcd cluster is the application launched using etcd-operator, and all of its resources carry a unique label: "app: etcd".
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: backup-job-k8s-demo-app
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
  backupPlanComponents:
    operators:
    - operatorId: demo-etcd-cluster
      customResources:
      - groupVersionKind:
          group: "etcd.database.coreos.com"
          version: "v1beta2"
          kind: "EtcdCluster"
        objects:
        - demo-etcd-cluster
      operatorResourceSelector:
      - matchLabels:
          release: demo-etcd-operator
      applicationResourceSelector:
      - matchLabels:
          app: etcd

Create a Backup

    Take a backup of the above 'BackupPlan'. Use the following YAML definition to create a 'Backup' resource.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: demo-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: backup-job-k8s-demo-app
    namespace: default

Restore the Backup/Application

    After the backup completes successfully, you can restore it.
    To restore the etcd-operator and its clusters from the above backup, use the following YAML definition.
Note: If restoring into the same namespace, ensure that the original application components have been removed.
Note: If restoring to another cluster (migration scenario), ensure that TrilioVault for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  source:
    type: Backup
    backup:
      name: demo-full-backup
      namespace: default
  skipIfAlreadyExists: true

Step 4.4: Namespace Example

    1. Create a namespace called 'wordpress'
    2. Use Helm to deploy a wordpress application into the namespace.
    3. Perform a backup of the namespace
    4. Delete the namespace/application
    5. Create a new namespace 'wordpress-restore'
    6. Perform a Restore of the namespace

Create a namespace and application

Create the namespace called 'wordpress'
kubectl create ns wordpress
Install the wordpress Helm Chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/wordpress -n wordpress
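Before taking a backup, confirm that the chart's pods (WordPress and its MariaDB dependency) are up:

kubectl get pods -n wordpress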
You can launch the WordPress app in a browser and make changes to the sample page, so you can verify that the changes are captured when you restore.

Create a BackupPlan

Create a BackupPlan to back up the namespace
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan-1
  namespace: wordpress
spec:
  backupConfig:
    target:
      namespace: wordpress
      name: demo-s3-target

Backup the Namespace

Use the following YAML to create the Backup CR
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: wordpress-ns-backup-1
  namespace: wordpress
spec:
  type: Full
  scheduleType: OneTime
  backupPlan:
    name: ns-backupplan-1
    namespace: wordpress
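A sketch of applying the Backup CR and watching it, assuming the manifest was saved as wordpress-ns-backup.yaml (a hypothetical filename):

kubectl create -f wordpress-ns-backup.yaml
# Wait for the backup to reach a successful terminal status
kubectl get backup wordpress-ns-backup-1 -n wordpress -w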

Restore the Backup/Namespace

Perform a restore of the namespace backup
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: ns-restore
  namespace: wordpress
spec:
  source:
    type: Backup
    backup:
      name: wordpress-ns-backup-1
      namespace: wordpress
  restoreNamespace: wordpress-restore
Validate that the pods are up and running after the restore completes.

Validate Restore

kubectl get restore -n wordpress-restore
$ kubectl get restore -n wordpress-restore
NAME         BACKUP                  STATUS      DATA SIZE   START TIME             END TIME               PERCENTAGE COMPLETED
ns-restore   wordpress-ns-backup-1   Completed   188312911   2020-11-13T18:47:33Z   2020-11-13T18:49:58Z   100

Validate Application Pods

kubectl get pods -n wordpress-restore
$ kubectl get pods -n wordpress-restore
NAME                              READY   STATUS    RESTARTS   AGE
wordy-mariadb-0                   1/1     Running   0          4m21s
wordy-wordpress-5cc764564-mngrm   1/1     Running   0          4m21s
Finally, confirm that the changes made earlier appear on the restored WordPress page.