Command-Line Interface
Learn about using Trilio for Kubernetes from the command-line interface
Overview
To get started with Trilio for Kubernetes in your environment, the following steps will be performed:
Operating the Product
Install a compatible CSI Driver
Create a T4K Target - Location where backups will be stored.
Create a retention policy (Optional) - To specify how long backups should be retained.
Run Example
Label Example
Helm Example
Operator Example
Namespace Example
Step 1: Install Test CSI Driver
Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature.
You should check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on the installation of the driver in your cluster.
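As a quick sanity check before proceeding, you can confirm that the cluster exposes the CSI snapshot CRDs and at least one VolumeSnapshotClass. This is a sketch; the CRD names come from the upstream Kubernetes external-snapshotter project, and your driver's documentation remains authoritative.

```shell
# Verify the snapshot CRDs installed by the external-snapshotter are present
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
    volumesnapshotcontents.snapshot.storage.k8s.io \
    volumesnapshotclasses.snapshot.storage.k8s.io

# Confirm at least one VolumeSnapshotClass exists for your driver
kubectl get volumesnapshotclass
```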
Step 2: Create a Target
Create a secret containing the credentials of the data store where backups will be stored. An example is provided below.
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVD
Please use one of the Target examples provided in the Custom Resource Definition section as a template for creating an NFS, Amazon S3, or any S3-compatible storage target.
Supported values for S3 vendors include:
"AWS", "RedhatCeph", "Ceph", "IBMCleversafe", "Cloudian", "Scality", "NetApp", "Cohesity", "SwiftStack", "Wassabi", "MinIO", "DellEMC", "Other"
An Amazon S3 target example is provided below:
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 1000Gi
kubectl create -f tv-backup-target.yaml
Note: With the above configuration, the target is created in the current user's namespace unless a namespace is specified. Additional information on bucket permissions can be found in the Appendix section: AWS S3 Target Permissions.
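After the target is created, T4K validates the credentials before marking it usable. One way to check is a sketch like the following (the exact status column names can vary between T4K versions):

```shell
# Watch the target until its status becomes Available
kubectl get target demo-s3-target -n TARGET_NAMESPACE -w
```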
Step 3: Create a Retention Policy (Optional)
While the example backup custom resources created by following this Getting Started page can be deleted manually via kubectl commands, Trilio also provides backup retention capability - to automatically delete the backups based on defined time boundaries.
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-policy
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
More information on the Retention Policy spec can be found in the application CRD reference section. A retention policy is referenced in the BackupPlan CR.
Note: With the above configuration, the policy is created in the default namespace unless a namespace is specified.
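Assuming the policy manifest above is saved as demo-policy.yaml (a filename chosen here for illustration), it can be created and inspected as follows:

```shell
kubectl create -f demo-policy.yaml
kubectl get policy demo-policy
```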
Step 4: Run Example
This section covers backup and restore examples based on labels, Helm charts, Operators, and namespaces.
More details about CRDs and their usage/explanation can be found in the Custom Resource Definition Section.
Note:
Backup and BackupPlan should be in the same namespace.
For the restore operation, the resources will get restored in the namespace where restore CR is created.
Specifying backupPlan information in the restore manifest will automatically select the latest successful backup for that backupPlan.
Step 4.1: Label Example
The following sections will create a sample application (tagged with labels), back up the application via those labels, and then restore it.
The following steps will be performed.
Create a sample MySQL application
Create a BackupPlan CR that specifies the MySQL application to protect via labels
Create a Backup CR with reference to the BackupPlan CR
Create a Restore CR with reference to the Backup CR.
Create a Sample Application
Create the following file as mysql.yaml. Note the labels used to tag the different components of the application.
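The Secret below stores the MySQL password base64-encoded. If you want to substitute your own password, the encoding can be generated and verified from the shell (assuming GNU coreutils):

```shell
# Encode the plain-text password (note -n: no trailing newline)
echo -n triliopass | base64
# Prints: dHJpbGlvcGFzcw==

# Decode it back to double-check
echo -n dHJpbGlvcGFzcw== | base64 --decode
# Prints: triliopass
```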
## Secret for mysql password
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  labels:
    app: k8s-demo-app
    tier: frontend
type: Opaque
data:
  password: dHJpbGlvcGFzcw==
  ## password base64 encoded, plain text: triliopass
  ## "echo -n triliopass | base64" -> to get the encoded password
---
## PVC for mysql PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  storageClassName: "csi-hostpath-sc"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
## Mysql app deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
## Service for mysql app
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  type: ClusterIP
  ports:
  - port: 3306
  selector:
    app: k8s-demo-app
    tier: mysql
---
## Deployment for frontend webserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: frontend
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: frontend
    spec:
      containers:
      - name: demoapp-frontend
        image: docker.io/trilio/k8s-demo-app:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
## Service for frontend
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  ports:
  - name: web
    port: 80
  selector:
    app: k8s-demo-app
    tier: frontend
Run the command below to access the MySQL database using a mysql client from the host. This command works for the "default" namespace; change the namespace context or use "-n <namespace>" if the demo app is installed in another namespace.
kubectl port-forward --address 0.0.0.0 service/k8s-demo-app-mysql 3306:3306 &>/dev/null &
## To connect to mysql DB using a mysql client
mysql -h 127.0.0.1 -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.6.51 MySQL Community Server (GPL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
Create a BackupPlan
Create a BackupPlan CR that references the application created in the previous step via matching labels in the same namespace where the application resides.
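Assuming the manifest below is saved as mysql-label-backupplan.yaml (an illustrative filename), apply it and confirm it is accepted (status column names may differ across T4K versions):

```shell
kubectl apply -f mysql-label-backupplan.yaml
kubectl get backupplan mysql-label-backupplan -n default
```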
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: mysql-label-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
    retentionPolicy:
      namespace: default
      name: demo-policy
  backupPlanComponents:
    custom:
      - matchLabels:
          app: k8s-demo-app
Create a Backup
Create a Backup CR to protect the BackupPlan. The type of backup can be either Full or Incremental.
Note: The first backup into a target location will always be a Full backup.
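Once the Backup CR is applied, the backup runs asynchronously; its progress can be followed from the CLI. A sketch, assuming the default namespace and an illustrative filename:

```shell
kubectl apply -f mysql-label-backup.yaml   # illustrative filename
# Follow the backup until it completes
kubectl get backup mysql-label-backup -n default -w
```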
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup
spec:
  type: Full
  backupPlan:
    name: mysql-label-backupplan
    namespace: default
Restore the Backup/Application
Finally, create the Restore CR to restore the Backup in the same namespace where the Backup CR is created. In the example provided below, mysql-label-backup is being restored into the "default" namespace.
Note: If restoring into the same namespace, ensure that the original application components have been removed.
Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.
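As with backups, a restore runs asynchronously after the CR is created and can be monitored until it completes; for example:

```shell
kubectl apply -f demo-restore.yaml   # illustrative filename
kubectl get restore demo-restore -w
```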
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  source:
    type: Backup
    backup:
      name: mysql-label-backup
      namespace: default
  skipIfAlreadyExists: true
Note: If restoring into another namespace in the same cluster, ensure that resources that cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.
Step 4.2: Helm Example
The following sections will create a sample application via Helm, back up the application via Helm selector fields, and then restore the application.
The following steps will be performed.
Create a cockroachdb instance using Helm
Create a BackupPlan CR that specifies the cockroachdb application to protect.
Create a Backup CR with a reference to the BackupPlan CR
Create a Restore CR with a reference to the Backup CR.
Create a sample application via Helm
In this example, we will use Helm Tooling to create a "cockroachdb" application.
Run the following commands against a Kubernetes cluster. Use the attached cockroachdb-values.yaml file for the installation by copying it to your local directory.
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install cockroachdb-app --values cockroachdb-values.yaml stable/cockroachdb
After running helm install, confirm the installation was successful by running helm ls:
helm ls
NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART              APP VERSION
cockroachdb-app  default    1         2020-07-08 05:04:17.498739741 +0000 UTC  deployed  cockroachdb-3.0.1  19.2.5
Create a BackupPlan
Use the following to create a BackupPlan. Ensure the name of the release you specify matches the output from the helm ls command in the previous step.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: cockroachdb-backup-plan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
  backupPlanComponents:
    helmReleases:
      - cockroachdb-app
Create a Backup
Use the following content to create a backup CR.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: cockroachdb-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: cockroachdb-backup-plan
    namespace: default
Restore Backup/Application
After the backup has completed successfully, create a Restore CR to restore the application in the same namespace where BackupPlan and Backup CRs are created.
Before restoring the app, we need to remove the existing app. This is required because cluster-level resources of the app can cause conflicts during the restore operation.
helm delete cockroachdb-app
Similar to the Label example above:
Note: If restoring into the same namespace, ensure that the original application components have been removed. In particular, ensure the application's PVCs are deleted.
Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: cockroachdb-restore
spec:
  source:
    type: Backup
    backup:
      name: cockroachdb-full-backup
      namespace: default
  skipIfAlreadyExists: true
Step 4.3: Operator Example
The following steps will be performed.
Install a sample etcd Operator
Create an etcd cluster
Create a BackupPlan CR that specifies the etcd application to protect.
Create a Backup CR with a reference to the BackupPlan CR
Create a Restore CR with a reference to the Backup CR.
We demonstrate the standard 'etcd-operator' here. First, deploy the operator using its Helm chart; then deploy an etcd cluster. The commands for both follow.
Install etcd-operator
helm install demo-etcd-operator stable/etcd-operator
Create an etcd cluster using the following YAML definition
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: demo-etcd-cluster
  namespace: default
  labels:
    app: demo-app
spec:
  size: 3
  version: 3.2.13
  pod:
    persistentVolumeClaimSpec:
      storageClassName: csi-hostpath-sc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
Create a BackupPlan
Create a 'BackupPlan' resource to protect 'etcd-operator' and its clusters, using the following YAML definition.
'operatorResourceName': This field holds the name of the operator to back up. In this case, our operator name is "demo-etcd-cluster".
'operatorResourceSelector': This field selects the operator's resources (the ones to back up) using labels. In this case, the operator is 'etcd-operator', and all of its resources, such as pods, services, and deployments, carry a unique label - "release: demo-etcd-operator"
'applicationResourceSelector': This field selects the resources of the application launched by the operator. In this case, the etcd cluster is the application launched using etcd-operator, and all of its resources carry a unique label - "app: etcd"
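Before applying the BackupPlan, it can be useful to confirm that these labels actually select what you expect. The label values come from the example above; the commands are a sketch:

```shell
# Resources belonging to the operator itself
kubectl get all -l release=demo-etcd-operator

# Resources belonging to the etcd cluster launched by the operator
kubectl get all -l app=etcd
```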
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: backup-job-k8s-demo-app
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
  backupPlanComponents:
    operators:
      - operatorId: demo-etcd-cluster
        customResources:
          - groupVersionKind:
              group: "etcd.database.coreos.com"
              version: "v1beta2"
              kind: "EtcdCluster"
            objects:
              - demo-etcd-cluster
        operatorResourceSelector:
          - matchLabels:
              release: demo-etcd-operator
        applicationResourceSelector:
          - matchLabels:
              app: etcd
Create a Backup
Take a backup of the above 'BackupPlan'. Use the following YAML definition to create a 'Backup' resource.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: demo-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: backup-job-k8s-demo-app
    namespace: default
Restore the Backup/Application
After the backup completes successfully, you can restore it. To restore the etcd-operator and its clusters from the above backup, use the following YAML definition.
Note: If restoring into the same namespace, ensure that the original application components have been removed.
Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore Section to view a restore by location example.
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  source:
    type: Backup
    backup:
      name: demo-full-backup
      namespace: default
  skipIfAlreadyExists: true
Step 4.4: Namespace Example
Create a namespace called 'wordpress'
Use helm to deploy a wordpress application into the namespace.
Perform a backup of the namespace
Delete the namespace/application
Create a new namespace 'wordpress-restore'
Perform a Restore of the namespace
Create a namespace and application
Create the namespace called 'wordpress'
kubectl create ns wordpress
Install the wordpress Helm Chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/wordpress -n wordpress
You can launch the wordpress app via a browser and make changes to the sample page to ensure changes are captured when you restore.
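To view the sample page locally, you can port-forward the WordPress service. The service name my-release-wordpress assumes the Bitnami chart's <release>-wordpress naming convention; verify with kubectl get svc -n wordpress if yours differs.

```shell
kubectl port-forward -n wordpress svc/my-release-wordpress 8080:80
# Then browse to http://127.0.0.1:8080
```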
Create a BackupPlan
Create a BackupPlan to back up the namespace
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan-1
  namespace: wordpress
spec:
  backupConfig:
    target:
      namespace: wordpress
      name: demo-s3-target
Backup the Namespace
Use the following YAML to build the Backup CR
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: wordpress-ns-backup-1
  namespace: wordpress
spec:
  type: Full
  scheduleType: OneTime
  backupPlan:
    name: ns-backupplan-1
    namespace: wordpress
Restore the Backup/Namespace
Perform restore of the namespace backup
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: ns-restore
  namespace: wordpress
spec:
  source:
    type: Backup
    backup:
      name: wordpress-ns-backup-1
      namespace: wordpress
  restoreNamespace: wordpress-restore
Validate that the pods are up and running after the restore completes.
Validate Restore
kubectl get restore -n wordpress-restore
NAME        BACKUP                 STATUS     DATA SIZE  START TIME            END TIME              PERCENTAGE COMPLETED
ns-restore  wordpress-ns-backup-1  Completed  188312911  2020-11-13T18:47:33Z  2020-11-13T18:49:58Z  100
Validate Application Pods
kubectl get pods -n wordpress-restore
NAME                             READY  STATUS   RESTARTS  AGE
wordy-mariadb-0                  1/1    Running  0         4m21s
wordy-wordpress-5cc764564-mngrm  1/1    Running  0         4m21s
Finally, confirm the changes on the WordPress launch pages that were made earlier.