Command-Line Interface

Learn about using Trilio for Kubernetes from the command-line interface


Last updated 10 months ago


Overview

To get started with Trilio for Kubernetes in your environment, perform the following steps:

Operating the Product

  1. Install a compatible CSI Driver

  2. Create a T4K Target - the location where backups will be stored.

  3. Create a retention policy (optional) - to specify how long backups are kept.

  4. Run Example

    1. Label Example

    2. Helm Example

    3. Operator Example

    4. Namespace Example

Step 1: Install Test CSI Driver

Skip this step if your environment already has a CSI driver installed with snapshot capability.

Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature.

You should check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on installing the driver in your cluster.
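
If you are unsure whether snapshot support is already present, a quick check is to look for the VolumeSnapshot CRDs (a sketch; the CRD names come from the upstream Kubernetes external-snapshotter project, and the loop simply reports whether each CRD is registered):

```shell
# Check for the VolumeSnapshot CRDs that CSI snapshot support requires.
# CRD names are from the upstream external-snapshotter project.
for crd in volumesnapshots.snapshot.storage.k8s.io \
           volumesnapshotclasses.snapshot.storage.k8s.io \
           volumesnapshotcontents.snapshot.storage.k8s.io; do
  if kubectl get crd "$crd" >/dev/null 2>&1; then
    echo "found: $crd"
  else
    echo "missing: $crd"
  fi
done
```

If any of these are missing, see "Installing VolumeSnapshot CRDs" in the Appendix.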

Step 2: Create a Target

Create a secret containing the credentials for the data store where backups will be stored. An example is provided below.

apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVD
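
The Secret above uses stringData, which accepts plain-text values; if you use the standard Kubernetes data field instead, values must be base64-encoded first. A quick round trip from the shell, using the sample access key above:

```shell
# base64-encode a credential for use in a Secret's `data` field,
# then decode it to confirm the round trip (sample key from the manifest above).
ACCESS_KEY='AKIAS5B35DGFSTY7T55D'
ENCODED=$(printf '%s' "$ACCESS_KEY" | base64)
echo "$ENCODED"
printf '%s' "$ENCODED" | base64 -d   # prints the original key
```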

Supported values for S3 vendors include:

"AWS", "RedhatCeph", "Ceph", "IBMCleversafe", "Cloudian", "Scality", "NetApp", "Cohesity", "SwiftStack", "Wassabi", "MinIO", "DellEMC", "Other"

An Amazon S3 target example is provided below:

apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 1000Gi

Apply the manifest to create the Target:

kubectl create -f tv-backup-target.yaml

Step 3: Create a Retention Policy (Optional)

While the example backup custom resources created by following this Getting Started page can be deleted manually via kubectl commands, Trilio also provides a backup retention capability to automatically delete backups based on defined time boundaries.

apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-policy
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1

Note: With the above configuration, the policy is created in the default namespace unless a namespace is specified.
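
To create the policy in a specific namespace instead, set metadata.namespace in the manifest (the namespace name below is illustrative):

```yaml
metadata:
  name: demo-policy
  namespace: backup-ns   # illustrative; use the namespace where your BackupPlans live
```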

Step 4: Run Example

This section covers backup and restore examples based on Labels, Helm charts, Operators, and Namespaces.

Note:

  1. Backup and BackupPlan should be in the same namespace.

  2. For the restore operation, resources are restored in the namespace where the Restore CR is created.

  3. Specifying backupPlan information in the restore manifest will automatically select the latest successful backup for that backupPlan.

Step 4.1: Label Example

The following sections will create a sample application (tagged with labels), back up the application via those labels, and then restore it.

The following steps will be performed.

  1. Create a sample MySQL application

  2. Create a BackupPlan CR that specifies the MySQL application to protect via labels

  3. Create a Backup CR with reference to the BackupPlan CR

  4. Create a Restore CR with reference to the Backup CR.

Create a Sample Application

Create the following file as mysql.yaml. Note the labels used to tag the different components of the application.

## Secret for mysql password
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  labels:
    app: k8s-demo-app
    tier: frontend
type: Opaque
data:
  password: dHJpbGlvcGFzcw==
## password base64 encoded, plain text: triliopass
## "echo -n triliopass | base64" -> to get the encoded password
---
## PVC for mysql PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  storageClassName: "csi-hostpath-sc"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
## Mysql app deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
## Service for mysql app
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: k8s-demo-app
    tier: mysql
---
## Deployment for frontend webserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: frontend
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: frontend
    spec:
      containers:
      - name: demoapp-frontend
        image: docker.io/trilio/k8s-demo-app:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
## Service for frontend
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  ports:
  - name: web
    port: 80
  selector:
    app: k8s-demo-app
    tier: frontend

Run the command below to access the MySQL DB using a mysql client from the host. The command assumes the "default" namespace; change the namespace context or use "-n <namespace>" if the demo app is installed in another namespace.

kubectl port-forward --address 0.0.0.0 service/k8s-demo-app-mysql 3306:3306 &>/dev/null &

## To connect to mysql DB using a mysql client
mysql -h 127.0.0.1 -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Create a BackupPlan

Create a BackupPlan CR that references the application created in the previous step via matching labels in the same namespace where the application resides.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: mysql-label-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
    retentionPolicy:
      namespace: default
      name: demo-policy
  backupPlanComponents:
    custom:
      - matchLabels:
          app: k8s-demo-app

Create a Backup

Create a Backup CR that references the BackupPlan. The backup type can be either Full or Incremental.

Note: The first backup into a target location will always be a Full backup.

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup
spec:
  type: Full
  backupPlan:
    name: mysql-label-backupplan
    namespace: default
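
Subsequent backups against the same BackupPlan can be incremental. A minimal sketch (the name is illustrative; only the type changes):

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup-incr   # illustrative name
spec:
  type: Incremental
  backupPlan:
    name: mysql-label-backupplan
    namespace: default
```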

Restore the Backup/Application

Finally, create the Restore CR to restore the Backup in the same namespace where the Backup CR was created. In the example below, mysql-label-backup is restored into the "default" namespace.

Note: If restoring into the same namespace, ensure that the original application components have been removed.

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  source:
    type: Backup
    backup:
      name: mysql-label-backup
      namespace: default
  skipIfAlreadyExists: true

Step 4.2: Helm Example

The following sections will create a sample application via Helm, back up the application via Helm selector fields, and then restore the application.

The following steps will be performed.

  1. Create a cockroachdb instance using Helm

  2. Create a BackupPlan CR that specifies the cockroachdb application to protect.

  3. Create a Backup CR with a reference to the BackupPlan CR

  4. Create a Restore CR with a reference to the Backup CR.

Create a sample application via Helm

In this example, we will use Helm Tooling to create a "cockroachdb" application.

Run the following commands against a Kubernetes cluster. Use the attached cockroachdb-values.yaml file for the installation by copying it to your local directory.

helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install cockroachdb-app --values cockroachdb-values.yaml stable/cockroachdb

After running helm install, confirm the installation was successful by running helm ls

helm ls
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
cockroachdb-app default          1               2020-07-08 05:04:17.498739741 +0000 UTC deployed        cockroachdb-3.0.1       19.2.5

Create a BackupPlan

Use the following to create a BackupPlan. Ensure the name of the release you specify matches the output from the helm ls command in the previous step.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: cockroachdb-backup-plan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
  backupPlanComponents:
    helmReleases:
      - cockroachdb-app

Create a Backup

Use the following content to create a backup CR.

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: cockroachdb-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: cockroachdb-backup-plan
    namespace: default

Restore Backup/Application

After the backup has completed successfully, create a Restore CR to restore the application in the same namespace where BackupPlan and Backup CRs are created.

Before restoring the app, clean up the existing app. This is required because cluster-level resources of the app can cause conflicts during the restore operation.

helm delete cockroachdb-app

Similar to the Label example above:

Note: If restoring into the same namespace, ensure that the original application components have been removed; in particular, ensure that the application's PVCs are deleted.

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: cockroachdb-restore
spec:
  source:
    type: Backup
    backup:
      name: cockroachdb-full-backup
      namespace: default
  skipIfAlreadyExists: true
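
After applying the Restore CR, you can check its status until it completes (a sketch; the command prints a fallback message when no cluster is reachable):

```shell
# Check the restore status; the STATUS column should eventually show Completed.
kubectl get restore cockroachdb-restore -n default 2>/dev/null \
  || echo "kubectl/cluster not available"
```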

Step 4.3: Operator Example

The following steps will be performed.

  1. Install a sample etcd Operator

  2. Create an etcd cluster

  3. Create a BackupPlan CR that specifies the etcd application to protect.

  4. Create a Backup CR with a reference to the BackupPlan CR

  5. Create a Restore CR with a reference to the Backup CR.

This example demonstrates the standard 'etcd-operator'. First, deploy the operator using its Helm chart; then deploy an etcd cluster. The commands for both steps follow.

Install etcd-operator

helm install demo-etcd-operator stable/etcd-operator

Create an etcd cluster using the following YAML definition

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: demo-etcd-cluster
  namespace: default
  labels:
    app: demo-app 
spec:
  size: 3
  version: 3.2.13
  pod:
    persistentVolumeClaimSpec:
      storageClassName: csi-hostpath-sc
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

Create a BackupPlan

  • Create a 'BackupPlan' resource to protect 'etcd-operator' and its clusters, using the following YAML definition.

  • 'operatorId': This field holds the identifier of the operator whose backup we are going to take. In this case it is "demo-etcd-cluster".

  • 'operatorResourceSelector': This field selects the operator's resources (the ones to back up) using labels. In this case, the operator is 'etcd-operator', and all of its resources such as pods, services, and deployments carry a unique label - "release: demo-etcd-operator"

  • 'applicationResourceSelector': This field selects the resources of the application launched using the operator. In this case, the etcd cluster is the application launched using etcd-operator, and all of its resources carry a unique label - "app: etcd"

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: backup-job-k8s-demo-app
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
  backupPlanComponents:
    operators:
      - operatorId: demo-etcd-cluster
        customResources:
          - groupVersionKind:
              group: "etcd.database.coreos.com"
              version: "v1beta2"
              kind: "EtcdCluster"
            objects:
            - demo-etcd-cluster
        operatorResourceSelector:
        - matchLabels:
            release: demo-etcd-operator
        applicationResourceSelector:
        - matchLabels:
            app: etcd

Create a Backup

  • Take a backup of the above 'BackupPlan'. Use the following YAML definition to create a 'Backup' resource.

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: demo-full-backup
spec:
  type: Full
  scheduleType: Periodic
  backupPlan:
    name: backup-job-k8s-demo-app
    namespace: default

Restore the Backup/Application

  • After the backup completes successfully, you can restore it.

  • To restore the etcd-operator and its clusters from the above backup, use the following YAML definition.

Note: If restoring into the same namespace, ensure that the original application components have been removed.

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  source:
    type: Backup
    backup:
      name: demo-full-backup
      namespace: default
  skipIfAlreadyExists: true

Step 4.4: Namespace Example

  1. Create a namespace called 'wordpress'

  2. Use Helm to deploy a wordpress application into the namespace.

  3. Perform a backup of the namespace

  4. Delete the namespace/application

  5. Create a new namespace 'wordpress-restore'

  6. Perform a Restore of the namespace

Create a namespace and application

Create the namespace called 'wordpress'

kubectl create ns wordpress

Install the wordpress Helm Chart

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/wordpress -n wordpress

You can launch the wordpress app via a browser and make changes to the sample page to ensure changes are captured when you restore.

Create a BackupPlan

Create a BackupPlan to back up the namespace

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan-1
  namespace: wordpress
spec:
  backupConfig:
    target:
      namespace: wordpress
      name: demo-s3-target

Backup the Namespace

Use the following YAML to build the Backup CR

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: wordpress-ns-backup-1
  namespace: wordpress
spec:
  type: Full
  scheduleType: OneTime
  backupPlan:
    name: ns-backupplan-1
    namespace: wordpress

Restore the Backup/Namespace

Perform a restore of the namespace backup

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: ns-restore
  namespace: wordpress
spec:
  source:
    type: Backup
    backup:
      name: wordpress-ns-backup-1
      namespace: wordpress
  restoreNamespace: wordpress-restore

Validate that the pods are up and running after the restore completes.

Validate Restore

kubectl get restore -n wordpress-restore
NAME         BACKUP                  STATUS      DATA SIZE   START TIME             END TIME               PERCENTAGE COMPLETED
ns-restore   wordpress-ns-backup-1   Completed   188312911   2020-11-13T18:47:33Z   2020-11-13T18:49:58Z   100

Validate Application Pods

kubectl get pods -n wordpress-restore
NAME                              READY   STATUS    RESTARTS   AGE
wordy-mariadb-0                   1/1     Running   0          4m21s
wordy-wordpress-5cc764564-mngrm   1/1     Running   0          4m21s

Finally, confirm that the changes made earlier to the WordPress pages are present.

Please use one of the Target examples provided in the Custom Resource Definition section as a template for creating an NFS, Amazon S3, or any S3-compatible storage target.

Note: With the above configuration, the target is created in the current user namespace unless a namespace is specified. Additional information on bucket permissions can be found in the Appendix section AWS S3 Target Permissions.

More information on the Retention Policy spec can be found in the Custom Resource Definition section. A retention policy is referenced in the backupPlan CR.

More details about CRDs and their usage can be found in the application CRD reference.

Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), source.type must be set to location. Please refer to the Custom Resource Definition Restore section to view a restore-by-location example.

Note: If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information can be found under Restore Transformation.

