Getting Started with Trilio for Upstream Kubernetes (K8S)
Learn how to install, license and test Trilio for Kubernetes (T4K) in an upstream Kubernetes environment.
What is Trilio for Kubernetes?
Trilio for Kubernetes is a cloud-native backup and restore application. Being a cloud-native application for Kubernetes, all operations are managed with CRDs (Custom Resource Definitions).
Trilio utilizes Control Plane and Data Plane controllers to carry out the backup and restore operations defined by the associated CRDs. When a CRD is created or modified, the controller reconciles the definitions against the cluster.
Trilio gives you the power and flexibility to back up your entire cluster or to select specific namespaces, labels, Helm charts, or Operators as the scope for your backup operations.
In this tutorial, we'll show you how to install and test operation of Trilio for Kubernetes on your Upstream Kubernetes deployment.
Prerequisites
Before installing Trilio for Kubernetes, please review the compatibility matrix to ensure Trilio can function smoothly in your Kubernetes environment.
Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature.
Check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on the installation of the driver in your cluster.
Trilio will assume that the selected storage driver is a supported CSI driver when the volumesnapshotclass and storageclass are utilized.
Trilio for Kubernetes requires the following Custom Resource Definitions (CRDs) to be installed on your cluster: VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass.
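A quick way to confirm these CRDs are present is to query them directly. This is a minimal check, assuming the standard snapshot.storage.k8s.io API group names used by the Kubernetes external-snapshotter project:
```
kubectl get crd \
  volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io

# Also confirm that a VolumeSnapshotClass and StorageClass exist for your CSI driver
kubectl get volumesnapshotclass
kubectl get storageclass
```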
For non-air-gapped environments, the following must be accessible from your Kubernetes cluster:
The S3 endpoint, if the backup target is S3
The application artifact registry used for image backup/restore
If the Kubernetes cluster's control plane and worker nodes are separated by a firewall, then the firewall must allow traffic on the following port:
9443
Verify Prerequisites with the Trilio Preflight Check
Make sure your cluster is ready to Install Trilio for Kubernetes by installing the Preflight Check Plugin and running the Trilio Preflight Check.
Trilio provides a preflight check tool that allows customers to validate their environment for Trilio installation.
The tool generates a report detailing all the requirements and whether they are met or not.
If you encounter any failures, please send the Preflight Check output to your Trilio Professional Services and Solutions Architect so we may assist you in satisfying any missing requirements before proceeding with the installation.
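A hedged sketch of installing and running the preflight check is shown below. It assumes the plugin is distributed as a kubectl plugin named tvk-preflight via the krew plugin manager; the plugin name and flag names may differ by release, so confirm them on the Preflight Check Plugin page and with the plugin's --help output:
```
# Install the plugin via krew (assumes krew is already installed)
kubectl krew update
kubectl krew install tvk-preflight

# Run the preflight check against the storage you intend to protect
# (flag names are illustrative; verify with: kubectl tvk-preflight --help)
kubectl tvk-preflight run \
  --storageclass <storage-class-name> \
  --volumesnapshotclass <volume-snapshot-class-name>
```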
Installation
Follow the instructions in this section to Install Trilio for Kubernetes in an upstream Kubernetes environment. This section assumes that you have kubectl and helm installed and correctly configured to work with the desired Kubernetes cluster. T4K supports the v3 version of helm.
There are multiple methods of installing:
Helm Quickstart Installation
In this installation method for the upstream operator, a cluster-scoped TVM custom resource named triliovault-manager is created. Perform the following steps to install:
1. To add the repository where the triliovault-operator helm chart is located, use the command:
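For example (a minimal sketch; the repository URL below is a placeholder, so substitute the chart repository URL published in Trilio's documentation):
```
helm repo add triliovault-operator <triliovault-operator-repo-url>
helm repo update
```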
2. Install the chart from the added repository:
To install the chart from the added repository using default configurations, use the following command:
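A minimal sketch of the default installation, assuming the chart is named k8s-triliovault-operator in the repository added above and that the release is installed into a trilio-system namespace; confirm the chart name before installing:
```
# Confirm the chart name exposed by the repository
helm search repo triliovault-operator

# Install with default configuration (release name, chart name, and namespace are illustrative)
helm install tvm triliovault-operator/k8s-triliovault-operator \
  --namespace trilio-system \
  --create-namespace
```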
Installation Configuration Options
installTVK.enabled: whether the T4K Quickstart install feature is enabled (default: true)
installTVK.applicationScope: scope of the T4K application created (default: Cluster)
installTVK.tvkInstanceName: tvk instance name (default: ""; example: "tvk-instance")
installTVK.ingressConfig.host: host of the ingress resource created (default: "")
installTVK.ingressConfig.tlsSecretName: TLS secret name which contains the ingress certs (default: "")
installTVK.ingressConfig.annotations: annotations to be added on the ingress resource (default: "")
installTVK.ingressConfig.ingressClass: ingress class name for the ingress resource (default: "")
installTVK.ComponentConfiguration.ingressController.enabled: whether the T4K ingress controller should be deployed (default: true)
installTVK.ComponentConfiguration.ingressController.service.type: T4K ingress controller service type (default: "NodePort")
preflight.enabled: enables the preflight check for tvk (default: false)
preflight.storageClass: name of the storage class to use for preflight checks (required; default: "")
preflight.cleanupOnFailure: clean up the resources on the cluster if preflight checks fail (optional; default: false)
preflight.imagePullSecret: name of the secret for authentication while pulling the images from the local registry (optional; default: "")
preflight.limits: pod memory and CPU resource limits for the DNS and volume snapshot preflight checks (optional; default: ""; example: "cpu=600m,memory=256Mi")
preflight.localRegistry: name of the local registry from which the images will be pulled (optional; default: "")
preflight.nodeSelector: node selector labels for scheduling pods on specific nodes of the cluster (optional; default: ""; example: "key=value")
preflight.pvcStorageRequest: PVC storage request for the volume snapshot preflight check (optional; default: ""; example: "2Gi")
preflight.requests: pod memory and CPU resource requests for the DNS and volume snapshot preflight checks (optional; default: ""; example: "cpu=300m,memory=128Mi")
preflight.volumeSnapshotClass: name of the volume snapshot class to use for preflight checks (optional; default: "")
priorityClassName: name of the Priority Class used by all the triliovault deployments (optional; default: ""; example: "high-priority")
global.urlPath: expose the UI on a custom path (default: "/"; example: "/t4k")
Check out Modifying Default T4K Configuration to explore more T4K configuration options.
Check out T4K Integration with Observability Stack for additional options to enable the observability stack for T4K.
If using an external ingress controller, you must use the following command:
--set installTVK.ComponentConfiguration.ingressController.enabled=false --set installTVK.ingressConfig.ingressClass="" --set installTVK.ingressConfig.host="" --set installTVK.ingressConfig.tlsSecretName=""
To expose the UI on a custom path, you must use the following command: --set global.urlPath=/<custom-path>/
4. Check the output from the previous command and ensure that the installation was successful.
5. Check the TVM CR configuration using the following command:
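For example (a minimal sketch; the resource is assumed to be named triliovault-manager in the trilio-system namespace, and the full triliovaultmanagers resource name is used in case a short name is not registered in your release):
```
kubectl get triliovaultmanagers -A
kubectl get triliovaultmanagers triliovault-manager -n trilio-system -o yaml
```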
7. Once the operator pod is in a running state, confirm that the T4K pods are up and running:
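For example, assuming the trilio-system install namespace used in the sketches above:
```
kubectl get pods -n trilio-system
```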
8. If the install was not successful or the T4K pods were not spawned as expected:
Preflight jobs are not cleaned up immediately following failure. If your cluster version is 1.21 or above, the job is cleaned up after one hour, so you should collect any failure logs within one hour of a job failure.
Additionally, there is a bug on the helm side affecting the auto-deletion of resources following failure. Until this Helm bug is fixed, to run preflight again, users must clean the following resources left behind after the first failed attempt. Once this bug is fixed, the cleanup will be handled automatically. Run the following commands to clean up the temporary resources:
Cleanup Service Account:
Cleanup Cluster Role Binding:
Cleanup Cluster Role:
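A hedged sketch of the cleanup, with placeholder resource names; list the leftover preflight resources first to confirm the exact names created by your release:
```
# Find the leftover preflight resources (names vary by release)
kubectl get serviceaccounts -n <install-namespace> | grep -i preflight
kubectl get clusterrolebindings,clusterroles | grep -i preflight

# Cleanup Service Account
kubectl delete serviceaccount <preflight-serviceaccount-name> -n <install-namespace>

# Cleanup Cluster Role Binding
kubectl delete clusterrolebinding <preflight-clusterrolebinding-name>

# Cleanup Cluster Role
kubectl delete clusterrole <preflight-clusterrole-name>
```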
Manual Installation
To install the operator manually, use the latest helm chart from the following repository:
1. To add the repository where the triliovault-operator helm chart is located, use the same command shown in the Helm Quickstart Installation section above.
2. Install the chart from the added repository, but with the quick install method flag set to false, so that users can have more control over the installation:
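A minimal sketch, reusing the illustrative repository, chart, release, and namespace names from the quickstart section:
```
helm install tvm triliovault-operator/k8s-triliovault-operator \
  --namespace trilio-system --create-namespace \
  --set installTVK.enabled=false
```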
Note that in step 2, you can also set additional parameters as set out in Installation Configuration Options above.
3. Copy the sample TrilioVaultManager CR contents below and paste them into a new YAML file.
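Since the sample CR is not reproduced here, the following is an illustrative sketch only: the apiVersion and kind follow the TrilioVaultManager CRD, while the spec keys mirror the configuration names referenced on this page and should be checked against the sample CR shipped with your Trilio release:
```
cat > triliovault-manager.yaml <<'EOF'
apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  name: triliovault-manager
  labels:
    triliovault: k8s
spec:
  # Field names below are assumptions based on the options documented on this page;
  # compare with the sample CR in your T4K release before applying.
  applicationScope: Cluster
  tvkInstanceName: tvk-instance
  componentConfiguration:
    ingress-controller:
      enabled: true
      service:
        type: NodePort
EOF
```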
5. Customize the T4K resources configuration in the YAML file and then save it.
If using an external ingress controller, you must set these parameters in the YAML:
ingress-controller:
  enabled: false
To expose the UI on a custom path, you must set this parameter in the YAML:
helmValues:
  urlPath: /<custom-path>/
6. Now apply the CR YAML file using the command:
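For example, assuming the file name used in the sketch above and the trilio-system install namespace:
```
kubectl apply -f triliovault-manager.yaml -n trilio-system
```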
7. Once the operator pod is in a running state, confirm that the T4K pods are up.
8. If the install was not successful or the T4K pods were not spawned as expected:
Preflight jobs are not cleaned up immediately following failure. If your cluster version is 1.21 or above, the job is cleaned up after one hour, so you should collect any failure logs within one hour of a job failure.
Additionally, there is a bug on the helm side affecting auto-deletion of resources following failure. Until this Helm bug is fixed, to run preflight again, users must clean the following resources left behind after the first failed attempt. Once this bug is fixed, the cleanup will be handled automatically. Run the following commands to clean up the temporary resources:
Cleanup Service Account:
Cleanup Cluster Role Binding:
Cleanup Cluster Role:
9. Finally, check the T4K install:
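A minimal verification sketch, assuming the trilio-system install namespace:
```
# Operator and T4K application pods should be Running
kubectl get pods -n trilio-system

# The TrilioVaultManager resource should exist and report a healthy status
kubectl get triliovaultmanagers -A
```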
Proxy Enabled Environments
As a prerequisite, configure a proxy server, for example Squid Proxy.
To install Trilio for Kubernetes in proxy-enabled environments, install the operator (step 2 above) by providing the proxy settings:
HTTP_PROXY: proxy address to use when initiating HTTP connection(s)
HTTPS_PROXY: proxy address to use when initiating HTTPS connection(s)
NO_PROXY: network address(es), network address range(s), and domains to exclude from using the proxy when initiating connection(s)
Note: NO_PROXY must be in uppercase to use network range (CIDR) notation.
proxySettings.PROXY_ENABLED=true
proxySettings.HTTP_PROXY=http://<uname>:<password>@<IP>:<Port>
proxySettings.HTTPS_PROXY=https://<uname>:<password>@<IP>:<Port>
proxySettings.NO_PROXY="<according to user>"
proxySettings.CA_BUNDLE_CONFIGMAP="<configmap>"
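Putting the settings above together, a hedged example of the operator install for a proxy-enabled environment (the release, chart, and namespace names are illustrative, and the NO_PROXY value must be adapted to your network):
```
helm install tvm triliovault-operator/k8s-triliovault-operator \
  --namespace trilio-system --create-namespace \
  --set proxySettings.PROXY_ENABLED=true \
  --set proxySettings.HTTP_PROXY="http://<uname>:<password>@<IP>:<Port>" \
  --set proxySettings.HTTPS_PROXY="https://<uname>:<password>@<IP>:<Port>" \
  --set proxySettings.NO_PROXY="10.0.0.0/8,.cluster.local,localhost" \
  --set proxySettings.CA_BUNDLE_CONFIGMAP="<configmap-name>"
```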
For an HTTPS proxy, create the CA bundle proxy ConfigMap in the install namespace. The proxy CA certificate file key should be ca-bundle.crt.
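For example (the ConfigMap name and certificate path are placeholders; only the ca-bundle.crt key is prescribed above):
```
kubectl create configmap <configmap-name> \
  --from-file=ca-bundle.crt=/path/to/proxy-ca.crt \
  -n trilio-system
```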
After the operator is created by specifying proxy settings, the TVM will pick up these settings and leverage them directly for operations. No other configuration is required.
Licensing Trilio for Kubernetes
To generate and apply the Trilio license, perform the following steps:
Although a cluster license enables Trilio features across all namespaces in a cluster, the license only needs to be applied in the namespace where Trilio is installed, for example the trilio-system namespace.
1. Obtain a license by getting in touch with us here. The license file will contain the license key.
2. Apply the license file to a Trilio instance using the command line or UI:
Execute the following command:
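For example, using the same kubectl apply pattern described in the license upgrade section below:
```
kubectl apply -f <license-file> -n trilio-system
```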
2. If the previous step is successful, check that the output generated is similar to the following:
Additional license details can be obtained using the following:
kubectl get license -n trilio-system -o json
Upgrading a license
A license upgrade is required when moving from one license type to another.
Trilio maintains only one instance of a license for every installation of Trilio for Kubernetes.
To upgrade a license, run kubectl apply -f <licensefile> -n <install-namespace> against a new license file to activate it. The previous license will be replaced automatically.
Create a Target
The Target CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML.
The Target object references the NFS or S3 storage share you provide as a target for your backups/snapshots. Trilio will create a validation pod in the namespace where Trilio is installed and attempt to validate the NFS or S3 settings you have defined in the Target CR.
Trilio makes it easy to automatically create your Target CR from the Management Console.
Learn how to Create a Target from the Management Console
Take control of Trilio and define your own self-prepared YAML and apply it to the cluster using the oc/kubectl tool.
Example S3 Target
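A hedged sketch of an S3 Target CR, applied with a heredoc; the spec fields follow the general shape of T4K Target objects but should be treated as assumptions and confirmed against the Example Target YAML linked below:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
  namespace: trilio-system
spec:
  # Field names are assumptions for illustration; verify against the linked examples.
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: my-backup-bucket
    accessKey: <access-key>
    secretKey: <secret-key>
  thresholdCapacity: 100Gi
EOF
```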
See more Example Target YAML
Testing Backup, Snapshot and Restore Operation
Trilio is a cloud-native application for Kubernetes, therefore all operations are managed with CRDs (Custom Resource Definitions). We will discuss the purpose of each Trilio CR and provide examples of how to create these objects automatically in the Trilio Management Console or from the oc/kubectl tool.
About Backup Plans
The Backup Plan CR is defined from the Trilio Management Console or from your own self-prepared YAML.
The Backup Plan CR must reference the following:
Your Application Data (label/helm/operator)
BackupConfig:
  Target CR
  Scheduling Policy CR
  Retention Policy CR
SnapshotConfig:
  Target CR
  Scheduling Policy CR
  Retention Policy CR
A Target CR is defined from the Trilio Management Console or from your own self-prepared YAML. Trilio will test the backup target to ensure it is reachable and writable. Look at the Trilio validation pod logs to troubleshoot any backup target creation issues.
Retention and Schedule Policy CRs are defined from the Trilio Management Console or from your own self-prepared YAML.
Scheduling Policies allow users to automate the backup/Snapshot of Kubernetes applications on a periodic basis. With this feature, users can create a scheduling policy that includes multiple cron strings to specify the frequency of backups.
Retention Policies make it easy for users to define the number of backups/snapshots they want to retain and the rate at which old backups/snapshots should be deleted. With the retention policy CR, users can use a simple YAML specification to define the number of backups/snapshots to retain in terms of days, weeks, months, years, or the latest backup/snapshots. This provides a flexible and customizable way to manage your backup/snapshots retention policy and ensure you meet your compliance requirements.
The Backup and Snapshot CR is defined from the Trilio Management Console or from your own self-prepared YAML.
The backup/snapshot object references the actual backup Trilio creates on the Target. The backup is taken as either a Full or Incremental backup, as defined by the user in the Backup CR. The snapshot is taken as a Full snapshot only.
Creating a Backup Plan
Trilio makes it easy to automatically create your backup plans and all required target and policy CRDs from the Management Console.
Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the oc/kubectl tool.
Example Namespace Scope BackupPlan:
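A hedged, namespace-scoped BackupPlan sketch; the field names are assumptions based on the relationships described above (a backupConfig referencing a Target and policy CRs), so confirm them against the Backup Plan YAML examples linked below:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: sample-backupplan
  namespace: demo-ns
spec:
  # Field names are assumptions for illustration; see the linked BackupPlan examples.
  backupConfig:
    target:
      name: demo-s3-target
      namespace: trilio-system
    retentionPolicy:
      name: sample-retention-policy
      namespace: demo-ns
EOF
```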
The target in the backupConfig and snapshotConfig needs to be the same. Users can specify different retention and schedule policies under backupConfig and snapshotConfig.
See more Examples of Backup Plan YAML
Creating a Backup
Learn more about Creating Backups from the Management Console
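To create a backup from the command line instead, here is a minimal Backup CR sketch referencing the BackupPlan above; the field names are assumptions, so confirm them against the backup examples in your T4K documentation:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: sample-backup
  namespace: demo-ns
spec:
  # "type" may be Full or Incremental, as described in the Backup CR section above.
  type: Full
  backupPlan:
    name: sample-backupplan
    namespace: demo-ns
EOF
```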
Creating a Snapshot
Learn more about Creating Snapshots from the Management Console
About Restore
A Restore CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML. The Restore CR references a backup object which has been created previously from a Backup CR.
In a Migration scenario, the location of the backup/snapshot should be specified within the desired target, as there will be no Backup/Snapshot CR defining the location. If you are migrating a snapshot, make sure that the actual Persistent Volume snapshots are accessible from the other cluster.
Trilio restores the backup/snapshot into a specified namespace and upon completion of the restore operation, the application is ready to be used on the cluster.
Creating a Restore
Trilio makes it easy to automatically create your Restore CRDs from the Management Console.
Learn more about Creating Restores from the Management Console
Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the oc/kubectl tool.
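A minimal Restore CR sketch referencing the backup created above; the field names are assumptions for illustration, so confirm them against the Restore YAML examples linked below:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: sample-restore
  namespace: demo-ns
spec:
  # Field names are assumptions for illustration; see the linked Restore examples.
  source:
    type: Backup
    backup:
      name: sample-backup
      namespace: demo-ns
EOF
```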
See more Examples of Restore YAML
Troubleshooting
Problems? Learn about Troubleshooting Trilio for Kubernetes