Getting Started with Trilio for Upstream Kubernetes (K8S)
Learn how to install, license and test Trilio for Kubernetes (T4K) in an upstream Kubernetes environment.
Trilio for Kubernetes is a cloud-native backup and restore application. Being a cloud-native application for Kubernetes, all operations are managed with CRDs (Custom Resource Definitions).
Trilio utilizes Control Plane and Data Plane controllers to carry out the backup and restore operations defined by the associated CRDs. When a custom resource is created or modified, the controller reconciles its definition against the cluster.
Trilio gives you the power and flexibility to back up your entire cluster or to scope backup operations to specific namespaces, labels, Helm charts, or Operators.
In this tutorial, we'll show you how to install and test the operation of Trilio for Kubernetes on your upstream Kubernetes deployment.
- Confirm Compatibility
Before installing Trilio for Kubernetes, please review the compatibility matrix to ensure Trilio can function smoothly in your Kubernetes environment.
- Verify that a CSI driver providing Snapshot functionality is installed
Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature.
Check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on the installation of the driver in your cluster.
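Before moving on, you can check what is already registered in the cluster with standard kubectl commands (the last command returns results only once the snapshot CRDs covered in the next step are installed):
kubectl get csidrivers
kubectl get storageclass
kubectl get volumesnapshotclass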
Trilio will assume that the selected storage driver is a supported CSI driver when its volumesnapshotclass and storageclass are utilized.
- Verify that the required Custom Resource Definitions (CRDs) are installed
Trilio for Kubernetes requires the following Custom Resource Definitions (CRDs) to be installed on your cluster: VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass.
Before attempting to install the VolumeSnapshot CRDs, it is important to confirm that the CRDs are not already present on the system.
To do this, run the following command:
kubectl api-resources | grep volumesnapshot
If CRDs are already present, the output should be similar to the output displayed below. The second column displays the API group and version of the installed CRDs (snapshot.storage.k8s.io/v1 in this case). Ensure that it is the correct version required by the CSI driver being used.
volumesnapshotclasses vsclass,vsclasses snapshot.storage.k8s.io/v1 false VolumeSnapshotClass
volumesnapshotcontents vsc,vscs snapshot.storage.k8s.io/v1 false VolumeSnapshotContent
volumesnapshots vs snapshot.storage.k8s.io/v1 true VolumeSnapshot
Installing CRDs
Be sure to only install one version of VolumeSnapshot CRDs
- 1.Read the external-snapshotter GitHub project documentation. The external-snapshotter is compatible with both Kubernetes v1.19 and v1.20+.
- 2.Run the following commands to install the CRDs directly; check the repository for the latest release version:
RELEASE_VERSION=6.0
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
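After applying the manifests, re-run the earlier check to confirm that all three CRDs are now registered:
kubectl api-resources | grep volumesnapshot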
- Network Access Requirements
For non-air-gapped environments, the following URLs must be accessible from your Kubernetes cluster.
- Access to the S3 endpoint if the backup target is S3
- Access to application artifacts registry for image backup/restore
- Network Port Requirements
If the Kubernetes cluster's control plane and worker nodes are separated by a firewall, the firewall must allow traffic on the following port(s):
- 9443
Make sure your cluster is ready to install Trilio for Kubernetes by installing the Preflight Check Plugin and running the Trilio Preflight Check.
Trilio provides a preflight check tool that allows customers to validate their environment for Trilio installation.
The tool generates a report detailing all the requirements and whether they are met or not.
If you encounter any failures, please send the Preflight Check output to your Trilio Professional Services and Solutions Architect so we may assist you in satisfying any missing requirements before proceeding with the installation.
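As an illustration, a preflight run with the kubectl plugin installed looks roughly like the following. The plugin source, name, and flags here are assumptions for the sketch; take the exact commands from the Preflight Check Plugin documentation:
# Illustrative only -- consult the Preflight Check Plugin docs for your version
kubectl krew install tvk-plugins/tvk-preflight
kubectl tvk-preflight run --storage-class <storage-class-name>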
Follow the instructions in this section to install Trilio for Kubernetes in an upstream Kubernetes environment. This section assumes that you have kubectl and helm installed and correctly configured to work with the desired Kubernetes cluster. T4K supports Helm v3.
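You can quickly confirm that both tools are available and that Helm is v3:
kubectl version --client
helm version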
There are multiple methods of installing:
In this installation method for the upstream operator, a cluster-scoped TVM custom resource, triliovault-manager, is created. Perform the following steps to install:
- 1.To add the repository where the triliovault-operator helm chart is located, use the command:
helm repo add triliovault-operator https://charts.k8strilio.net/trilio-stable/k8s-triliovault-operator
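After adding the repository, refresh your local chart index so the latest chart version is available:
helm repo update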
2. Install the chart from the added repository:
Using Default Configurations
User-defined Configurations
To install the chart from the added repository using default configurations, use the following command:
helm install tvm triliovault-operator/k8s-triliovault-operator
Instead of using the default configurations, you can set optional parameters by adding them to the default install command shown in the first tab. The following Installation Configuration Options table lists the configuration parameters of the upstream operator install feature as well as the preflight check flags, their default values, and usage. The example below shows the install command with various configuration parameters set:
helm install tvm triliovault-operator/k8s-triliovault-operator --set preflight.enabled=true,preflight.cleanupOnFailure=true,preflight.storageClass=<storage-class-name>
Parameter | Description | Default | Example |
---|---|---|---|
installTVK.enabled | T4K-Quickstart install feature is enabled | true | |
installTVK.applicationScope | scope of T4K application created | Cluster | |
installTVK.tvkInstanceName | tvk instance name | "" | "tvk-instance" |
installTVK.ingressConfig.host | host of the ingress resource created | "" | |
installTVK.ingressConfig.tlsSecretName | tls secret name which contains ingress certs | "" | |
installTVK.ingressConfig.annotations | annotations to be added on ingress resource | "" | |
installTVK.ingressConfig.ingressClass | ingress class name for the ingress resource | "" | |
installTVK.ComponentConfiguration.ingressController.enabled | T4K ingress controller should be deployed | true | |
installTVK.ComponentConfiguration.ingressController.service.type | T4K ingress controller service type | "NodePort" | |
preflight.enabled | enables preflight check for tvk | false | |
preflight.storageClass | Name of storage class to use for preflight checks (Required) | "" | |
preflight.cleanupOnFailure | Cleanup the resources on cluster if preflight checks fail (Optional) | false | |
preflight.imagePullSecret | Name of the secret for authentication while pulling the images from the local registry (Optional) | "" | |
preflight.limits | Pod memory and cpu resource limits for DNS and volume snapshot preflight check (Optional) | "" | "cpu=600m,memory=256Mi" |
preflight.localRegistry | Name of the local registry from where the images will be pulled (Optional) | "" | |
preflight.nodeSelector | Node selector labels for pods to schedule on a specific nodes of cluster (Optional) | "" | "key=value" |
preflight.pvcStorageRequest | PVC storage request for volume snapshot preflight check (Optional) | "" | "2Gi" |
preflight.requests | Pod memory and cpu resource requests for DNS and volume snapshot preflight check (Optional) | "" | "cpu=300m,memory=128Mi" |
preflight.volumeSnapshotClass | Name of volume snapshot class to use for preflight checks (Optional) | "" | |
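Equivalently, you can keep these parameters in a Helm values file rather than a long --set string. A minimal sketch (the file name preflight-values.yaml is arbitrary; the keys mirror the table above):
# preflight-values.yaml
preflight:
  enabled: true
  cleanupOnFailure: true
  storageClass: <storage-class-name>
helm install tvm triliovault-operator/k8s-triliovault-operator -f preflight-values.yaml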
Check out T4K Integration with Observability Stack for additional options to enable observability stack for T4K.
3. If using an external ingress controller, you must add the following flags to the install command:
--set installTVK.ComponentConfiguration.ingressController.enabled=false --set installTVK.ingressConfig.ingressClass="" --set installTVK.ingressConfig.host="" --set installTVK.ingressConfig.tlsSecretName=""
4. Check the output from the previous command and ensure that the installation was successful.
5. Check the TVM CR configuration using the following command:
kubectl get triliovaultmanagers.triliovault.trilio.io triliovault-manager -o yaml
6. Optionally, if you wish to access the T4K UI via HTTPS, you must create a TLS secret and edit the TVM CR configuration. Refer to Access over HTTPS - Prerequisite for more details.
7. Once the operator pod is in a running state, confirm that the T4K pods are up and running:
- Firstly, check that the pods were created:
kubectl get pods
The readout should be similar to this:
NAME READY STATUS RESTARTS AGE
k8s-triliovault-admission-webhook-6ff5f98c8-qwmfc 1/1 Running 0 81s
k8s-triliovault-web-backend-6f66b6b8d5-gxtmz 1/1 Running 0 81s
k8s-triliovault-control-plane-6c464c5d78-ftk6g 1/1 Running 0 81s
k8s-triliovault-exporter-59566f97dd-gs4xc 1/1 Running 0 81s
k8s-triliovault-ingress-nginx-controller-867c764cd5-qhpx6 1/1 Running 0 18s
k8s-triliovault-web-967c8475-m7pc6 1/1 Running 0 81s
k8s-triliovault-operator-66bd7d86d5-dvhzb 1/1 Running 0 6m48s
- Secondly, check that the ingress controller service is of type NodePort:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8s-triliovault-admission-webhook ClusterIP 10.7.243.24 <none> 443/TCP 129m
k8s-triliovault-ingress-nginx-controller NodePort 10.7.246.193 35.203.155.148 80:30362/TCP,443:32327/TCP 129m
k8s-triliovault-ingress-nginx-controller-admission ClusterIP 10.7.250.31 <none> 443/TCP 129m
k8s-triliovault-web ClusterIP 10.7.254.41 <none> 80/TCP 129m
k8s-triliovault-web-backend ClusterIP 10.7.252.146 <none> 80/TCP 129m
k8s-triliovault-operator-webhook-service ClusterIP 10.7.248.163 <none> 443/TCP 130m
- Thirdly, check that ingress resources have the host defined by the user:
NAME CLASS HOSTS ADDRESS PORTS AGE
k8s-triliovault k8s-triliovault-default-nginx * 35.203.155.148 80 129m
- Lastly, check that you can access the T4K UI by entering this address in your browser: https://35.203.155.148. Trilio is now successfully installed on your cluster.
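If no external address is assigned to the ingress controller, the UI can also be reached through any node IP plus the controller's HTTPS node port. In the sample service output above, port 443 maps to node port 32327:
kubectl get svc k8s-triliovault-ingress-nginx-controller
# With the 443:32327 mapping shown above, the UI would be at https://<node-ip>:32327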
8. If the install was not successful or the T4K pods were not spawned as expected:
Cluster version 1.21 or above
Cluster version below 1.21
Preflight jobs are not cleaned up immediately following failure. If your cluster version is 1.21 or above, the job is cleaned up after one hour, so you should collect any failure logs within one hour of a job failure.
Additionally, there is a bug on the helm side affecting the auto-deletion of resources following failure. Until this Helm bug is fixed, to run preflight again, users must clean the following resources left behind after the first failed attempt. Once this bug is fixed, the cleanup will be handled automatically. Run the following commands to clean up the temporary resources:
- Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
- Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
- Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
For cluster versions below 1.21, you must manually clean up failed preflight jobs. To delete a job manually, run the following command:
kubectl delete job <job-name> -n <helm-release-namespace>
The job name will start with:
<helm-release-name>-preflight-job-preinstall-hook
Additionally, there is a bug on the helm side affecting auto-deletion of resources following failure. Until this Helm bug is fixed, to run preflight again, users must clean the following resources left behind after the first failed attempt. Once this bug is fixed, the cleanup will be handled automatically. Run the following commands to clean up the temporary resources:
- Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
- Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
- Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
To install the operator manually, install the latest helm chart from the following repository:
- 1.To add the repository where the triliovault-operator helm chart is located, use the command:
helm repo add triliovault-operator https://charts.k8strilio.net/trilio-stable/k8s-triliovault-operator
2. Install the chart from the added repository, but with the quick-install flag set to false, so that you have more control over the installation:
helm install tvm triliovault-operator/k8s-triliovault-operator --set installTVK.enabled=false
Note that in step 2, you can also set additional parameters as set out in Installation Configuration Options above.
3. Copy the sample TrilioVaultManager CR contents below and paste them into a new YAML file.
apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  labels:
    triliovault: k8s
  name: tvk
spec:
  trilioVaultAppVersion: latest
  applicationScope: Cluster
  # User can configure tvk instance name
  tvkInstanceName: tvk-instance
  # User can configure the ingress hosts, annotations and TLS secret through the ingressConfig section
  ingressConfig:
    host: ""
    tlsSecretName: "secret-name"
  # T4K components configuration, currently supports control-plane, web, exporter, web-backend, ingress-controller, admission-webhook.
  # User can configure resources for all components and can configure service type and host for the ingress-controller
  componentConfiguration:
    web-backend:
      resources:
        requests:
          memory: "400Mi"
          cpu: "200m"
        limits:
          memory: "2584Mi"
          cpu: "1000m"
    ingress-controller:
      enabled: true
      service:
        type: LoadBalancer
4. Optionally, if you wish to access the T4K UI via HTTPS, you must create a TLS secret for use in the next step. Refer to Access over HTTPS - Prerequisite for more details.
5. Customize the T4K resources configuration in the YAML file and then save it.
If using an external ingress controller, you must set these parameters in the YAML:
componentConfiguration:
  ingress-controller:
    enabled: false
6. Now apply the CR YAML file using the command:
kubectl create -f TVM.yaml
7. If the install was not successful or the T4K pods were not spawned as expected:
Cluster version 1.21 or above
Cluster version below 1.21
Preflight jobs are not cleaned up immediately following failure. If your cluster version is 1.21 or above, the job is cleaned up after one hour, so you should collect any failure logs within one hour of a job failure.
Additionally, there is a bug on the helm side affecting auto-deletion of resources following failure. Until this Helm bug is fixed, to run preflight again, users must clean the following resources left behind after the first failed attempt. Once this bug is fixed, the cleanup will be handled automatically. Run the following commands to clean up the temporary resources:
- Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
- Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
- Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
For cluster versions below 1.21, you must manually clean up failed preflight jobs. To delete a job manually, run the following command:
kubectl delete job <job-name> -n <helm-release-namespace>
The job name will start with:
<helm-release-name>-preflight-job-preinstall-hook
Additionally, there is a bug on the helm side affecting auto-deletion of resources following failure. Until this Helm bug is fixed, to run preflight again, users must clean the following resources left behind after the first failed attempt. Once this bug is fixed, the cleanup will be handled automatically. Run the following commands to clean up the temporary resources:
- Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
- Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
- Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
8. Finally, check the T4K install:
- Firstly, check that the pods were created:
kubectl get pods
The readout should be similar to this:
NAME READY STATUS RESTARTS AGE
k8s-triliovault-admission-webhook-6ff5f98c8-qwmfc 1/1 Running 0 81s
k8s-triliovault-web-backend-6f66b6b8d5-gxtmz 1/1 Running 0 81s
k8s-triliovault-control-plane-6c464c5d78-ftk6g 1/1 Running 0 81s
k8s-triliovault-exporter-59566f97dd-gs4xc 1/1 Running 0 81s
k8s-triliovault-ingress-nginx-controller-867c764cd5-qhpx6 1/1 Running 0 18s
k8s-triliovault-web-967c8475-m7pc6 1/1 Running 0 81s
k8s-triliovault-operator-66bd7d86d5-dvhzb 1/1 Running 0 6m48s
- Secondly, check that the ingress controller service is of type LoadBalancer.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8s-triliovault-admission-webhook ClusterIP 10.7.243.24 <none> 443/TCP 129m
k8s-triliovault-ingress-nginx-controller LoadBalancer 10.7.246.193 35.203.155.148 80:30362/TCP,443:32327/TCP 129m
k8s-triliovault-ingress-nginx-controller-admission ClusterIP 10.7.250.31 <none> 443/TCP 129m
k8s-triliovault-web ClusterIP 10.7.254.41 <none> 80/TCP 129m
k8s-triliovault-web-backend ClusterIP 10.7.252.146 <none> 80/TCP 129m
k8s-triliovault-operator-webhook-service ClusterIP 10.7.248.163 <none> 443/TCP 130m
- Thirdly, check that ingress resources have the host defined by the user:
NAME CLASS HOSTS ADDRESS PORTS AGE
k8s-triliovault k8s-triliovault-default-nginx * 35.203.155.148 80 129m
- Lastly, check that you can access the T4K UI by entering this address in your browser: https://35.203.155.148. Trilio is now successfully installed on your cluster.
To install Trilio for Kubernetes in proxy-enabled environments, install the operator (step 2 above) by providing the following proxy settings:
Environment Variable | Purpose |
---|---|
HTTP_PROXY | Proxy address to use when initiating HTTP connection(s) |
HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) |
NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) |
Note NO_PROXY must be in uppercase to use network range (CIDR) notation.
- proxySettings.PROXY_ENABLED=true
- proxySettings.HTTP_PROXY=http://<uname>:<password>@<IP>:<Port>
- proxySettings.HTTPS_PROXY=https://<uname>:<password>@<IP>:<Port>
- proxySettings.NO_PROXY="<according to user>"
- proxySettings.CA_BUNDLE_CONFIGMAP="<configmap>"
- For an HTTPS proxy, create a CA bundle proxy ConfigMap in the install namespace
- The proxy CA certificate file key should be ca-bundle.crt
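For example, you can create the CA bundle ConfigMap referenced by CA_BUNDLE_CONFIGMAP before installing; the ConfigMap name and certificate path below are placeholders, but the key must be ca-bundle.crt:
kubectl create configmap <proxy-configmap> -n <install-namespace> --from-file=ca-bundle.crt=<path-to-ca-certificate>
Then provide the proxy settings on the operator install: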
helm install tvm triliovault-operator/k8s-triliovault-operator \
--set proxySettings.PROXY_ENABLED=true \
--set proxySettings.NO_PROXY="localhost\,127.0.0.1\,10.239.112.0\/20\,10.240.0.0\/14" \
--set proxySettings.HTTP_PROXY=http://<uname>:<password>@<IP>:<Port> \
--set proxySettings.HTTPS_PROXY=https://<uname>:<password>@<IP>:<Port> \
--set proxySettings.CA_BUNDLE_CONFIGMAP="<proxy-configmap>"
After the operator is created by specifying proxy settings, the TVM will pick up these settings and leverage them directly for operations. No other configuration is required.
To generate and apply the Trilio license, perform the following steps:
- 1.You must have your kube-system UUID available before generating a license. This can be achieved as follows:
kubectl get ns kube-system -o jsonpath='{.metadata.uid}'
Though a cluster license enables Trilio features across all namespaces in a cluster, the license should only be created in the Trilio install namespace.
2. A license file must be generated for your specific environment.
a) Navigate to your Trilio Welcome email.

b) Click on the License link.
c) On the Trilio for Kubernetes License page, choose the Clustered scope.

d) Provide the kube-system Namespace UUID obtained in Step 1.
e) Click Generate License.
f) On the details confirmation page, copy or download the highlighted text to a file.

You can use the download button to save the highlighted text as a local file or use the copy button to copy the text and create your file manually.
3. Once the license file has been created, apply it to a Trilio instance using the command line or UI:
Command line
Management Console (UI)
- 1.Execute the following command:
kubectl apply -f <licensefile> -n trilio-system
2. If the previous step is successful, check that the output generated is similar to the following:
NAMESPACE NAME STATUS MESSAGE CURRENT CPU COUNT GRACE PERIOD END TIME EDITION CAPACITY EXPIRATION TIME MAX CPUS
trilio-system license-sample Active Cluster License Activated successfully. 4 FreeTrial 10 2025-07-08T00:00:00Z 8
Additional license details can be obtained using the following command:
kubectl get license -n trilio-system -o json
If you have already satisfied the prerequisites above, refer to the guide for applying a license in the UI: Actions: License Update
A license upgrade is required when moving from one license type to another.
Trilio maintains only one instance of a license for every installation of Trilio for Kubernetes.
To upgrade a license, run
kubectl apply -f <licensefile> -n <install-namespace>
against a new license file to activate it. The previous license will be replaced automatically.
The Target CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML.
The Target object references the NFS or S3 backup storage share you provide as a target for your backups. Trilio will create a validation pod in the corresponding namespace and attempt to validate the NFS or S3 settings you have defined in the Target CR.
Trilio makes it easy to automatically create your backup Target CRD from the Management Console.
Take control of Trilio and define your own self-prepared YAML and apply it to the cluster using the kubectl tool.
kubectl apply -f sample-secret.yaml
kubectl apply -f demo-s3-target.yaml
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVDcode
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 5Gi
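After applying the secret and Target, Trilio spawns a validation pod in the target's namespace. You can watch the Target CR until validation completes (exact status values vary by T4K version):
kubectl get target demo-s3-target -n <target-namespace>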
Trilio is a cloud-native application for Kubernetes, therefore all operations are managed with CRDs (Custom Resource Definitions). We will discuss the purpose of each Trilio CRD and provide examples of how to create these objects automatically in the Trilio Management Console or from the kubectl tool.
- The Backup Plan CR is defined from the Trilio Management Console or from your own self-prepared YAML.
The Backup Plan CR must reference the following:
- 1.Your Application Data (label/helm/operator)
- 2.Backup Target CR
- 3.Scheduling Policy CR
- 4.Retention Policy CR
- A Target CR is defined from the Trilio Management Console or from your own self-prepared YAML. Trilio will test the backup target to ensure it is reachable and writable. Look at the Trilio validation pod logs to troubleshoot any backup target creation issues.
- Retention and Schedule Policy CRs are defined from the Trilio Management Console or from your own self-prepared YAML.
- Scheduling Policies allow users to automate the backup of Kubernetes applications on a periodic basis. With this feature, users can create a scheduling policy that includes multiple cron strings to specify the frequency of backups.
- Retention Policies make it easy for users to define the number of backups they want to retain and the rate at which old backups should be deleted. With the retention policy CR, users can use a simple YAML specification to define the number of backups to retain in terms of days, weeks, months, years, or the latest backup. This provides a flexible and customizable way to manage your backup retention policy and ensure you meet your compliance requirements.
- The Backup CR is defined from the Trilio Management Console or from your own self-prepared YAML. The backup object references the actual backup Trilio creates on the Target. The backup is taken as either a Full or Incremental backup, as defined by the user in the Backup CR.
Trilio makes it easy to automatically create your backup plans and all required target and policy CRDs from the Management Console.
Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the kubectl tool.
# kubectl apply -f ns-backupplan.yaml
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
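The Backup Plan CR can also reference the scheduling and retention policies shown below. A sketch assuming the sample policies created later in this section; the policy field names may differ slightly between T4K versions:
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
    # Hypothetical policy references for illustration
    schedulePolicy:
      fullBackupPolicy:
        name: sample-schedule
        namespace: default
    retentionPolicy:
      name: sample-retention
      namespace: default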
kubectl apply -f sample-schedule.yaml
kind: "Policy"
apiVersion: "triliovault.trilio.io/v1"
metadata:
  name: "sample-schedule"
spec:
  type: "Schedule"
  scheduleConfig:
    schedule:
      - "0 0 * * *"    # daily at midnight
      - "0 */1 * * *"  # every hour
      - "0 0 * * 0"    # weekly on Sunday
      - "0 0 1 * *"    # monthly on the 1st
      - "0 0 1 1 *"    # yearly on January 1
kubectl apply -f sample-retention.yaml
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: sample-retention
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
kubectl apply -f sample-backup.yaml
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: sample-backup
spec:
  type: Full
  backupPlan:
    name: ns-backupplan
    namespace: default
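After applying, you can monitor the Backup CR until the operation completes (status values vary by T4K version):
kubectl get backup sample-backup -n default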
A Restore CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML. The Restore CR references a backup object which has been created previously from a Backup CR.
In a Migration scenario, the location of the backup should be specified within the desired target as there will be no Backup CR defining the location.
Trilio restores the backup into a specified namespace and upon completion of the restore operation, the application is ready to be used on the cluster.
Trilio makes it easy to automatically create your Restore CRDs from the Management Console.
Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the kubectl tool.
kubectl apply -f sample-restore.yaml
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: sample-restore
spec:
  source:
    type: Backup
    backup:
      name: sample-backup
      namespace: default
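As with backups, you can watch the Restore CR until the application is available in the target namespace:
kubectl get restore sample-restore -n default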