Getting Started with Trilio for AWS Elastic Kubernetes Service (EKS)
Learn how to install, license and test Trilio for Kubernetes (T4K) in the AWS Elastic Kubernetes Service (EKS) environment.
Trilio for Kubernetes is a cloud-native backup and restore application. As a cloud-native application for Kubernetes, all of its operations are managed with Custom Resource Definitions (CRDs).
Trilio utilizes Control Plane and Data Plane controllers to carry out the backup and restore operations defined by the associated CRDs. When a CRD is created or modified the controller reconciles the definitions to the cluster.
Trilio gives you the power and flexibility to back up your entire cluster or to select specific namespaces, labels, Helm charts, or Operators as the scope for your backup operations.
In this tutorial, we'll show you how to install Trilio for Kubernetes on your EKS deployment and test its operation.
- Confirm Compatibility
Before installing Trilio for Kubernetes, please review the compatibility matrix to ensure Trilio can function smoothly in your Kubernetes environment.
- Verify that a CSI Driver providing Snapshot functionality is installed
Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature.
Check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on the installation of the driver in your cluster.
Trilio will assume that the selected storage driver is a supported CSI driver when its volumesnapshotclass and storageclass are utilized.
- Verify that the Required Custom Resource Definitions (CRDs) are installed
Trilio for Kubernetes requires the following Custom Resource Definitions (CRDs) to be installed on your cluster: VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass.
Before attempting to install the VolumeSnapshot CRDs, it is important to confirm that the CRDs are not already present on the system.
To do this, run the following command:
kubectl api-resources | grep volumesnapshot
If the CRDs are already present, the output should be similar to the example below. The third column displays the API group and version of the installed CRDs (snapshot.storage.k8s.io/v1 in this case). Ensure that it is the version required by the CSI driver being used.
volumesnapshotclasses    vsclass,vsclasses   snapshot.storage.k8s.io/v1   false   VolumeSnapshotClass
volumesnapshotcontents   vsc,vscs            snapshot.storage.k8s.io/v1   false   VolumeSnapshotContent
volumesnapshots          vs                  snapshot.storage.k8s.io/v1   true    VolumeSnapshot
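On EKS, for example, the AWS EBS CSI driver (ebs.csi.aws.com) provides snapshot support. A minimal snapshot-capable StorageClass and VolumeSnapshotClass pairing might look like the following sketch (the class names here are illustrative; pick your own):

```yaml
# Illustrative class names; ebs.csi.aws.com is the EBS CSI driver's registered name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-snapclass
driver: ebs.csi.aws.com
deletionPolicy: Delete
```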
Installing CRDs
Be sure to install only one version of the VolumeSnapshot CRDs.
- 1. Read the external-snapshotter GitHub project documentation. The snapshot CRDs are compatible with both Kubernetes v1.19 and v1.20+.
- 2. Run the following commands to install the CRDs directly; check the repo for the latest release version:
RELEASE_VERSION=6.0
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
- Network Access Requirements
For non-air-gapped environments, the following must be reachable from your Kubernetes cluster:
- The S3 endpoint, if the backup target is S3
- The application artifact registry, for image backup/restore
- Network Port Requirements
If the Kubernetes cluster's control plane and worker nodes are separated by a firewall, then the firewall must allow traffic on the following port(s):
- 9443
Make sure your cluster is ready to install Trilio for Kubernetes by installing the Preflight Check Plugin and running the Trilio Preflight Check.
Trilio provides a preflight check tool that allows customers to validate their environment for Trilio installation.
The tool generates a report detailing all the requirements and whether they are met or not.
If you encounter any failures, please send the Preflight Check output to your Trilio Professional Services and Solutions Architect so we may assist you in satisfying any missing requirements before proceeding with the installation.
There are two methods for installing T4K on an AWS EKS cluster:
- 1. The Trilio for Kubernetes application is listed in the AWS Marketplace, where users can opt for a Long Term Subscription to the product.
- 2. Users can follow the installation instructions provided in Getting Started with Trilio for Upstream Kubernetes (K8S) environments to install T4K into EKS clusters.
Both installation methods install the following:
- 1. The Trilio for Kubernetes Operator, in the tvk namespace
- 2. The Trilio for Kubernetes Manager, in the tvk namespace
Follow the step-by-step instructions below to install T4K from the AWS Marketplace:
- 1. Search for Trilio on the AWS Marketplace and select the Trilio for Kubernetes application offer.
- 2. This offer is built for a long-term contractual license. It is valid for one year at a price of $1,000 per node (by default, one node is counted as 4 vCPUs).
- 3. A Helm chart is used to perform the product installation. You can install the product on an existing EKS cluster or use a CloudFormation Template (CFT) to automatically create a new EKS cluster with T4K installed on it.
- 4. After T4K is installed, apply the license acquired from the Trilio Professional Services and Solutions Architecture team.
- 5. If you face any issues, contact the Support team using the information on the Support tab.
- 6. Click the Continue to Subscribe button on the product listing page.
- 7. Verify that the BYOL offer price is listed as $0 and click the Accept Terms button to proceed.
- 8. Once the terms are accepted, the Effective Date will be updated in the offer. Now, click the Continue to Configuration button to proceed to the installation commands.
- 9. Choose Helm Installation as the Fulfillment option and select the desired Software version from the listed versions. Click the Continue to Launch button.
- 10. Under Launch method, you can select from two options:
  - 1. Launch on existing cluster: install T4K on your existing EKS cluster.
    - 1. Log in to the existing EKS cluster through the CLI and connect to AWS through awscli.
    - 2. Follow the commands to create the AWS IAM role and Kubernetes Service Account on AWS.
    - 3. Follow the commands under the Launch the Software section to pull the Helm chart and install the product.
  - 2. Launch a new EKS cluster with QuickLaunch:
    - 1. Click QuickLaunch with CloudFormation to trigger the template deployment.
    - 2. Provide the Stack name and EKS cluster name to create the stack.
    - 3. Click the Create stack button at the bottom to start the stack deployment.
The T4K user interface facilitates authentication through kubeconfig files, which house elements such as tokens, certificates, and auth-provider information. However, in some Kubernetes cluster distributions, the kubeconfig might include cloud-specific exec actions or auth-provider configurations to retrieve the authentication token via the credentials file. By default, this is not supported.
When using kubeconfig on the local system, any cloud-specific action or config in the user section of the kubeconfig will seek the credentials file in a specific location. This allows the kubectl/client-go library to generate an authentication token for use in authentication. However, when the T4K Backend is deployed in the Cluster Pod, the credentials file necessary for token generation is not accessible within the Pod.
To rectify this, T4K features cloud distribution-specific support to manage and generate tokens from these credential files.
- 1. In an EKS cluster, a local binary known as aws (aws-cli) is used to pull the credentials from a file named credentials.
- 2.This file is located under the path $HOME/.aws and is used to generate an authentication token.
- 3.When a user attempts to log into the T4K user interface deployed in an EKS cluster, they are expected to supply the credentials file from the location $HOME/.aws for successful authentication.
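For reference, a minimal $HOME/.aws/credentials file has this shape (the keys below are placeholders, not real credentials):

```ini
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

The exec section of the kubeconfig below invokes `aws eks get-token`, which reads this file to mint a short-lived authentication token.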
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFR
    server: https://6C74ACD3CA40CFCB719CF3464423ADA9.gr7.us-east-1.eks.amazonaws.com
  name: vinod-eks.us-east-1.eksctl.io
contexts:
- context:
    cluster: vinod-eks.us-east-1.eksctl.io
    user: [email protected]@vinod-eks.us-east-1.eksctl.io
  name: [email protected]@vinod-eks.us-east-1.eksctl.io
current-context: [email protected]@vinod-eks.us-east-1.eksctl.io
kind: Config
preferences: {}
users:
- name: [email protected]@vinod-eks.us-east-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - vinod-eks
      - --region
      - us-east-1
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional

Credentials pulled from the credentials file
To generate and apply the Trilio license, perform the following steps:
- 1. You must have your kube-system namespace UUID available before generating a license. Retrieve it as follows:
kubectl get ns kube-system -o jsonpath='{.metadata.uid}'
Though a cluster license enables Trilio features across all namespaces in a cluster, the license should only be created in the Trilio install namespace.
2. A license file must be generated for your specific environment.
a) Navigate to your Trilio Welcome email.

b) Click on the License link.
c) On the Trilio for Kubernetes License page, choose the Clustered scope.

d) Provide the kube-system Namespace UUID obtained in Step 1.
e) Click Generate License.
f) On the details confirmation page, copy or download the highlighted text to a file.

You can use the download button to save the highlighted text as a local file or use the copy button to copy the text and create your file manually.
3. Once the license file has been created, apply it to a Trilio instance using the command line or UI:
Command line
Management Console (UI)
- 1.Execute the following command:
kubectl apply -f <licensefile> -n trilio-system
2. If the previous step is successful, check that the output is similar to the following:
NAMESPACE       NAME             STATUS   MESSAGE                                   CURRENT CPU COUNT   GRACE PERIOD END TIME   EDITION     CAPACITY   EXPIRATION TIME        MAX CPUS
trilio-system   license-sample   Active   Cluster License Activated successfully.   4                                           FreeTrial   10         2025-07-08T00:00:00Z   8
Additional license details can be obtained using the following command:
kubectl get license -n trilio-system -o json
Once the prerequisites are met, refer to the guide for applying a license in the UI: Actions: License Update
A license upgrade is required when moving from one license type to another.
Trilio maintains only one instance of a license for every installation of Trilio for Kubernetes.
To upgrade a license, run
kubectl apply -f <licensefile> -n <install-namespace>
against a new license file to activate it. The previous license will be replaced automatically.

The Target CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML.
The Target object references the NFS or S3 backup storage share you provide as a target for your backups. Trilio will create a validation pod in the corresponding namespace and attempt to validate the NFS or S3 settings you have defined in the Target CR.
Trilio makes it easy to automatically create your backup Target CR from the Management Console.
Take control of Trilio and define your own self-prepared YAML and apply it to the cluster using the kubectl tool.
kubectl apply -f sample-secret.yaml
kubectl apply -f demo-s3-target.yaml
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVDcode

apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 5Gi
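The example above targets S3; Trilio also supports NFS backup targets. A sketch of an NFS Target CR follows, assuming the nfsCredentials/nfsExport field layout used in Trilio's examples; the server address and export path are hypothetical, and you should verify the field names against your T4K version's reference:

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-nfs-target
spec:
  type: NFS
  vendor: Other
  nfsCredentials:
    nfsExport: 10.0.0.25:/srv/t4k-backups   # hypothetical NFS server and export path
    nfsOptions: nfsvers=4
  thresholdCapacity: 5Gi
```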
Trilio is a cloud-native application for Kubernetes, so all operations are managed with CRDs (Custom Resource Definitions). We will discuss the purpose of each Trilio CRD and provide examples of how to create these objects automatically in the Trilio Management Console or with the kubectl tool.
- The Backup Plan CR is defined from the Trilio Management Console or from your own self-prepared YAML.
The Backup Plan CR must reference the following:
- 1.Your Application Data (label/helm/operator)
- 2.Backup Target CR
- 3.Scheduling Policy CR
- 4.Retention Policy CR
- A Target CR is defined from the Trilio Management Console or from your own self-prepared YAML. Trilio will test the backup target to ensure it is reachable and writable. Look at the Trilio validation pod logs to troubleshoot any backup target creation issues.
- Retention and Schedule Policy CRs are defined from the Trilio Management Console or from your own self-prepared YAML.
- Scheduling Policies allow users to automate the backup of Kubernetes applications on a periodic basis. With this feature, users can create a scheduling policy that includes multiple cron strings to specify the frequency of backups.
- Retention Policies make it easy for users to define the number of backups they want to retain and the rate at which old backups should be deleted. With the retention policy CR, users can use a simple YAML specification to define the number of backups to retain in terms of days, weeks, months, years, or the latest backup. This provides a flexible and customizable way to manage your backup retention policy and ensure you meet your compliance requirements.
- The Backup CR is defined from the Trilio Management Console or from your own self-prepared YAML. The Backup object references the actual backup Trilio creates on the Target. The backup is taken as either a Full or an Incremental backup, as defined by the user in the Backup CR.
Trilio makes it easy to automatically create your backup plans and all required target and policy CRDs from the Management Console.
Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the kubectl tool.
kubectl apply -f ns-backupplan.yaml
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
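The plan above scopes the backup to a whole namespace. To scope a plan to labelled application resources instead, the BackupPlan CR also accepts component selectors; a sketch follows, using the backupPlanComponents field pattern from Trilio's examples (the label is hypothetical, and the exact field names should be verified against your T4K version):

```yaml
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: label-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
  backupPlanComponents:
    custom:
    - matchLabels:
        app: my-app   # hypothetical label; replace with your application's label
```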
kubectl apply -f sample-schedule.yaml
kind: "Policy"
apiVersion: "triliovault.trilio.io/v1"
metadata:
  name: "sample-schedule"
spec:
  type: "Schedule"
  scheduleConfig:
    schedule:
    - "0 0 * * *"
    - "0 */1 * * *"
    - "0 0 * * 0"
    - "0 0 1 * *"
    - "0 0 1 1 *"
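The five cron strings in the schedule policy above correspond to daily (midnight), hourly, weekly (Sunday midnight), monthly (1st of the month), and yearly (January 1st) backups. As a minimal pure-Python sketch of how such five-field expressions match a timestamp (illustrative only; Trilio's scheduler does the real parsing):

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Match one cron field; supports '*', '*/n', and plain numbers."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr: str, when: datetime) -> bool:
    """Minimal matcher for 'minute hour day month weekday' cron expressions."""
    minute, hour, day, month, weekday = expr.split()
    return (_field_matches(minute, when.minute)
            and _field_matches(hour, when.hour)
            and _field_matches(day, when.day)
            and _field_matches(month, when.month)
            and _field_matches(weekday, when.isoweekday() % 7))  # cron weekday: 0 = Sunday

jan1 = datetime(2024, 1, 1, 0, 0)  # midnight, Jan 1 2024 (a Monday)
assert cron_matches("0 0 * * *", jan1)      # daily schedule fires
assert cron_matches("0 0 1 1 *", jan1)      # yearly schedule fires
assert not cron_matches("0 0 * * 0", jan1)  # weekly (Sunday) does not
```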
kubectl apply -f sample-retention.yaml
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: sample-retention
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
kubectl apply -f sample-backup.yaml
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: sample-backup
spec:
  type: Full
  backupPlan:
    name: sample-application
    namespace: default
A Restore CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML. The Restore CR references a backup object which has been created previously from a Backup CR.
In a Migration scenario, the location of the backup should be specified within the desired target as there will be no Backup CR defining the location.
Trilio restores the backup into a specified namespace and upon completion of the restore operation, the application is ready to be used on the cluster.
Trilio makes it easy to automatically create your Restore CRDs from the Management Console.
Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the kubectl tool.
kubectl apply -f sample-restore.yaml
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: sample-restore
spec:
  source:
    type: Backup
    backup:
      name: sample-backup
      namespace: default
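In the migration scenario described above, there is no Backup CR on the destination cluster, so the restore source instead points at the backup's location on the target. A sketch follows, assuming the source type Location form used in Trilio's examples; the location path is hypothetical, and the field names should be verified against your T4K version's reference:

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: sample-migration-restore
spec:
  source:
    type: Location
    location: sample-application/sample-backup   # hypothetical path on the target
    target:
      name: demo-s3-target
      namespace: default
```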