Getting Started with Trilio for AWS Elastic Kubernetes Service (EKS)

Learn how to install, license and test Trilio for Kubernetes (T4K) in the AWS Elastic Kubernetes Service (EKS) environment.

What is Trilio for Kubernetes?

Trilio for Kubernetes is a cloud-native backup and restore application. Being a cloud-native application for Kubernetes, all operations are managed with CRDs (Custom Resource Definitions).

Trilio utilizes Control Plane and Data Plane controllers to carry out the backup and restore operations defined by the associated CRDs. When a CRD is created or modified, the controller reconciles the definition against the cluster.

Trilio gives you the power and flexibility to back up your entire cluster or to select specific namespaces, labels, Helm charts, or Operators as the scope for your backup operations.

In this tutorial, we'll show you how to install and test operation of Trilio for Kubernetes on your EKS deployment.

Prerequisites

Before installing Trilio for Kubernetes, please review the compatibility matrix to ensure Trilio can function smoothly in your Kubernetes environment.

Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature.

Check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on the installation of the driver in your cluster.

Trilio assumes that the selected storage driver is a supported CSI driver when the volumesnapshotclass and storageclass are utilized.

Trilio for Kubernetes requires the following Custom Resource Definitions (CRDs) to be installed on your cluster: VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass.

Installing the Required VolumeSnapshot CRDs

Before attempting to install the VolumeSnapshot CRDs, it is important to confirm that the CRDs are not already present on the system.

To do this, run the following command:

kubectl api-resources | grep volumesnapshot

If CRDs are already present, the output should be similar to the output displayed below. The second column displays the version of the CRD installed (v1 in this case). Ensure that it is the correct version required by the CSI driver being used.

volumesnapshotclasses                          snapshot.storage.k8s.io/v1             false        VolumeSnapshotClass
volumesnapshotcontents                         snapshot.storage.k8s.io/v1             false        VolumeSnapshotContent
volumesnapshots                                snapshot.storage.k8s.io/v1             true         VolumeSnapshot
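
To confirm the exact API versions served by an installed CRD, you can also query it directly (a quick check; the CRD name follows the output above):

kubectl get crd volumesnapshots.snapshot.storage.k8s.io -o jsonpath='{.spec.versions[*].name}'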

Installing CRDs

Be sure to install only the v1 version of the VolumeSnapshot CRDs.

  1. Run the following commands to install the CRDs directly. Check the external-snapshotter repository for the latest release version:

RELEASE_VERSION=6.3
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml

For non-air-gapped environments, the URLs above must be accessible from your Kubernetes cluster.

If the Kubernetes cluster's control plane and worker nodes are separated by a firewall, the firewall must allow traffic on the following port:

  • 9443
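
On EKS, this typically translates into a security group rule that lets the control plane reach the worker nodes on port 9443. A minimal sketch using the AWS CLI, with hypothetical security group IDs for the worker nodes and the cluster:

# sg-0123456789abcdef0: worker-node security group (hypothetical)
# sg-0fedcba9876543210: EKS cluster security group (hypothetical)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 9443 \
  --source-group sg-0fedcba9876543210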

Verify Prerequisites with the Trilio Preflight Check

Installation Methods

There are two methods that can be used to install T4K on the AWS EKS cluster:

Install from the AWS Marketplace -

The Trilio for Kubernetes application is listed in the AWS Marketplace, where users can opt for a Long Term Subscription to the product.

  1. Trilio for Kubernetes (Long-Term Contractual Pricing)

Install manually from the CLI -

Users can follow the exact installation instructions provided for Getting Started with Trilio for Upstream Kubernetes (K8S) environments to install T4K into EKS clusters.

As part of both types of installation, the following occurs:

  1. Trilio for Kubernetes Operator is installed in the tvk namespace

  2. Trilio for Kubernetes Manager is installed in the tvk namespace

  3. Trilio ingress is configured to access the T4K Management UI. Refer to Configuring the UI.

Follow the step-by-step instructions below to install T4K from the AWS marketplace:

1. Trilio for Kubernetes (Long-Term Contractual Pricing)

  1. Search for Trilio on the AWS Marketplace and select the Trilio for Kubernetes application offer.

  2. This offer is built for a long-term contractual license. It is valid for one year at a price of $1,000 per node (by default, one node is counted as 4 vCPUs).

  3. A Helm chart is used to perform the product installation. The user can install the product on an existing EKS cluster or use a CloudFormation Template (CFT) to automatically create a new EKS cluster with T4K installed on it.

  4. After T4K is installed, the user can apply the license they have acquired from the Trilio Professional Services and Solutions Architecture team.

  5. If users face any issues, they can contact the Support team using the information present in the Support tab.

  6. Click on the Continue to Subscribe button from the product listing page.

  7. Verify that the BYOL offer price is listed as $0 and click on the Accept Terms button to proceed.

  8. Once the terms are accepted, the Effective Date will be updated in the offer. Now, click on the Continue to Configuration button to proceed with the installation commands.

  9. Choose Helm Installation as the Fulfillment option and select the desired Software version from the listed versions. Click on the Continue to Launch button.

  10. Under Launch method, you can select from two options:

    1. Launch on existing cluster -

      1. Install T4K on your existing EKS cluster

      2. Log in to the existing EKS cluster through the CLI and connect to AWS through awscli (see the example after this list).

      3. Follow the commands to create the AWS IAM role and Kubernetes Service Account on AWS

      4. Follow the command under Launch the Software section to pull the helm chart and install the product.

    2. Launch a new EKS cluster with QuickLaunch -

      1. Click on QuickLaunch with CloudFormation to trigger the template deployment.

      2. Provide the Stack name and EKS cluster name to create the stack.

      3. Click on the Create stack button at the bottom to start the stack deployment.
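
For step 2 under Launch on existing cluster, a typical way to point kubectl at an existing EKS cluster is the following (the cluster name and region are placeholders):

aws eks update-kubeconfig --name <cluster-name> --region <region>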

Authentication

The T4K user interface facilitates authentication through kubeconfig files, which house elements such as tokens, certificates, and auth-provider information. However, in some Kubernetes cluster distributions, the kubeconfig might include cloud-specific exec actions or auth-provider configurations to retrieve the authentication token via the credentials file. By default, this is not supported.

When using kubeconfig on the local system, any cloud-specific action or config in the user section of the kubeconfig will seek the credentials file in a specific location. This allows the kubectl/client-go library to generate an authentication token for use in authentication. However, when the T4K Backend is deployed in the Cluster Pod, the credentials file necessary for token generation is not accessible within the Pod.

To rectify this, T4K features cloud distribution-specific support to manage and generate tokens from these credential files.

Using credentials for login

  1. In an EKS cluster, a local binary known as aws (aws-cli) is used to pull the credentials from a file named credentials.

  2. This file is located under the path $HOME/.aws and is used to generate an authentication token.

  3. When a user attempts to log into the T4K user interface deployed in an EKS cluster, they are expected to supply the credentials file from the location $HOME/.aws for successful authentication.

Example of Default kubeconfig


clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFR
    server: https://6C74ACD3CA40CFCB719CF3464423ADA9.gr7.us-east-1.eks.amazonaws.com
  name: vinod-eks.us-east-1.eksctl.io
contexts:
- context:
    cluster: vinod-eks.us-east-1.eksctl.io
    user: [email protected]@vinod-eks.us-east-1.eksctl.io
  name: [email protected]@vinod-eks.us-east-1.eksctl.io
current-context: [email protected]@vinod-eks.us-east-1.eksctl.io
kind: Config
preferences: {}
users:
- name: [email protected]@vinod-eks.us-east-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - vinod-eks
      - --region
      - us-east-1
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional

Example of Credentials pulled from credentials

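The credentials file follows the standard AWS INI format. An illustrative $HOME/.aws/credentials (the keys below are AWS's documented example values, not real credentials):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY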

Access Entry / aws-auth ConfigMap configurations to support impersonation

T4K uses user impersonation in admission webhooks to validate that users have access to referenced resources (namespaces, secrets, other T4K objects) when creating or modifying T4K resources. This prevents privilege escalation by ensuring operations respect user permissions rather than using T4K's elevated service account privileges.

When using EKS clusters, T4K webhooks may fail during user impersonation if users are not properly configured with Kubernetes groups. This applies to both Access Entries (newer method) and aws-auth ConfigMap (legacy method). This section explains how to properly configure EKS authentication to ensure T4K operates correctly.

For impersonation to work in EKS, users must have either:

  1. Kubernetes Groups - Through Access Entries or aws-auth ConfigMap

  2. Direct ClusterRoleBinding - Binding permissions directly to user ARN

EKS Authentication Methods

Method 1: Access Entries (Recommended)

EKS Access Entries can grant permissions via:

  1. Access Policies: AWS-managed authorization. This works for direct API calls but does not work for Kubernetes impersonation.

  2. Kubernetes Groups: Kubernetes-native authorization. Because it is native to Kubernetes, it allows T4K to impersonate the user in order to perform certain actions on their behalf.

Configure Access Entry with Kubernetes Groups:

# Create access entry with custom groups
aws eks create-access-entry --cluster-name <cluster-name> \
 --principal-arn <iam-identity-arn> \
 --type STANDARD --kubernetes-groups <groups>

Limitations:

EKS Access Entries do not allow groups with the system:* prefix, so you cannot use system:masters or system:authenticated. Custom group names such as tvk-users, tvk-admins, or cluster-admins must be created and associated with the access entry.
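
For example, a concrete invocation that associates the tvk-users group with an IAM user (the ARN and cluster name are placeholders):

aws eks create-access-entry --cluster-name my-cluster \
 --principal-arn arn:aws:iam::123456789012:user/my-user \
 --type STANDARD --kubernetes-groups tvk-users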

Method 2: aws-auth ConfigMap (Legacy)

If using the aws-auth ConfigMap approach, users must be configured with groups:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::ACCOUNT:user/my-user
      username: my-user
      groups:
        - tvk-users  # Required for impersonation

The following configuration will not work for T4K impersonation if the user does not have a direct ClusterRoleBinding associated:

# Missing groups - impersonation will fail unless user has a direct clusterRoleBinding
mapUsers: |
  - userarn: arn:aws:iam::ACCOUNT:user/my-user
    username: my-user
    # No groups specified
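
If you manage aws-auth with eksctl, a mapping that includes the required group can be added like this (cluster name and ARN are placeholders):

eksctl create iamidentitymapping --cluster my-cluster \
 --arn arn:aws:iam::123456789012:user/my-user \
 --username my-user --group tvk-users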

Required Kubernetes RBAC Configuration

Regardless of which authentication method you use, create the necessary RBAC:

# Create ClusterRole for T4K users
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tvk-user-role
rules:
- apiGroups: ["triliovault.trilio.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["namespaces", "persistentvolumes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
---
# Bind the role to the group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tvk-user-binding
subjects:
- kind: Group
  name: tvk-users
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: tvk-user-role
  apiGroup: rbac.authorization.k8s.io
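
After applying the RBAC above, you can sanity-check that the group grants the expected permissions before relying on impersonation (user and group names follow the examples above):

kubectl auth can-i create backups.triliovault.trilio.io \
 --as my-user --as-group tvk-users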

If you cannot use groups, you can create a direct ClusterRoleBinding to the user ARN; in this case, no group configuration is needed in the Access Entry or aws-auth ConfigMap. This approach is discouraged because direct user bindings are hard to manage for each user and do not scale as the number of users grows.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tvk-user-direct-binding
subjects:
- kind: User
  name: arn:aws:iam::ACCOUNT:user/my-user  # Direct user binding
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: tvk-user-role
  apiGroup: rbac.authorization.k8s.io

Licensing Trilio for Kubernetes

To generate and apply the Trilio license, perform the following steps:

Although a cluster license enables Trilio features across all namespaces in a cluster, the license only needs to be applied in the namespace where Trilio is installed, for example, the trilio-system namespace.

1. Obtain a license by getting in touch with us here. The license file will contain the license key.

2. Apply the license file to a Trilio instance using the command line or UI:

  1. Execute the following command:

kubectl apply -f <licensefile> -n trilio-system

2. If the previous step is successful, check that the output generated is similar to the following:

NAMESPACE       NAME             STATUS   MESSAGE                                    CURRENT CPU COUNT   GRACE PERIOD END TIME   EDITION     CAPACITY   EXPIRATION TIME        MAX CPUS
trilio-system   license-sample   Active   Cluster License Activated successfully.   4                                           FreeTrial   10         2025-07-08T00:00:00Z   8

Additional license details can be obtained using the following:

kubectl get license -o json -n trilio-system

Upgrading a license

A license upgrade is required when moving from one license type to another.

Trilio maintains only one instance of a license for every installation of Trilio for Kubernetes.

To upgrade a license, run kubectl apply -f <licensefile> -n <install-namespace> against a new license file to activate it. The previous license will be replaced automatically.

Create a Target

The Target CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML.

The Target object references the NFS or S3 storage share you provide as a target for your backups/snapshots. Trilio will create a validation pod in the namespace where Trilio is installed and attempt to validate the NFS or S3 settings you have defined in the Target CR.

Trilio makes it easy to automatically create your Target CR from the Management Console.

Learn how to Create a Target from the Management Console

Take control of Trilio and define your own self-prepared YAML and apply it to the cluster using the oc/kubectl tool.

Example S3 Target

sample-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVDcode

demo-s3-target.yaml:

apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 5Gi

Apply both files to the cluster:

kubectl apply -f sample-secret.yaml
kubectl apply -f demo-s3-target.yaml
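
Once applied, Trilio launches the validation pod and updates the Target status. You can watch for the Target to become available (assuming it was created in the trilio-system namespace):

kubectl get target demo-s3-target -n trilio-system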

Testing Backup, Snapshot, and Restore Operations

Trilio is a cloud-native application for Kubernetes; therefore, all operations are managed with CRDs (Custom Resource Definitions). We will discuss the purpose of each Trilio CR and provide examples of how to create these objects automatically in the Trilio Management Console or from the oc/kubectl tool.

About Backup Plans

  • The Backup Plan CR is defined from the Trilio Management Console or from your own self-prepared YAML.

The Backup Plan CR must reference the following:

  1. Your Application Data (label/helm/operator)

  2. BackupConfig

    1. Target CR

    2. Scheduling Policy CR

    3. Retention Policy CR

  3. SnapshotConfig

    1. Target CR

    2. Scheduling Policy CR

    3. Retention Policy CR

  • A Target CR is defined from the Trilio Management Console or from your own self-prepared YAML. Trilio will test the backup target to ensure it is reachable and writable. Look at the Trilio validation pod logs to troubleshoot any backup target creation issues.

  • Retention and Schedule Policy CRs are defined from the Trilio Management Console or from your own self-prepared YAML.

    • Scheduling Policies allow users to automate the backup/snapshot of Kubernetes applications on a periodic basis. With this feature, users can create a scheduling policy that includes multiple cron strings to specify the frequency of backups.

    • Retention Policies make it easy for users to define the number of backups/snapshots they want to retain and the rate at which old backups/snapshots should be deleted. With the retention policy CR, users can use a simple YAML specification to define the number of backups/snapshots to retain in terms of days, weeks, months, years, or the latest backup/snapshots. This provides a flexible and customizable way to manage your backup/snapshots retention policy and ensure you meet your compliance requirements.

  • The Backup and Snapshot CR is defined from the Trilio Management Console or from your own self-prepared YAML.

    The backup/snapshot object references the actual backup Trilio creates on the Target. The backup is taken as either a Full or Incremental backup, as defined by the user in the Backup CR. Snapshots are always taken as full snapshots.

Creating a Backup Plan

Trilio makes it easy to automatically create your backup plans and all required target and policy CRDs from the Management Console.

Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the oc/kubectl tool.

Example Namespace Scope BackupPlan:

kind: "Policy"
apiVersion: "triliovault.trilio.io/v1"
metadata:
  name: "sample-schedule"
spec:
  type: "Schedule"
  scheduleConfig:
    schedule:
      - "0 0 * * *"
      - "0 */1 * * *"
      - "0 0 * * 0"
      - "0 0 1 * *"
      - "0 0 1 1 *"
kubectl apply -f sample-schedule.yaml
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: sample-retention
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
kubectl apply -f sample-retention.yaml
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: sample-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
    retentionPolicy:
      name: sample-retention
      namespace: default
    schedulePolicy:
      fullBackupPolicy:
        name: sample-schedule
        namespace: default
  snapshotConfig:
    target:
      namespace: default
      name: demo-s3-target
    retentionPolicy:
      name: sample-retention
      namespace: default
    schedulePolicy:
      snapshotPolicy:
        name: sample-schedule
        namespace: default
# kubectl apply -f ns-backupplan.yaml

The target in backupConfig and snapshotConfig must be the same. Users can specify different retention and schedule policies under backupConfig and snapshotConfig.

See more Examples of Backup Plan YAML

Creating a Backup

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: sample-backup
spec:
  type: Full
  backupPlan:
    name: sample-backupplan
    namespace: default
kubectl apply -f sample-backup.yaml
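
After applying, you can monitor the backup's progress through its status (names follow the example above):

kubectl get backup sample-backup -n default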

Learn more about Creating Backups from the Management Console

Creating a Snapshot

apiVersion: triliovault.trilio.io/v1
kind: Snapshot
metadata:
  name: sample-snapshot
spec:
  type: Full
  backupPlan:
    name: sample-backupplan
    namespace: default
kubectl apply -f sample-snapshot.yaml

Learn more about Creating Snapshots from the Management Console

About Restore

A Restore CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML. The Restore CR references a backup object which has been created previously from a Backup CR.

In a migration scenario, the location of the backup/snapshot should be specified within the desired target, as there will be no Backup/Snapshot CR defining the location. If you are migrating a snapshot, make sure the actual Persistent Volume snapshots are accessible from the other cluster.

Trilio restores the backup/snapshot into a specified namespace and upon completion of the restore operation, the application is ready to be used on the cluster.

Creating a Restore

Trilio makes it easy to automatically create your Restore CRDs from the Management Console.

Learn more about Creating Restores from the Management Console

Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the oc/kubectl tool.

sample-restore.yaml:

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: sample-restore
spec:
  source:
    type: Backup
    backup:
      name: sample-backup
      namespace: default

kubectl apply -f sample-restore.yaml
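
As with backups, the restore's progress can be checked through its status (names follow the example above):

kubectl get restore sample-restore -n default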

See more Examples of Restore YAML

Troubleshooting

Problems? Learn about Troubleshooting Trilio for Kubernetes
