# Getting Started with Trilio for AWS Elastic Kubernetes Service (EKS)

## Table of Contents

1. [What is Trilio for Kubernetes](#what-is-trilio-for-kubernetes)
2. [Prerequisites](#prerequisites)
   1. [Verify Prerequisites](#verify-prerequisites-with-the-trilio-preflight-check)
3. [Installation Methods](#installation-methods)
   1. [Install from AWS Marketplace](#install-from-the-aws-marketplace)
   2. [Install from CLI](#install-manually-from-the-cli)
4. [Authentication](#authentication)
5. [Licensing Trilio for Kubernetes](#licensing-trilio-for-kubernetes)
6. [Create a Target](#create-a-target)
7. [Testing Backup, Snapshot and Restore Operation](#testing-backup-snapshot-and-restore-operation)
8. [Troubleshooting](#troubleshooting)

## What is Trilio for Kubernetes?

Trilio for Kubernetes is a cloud-native backup and restore application. Being a cloud-native application for Kubernetes, all operations are managed with CRDs (Custom Resource Definitions).

Trilio utilizes Control Plane and Data Plane controllers to carry out the backup and restore operations defined by the associated CRDs. When a Custom Resource is created or modified, the controller reconciles its definition against the cluster.

Trilio gives you the power and flexibility to back up your entire cluster or to select specific namespaces, labels, Helm charts, or Operators as the scope for your backup operations.

In this tutorial, we'll show you how to install Trilio for Kubernetes on your EKS deployment and test its operation.

## Prerequisites

* [x] **Confirm Compatibility**

Before installing Trilio for Kubernetes, please review the [compatibility matrix](https://docs.trilio.io/kubernetes/about-trilio-for-kubernetes/compatibility-matrix) to ensure Trilio can function smoothly in your Kubernetes environment.

* [x] **Verify that a CSI driver providing Snapshot functionality is installed**

Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the **Snapshot** feature.

Check the [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs/drivers.html) to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on the installation of the driver in your cluster.

Trilio assumes the selected storage driver is a supported CSI driver when the [`volumesnapshotclass`](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/) and [`storageclass`](https://kubernetes.io/docs/concepts/storage/storage-classes/) are utilized.

* [x] **Verify that the Required Custom Resource Definitions (CRD) are installed**

Trilio for Kubernetes requires the following Custom Resource Definitions (CRDs) to be installed on your cluster: `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass`.

<details>

<summary><strong>Installing the Required VolumeSnapshot CRDs</strong></summary>

Before attempting to install the VolumeSnapshot CRDs, it is important to confirm that the CRDs are not already present on the system.

To do this, run the following command:

```
kubectl api-resources | grep volumesnapshot
```

If the CRDs are already present, the output should be similar to the example below. The second column displays the installed API version of the CRDs (v1 in this case). Ensure that it matches the version required by the CSI driver being used.

```
volumesnapshotclasses                          snapshot.storage.k8s.io/v1             false        VolumeSnapshotClass
volumesnapshotcontents                         snapshot.storage.k8s.io/v1             false        VolumeSnapshotContent
volumesnapshots                                snapshot.storage.k8s.io/v1             true         VolumeSnapshot
```

**Installing CRDs**

**Be sure to install only the v1 version of the VolumeSnapshot CRDs.**

1. [Read the external-snapshotter GitHub project documentation](https://github.com/kubernetes-csi/external-snapshotter/tree/master). It is compatible with Kubernetes v1.22 and later.
2. Run the following commands to install the CRDs directly; check the repository for the latest release version:

```
RELEASE_VERSION=6.3
```

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
```

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
```

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-${RELEASE_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
```
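
After applying the manifests, you can confirm that the CRDs were installed at the expected `v1` API version. A quick check (using the standard external-snapshotter CRD name):

```
kubectl get crd volumesnapshots.snapshot.storage.k8s.io -o jsonpath='{.spec.versions[*].name}'
```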

</details>

* [x] **Network Access Requirements**

For non-air-gapped environments, the following URLs must be reachable from your Kubernetes cluster:

* [charts.bitnami.com](http://charts.bitnami.com/)
* [docker.io](http://docker.io/)
* [charts.helm.sh/stable/](http://charts.helm.sh/stable/)
* [gcr.io](http://gcr.io/)
* [kubernetes.io](http://kubernetes.io/)
* [quay.io](http://quay.io/)
* [github.com](https://github.com/)
* [raw.githubusercontent.com](https://raw.githubusercontent.com/)
* Access to the S3 endpoint if the backup target happens to be S3
* Access to application artifacts registry for image backup/restore

* [x] **Network Port Requirements**

If the Kubernetes cluster's control plane and worker nodes are separated by a firewall, then the firewall must allow traffic on the following port(s):

* 9443

## Verify Prerequisites with the Trilio Preflight Check

{% hint style="success" %}
Make sure your cluster is ready to install Trilio for Kubernetes by installing the [Preflight Check Plugin](https://docs.trilio.io/kubernetes/krew-plugins/tvk-preflight-checks) and running the [Trilio Preflight Check](https://docs.trilio.io/kubernetes/krew-plugins/tvk-preflight-checks).

Trilio provides a preflight check tool that allows customers to validate their environment for Trilio installation.

The tool generates a report detailing all the requirements and whether they are met or not.

**If you encounter any failures, please send the Preflight Check output to your Trilio Professional Services and Solutions Architect so we may assist you in satisfying any missing requirements before proceeding with the installation.**
{% endhint %}
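
Once the plugin is installed (the linked plugin page covers installation via krew), a typical invocation looks roughly like the following. This is a hedged sketch; the exact subcommands and flags, such as which storage class to validate, are listed on the Preflight Check plugin page, and the storage class flag shown here is an assumption:

```
# Run the preflight checks against the current cluster context.
# Verify the exact flag names in the plugin documentation before use.
kubectl tvk-preflight run --storageclass <storage-class-name>
```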

## Installation Methods

There are two methods that can be used to install T4K on an AWS EKS cluster:

#### Install from the AWS Marketplace

The Trilio for Kubernetes application is listed in the [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=84af27b2-9ab1-4360-ac9d-b94a8f082cb2), where users can opt for a Long Term Subscription to the product.

1. [Trilio for Kubernetes](https://aws.amazon.com/marketplace/pp/prodview-w3oefbgbea2qo) (Long-Term Contractual Pricing)

#### Install manually from the CLI

Users can follow the same installation instructions provided for [upstream-kubernetes](https://docs.trilio.io/kubernetes/getting-started/upstream-kubernetes "mention") environments to install T4K into EKS clusters.

Both installation methods install the following:

1. The Trilio for Kubernetes Operator in the `tvk` namespace
2. The Trilio for Kubernetes Manager in the `tvk` namespace
3. The Trilio ingress, configured to provide access to the T4K Management UI. Refer to [accessing-the-ui](https://docs.trilio.io/kubernetes/advanced-configuration/management-console/accessing-the-ui "mention").

Follow the step-by-step instructions below to install T4K from the AWS marketplace:

### 1. [Trilio for Kubernetes](https://aws.amazon.com/marketplace/pp/prodview-w3oefbgbea2qo) (Long-Term Contractual Pricing)

1. Search for `Trilio` on the AWS Marketplace and select the `Trilio for Kubernetes` application offer.

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/txyNxyz0k3rVK3HYEACO/image%20(181).png" alt=""><figcaption></figcaption></figure>
2. This offer is built around a long-term contractual license. It is valid for one year at a price of $1000 per node (by default, one node is counted as 4 vCPUs).

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/HDjzuKCAgLPCe49R1suw/image%20(248).png" alt=""><figcaption></figcaption></figure>
3. A Helm chart is used to perform the product installation. The user can install the product on an existing EKS cluster or use a CloudFormation Template (CFT) to automatically create a new EKS cluster with T4K installed on it.

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/DqIFt0igG5DvVrr8zFNO/image%20(151).png" alt=""><figcaption></figcaption></figure>
4. After T4K is installed, the user can apply the license they have acquired from the Trilio Professional Services and Solutions Architecture team.
5. If the user faces any issues, they can contact the Support team using the information in the Support tab.

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/72UgEypb7SVc6jW4Z5X2/image%20(91).png" alt=""><figcaption></figcaption></figure>
6. Click on the `Continue to Subscribe` button from the product listing page.

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/wros3dIAaQnzPEA2QPoJ/image%20(377).png" alt=""><figcaption></figcaption></figure>
7. Verify that the BYOL offer price is listed as $0 and click the `Accept Terms` button to proceed.

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/LbEjV5Y8TWpaBHJLhF4o/image%20(461).png" alt=""><figcaption></figcaption></figure>
8. Once the terms are accepted, the `Effective Date` will be updated in the offer. Now, click on the `Continue to Configuration` button to proceed with the installation commands.

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/Q0sQawJuhWRWN9W1y76h/image%20(390).png" alt=""><figcaption></figcaption></figure>
9. Choose `Helm Installation` as the `Fulfillment option` and select the desired `Software version` from the listed versions. Click on the `Continue to Launch` button.

   <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/SCUIKunL1De1Vz520exa/image%20(385).png" alt=""><figcaption></figcaption></figure>
10. In the `Launch method`, you can select from two options:
    1. **Launch on existing cluster**
       1. Install T4K on your existing EKS cluster

          <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/MOX44WqII3XTntbTLTGX/image%20(389).png" alt=""><figcaption></figcaption></figure>
       2. Log in to the existing EKS cluster through the CLI and connect to AWS using the AWS CLI.
       3. Follow the commands to create the `AWS IAM role` and `Kubernetes Service Account` on AWS.
       4. Follow the commands under the `Launch the Software` section to pull the Helm chart and install the product (a general CLI sketch of these steps appears after this list).
    2. **Launch a new EKS cluster with QuickLaunch**

       1. Click on `QuickLaunch with CloudFormation` to trigger the template deployment.

          <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/P6m9n2933FC5xf6hJB2W/image%20(392).png" alt=""><figcaption></figcaption></figure>
       2. Provide the `Stack name` and `EKS cluster name` to create the stack.

          <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/3cOP7f5CpRuMyFVGicGK/image%20(412).png" alt=""><figcaption></figcaption></figure>
       3. Click on the `Create stack` button at the bottom to start the stack deployment.

       <figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/4hIiGqSa8G5zEvh7EbD9/image%20(107).png" alt=""><figcaption></figcaption></figure>
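
The commands shown on the Marketplace `Launch the Software` page are authoritative; for orientation, the CLI flow for an existing cluster generally follows this shape. This is a hedged sketch with placeholder values; the registry, chart path, chart version, and IAM role/service account details come from the launch page:

```
# Point kubectl at the existing EKS cluster
aws eks update-kubeconfig --region <region> --name <eks-cluster-name>

# Authenticate Helm against the AWS Marketplace ECR registry listed on the launch page
aws ecr get-login-password --region <region> | \
  helm registry login --username AWS --password-stdin <marketplace-ecr-registry>

# Install the chart into the tvk namespace using the chart reference and version from the launch page
helm install t4k oci://<marketplace-ecr-registry>/<chart-path> \
  --version <software-version> --namespace tvk --create-namespace
```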

## Authentication

The T4K user interface facilitates authentication through kubeconfig files, which house elements such as tokens, certificates, and auth-provider information. However, in some Kubernetes cluster distributions, the kubeconfig might include cloud-specific exec actions or auth-provider configurations to retrieve the authentication token via the credentials file. By default, this is not supported.

When using kubeconfig on the local system, any cloud-specific action or config in the user section of the kubeconfig will seek the credentials file in a specific location. This allows the kubectl/client-go library to generate an authentication token for use in authentication. However, when the T4K Backend is deployed in the Cluster Pod, the credentials file necessary for token generation is not accessible within the Pod.

To rectify this, T4K features cloud distribution-specific support to manage and generate tokens from these credential files.

### Using credentials for login

1. In an EKS cluster, a local binary known as *aws* (the AWS CLI) is used to pull the credentials from a file named `credentials`.
2. This file is located under the path `$HOME/.aws` and is used to generate an authentication token.
3. When a user attempts to log into the T4K user interface deployed in an EKS cluster, they are expected to supply the credentials file from the location `$HOME/.aws` for successful authentication.
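
The credentials file follows the standard AWS CLI INI format; a minimal example with placeholder values looks like this:

```
[default]
aws_access_key_id = <ACCESS_KEY_ID>
aws_secret_access_key = <SECRET_ACCESS_KEY>
```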

#### Example of Default kubeconfig

```

clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFR
    server: https://6C74ACD3CA40CFCB719CF3464423ADA9.gr7.us-east-1.eks.amazonaws.com
  name: vinod-eks.us-east-1.eksctl.io
contexts:
- context:
    cluster: vinod-eks.us-east-1.eksctl.io
    user: vinod.patil@trilio.io@vinod-eks.us-east-1.eksctl.io
  name: vinod.patil@trilio.io@vinod-eks.us-east-1.eksctl.io
current-context: vinod.patil@trilio.io@vinod-eks.us-east-1.eksctl.io
kind: Config
preferences: {}
users:
- name: vinod.patil@trilio.io@vinod-eks.us-east-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - vinod-eks
      - --region
      - us-east-1
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
```

#### Example of Credentials pulled from the credentials file

<figure><img src="https://content.gitbook.com/content/9sDjF5HJP1bf8TtLcgkk/blobs/A3A1YfC2lDxqiI6Qv2Py/credseks.webp" alt=""><figcaption><p>Credentials pulled from credentials</p></figcaption></figure>

### Access Entry / aws-auth ConfigMap configurations to support impersonation

T4K uses **user impersonation** in admission webhooks to validate that users have access to referenced resources (namespaces, secrets, other T4K objects) when creating or modifying T4K resources. This prevents privilege escalation by ensuring operations respect user permissions rather than using T4K's elevated service account privileges.

When using EKS clusters, T4K webhooks may fail during user impersonation if users are not properly configured with Kubernetes groups. This applies to both [Access Entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) (newer method) and [aws-auth ConfigMap](https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html) (legacy method). This section explains how to properly configure EKS authentication to ensure T4K operates correctly.

For impersonation to work in EKS, users must have either:

1. Kubernetes Groups - Through Access Entries or aws-auth ConfigMap
2. Direct ClusterRoleBinding - Binding permissions directly to user ARN

#### EKS Authentication Methods

**Method 1: Access Entries (Recommended)**

EKS Access Entries can grant permissions via:

1. Access Policies: AWS-managed authorization. It works for direct API calls but does not work for Kubernetes impersonation.
2. Kubernetes Groups: Kubernetes-native authorization. Because it is Kubernetes native, it allows T4K to impersonate the user in order to perform certain actions on their behalf.

**Configure Access Entry with Kubernetes Groups:**

```
# Create access entry with custom groups
aws eks create-access-entry --cluster-name <cluster-name> \
 --principal-arn <iam-identity-arn> \
 --type STANDARD --kubernetes-groups <groups>
```

> **Limitations:**
>
> EKS Access Entries do not allow groups with the system:\* prefix, so you cannot use system:masters or system:authenticated. Custom group names such as tvk-users, tvk-admins, or cluster-admins need to be created and associated with the access entry.
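
For example, to map an IAM identity to the `tvk-users` group used in the RBAC example below (the cluster name, account ID, and user name are placeholders):

```
aws eks create-access-entry --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:user/my-user \
  --type STANDARD --kubernetes-groups tvk-users
```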

**Method 2: aws-auth ConfigMap (Legacy)**

If using the aws-auth ConfigMap approach, users must be configured with groups:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::ACCOUNT:user/my-user
      username: my-user
      groups:
        - tvk-users  # Required for impersonation
```

A configuration like the following, without groups, will not work for T4K impersonation unless the user has a direct ClusterRoleBinding associated:

```
# Missing groups - impersonation will fail unless user has a direct clusterRoleBinding
mapUsers: |
  - userarn: arn:aws:iam::ACCOUNT:user/my-user
    username: my-user
    # No groups specified
```

#### Required Kubernetes RBAC Configuration

Regardless of which authentication method you use, create the necessary RBAC:

```
# Create ClusterRole for T4K users
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tvk-user-role
rules:
- apiGroups: ["triliovault.trilio.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["namespaces", "persistentvolumes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
---
# Bind the role to the group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tvk-user-binding
subjects:
- kind: Group
  name: tvk-users
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: tvk-user-role
  apiGroup: rbac.authorization.k8s.io
```
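
To confirm the mapping works before relying on it, you can dry-run an impersonated request with kubectl. The username and group below match the examples in this section; the `backupplans` resource name is assumed from the `triliovault.trilio.io` API group:

```
kubectl auth can-i create backupplans.triliovault.trilio.io \
  --as my-user --as-group tvk-users -n <application-namespace>
```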

#### Alternative: Direct User Binding (Not Recommended)

If you cannot use groups, you can create a direct ClusterRoleBinding to the user ARN; in this case, no group configuration is needed in the Access Entry or aws-auth ConfigMap. This approach is discouraged because maintaining a direct binding for each user is hard to manage and does not scale as the number of users grows.

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tvk-user-direct-binding
subjects:
- kind: User
  name: arn:aws:iam::ACCOUNT:user/my-user  # Direct user binding
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: tvk-user-role
  apiGroup: rbac.authorization.k8s.io
```


## Licensing Trilio for Kubernetes

To generate and apply the Trilio license, perform the following steps:

{% hint style="info" %}
Although a cluster license enables Trilio features across all namespaces in a cluster, the license only needs to be applied in the namespace where Trilio is installed, for example, the `trilio-system` namespace.
{% endhint %}

1\. Obtain a license by getting in touch with us [here](https://trilio.io/request-demo/). The license file will contain the license key.

2\. Apply the license file to a Trilio instance using the command line or UI:

{% tabs %}
{% tab title="Command line" %}

1. Execute the following command:

```
kubectl apply -f <licensefile> -n trilio-system
```

2\. If the previous step is successful, check the license status with `kubectl get license -n trilio-system`; the output should be similar to the following:

```
NAMESPACE            NAME         STATUS   MESSAGE                                   CURRENT CPU COUNT   GRACE PERIOD END TIME   EDITION     CAPACITY   EXPIRATION TIME        MAX CPUS
trilio-system     license-sample   Active   Cluster License Activated successfully.   4                                           FreeTrial   10         2025-07-08T00:00:00Z   8
```

{% hint style="info" %}
Additional license details can be obtained using the following:

`kubectl get license -o json -n trilio-system`
{% endhint %}
{% endtab %}

{% tab title="Management Console (UI)" %}
{% hint style="info" %}
Prerequisites:

1. Authenticate access to the Management Console (UI). Refer to the [Authentication](#authentication) section above.
2. Configure access to the Management Console (UI). Refer to [accessing-the-ui](https://docs.trilio.io/kubernetes/advanced-configuration/management-console/accessing-the-ui "mention").
   {% endhint %}

If you have already executed the above prerequisites, then refer to the guide for applying a license in the UI: [#actions-license-update](https://docs.trilio.io/kubernetes/using-trilio/getting-started-with-management-console/navigating-intro/cluster-management-home#actions-license-update "mention")
{% endtab %}
{% endtabs %}

### Upgrading a license

A license upgrade is required when moving from one license type to another.

Trilio maintains only one instance of a license for every installation of Trilio for Kubernetes.

To upgrade a license, run `kubectl apply -f <licensefile> -n <install-namespace>` against a new license file to activate it. The previous license will be replaced automatically.

#### [Learn more about Licensing Trilio for Kubernetes](https://docs.trilio.io/kubernetes/getting-started/licensing)

## Create a Target

The Target CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML.

The Target object references the NFS or S3 storage share you provide as a *target* for your backups/snapshots. Trilio will create a validation pod in the namespace where Trilio is installed and attempt to validate the NFS or S3 settings you have defined in the Target CR.

Trilio makes it easy to automatically create your Target CR from the Management Console.

Learn how to [Create a Target from the Management Console](https://docs.trilio.io/kubernetes/using-trilio/getting-started-with-management-console#create-a-target)

Alternatively, take control of Trilio: define your own self-prepared YAML and apply it to the cluster using the oc/kubectl tool.

#### Example S3 Target

Create the secret and target manifests shown below (for example, as `sample-secret.yaml` and `demo-s3-target.yaml`), then apply them:

```
kubectl apply -f sample-secret.yaml
```

```
kubectl apply -f demo-s3-target.yaml
```

```
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVDcode
```

```
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 5Gi
```
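
Once the secret and Target are applied, Trilio creates a validation pod to check the settings; you can watch the Target CR's status in the namespace where you created it:

```
kubectl get target demo-s3-target -n <target-namespace>
```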

* See more [Example Target YAML](https://docs.trilio.io/kubernetes/using-trilio/getting-started-1#step-2-create-a-target)

## Testing Backup, Snapshot and Restore Operation

Trilio is a cloud-native application for Kubernetes; therefore, all operations are managed with CRDs (Custom Resource Definitions). We will discuss the purpose of each Trilio CR and provide examples of how to create these objects automatically in the Trilio Management Console or from the [oc/kubectl](https://docs.openshift.com/container-platform/4.8/cli_reference/openshift_cli/getting-started-cli.html) tool.

### About Backup Plans

* The Backup Plan CR is defined from the Trilio Management Console or from your own self-prepared YAML.

The Backup Plan CR must reference the following:

1. Your Application Data (label/helm/operator)
2. BackupConfig
   1. Target CR
   2. Scheduling Policy CR
   3. Retention Policy CR
3. SnapshotConfig
   1. Target CR
   2. Scheduling Policy CR
   3. Retention Policy CR

* A Target CR is defined from the Trilio Management Console or from your own self-prepared YAML. Trilio will test the backup target to ensure it is reachable and writable. Look at the Trilio validation pod logs to troubleshoot any backup target creation issues.
* Retention and Schedule Policy CRs are defined from the Trilio Management Console or from your own self-prepared YAML.
  * Scheduling Policies allow users to automate the backup/Snapshot of Kubernetes applications on a periodic basis. With this feature, users can create a scheduling policy that includes multiple cron strings to specify the frequency of backups.
  * Retention Policies make it easy for users to define the number of backups/snapshots they want to retain and the rate at which old backups/snapshots should be deleted. With the retention policy CR, users can use a simple YAML specification to define the number of backups/snapshots to retain in terms of days, weeks, months, years, or the latest backup/snapshots. This provides a flexible and customizable way to manage your backup/snapshots retention policy and ensure you meet your compliance requirements.
* The Backup and Snapshot CR is defined from the Trilio Management Console or from your own self-prepared YAML.

  The backup/snapshot object references the actual backup Trilio creates on the Target. A backup is taken as either a Full or Incremental backup, as defined by the user in the Backup CR; a snapshot is always taken as a Full snapshot.

### Creating a Backup Plan

Trilio makes it easy to automatically create your backup plans and all required target and policy CRDs from the Management Console.

Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the oc/kubectl tool.

#### Example Namespace Scope BackupPlan:

```
kind: "Policy"
apiVersion: "triliovault.trilio.io/v1"
metadata:
  name: "sample-schedule"
spec:
  type: "Schedule"
  scheduleConfig:
    schedule:
      - "0 0 * * *"
      - "0 */1 * * *"
      - "0 0 * * 0"
      - "0 0 1 * *"
      - "0 0 1 1 *"
```

```
kubectl apply -f sample-schedule.yaml
```

```
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: sample-retention
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
```

```
kubectl apply -f sample-retention.yaml
```

```
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: sample-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
    retentionPolicy:
      name: sample-retention
      namespace: default
    schedulePolicy:
      fullBackupPolicy:
        name: sample-schedule
        namespace: default
  snapshotConfig:
    target:
      namespace: default
      name: demo-s3-target
    retentionPolicy:
      name: sample-retention
      namespace: default
    schedulePolicy:
      snapshotPolicy:
        name: sample-schedule
        namespace: default
```

```
kubectl apply -f ns-backupplan.yaml
```
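
After applying the BackupPlan, its status can be checked in the same way as other Trilio CRs, using the namespace where you applied it:

```
kubectl get backupplan sample-backupplan -n <namespace>
```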

{% hint style="info" %}
The target in the backupConfig and snapshotConfig needs to be the same. Users can specify different retention and schedule policies under backupConfig and snapshotConfig.
{% endhint %}

See more [Examples of Backup Plan YAML](https://docs.trilio.io/kubernetes/using-trilio/getting-started-1/triliovault-crds#backupplan)

### Creating a Backup

```
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: sample-backup
spec:
  type: Full
  backupPlan:
    name: sample-backupplan
    namespace: default
```

```
kubectl apply -f sample-backup.yaml
```
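
The backup runs asynchronously; you can watch its progress through the Backup CR's status in the namespace where it was applied:

```
kubectl get backup sample-backup -n <namespace>
```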

Learn more about [Creating Backups from the Management Console](https://docs.trilio.io/kubernetes/using-trilio/getting-started-with-management-console#step-5-create-backup)

### Creating a Snapshot

```
apiVersion: triliovault.trilio.io/v1
kind: Snapshot
metadata:
  name: sample-snapshot
spec:
  type: Full
  backupPlan:
    name: sample-backupplan
    namespace: default
```

```
kubectl apply -f sample-snapshot.yaml
```

Learn more about [Creating Snapshots from the Management Console](https://docs.trilio.io/kubernetes/using-trilio/getting-started-with-management-console#step-6-create-snapshot)

### About Restore

A Restore CR (Custom Resource) is defined from the Trilio Management Console or from your own self-prepared YAML. The Restore CR references a backup object which has been created previously from a Backup CR.

In a Migration scenario, the location of the backup/snapshot should be specified within the desired target, as there will be no Backup/Snapshot CR defining the location. If you are migrating a Snapshot, make sure that the actual Persistent Volume snapshots are accessible from the other cluster.

Trilio restores the backup/snapshot into a specified namespace and upon completion of the restore operation, the application is ready to be used on the cluster.

### Creating a Restore

Trilio makes it easy to automatically create your Restore CRDs from the Management Console.

Learn more about [Creating Restores from the Management Console](https://docs.trilio.io/kubernetes/using-trilio/getting-started-with-management-console#restore-from-backup)

Take control of Trilio, define your self-prepared YAML, and apply it to the cluster using the oc/kubectl tool.

```
kubectl apply -f sample-restore.yaml
```

```
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: sample-restore
spec:
  source:
    type: Backup
    backup:
      name: sample-backup
      namespace: default
```
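
As with backups, the restore's progress can be followed through its CR status in the namespace where it was applied:

```
kubectl get restore sample-restore -n <namespace>
```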

See more [Examples of Restore YAML](https://docs.trilio.io/kubernetes/using-trilio/getting-started-1/triliovault-crds#restore)

## Troubleshooting

Problems? Learn about [Troubleshooting Trilio for Kubernetes](https://docs.trilio.io/kubernetes/support/troubleshooting-guide#troubleshooting-guide)
