Modifying Default T4K Configuration

This page describes how to make changes to default configurations for various components of the Trilio for Kubernetes application.


Last updated 1 month ago


Main Configuration Types

There are different types of configuration for T4K that help tune T4K to your requirements and cluster constraints. The available configurations are:

  • InstanceName and LogLevel

  • Pause Schedule Backups and Snapshots at the T4K level

  • Job Resource Requirements

  • Scheduling Configuration

  • Application Scope

  • Ingress Configuration

  • CSI Configuration

InstanceName and LogLevel

T4K configuration is used to:

  • provide the InstanceName for the T4K installation.

  • configure the logLevel across the product.

  • configure the datamoverLogLevel for Datamover jobs. Datamover jobs are responsible for backing up and restoring the data portion of an application; refer to the backup and restore details.

The available log levels are as follows:

    1. Panic

    2. Fatal

    3. Error

    4. Warn

    5. Info (default value)

    6. Debug

    7. Trace
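As a minimal sketch, these values are set in the TrilioVaultManager spec (the full CR is shown under Changing the Configuration); the Debug level here is purely illustrative:

```yaml
spec:
  tvkInstanceName: "tvk"
  logLevel: Info             # product-wide log level
  datamoverLogLevel: Debug   # Datamover jobs only; Debug shown for illustration
```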

Pause Schedule Backups And Snapshots

This flag provides the ability to pause scheduled backups and snapshots at the global level. Enabling it pauses all backups and snapshots for the given T4K instance. This gives users a straightforward way to manage scheduled backups and snapshots at the T4K level, for example pausing everything during upgrades and maintenance.

Job Resource Requirements

Job Resource Requirements are used to modify the default resource requirements, such as CPU and memory, for all pods created as part of product installation as well as backup and restore operations. There are different fields for setting resource requirements for different types of pods.

  • metadataJobResources: Specifies the resource requirements for all metadata-related and target-mounting jobs, such as target-validator, meta-snapshot, pre-restore-validation, meta-restore, etc.

  • dataJobResources: Specifies the resource requirements for Datamover jobs.

  • deploymentLimits: Specifies limits for Helm chart deployments. Not applicable to OCP.
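For example, data and metadata job resources can be tuned independently in the TrilioVaultManager spec; the values below are illustrative, not recommendations:

```yaml
spec:
  dataJobResources:
    limits:
      cpu: 1500m
      memory: 5Gi
    requests:
      cpu: 100m
      memory: 800Mi
  metadataJobResources:
    limits:
      cpu: 500m
      memory: 1024Mi
```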

Scheduling Configuration

T4K provides a way to constrain the pods it creates and schedule them on a specific set of nodes in the cluster. T4K leverages the existing Kubernetes scheduling mechanisms to schedule its pods.

Scheduling flexibility has been enhanced by separating the control-plane and worker-node scheduling configurations. Previously, all pods shared a single scheduling configuration; now it is divided into two distinct parts, one for control-plane pods and another for worker jobs.

Control-Plane Pods Scheduling Configuration

Control-plane pods include the pods of the following workloads, which T4K creates at installation:

  1. Control Plane Deployment

  2. Exporter Deployment

  3. Ingress Nginx Controller Deployment

  4. Resource Cleaner CronJob

  5. Web-backend Deployment

  6. Web Deployment

There are three fields for placing scheduling constraints on T4K control-plane pods: NodeSelector, Affinity, and Toleration. Each is described at the end of this page.

WorkerJob Pods Scheduling Configuration

Worker job pods are Job pods triggered during actions such as backup, snapshot, or restore. These jobs have a similar scheduling configuration, with the same three fields as the control-plane pods.

  • WorkerJobsSchedulingConfig: This config takes NodeSelector, Toleration, and Affinity, which set the scheduling configuration for worker job pods.
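As a sketch, worker job pods could be pinned to a dedicated node pool; the label and taint values below are hypothetical:

```yaml
spec:
  workerJobsSchedulingConfig:
    nodeSelector:
      pool: backup-workers     # hypothetical node label
    tolerations:
    - key: "dedicated"         # hypothetical taint on the backup node pool
      operator: "Equal"
      value: "backup"
```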

Application Scope

Application Scope denotes the T4K installation scope, which can be one of two types: Cluster and Namespaced. When T4K is installed with Namespaced scope, the following cluster-scoped CRDs are not installed:

  1. ClusterBackupplan

  2. ClusterBackup

  3. ClusterRestore

  4. ContinuousRestorePlan

  5. ConsistentSet

In a Namespaced-scope installation, cluster-scoped features such as multi-namespace backup and restore and continuous restore are not available. T4K operation is restricted to its installation namespace only.
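The scope is selected through a single field in the TrilioVaultManager spec:

```yaml
spec:
  applicationScope: Cluster   # or Namespaced
```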

Ingress Configuration

Users can configure external access to T4K services via Ingress over HTTP or HTTPS. There are four configurable fields related to Ingress:

  1. IngressClass: Name of the IngressClass resource which contains additional information regarding Ingress' parameters and the name of the controller that implements this particular IngressClass.

  2. Annotations: Extra annotations to be added on the required Ingress resource.

  3. Host: Name based virtual host against the IP address on which the external traffic will be received.

  4. TLSSecretName: Name of the TLS secret that contains a private key and certificate. When TLS secret name is specified, the external traffic to Ingress will use HTTPS protocol (port 443). The TLS secret should be present in the namespace where T4K is to be installed.

The IngressClass and Annotations fields should be left empty when the componentConfiguration.ingress-controller map contains the key-value pair enabled: "true".
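For illustration, an HTTPS Ingress using an external controller might look like this; the hostname is an example and the annotation is taken from the full CR shown later on this page:

```yaml
spec:
  ingressConfig:
    ingressClass: alb             # leave empty when Trilio's ingress controller is enabled
    host: "t4k.example.com"       # example virtual host
    tlsSecretName: "tls-secret"   # must exist in the T4K installation namespace
    annotations:
      alb.ingress.kubernetes.io/load-balancer-name: trilio-load-balancer
```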

Expose the T4K UI on a Custom Path

  1. urlPath: Path on which the UI should be accessible. The default value is "/".

CSI Configuration

CSI configuration is used to handle CSI provisioners that do not support Volume Snapshot functionality. For full details, refer to T4K For Volumes with Generic Storage. This configuration is driven by three lists:

  • Default CSI provisioner list: A known list of CSI provisioners that do not support Volume Snapshot, maintained by T4K. This list is updated as new non-snapshot CSI provisioners are discovered.

  • Include CSI provisioner list: A user-provided list of CSI provisioners that do not support Volume Snapshot.

  • Exclude CSI provisioner list: A user-provided list of CSI provisioners to be ignored from the default list.
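The include and exclude lists are set under spec.csiConfig; the provisioner names below are placeholders:

```yaml
spec:
  csiConfig:
    include:
      - example.provisioner.csi   # treat this provisioner as non-snapshot
    exclude:
      - test.provisioner.csi      # ignore this entry from the default list
```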

Changing the Configuration

Input for the aforementioned T4K configuration is provided through the TrilioVaultManager (TVM) CR. The following example shows how to modify the T4K configuration:

   apiVersion: triliovault.trilio.io/v1
   kind: TrilioVaultManager
   metadata:
     labels:
       triliovault: k8s
     name: sample-triliovaultmanager
   spec:
     helmValues:
       urlPath: "/"
     applicationScope: Namespaced
     tvkInstanceName: "tvk"
     dataJobResources:
       limits:
         cpu: 1500m
         memory: 5Gi
       requests:
         cpu: 100m
         memory: 800Mi
     metadataJobResources:
       limits:
         cpu: 500m
         memory: 1024Mi
       requests:
         cpu: 10m
         memory: 10Mi
     ingressConfig:
       ingressClass: alb # specify `ingressClass` only when you are not using Trilio's ingress controller
       annotations:
         alb.ingress.kubernetes.io/load-balancer-name: trilio-load-balancer
       host: "trilio.com"
       tlsSecretName: "tls-secret"
     nodeSelector:
       host: Linux
       arch: x86
     affinity:
       nodeAffinity:
         preferredDuringSchedulingIgnoredDuringExecution:
         - weight: 1
           preference:
             matchExpressions:
             - key: arch
               operator: Exists   # Exists takes no values
       podAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
         - labelSelector:
             matchExpressions:
             - key: host
               operator: In
               values:
               - Linux
           topologyKey: topology.kubernetes.io/zone
     tolerations:
     - key: "key1"
       operator: "Equal"
       value: "value1"
     - key: "key2"
       operator: "Exists"   # Exists tolerates any value for this key
     workerJobsSchedulingConfig:
       affinity:
         nodeAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 1
             preference:
               matchExpressions:
               - key: arch
                 operator: Exists
         podAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
               matchExpressions:
               - key: host
                 operator: In
                 values:
                 - Linux
             topologyKey: topology.kubernetes.io/zone
     logLevel: Info
     datamoverLogLevel: Info
     csiConfig:
       include:
         - example.provisioner.csi
       exclude:
         - test.provisioner.csi

Job Resource Requirements:

  • Set spec.dataJobResources for data job resource requirements

  • Set spec.metadataJobResources for meta job resource requirements

T4K Configuration:

  • Set spec.logLevel for logLevel. Default is Info

  • Set spec.datamoverLogLevel for datamover jobs logLevel. Default is Info

  • Set spec.tvkInstanceName for T4K instance name

Scheduling Configuration

  • Set spec.nodeSelector for running control-plane pods on a particular set of nodes

  • Set spec.affinity.nodeAffinity for scheduling control-plane pods with node affinity

  • Set spec.affinity.podAffinity for scheduling control-plane pods with pod affinity

  • Set spec.affinity.podAntiAffinity for scheduling control-plane pods with anti affinity

  • Set spec.tolerations to make control-plane pods tolerant of the taints mentioned on nodes

  • Set spec.workerJobsSchedulingConfig.affinity.nodeAffinity for scheduling worker job pods with node affinity

  • Set spec.workerJobsSchedulingConfig.affinity.podAffinity for scheduling worker job pods with pod affinity

  • Set spec.workerJobsSchedulingConfig.affinity.podAntiAffinity for scheduling worker job pods with pod anti affinity

  • Set spec.workerJobsSchedulingConfig.tolerations to make worker job pods tolerant of the taints mentioned on nodes

  • Set spec.workerJobsSchedulingConfig.nodeSelector for running worker job pods on a particular set of nodes

Application Scope

  • Set spec.applicationScope to set the scope of T4K installation. Default is Namespaced

Ingress Configuration

  • Set spec.ingressConfig.ingressClass to set the IngressClass resource

  • Set spec.ingressConfig.host to set the host name to route external HTTP(S) traffic

  • Set spec.ingressConfig.annotations to add extra annotations on the Ingress resource

  • Set spec.ingressConfig.tlsSecretName to set the TLS secret to use the TLS port 443 for external Ingress traffic.

Custom Path Configuration:

  • Set spec.helmValues.urlPath to set a custom UI path.

CSI Configuration:

  • Set spec.csiConfig.include list for including the CSI provisioners in the non-snapshot functionality category

  • Set spec.csiConfig.exclude list for excluding the CSI provisioners from the non-snapshot functionality category.

NodeSelector: Users can specify matching node labels to schedule pods on a particular node or set of nodes. Refer to the official Kubernetes documentation on Node Selection for more information.

Affinity: Affinity is used when a pod should, or should prefer to, schedule on a particular set of nodes. Affinity is of two types: node affinity and pod affinity. Pod anti-affinity is the opposite of pod affinity: a pod should not, or should prefer not to, schedule alongside a particular set of pods. Users can specify both affinity and anti-affinity. Refer to the official Kubernetes documentation on Affinity and anti-affinity for more information.

Toleration: Kubernetes allows taints to be added to nodes so that nodes can repel pods that do not tolerate those taints. To tolerate a taint, a pod must specify a matching toleration. Users can specify tolerations matching the taints in their cluster so that T4K pods can be scheduled onto tainted nodes. Refer to the official Kubernetes documentation on Taints and Tolerations for more information.
