Trilio Manager settings
The Trilio Manager page is accessible from Settings > Trilio Manager in the TrilioVault for Kubernetes (TVK) UI. It provides a centralized interface to configure the behaviour of your TVK installation. Changes made on this page update the TrilioVaultManager Custom Resource on the cluster.
1. TVK Basic Configuration
These top-level fields control the identity and global scheduling behaviour of the TVK installation.

1.1 Instance Name
Field
tvkConfig.name
CR Path
spec.tvkInstanceName
Type
String
Required
Yes
Validation
Must follow Kubernetes naming conventions
Description: A display name for this TVK installation. This name is shown in the TVK UI header and is useful when managing multiple TVK instances across clusters or in multi-cluster environments.
Where it is used:
Displayed in the TVK web UI as the installation identifier.
Used in the Continuous Restore service to distinguish source/destination sites.
Must be unique across TrilioVaultManager instances (enforced by the validating webhook).
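Putting this together, a minimal sketch of the relevant CR fragment (the apiVersion, metadata, and the instance name value are illustrative; only the spec.tvkInstanceName path is taken from this page):

```yaml
apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  name: triliovault-manager
  namespace: tvk
spec:
  # Shown in the TVK UI header; must be unique across TrilioVaultManager instances
  tvkInstanceName: prod-cluster-east
```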
1.2 Schedule Policy Timezone
Field
tvkConfig.schedulePolicyTimezone
CR Path
spec.helmValues.schedulePolicyTimezone (propagated to TVK config)
Type
String (IANA timezone, e.g. Asia/Kolkata, America/New_York)
Required
No
Description: The timezone used for all scheduled Backups and Snapshots. When a schedule policy defines a cron expression (e.g. "every day at 2 AM"), this timezone determines what "2 AM" means.
Where it is used:
The BackupPlan and ClusterBackupPlan controllers read this value to calculate the next scheduled backup/snapshot trigger time.
The retention policy expiry calculator uses this timezone for time comparisons.
Affects all BackupPlans that use schedule policies.
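As a sketch, the timezone is set under helmValues (the timezone value is an example; the field path follows the CR Path above):

```yaml
spec:
  helmValues:
    # IANA timezone used when evaluating schedule-policy cron expressions
    schedulePolicyTimezone: America/New_York
```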
1.3 Pause Scheduled Backups and Snapshots
Field
pauseSchedule
CR Path
spec.pauseSchedule
Type
Boolean (toggle)
Default
false
Description: A global switch that pauses all scheduled backups and snapshots across every BackupPlan and ClusterBackupPlan in the cluster.
Where it is used:
The BackupPlan controller checks this flag on every reconciliation. When true, it skips triggering any scheduled backups/snapshots.
The ClusterBackupPlan controller applies the same logic at cluster scope.
When enabled, pauseSchedule is propagated to each BackupPlan's status (status.pauseSchedule).
Immutable backups/snapshots are disabled while the schedule is paused (isImmutableBackupDisabled = true), since immutability requires active retention and schedule policies.
The webhook validates that paused schedules cannot be used to create immutable backups.
Important: This does not prevent manual (on-demand) backups. Only cron-scheduled backups and snapshots are paused.
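For example, pausing all scheduled backups before a maintenance window is a one-field change on the CR (fragment only; surrounding CR fields omitted):

```yaml
spec:
  # Pauses cron-scheduled backups/snapshots cluster-wide;
  # on-demand (manual) backups are unaffected
  pauseSchedule: true
```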
2. Log Configuration
Controls logging behaviour for all TVK components and data mover pods.

2.1 Log Level
Field
logConfig.logLevel
CR Path
spec.logConfig.logLevel
Type
Enum
Allowed Values
Panic, Fatal, Error, Warn, Info, Debug, Trace
Default
Info
Description: Sets the logging level for TVK control-plane components (controllers, analyzers, webhooks, web-backend).
Where it is used:
All TVK controller pods read this value at startup to configure their loggers.
Changing the level takes effect on the next pod restart or configuration reload.
Panic / Fatal
Only log unrecoverable errors (not recommended for normal use)
Error
Log errors only
Warn
Log warnings and errors
Info
Recommended for production. Logs operational messages
Debug
Verbose output for troubleshooting
Trace
Extremely verbose; includes low-level flow details
2.2 Data Mover Log Level
Field
logConfig.datamoverLogLevel
CR Path
spec.logConfig.datamoverLogLevel
Type
Enum
Allowed Values
Panic, Fatal, Error, Warn, Info, Debug, Trace
Default
Info
Description: Sets the logging level specifically for the data mover component. The data mover handles the actual data upload and restore operations (reading from PVCs, writing to target storage, and vice versa).
Where it is used:
The data mover (datamover) pods read this value to configure their logger.
Separate from the main log level because data operations can generate a high volume of logs at Debug/Trace levels.
2.3 Buffer Size
Field
logConfig.bufferSize
CR Path
spec.logConfig.bufferSize
Type
Integer
Range
2,000 – 50,000
Default
20,000
Description: The size of the in-memory log buffer (in bytes) used by TVK components before flushing logs to the output destination (stdout/file).
Where it is used:
A larger buffer reduces the frequency of I/O flush operations, which can improve performance under high log volume.
A smaller buffer ensures logs are written more promptly, reducing the risk of log loss on crash.
2.4 Flush Duration (Milliseconds)
Field
logConfig.flushDurationMilliSeconds
CR Path
spec.logConfig.flushDurationMilliSeconds
Type
Integer
Range
100 – 5,000 ms
Default
300 ms
Description: The interval (in milliseconds) at which the log buffer is forcefully flushed to the output, regardless of whether the buffer is full.
Where it is used:
Lower values (e.g. 100ms) ensure near-real-time log output, useful during debugging.
Higher values (e.g. 5000ms) reduce I/O overhead in production environments.
2.5 Max Log Files
Field
logConfig.maxLogFiles
CR Path
spec.logConfig.maxLogFiles
Type
Integer
Range
1 – 10
Default
5
Description: The maximum number of rotated log files to retain. When the current log file reaches the maximum size, it is rotated and a new file is created. The oldest rotated file is deleted once this limit is exceeded.
Where it is used:
Controls disk space usage for log files on TVK pods.
Only applicable when dual output logging is enabled (logs go to both stdout and files).
2.6 Max Log File Size (MB)
Field
logConfig.maxLogFileSizeMB
CR Path
spec.logConfig.maxLogFileSizeMB
Type
Integer
Range
1 – 10 MB
Default
10 MB
Description: The maximum size of each individual log file before rotation occurs.
Where it is used:
When a log file reaches this size, it is rotated (renamed) and a new log file is started.
Works in conjunction with maxLogFiles to limit total disk usage to maxLogFiles × maxLogFileSizeMB.
2.7 Enable Dual Output Log
Field
logConfig.enableDualOutputLog
CR Path
spec.logConfig.enableDualOutputLog
Type
Boolean (toggle)
Default
false
Description: When enabled, logs are written to both stdout (for kubectl logs) and to log files on disk simultaneously.
Where it is used:
Useful when you want persistent log files for post-mortem analysis while also having real-time log streaming via kubectl logs.
When disabled, logs go to stdout only (Kubernetes standard practice).
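Pulling the settings from 2.1–2.7 together, a sketch of a complete logConfig block (all values are examples within the documented ranges):

```yaml
spec:
  logConfig:
    logLevel: Info                   # control-plane components (2.1)
    datamoverLogLevel: Debug         # data mover pods only (2.2)
    bufferSize: 20000                # bytes; allowed range 2,000-50,000 (2.3)
    flushDurationMilliSeconds: 300   # allowed range 100-5,000 ms (2.4)
    maxLogFiles: 5                   # rotated files kept; 1-10 (2.5)
    maxLogFileSizeMB: 10             # per-file size before rotation; 1-10 MB (2.6)
    enableDualOutputLog: true        # stdout + on-disk files (2.7)
```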
3. Resource Configuration
These sections define the Kubernetes resource requests and limits for various TVK job types. Each section has Requests (minimum guaranteed resources) and Limits (maximum allowed resources), with Memory (Mi/Gi) and CPU (millicores, m) fields.

Requests — The minimum compute resources guaranteed to the container. If omitted, defaults to Limits.
Limits — The maximum compute resources the container can consume. The container is throttled (CPU) or killed (Memory) if it exceeds this.
3.1 Meta Data Job Resources
Field
resources.metadataJobResources
CR Path
spec.metadataJobResources
Description: Resource requests and limits for all metadata processing jobs. These jobs handle metadata operations during backup, snapshot, and restore workflows — for example, reading Kubernetes resource manifests, generating metadata YAML/JSON, uploading metadata to the target, and validating restore metadata.
Memory (minimum request / minimum limit)
10 Mi / 1024 Mi
CPU (minimum request / minimum limit)
10m / 500m
Reference: For the meaning of Requests and Limits and their validation rules, see 3.1.1 and 3.1.2.
Where it is used:
Applied to pods running: metadata snapshot, metadata upload, metadata validation, and metadata restore operations.
If set too low, metadata jobs may be OOMKilled or throttled, causing backup/restore failures.
3.1.1 spec.metadataJobResources.request
CR Path
spec.metadataJobResources.request
Type
Object (memory, cpu)
Description: The minimum amount of compute resources that Kubernetes guarantees to each metadata job pod. The Kubernetes scheduler uses these values to find a node with sufficient available capacity before placing the pod. If the node cannot satisfy the request, the pod will remain Pending.
Memory
spec.metadataJobResources.request.memory
String (quantity)
Minimum 10 Mi
Minimum memory guaranteed to each metadata job pod. Value must be a valid Kubernetes quantity (e.g. 128Mi, 1Gi). Must be ≤ Limit memory.
CPU
spec.metadataJobResources.request.cpu
String (quantity)
Minimum 10m
Minimum CPU guaranteed to each metadata job pod. Value must be a valid Kubernetes quantity in millicores (e.g. 50m, 200m). Must be ≤ Limit CPU.
Reference: Kubernetes ResourceRequirements.requests; TrilioVaultManager CRD TrilioVaultManagerSpec — spec.metadataJobResources.
UI Validation Rules for Request:
Memory must be ≥ 10 Mi.
CPU must be ≥ 10m.
Request Memory must be ≤ Limit Memory (request cannot exceed the limit).
Request CPU must be ≤ Limit CPU (request cannot exceed the limit).
Values must be valid Kubernetes resource quantity strings.
3.1.2 spec.metadataJobResources.limit
CR Path
spec.metadataJobResources.limit
Type
Object (memory, cpu)
Description: The maximum amount of compute resources a metadata job pod is allowed to consume. If the pod tries to use more memory than the limit, the container is OOMKilled (restarted). If the pod tries to use more CPU than the limit, it is throttled (slowed down) but not killed.
Memory
spec.metadataJobResources.limit.memory
String (quantity)
Minimum 1024 Mi
Maximum memory the metadata job pod can use before being OOMKilled. Value must be a valid Kubernetes quantity (e.g. 1024Mi, 2Gi). Must be ≥ Request memory.
CPU
spec.metadataJobResources.limit.cpu
String (quantity)
Minimum 500m
Maximum CPU the metadata job pod can consume before being throttled. Value must be a valid Kubernetes quantity in millicores (e.g. 500m, 1000m). Must be ≥ Request CPU.
Reference: Kubernetes ResourceRequirements.limits; TrilioVaultManager CRD TrilioVaultManagerSpec — spec.metadataJobResources.
UI Validation Rules for Limit:
Memory must be ≥ 1024 Mi.
CPU must be ≥ 500m.
Limit Memory must be ≥ Request Memory (limit cannot be set below the request).
Limit CPU must be ≥ Request CPU (limit cannot be set below the request).
Values must be valid Kubernetes resource quantity strings.
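A sketch of a metadataJobResources block that satisfies the validation rules above (the specific quantities are examples):

```yaml
spec:
  metadataJobResources:
    request:
      memory: 128Mi   # >= 10Mi and <= limit.memory
      cpu: 50m        # >= 10m and <= limit.cpu
    limit:
      memory: 1024Mi  # >= 1024Mi
      cpu: 500m       # >= 500m
```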
3.2 Data Job Resources
Field
resources.dataJobResources
CR Path
spec.dataJobResources
Description: Resource requests and limits for all data processing jobs. These jobs handle the actual data movement — reading data from PVCs, compressing, deduplicating, and uploading to the target during backup, or downloading and writing data during restore.
Memory (minimum request / minimum limit)
800 Mi / 5120 Mi (5 Gi)
CPU (minimum request / minimum limit)
100m / 1500m
Reference: For the meaning of Requests and Limits, see 3.1.1 and 3.1.2.
Where it is used:
Applied to data mover pods that perform data upload and data restore operations.
Data jobs are the most resource-intensive TVK workloads. Undersizing them leads to OOMKill during large data transfers.
For large PVCs (tens of GBs), increasing these limits can significantly improve backup/restore throughput.
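For instance, a dataJobResources block sized up for large PVC transfers might look like this (quantities are illustrative; the same request/limit shape as 3.1 is assumed):

```yaml
spec:
  dataJobResources:
    request:
      memory: 800Mi
      cpu: 100m
    limit:
      memory: 8Gi     # raised above the 5120Mi floor for large transfers
      cpu: 2000m
```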
3.3 Target Browser Resources
Field
resources.targetBrowserResources
CR Path
spec.targetBrowserResources
Description: Resource requests and limits for target browser deployments. The target browser is a long-running deployment (one per Target) that provides read access to the backup target storage, enabling the UI and API to browse available backups and snapshots on the target.
Memory (minimum request / minimum limit)
64 Mi / 256 Mi
CPU
No minimum enforced (request or limit)
Reference: For the meaning of Requests and Limits, see 3.1.1 and 3.1.2.
Where it is used:
Applied to the target browser deployment pod for each Target CR.
These pods are relatively lightweight but must stay running to serve target browsing requests.
4. Ingress Configuration
Configures ingress routing for the TVK web UI.

4.1 Ingress Class
Field
ingressConfig.ingressClass
CR Path
spec.ingress.ingressClass
Type
String (dropdown, populated from cluster IngressClasses)
Description: The Kubernetes IngressClass to use for the TVK UI ingress resource. This determines which ingress controller handles traffic to the TVK dashboard.
Where it is used:
Set as spec.ingressClassName on the TVK Ingress resource.
Must match an IngressClass available on the cluster.
4.2 Host
Field
ingressConfig.host
CR Path
spec.ingress.host
Type
String
Description: The hostname/FQDN for accessing the TVK web UI (e.g. tvk.example.com).
Where it is used:
Set as the host rule in the TVK Ingress resource.
DNS must resolve this hostname to the ingress controller's external IP/load balancer.
4.3 TLS Secret Name
Field
ingressConfig.tlsSecretName
CR Path
spec.ingress.tlsSecretName
Type
String (dropdown, populated from TLS Secrets in TVK namespace)
Description: The name of the Kubernetes TLS Secret containing the certificate and private key for HTTPS termination.
Where it is used:
Referenced in the tls section of the TVK Ingress resource.
The secret must be of type kubernetes.io/tls and exist in the TVK install namespace.
4.4 Annotations
Field
ingressConfig.annotations
CR Path
spec.ingress.annotations
Type
Key-value pairs
Description: Custom annotations to apply to the TVK Ingress resource. Useful for configuring ingress controller-specific behaviour.
Where it is used:
Passed as metadata.annotations on the Ingress resource.
Common examples: nginx.ingress.kubernetes.io/ssl-redirect: "true", cert-manager.io/cluster-issuer: "letsencrypt".
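Taken together, sections 4.1–4.4 map to an ingress block along these lines (the hostname, class, and secret names are examples):

```yaml
spec:
  ingress:
    ingressClass: nginx
    host: tvk.example.com
    tlsSecretName: tvk-ui-tls    # kubernetes.io/tls Secret in the TVK namespace
    annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      cert-manager.io/cluster-issuer: "letsencrypt"
```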
5. Component Configuration
Configures resource requests/limits for individual TVK deployment components. Each component follows the same Memory + CPU requests/limits pattern.
Memory
≥ 512 Mi
CPU
≥ 200m

5.1 Control Plane
Field
componentConfigurations.control-plane.resources
CR Path
spec.componentConfiguration.control-plane
Description: Resources for the control-plane deployment. The control plane now also includes the admission webhook alongside the TVK controllers (backup, restore, snapshot, analyzer, etc.) responsible for reconciling all TVK Custom Resources.
Where it is used:
Applied to the k8s-triliovault-control-plane deployment.
This is the core of TVK: it handles all backup/restore orchestration, admission webhooks, and resource validation/mutation. Insufficient resources here can slow down reconciliation or impact validation across all TVK operations.
5.2 Web Backend
Field
componentConfigurations.web-backend.resources
CR Path
spec.componentConfiguration.web-backend
Additional
livenessProbeEnable (boolean)
Description: Resources for the web-backend deployment. The web backend serves the REST API that the TVK UI and CLI interact with.
Where it is used:
Applied to the k8s-triliovault-web-backend deployment.
Handles API requests for listing backups, restores, and targets, creating backup plans, etc.
livenessProbeEnable controls whether the Kubernetes liveness probe is active on this deployment.
5.3 Web (Frontend)
Field
componentConfigurations.web.resources
CR Path
spec.componentConfiguration.web
Description: Resources for the web (frontend) deployment. This serves the TVK dashboard UI (React application).
Where it is used:
Applied to the k8s-triliovault-web deployment.
Typically lightweight since it serves static frontend assets.
5.4 Exporter
Field
componentConfigurations.exporter
CR Path
spec.componentConfiguration.exporter
enabled
Boolean
Enable/disable the metrics exporter component
resources
Requests/Limits
Resource allocation for the exporter pod
serviceMonitor.enabled
Boolean
Enable/disable Prometheus ServiceMonitor creation
Description: The exporter component exposes TVK metrics (backup counts, success/failure rates, durations, etc.) in Prometheus format.
Where it is used:
When enabled, deploys the k8s-triliovault-exporter pod that scrapes TVK metrics.
When serviceMonitor.enabled is true, a Prometheus ServiceMonitor CR is created so Prometheus can auto-discover the exporter.
Useful for monitoring dashboards (Grafana) and alerting.
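A sketch of enabling the exporter with Prometheus auto-discovery (the resources shape is assumed to follow the standard Kubernetes requests/limits convention; quantities are examples):

```yaml
spec:
  componentConfiguration:
    exporter:
      enabled: true
      serviceMonitor:
        enabled: true   # creates a ServiceMonitor CR for Prometheus discovery
      resources:
        requests:
          memory: 512Mi
          cpu: 200m
        limits:
          memory: 1Gi
          cpu: 500m
```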
5.5 Ingress Controller
Field
componentConfigurations.ingress-controller
CR Path
spec.componentConfiguration.ingress-controller
enabled
Boolean
Enable/disable TVK's built-in ingress controller
service.type
String
Kubernetes Service type (LoadBalancer, NodePort, ClusterIP)
resources
Requests/Limits
Resource allocation
Description: TVK ships with an optional built-in ingress controller. If your cluster already has an ingress controller (e.g. NGINX, Traefik), you should leave this disabled and use your existing one.
Where it is used:
When enabled, deploys an ingress controller specifically for TVK.
The service.type determines how the ingress controller is exposed externally.
Not available on OpenShift (OpenShift uses Routes instead of Ingress).
6. CSI Configuration
Configures which Container Storage Interface (CSI) drivers fall into the "non-snapshot" category.

6.1 Include Provisioners
Field
csiConfig.include
CR Path
spec.csiConfig.include
Type
List of strings (CSI driver names)
Description: A list of CSI driver names to be explicitly included in the non-snapshot functionality category. For drivers in this list, TVK skips backing up volume data and backs up only the Persistent Volume Claim (PVC) and Persistent Volume (PV) configuration.
Where it is used:
During backup, TVK checks if a PVC's CSI driver is in this list. If yes, it uses the file-based approach (Persistent Volume Claim & Persistent Volume configuration) instead of attempting a VolumeSnapshot.
Useful for CSI drivers that are fully functional for provisioning but do not support the Kubernetes VolumeSnapshot API.
6.2 Exclude Provisioners
Field
csiConfig.exclude
CR Path
spec.csiConfig.exclude
Type
List of strings (CSI driver names)
Description: A list of CSI driver names to be explicitly excluded from the non-snapshot functionality category. Even if TVK auto-detects them as non-snapshot capable, they will be treated as snapshot-capable.
Where it is used:
Overrides TVK's auto-detection for specific CSI drivers.
Useful when a CSI driver supports snapshots but TVK incorrectly classifies it as non-snapshot capable.
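A sketch combining both lists (the driver names are placeholders for whatever provisioners run in your cluster):

```yaml
spec:
  csiConfig:
    include:
      - efs.csi.example.com   # treat as non-snapshot: back up PVC/PV config only
    exclude:
      - ebs.csi.example.com   # force snapshot-capable treatment despite auto-detection
```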
7. Topology Spread Constraints
Field
workerJobsSchedulingConfig.topologySpreadConstraints
CR Path
spec.workerJobsSchedulingConfig.topologySpreadConstraints
Type
YAML (list of Kubernetes TopologySpreadConstraint objects)
Description: Defines Kubernetes Topology Spread Constraints for TVK worker job pods and data mover pods. This controls how these pods are distributed across your cluster's topology domains (nodes, zones, regions).

Use topology spread constraints to improve availability and resource utilization. For example, using topologyKey: kubernetes.io/hostname ensures pods are spread across all nodes before placing multiple on the same node.
Where it is used:
Applied to the PodSpec of all TVK worker jobs (metadata jobs, data jobs, hook executors, etc.) and data mover pods.
Ensures backup/restore workloads are evenly distributed across failure domains, improving resilience.
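For example, spreading worker pods across nodes can be sketched as follows (the labelSelector is illustrative; match it to the labels TVK applies to its worker pods):

```yaml
spec:
  workerJobsSchedulingConfig:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname   # spread across nodes
        whenUnsatisfiable: ScheduleAnyway     # prefer spreading, but do not block scheduling
        labelSelector:
          matchLabels:
            app.kubernetes.io/part-of: k8s-triliovault
```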
8. Actions
Save
Validates all fields and updates the TrilioVaultManager CR on the cluster. The TVK operator reconciles the changes and rolls out updated configurations to affected components.
Reset to Default
Resets all fields to the factory default values defined in the TVK Helm chart. This fetches the default configuration from the /tvm-default-settings API endpoint.
Cancel
Discards unsaved changes and reverts to the last saved configuration.