
Upstream Kubernetes

This page describes how to install and license Trilio for Kubernetes (T4K) in an upstream Kubernetes environment. It assumes that you have kubectl and helm installed and configured to work with the desired Kubernetes cluster. T4K supports Helm v3.
There are multiple installation methods:

Helm Quickstart Installation

In this installation method for the upstream operator, a cluster-scoped TVM custom resource named triliovault-manager is created. Perform the following steps to install:
1. Add the repository where the triliovault-operator helm chart is located:
helm repo add triliovault-operator https://charts.k8strilio.net/trilio-stable/k8s-triliovault-operator
2. Install the chart from the added repository.
To install using the default configurations, run:
helm install tvm triliovault-operator/k8s-triliovault-operator
Alternatively, you can set optional parameters by appending --set flags to the default install command above. The following Installation Configuration Options table lists the configuration parameters of the upstream operator install and the preflight check flags, with their default values and usage. For example, an install command with several configuration parameters set:
helm install tvm triliovault-operator/k8s-triliovault-operator --set preflight.enabled=true,preflight.cleanupOnFailure=true,preflight.storageClass=<storage-class-name>

Installation Configuration Options

  • installTVK.enabled: enables the T4K quickstart install feature. Default: true
  • installTVK.applicationScope: scope of the T4K application created. Default: Cluster
  • installTVK.tvkInstanceName: T4K instance name. Default: "". Example: "tvk-instance"
  • installTVK.ingressConfig.host: host of the ingress resource created. Default: ""
  • installTVK.ingressConfig.tlsSecretName: TLS secret containing the ingress certificates. Default: ""
  • installTVK.ingressConfig.annotations: annotations to add to the ingress resource. Default: ""
  • installTVK.ingressConfig.ingressClass: ingress class name for the ingress resource. Default: ""
  • installTVK.ComponentConfiguration.ingressController.enabled: whether the T4K ingress controller should be deployed. Default: true
  • installTVK.ComponentConfiguration.ingressController.service.type: T4K ingress controller service type. Default: "NodePort"
  • preflight.enabled: enables the preflight check for T4K. Default: false
  • preflight.storageClass: name of the storage class to use for preflight checks (required). Default: ""
  • preflight.cleanupOnFailure: clean up the resources on the cluster if preflight checks fail (optional). Default: false
  • preflight.imagePullSecret: name of the secret for authentication while pulling images from a local registry (optional). Default: ""
  • preflight.limits: pod memory and CPU resource limits for the DNS and volume snapshot preflight checks (optional). Default: "". Example: "cpu=600m,memory=256Mi"
  • preflight.localRegistry: name of the local registry from which images are pulled (optional). Default: ""
  • preflight.nodeSelector: node selector labels to schedule pods on specific cluster nodes (optional). Default: "". Example: "key=value"
  • preflight.pvcStorageRequest: PVC storage request for the volume snapshot preflight check (optional). Default: "". Example: "2Gi"
  • preflight.requests: pod memory and CPU resource requests for the DNS and volume snapshot preflight checks (optional). Default: "". Example: "cpu=300m,memory=128Mi"
  • preflight.volumeSnapshotClass: name of the volume snapshot class to use for preflight checks (optional). Default: ""
Check out T4K Integration with Observability Stack for additional options to enable observability stack for T4K.
3. If you are using an external ingress controller, add the following flags to the install command:
--set installTVK.ComponentConfiguration.ingressController.enabled=false --set installTVK.ingressConfig.ingressClass="" --set installTVK.ingressConfig.host="" --set installTVK.ingressConfig.tlsSecretName=""
4. Check the output from the previous command and ensure that the installation was successful.
5. Check the TVM CR configuration using the following command:
kubectl get triliovaultmanagers.triliovault.trilio.io triliovault-manager -o yaml
6. Optionally, if you wish to access the T4K UI via HTTPS, you must create a TLS password and edit the TVM CR configuration. Refer to Access over HTTPS - Prerequisite for more details.
7. Once the operator pod is in a running state, confirm that the T4K pods are up and running:
Check T4K Install
  • Firstly, check that the pods were created:
kubectl get pods
The readout should be similar to this:
NAME READY STATUS RESTARTS AGE
k8s-triliovault-admission-webhook-6ff5f98c8-qwmfc 1/1 Running 0 81s
k8s-triliovault-web-backend-6f66b6b8d5-gxtmz 1/1 Running 0 81s
k8s-triliovault-control-plane-6c464c5d78-ftk6g 1/1 Running 0 81s
k8s-triliovault-exporter-59566f97dd-gs4xc 1/1 Running 0 81s
k8s-triliovault-ingress-nginx-controller-867c764cd5-qhpx6 1/1 Running 0 18s
k8s-triliovault-web-967c8475-m7pc6 1/1 Running 0 81s
k8s-triliovault-operator-66bd7d86d5-dvhzb 1/1 Running 0 6m48s
  • Secondly, check that the ingress controller service is of type NodePort:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8s-triliovault-admission-webhook ClusterIP 10.7.243.24 <none> 443/TCP 129m
k8s-triliovault-ingress-nginx-controller NodePort 10.7.246.193 35.203.155.148 80:30362/TCP,443:32327/TCP 129m
k8s-triliovault-ingress-nginx-controller-admission ClusterIP 10.7.250.31 <none> 443/TCP 129m
k8s-triliovault-web ClusterIP 10.7.254.41 <none> 80/TCP 129m
k8s-triliovault-web-backend ClusterIP 10.7.252.146 <none> 80/TCP 129m
k8s-triliovault-operator-webhook-service ClusterIP 10.7.248.163 <none> 443/TCP 130m
  • Thirdly, check that ingress resources have the host defined by the user:
NAME CLASS HOSTS ADDRESS PORTS AGE
k8s-triliovault k8s-triliovault-default-nginx * 35.203.155.148 80 129m
  • Lastly, check that you can access the T4K UI by entering this address in your browser: https://35.203.155.148
Trilio is now successfully installed on your cluster.
8. If the install was not successful or the T4K pods were not spawned as expected:
Cluster version 1.21 or above:
Preflight jobs are not cleaned up immediately following failure. The job is cleaned up after one hour, so collect any failure logs within one hour of a job failure.
Additionally, a bug in Helm prevents the auto-deletion of resources following failure. Until this bug is fixed, you must clean up the following resources, left behind after the first failed attempt, before running preflight again. Run the following commands to remove the temporary resources:
  • Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
  • Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
  • Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
Cluster version below 1.21:
For cluster versions below 1.21, you must manually clean up failed preflight jobs. To delete a job, run:
kubectl delete job <job-name> -n <helm-release-namespace>
The job name starts with:
<helm-release-name>-preflight-job-preinstall-hook
Additionally, a bug in Helm prevents the auto-deletion of resources following failure. Until this bug is fixed, you must clean up the following resources, left behind after the first failed attempt, before running preflight again. Run the following commands to remove the temporary resources:
  • Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
  • Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
  • Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
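The cleanup resource names above follow a fixed pattern built from the Helm release name and namespace. A small sketch that composes the names before deletion (the release name and namespace below are assumptions; substitute your own):

```shell
# Compose the leftover preflight resource names from the Helm release
# name and namespace. The values below are assumptions.
RELEASE="tvm"
NAMESPACE="default"
SERVICE_ACCOUNT="k8s-triliovault-operator-preflight-service-account"
ROLEBINDING="${RELEASE}-${NAMESPACE}-preflight-rolebinding"
ROLE="${RELEASE}-${NAMESPACE}-preflight-role"
# The actual cleanup would then run:
#   kubectl delete sa "$SERVICE_ACCOUNT" -n "$NAMESPACE"
#   kubectl delete clusterrolebinding "$ROLEBINDING"
#   kubectl delete clusterrole "$ROLE"
printf '%s %s\n' "$ROLEBINDING" "$ROLE"
```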

Manual Installation

To install the operator manually, use the latest helm chart from the following repository:
1. Add the repository where the triliovault-operator helm chart is located:
helm repo add triliovault-operator https://charts.k8strilio.net/trilio-stable/k8s-triliovault-operator
2. Install the chart from the added repository, but with the quick install method flag set to false, so that users can have more control over the installation:
helm install tvm triliovault-operator/k8s-triliovault-operator --set installTVK.enabled=false
Note that in step 2, you can also set additional parameters as listed in the Installation Configuration Options table above.
3. Copy the sample TrilioVaultManager CR contents below and paste them into a new YAML file.
apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  labels:
    triliovault: k8s
  name: tvk
spec:
  trilioVaultAppVersion: latest
  applicationScope: Cluster
  # User can configure the tvk instance name
  tvkInstanceName: tvk-instance
  # User can configure the ingress hosts, annotations and TLS secret through the ingressConfig section
  ingressConfig:
    host: ""
    tlsSecretName: "secret-name"
  # T4K components configuration; currently supports control-plane, web, exporter,
  # web-backend, ingress-controller, admission-webhook.
  # User can configure resources for all components and can configure service type
  # and host for the ingress-controller
  componentConfiguration:
    web-backend:
      resources:
        requests:
          memory: "400Mi"
          cpu: "200m"
        limits:
          memory: "2584Mi"
          cpu: "1000m"
    ingress-controller:
      enabled: true
      service:
        type: LoadBalancer
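If your cluster has no external load balancer provider, the ingress-controller service type in the CR above can instead be set to NodePort (which is the quickstart installation default). A minimal fragment of the same spec:

```yaml
# Fragment of the TrilioVaultManager spec above; NodePort matches the
# quickstart installation default for the ingress controller service.
componentConfiguration:
  ingress-controller:
    enabled: true
    service:
      type: NodePort
```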
4. Optionally, if you wish to access the T4K UI via HTTPS, you must create a TLS password for use in the next step. Refer to Access over HTTPS - Prerequisite for more details.
5. Customize the T4K resources configuration in the YAML file and then save it.
If using an external ingress controller, set these parameters in the YAML under componentConfiguration:
ingress-controller:
  enabled: false
6. Now apply the CR YAML file using the command:
kubectl create -f TVM.yaml
7. Once the operator pod is in a running state, confirm that the T4K pods are up.
8. If the install was not successful or the T4K pods were not spawned as expected:
Cluster version 1.21 or above:
Preflight jobs are not cleaned up immediately following failure. The job is cleaned up after one hour, so collect any failure logs within one hour of a job failure.
Additionally, a bug in Helm prevents the auto-deletion of resources following failure. Until this bug is fixed, you must clean up the following resources, left behind after the first failed attempt, before running preflight again. Run the following commands to remove the temporary resources:
  • Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
  • Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
  • Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
Cluster version below 1.21:
For cluster versions below 1.21, you must manually clean up failed preflight jobs. To delete a job, run:
kubectl delete job <job-name> -n <helm-release-namespace>
The job name starts with:
<helm-release-name>-preflight-job-preinstall-hook
Additionally, a bug in Helm prevents the auto-deletion of resources following failure. Until this bug is fixed, you must clean up the following resources, left behind after the first failed attempt, before running preflight again. Run the following commands to remove the temporary resources:
  • Cleanup Service Account:
kubectl delete sa k8s-triliovault-operator-preflight-service-account -n <helm-release-namespace>
  • Cleanup Cluster Role Binding:
kubectl delete clusterrolebinding <helm-release-name>-<helm-release-namespace>-preflight-rolebinding
  • Cleanup Cluster Role:
kubectl delete clusterrole <helm-release-name>-<helm-release-namespace>-preflight-role
9. Finally, check the T4K install:
  • Firstly, check that the pods were created:
kubectl get pods
The readout should be similar to this:
NAME READY STATUS RESTARTS AGE
k8s-triliovault-admission-webhook-6ff5f98c8-qwmfc 1/1 Running 0 81s
k8s-triliovault-web-backend-6f66b6b8d5-gxtmz 1/1 Running 0 81s
k8s-triliovault-control-plane-6c464c5d78-ftk6g 1/1 Running 0 81s
k8s-triliovault-exporter-59566f97dd-gs4xc 1/1 Running 0 81s
k8s-triliovault-ingress-nginx-controller-867c764cd5-qhpx6 1/1 Running 0 18s
k8s-triliovault-web-967c8475-m7pc6 1/1 Running 0 81s
k8s-triliovault-operator-66bd7d86d5-dvhzb 1/1 Running 0 6m48s
  • Secondly, check that the ingress controller service is of type LoadBalancer.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8s-triliovault-admission-webhook ClusterIP 10.7.243.24 <none> 443/TCP 129m
k8s-triliovault-ingress-nginx-controller LoadBalancer 10.7.246.193 35.203.155.148 80:30362/TCP,443:32327/TCP 129m
k8s-triliovault-ingress-nginx-controller-admission ClusterIP 10.7.250.31 <none> 443/TCP 129m
k8s-triliovault-web ClusterIP 10.7.254.41 <none> 80/TCP 129m
k8s-triliovault-web-backend ClusterIP 10.7.252.146 <none> 80/TCP 129m
k8s-triliovault-operator-webhook-service ClusterIP 10.7.248.163 <none> 443/TCP 130m
  • Thirdly, check that ingress resources have the host defined by the user:
NAME CLASS HOSTS ADDRESS PORTS AGE
k8s-triliovault k8s-triliovault-default-nginx * 35.203.155.148 80 129m
  • Lastly, check that you can access the T4K UI by entering this address in your browser: https://35.203.155.148
Trilio is now successfully installed on your cluster.

Proxy Enabled Environments

As a prerequisite, configure a proxy server (for example, Squid).
To install Trilio for Kubernetes in a proxy-enabled environment, install the operator (step 2 above) with the following proxy settings:
  • HTTP_PROXY: proxy address to use when initiating HTTP connections
  • HTTPS_PROXY: proxy address to use when initiating HTTPS connections
  • NO_PROXY: network addresses, network address ranges, and domains to exclude from the proxy when initiating connections
Note NO_PROXY must be in uppercase to use network range (CIDR) notation.
  • proxySettings.PROXY_ENABLED=true
  • proxySettings.HTTP_PROXY=http://<uname>:<password>@<IP>:<Port>
  • proxySettings.HTTPS_PROXY=https://<uname>:<password>@<IP>:<Port>
  • proxySettings.NO_PROXY="<according to user>"
  • proxySettings.CA_BUNDLE_CONFIGMAP="<configmap>"
    • For HTTPS proxy, create CA Bundle Proxy configMap in Install Namespace
    • Proxy CA certificate file key should be ca-bundle.crt
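Because helm --set treats commas as value separators, literal commas (and, in this document's install example, slashes) inside the NO_PROXY value must be backslash-escaped. A small sketch of that escaping (the addresses are illustrative):

```shell
# Escape commas and slashes in a NO_PROXY value so it can be passed
# via helm --set. The addresses below are illustrative.
NO_PROXY_RAW="localhost,127.0.0.1,10.239.112.0/20"
ESCAPED=$(printf '%s' "$NO_PROXY_RAW" | sed -e 's#,#\\,#g' -e 's#/#\\/#g')
printf '%s\n' "$ESCAPED"
```

This produces the escaped form used in the install command in this section.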
helm install tvm triliovault-operator/k8s-triliovault-operator \
--set proxySettings.PROXY_ENABLED=true \
--set proxySettings.NO_PROXY="localhost\,127.0.0.1\,10.239.112.0\/20\,10.240.0.0\/14" \
--set proxySettings.HTTP_PROXY=http://<uname>:<password>@<IP>:<Port> \
--set proxySettings.HTTPS_PROXY=https://<uname>:<password>@<IP>:<Port> \
--set proxySettings.CA_BUNDLE_CONFIGMAP="<proxy-configmap>"
After the operator is created by specifying proxy settings, the TVM will pick up these settings and leverage them directly for operations. No other configuration is required.
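For an HTTPS proxy, the ConfigMap referenced by proxySettings.CA_BUNDLE_CONFIGMAP might look like the following sketch. The ConfigMap name and namespace here are placeholders, but the data key must be ca-bundle.crt as noted above:

```yaml
# Hypothetical CA bundle ConfigMap for an HTTPS proxy. Create it in the
# namespace where the operator is installed; the key must be ca-bundle.crt.
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-ca-bundle
  namespace: <install-namespace>
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <proxy CA certificate contents>
    -----END CERTIFICATE-----
```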
After installation, the next step is licensing; see licensing-tvk.