Configuring the UI

Learn how to configure and access the Trilio for Kubernetes (T4K) Management Console UI.

Upstream Environments

There are three steps that a user must perform to enable UI access to the cluster:

  1. Configure Trilio Management Console access (If not done already during install).

  2. Create a DNS record for the FQDN.

  3. Launch the Trilio Management Console UI over HTTP/HTTPS.

Step 1: Configure Management Console Access

The UI configuration for Trilio's Management Console is controlled by the Trilio Manager (TVM) Custom Resource (CR).

  1. Configure the hostname (optional) for the Management Console UI.

  2. Specify whether to use the Trilio provided ingress controller or an existing ingress controller available in the cluster. If using the Trilio provided one, specify whether access is set up over NodePort or LoadBalancer.

The following Trilio Manager YAML shows configuration settings for the UI components.

apiVersion: triliovault.trilio.io/v1
kind: TrilioVaultManager
metadata:
  labels:
    triliovault: k8s
  name: tvk
spec:
  trilioVaultAppVersion: 3.0.0
  applicationScope: Cluster
  # T4K components configuration, currently supports control-plane, web, exporter, web-backend, ingress-controller, admission-webhook.
  # User can configure resources for all components and can configure service type and host for the ingress-controller
  ingressConfig:
    ### ingressClass and annotations should be uncommented only when the ingress controller is set to enabled: false below.
    #ingressClass: "nginx"
    #annotations:
    #  "key": "value"
    host: ""
    #tlsSecretName: ssl-certs
  componentConfiguration:
    web-backend:
      # resources shown for web-backend as an example; configurable per component
      resources:
        requests:
          memory: "400Mi"
          cpu: "200m"
        limits:
          memory: "2584Mi"
          cpu: "1000m"
    ingress-controller:
      enabled: true
      service:
        type: NodePort
  1. Leverage default Ingress Controller - The ingress-controller section specifies that the Trilio provided ingress controller will be deployed (enabled: true); see point 4 below for using an existing ingress controller. Access to the service is set through type: NodePort.

    • Based on the type provided in the TVM CR, the Type on the ingress service resource (k8s-triliovault-ingress-nginx-controller) is set accordingly.

  2. Configure Hostname - The ingressConfig section sets the hostname for the management console through the host field.

    • For test and development environments, the host field can be left blank to access directly over the IP address. In this case, the value for host in the ingress resource will be set to '*'.

  3. HTTPS Access - For HTTPS access, the tlsSecretName secret containing the TLS information should be provided. More information on HTTPS access is provided below.

  4. Leverage existing Ingress Controller - If using a pre-existing ingress controller, then the ingressClass and annotations parameters should be used.
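For instance, to leverage a pre-existing NGINX ingress controller, the relevant portion of the TVM CR might look like the following sketch (the annotation key/value is a placeholder, and the section names follow the sample YAML above):

```yaml
  ingressConfig:
    ingressClass: "nginx"
    annotations:
      "key": "value"
    host: ""
  componentConfiguration:
    ingress-controller:
      # Disable the Trilio provided ingress controller so the existing one is used.
      enabled: false
```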

Set up Access via Port Forwarding

The console can also be accessed by forwarding the traffic for the ingress service if NodePort or LoadBalancer is not an option. This is only meant for evaluation purposes and is not recommended for production deployments.

kubectl port-forward --address 0.0.0.0 svc/k8s-triliovault-ingress-nginx-controller 80:80 &

The above command starts forwarding T4K management console traffic to the localhost IP via port 80.

Step 2: Create a DNS Record

After the TVM has been configured with ingress and ingress controller information, users need to create a DNS record for the host that was set within the resource.

Note: This step can be skipped if the host value is set to ""

In this case the console can be reached in a browser directly by:

  1. LoadBalancer: using the IP address of the load balancer

  2. NodePort: using the IP address of a worker node with the port number provided

  3. Port forwarding: using localhost with the <port number> used when port forwarding the ingress resource

DNS enabled environments

  1. Create an A-Record in Route53 (AWS), Cloud DNS (GCP), or any other DNS manager of your choice

  2. Map the host value specified above to the {PUBLIC_NODE_IP} for NodePort or the {LB_IP} for LoadBalancer.

Non-DNS or Local environments

For local environments perform the following on your local machine:

  • Edit the hosts file: sudo vi /etc/hosts

  • Create an entry in the /etc/hosts file for the IPs, so your file looks like this:

...	localhost


# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
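For example, assuming a hypothetical FQDN tvk.example.com for the console host and a node (or load balancer) IP of 10.0.0.12, the added entry would look like:

```
10.0.0.12	tvk.example.com
```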

If port forwarding the ingress service, use the <FQDN from step 2> to access the console.

Step 3: Launch the T4K Management Console

Users can access the console via HTTP or HTTPS:

Access over HTTP - Launch via LoadBalancer

Ports do not need to be specified for LoadBalancer based access

  • Via FQDN - if host value in TVM CR is set

    • http://<FQDN>/ - goes to port 80 (default)

  • Via External IP - if host value in TVM CR is not set

    • http://<LoadBalancer IP>/

Access over HTTP - Launch via NodePort

If using NodePort to access the management console, capture the port number from the service resource (set by either the Trilio ingress controller or the user specified ingress controller) and use that port number in the management console URL

  • Via FQDN - if host value in TVM CR is set

    • For http (port 80) -> http://<FQDN>:<NodePort>/

  • Via External IP - if host value in TVM CR is not set

    • http://<Node IP>:<NodePort>/
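If needed, the NodePort assigned to HTTP traffic can be read from the ingress service itself; the following command is a convenience sketch (the jsonpath filter and the <install-namespace> placeholder are illustrative):

```shell
# Print the NodePort mapped to service port 80 on the Trilio ingress service.
kubectl get svc k8s-triliovault-ingress-nginx-controller -n <install-namespace> \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
```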

Access over HTTPS - Prerequisite

As mentioned in the previous sections, console access over HTTPS requires TLS certificates to be provided as part of the TVM CR.

k8s-triliovault-ingress-tls-certs is a default certificate generated during T4K deployment. However, users should provide a correct secret specific to their environment with TLS information as explained below.

To generate a secret and provide it as part of the ingress resource:

  1. Create a new secret ssl-certs using custom SSL certificate tls.crt and key tls.key in the tvk-namespace namespace where T4K is deployed

    kubectl create secret tls ssl-certs --cert tls.crt --key tls.key -n tvk-namespace

  2. Edit the TVM CR and set the field for tlsSecretName
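After the secret exists, the ingressConfig portion of the TVM CR would reference it; for example (the hostname is a placeholder):

```yaml
  ingressConfig:
    host: "<FQDN>"
    tlsSecretName: ssl-certs
```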

Access over HTTPS - Launch via LoadBalancer

Ports do not need to be specified for LoadBalancers; the URL goes to port 443 (default).

Access over HTTPS - Launch via NodePort

For https (port 443) -> https://<FQDN>:<NodePort>/

After accessing the above URL in your browser, the UI Login authentication page is displayed. For more details about the UI Login, refer to UI Login.

Note: If you face issues accessing the T4K UI after the above setup, check the firewall rules on the Kubernetes cluster nodes. Here is an example of checking firewall rules for a Google GKE cluster.

OpenShift Environments

For OpenShift environments, the install from OperatorHub automatically configures management console access through routes, as well as authentication to the management console. Any proxy settings are automatically configured by reading these settings from the OpenShift cluster.

Ingress Controller

T4K uses the default ingress controller provided by OCP as part of the cluster. T4K works with the OpenShift default ingress controller named default, which is present in the openshift-ingress-operator namespace. The hostname used for the T4K Ingress host is therefore the domain supported by this controller (refer to status.domain of the IngressController resource).

oc get ingresscontroller -n openshift-ingress-operator
NAME      AGE
default   6d14h

To use the default ingress controller in the cluster, run the following command on the OCP cluster after the deployment of the Trilio Operator from OperatorHub.

oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge
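The patch can be verified by reading the field back, e.g. (a convenience check, not part of the original procedure):

```shell
oc -n openshift-ingress-operator get ingresscontroller/default \
  -o jsonpath='{.spec.routeAdmission.namespaceOwnership}'
# Expected value: InterNamespaceAllowed
```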

Hostname Configuration

Trilio automatically installs and creates the ingress resources with a default hostname of the form <install-namespace>.<default-ingress-controller-domain>.

This value can be changed by editing the master ingress resource (k8s-triliovault-master). The minion ingress resource automatically picks up the settings from the master ingress resource. Only the <install-namespace> portion of the host can be changed; the domain of the ingress controller must be kept as-is. Example: abcd.<default-ingress-controller-domain>
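To change the hostname, the master ingress resource can be edited in place, e.g.:

```shell
oc edit ingress k8s-triliovault-master -n <install-namespace>
```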

Access over HTTPS:

By default T4K works on HTTPS. T4K uses the OpenShift default ingress-controller TLS certificates for HTTPS communication:

  • k8s-triliovault-ingress-server-certs is a default secret generated during T4K deployment which contains the Ingress-controller's TLS certificate.

  • Check the host field of the ingress (kubectl get ingress k8s-triliovault -n <install-namespace>), and use that host to access the UI over HTTPS (goes to port 443 by default)

Users can use their own custom SSL certificate to generate a secret and provide it as a part of ingress resource. Create a new secret ssl-certs using custom SSL certificate tls.crt and key tls.key in the <install-namespace> namespace where T4K is deployed.

kubectl create secret tls ssl-certs --cert tls.crt --key tls.key -n <install-namespace>

To add the secretName value of SSL certificate in the ingress spec, edit the ingress resource in the Trilio installation namespace (trilio-system in the example):

kubectl edit ingress k8s-triliovault -n trilio-system

Add the below tls: section to the ingress resource, in parallel to the existing rules: section. Then save the updated ingress resource.

  rules:
    - host: <host-name>
  tls:
    - hosts:
        - <host-name>
      secretName: <tlsSecretName>

Let's Encrypt or cert-manager can be leveraged to generate valid SSL certificates for a domain.

Access over HTTP:

If HTTP access is required, remove the TLS section from the k8s-triliovault Ingress resource. With the Trilio installation namespace trilio-system, edit the ingress resource (`kubectl edit ingress k8s-triliovault -n trilio-system`) and remove the TLS section, as follows:

  tls:
    - hosts:
        - <host-name>
      secretName: <tlsSecretName>

Then access the UI on http://<host-name>/

OpenShift Routes

Routes are automatically created based on the ingress settings. Users can simply click on the route for service k8s-triliovault-web to launch the management console.
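The routes created for the console can be listed with (using the <install-namespace> placeholder as elsewhere in this guide):

```shell
oc get routes -n <install-namespace>
```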


Authentication

Trilio automatically configures authentication for OCP environments by reading the IDP settings on the cluster. As soon as the console is launched, authentication is pre-configured and ready to use.
