Accessing the UI

This page describes how to access the user interface of TrilioVault for Kubernetes.

There are 4 simple steps that a user needs to perform to enable UI access to the cluster:

  1. Enable UI access via NodePort or LoadBalancer

  2. Set FQDN (Fully Qualified Domain Name) for UI service

  3. Create a DNS record for the FQDN

  4. Launch the TVK UI

UI Components

Before proceeding with UI access, it is imperative to understand the architecture behind the UI. When TVK is installed, the following deployments and services are created with it. Deployments:

$ kubectl get deployments
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
k8s-triliovault-admission-webhook    1/1     1            1           8d
k8s-triliovault-backend              1/1     1            1           8d
k8s-triliovault-control-plane        1/1     1            1           8d
k8s-triliovault-exporter             1/1     1            1           8d
k8s-triliovault-ingress-controller   1/1     1            1           9d
k8s-triliovault-web                  1/1     1            1           8d

Services:

$ kubectl get svc
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
k8s-triliovault-backend-svc         ClusterIP   100.158.23.239   <none>        80/TCP                       9d
k8s-triliovault-ingress             NodePort    100.158.16.210   <none>        80:31200/TCP,443:30452/TCP   9d
k8s-triliovault-ingress-admission   ClusterIP   100.158.19.252   <none>        443/TCP                      9d
k8s-triliovault-web-svc             ClusterIP   100.158.15.172   <none>        80/TCP                       9d
k8s-triliovault-webhook-service     ClusterIP   100.158.12.240   <none>        443/TCP                      8d

The k8s-triliovault-ingress-controller deployment and the k8s-triliovault-ingress service are responsible for exposing the UI.

By default, the k8s-triliovault-ingress service is of type NodePort, and a random port in the range 30000-32767 is allotted to it.

Step 1: Enable UI Access

The first step to launch the user console is enabling access to the TVK UI service. Access can be enabled either via NodePort or via LoadBalancer.

Via NodePort

NodePort access is common with self-managed clusters. Self-managed clusters are generally those created via kubeadm or kops, or ones where the worker node infrastructure is managed and accessible by the user.

  • Expose the NodePort range (30000-32767) for nodes in the worker and master groups via firewall rules.

  • This relays the NodePorts via kube-proxy to all worker and master nodes.

    • If you need a fixed port, edit the ingress service and set an explicit nodePort value (see the sketch below).
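
For reference, a fixed NodePort can be set with a JSON patch instead of an interactive edit. This is a minimal sketch assuming the default service name k8s-triliovault-ingress, a placeholder <tvk-namespace> for the namespace where TVK is deployed, and that the HTTP port is the first entry in spec.ports as in the example output above; the chosen value must lie within the 30000-32767 range.

# pin the HTTP nodePort of the ingress service to a fixed value
kubectl patch svc k8s-triliovault-ingress -n <tvk-namespace> --type='json' \
  -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32000}]'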

Via LoadBalancer

LoadBalancer access is common with provider-managed clusters, clusters running with a public cloud provider, or on-prem clusters where a LoadBalancer is available. Provider-managed Kubernetes clusters are generally those where the user does not manage or have access to the worker node infrastructure, for example AWS EKS, GCP GKE/Anthos, Azure AKS, or OpenShift (as a managed service through cloud providers).

  • Edit the ingress service: kubectl edit svc <svc_name>

    • Change the default value of spec.type from NodePort to LoadBalancer. This creates a LoadBalancer in the cloud that forwards traffic to the ingress service.

    • A public LoadBalancer IP is allotted to the service and appears in the EXTERNAL-IP field of kubectl get svc (see the sketch below).
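
As an alternative to editing the service interactively, the type can be switched with a patch. This is a minimal sketch assuming the default service name k8s-triliovault-ingress and a placeholder <tvk-namespace> for the namespace where TVK is deployed.

# switch the ingress service type from NodePort to LoadBalancer
kubectl patch svc k8s-triliovault-ingress -n <tvk-namespace> \
  -p '{"spec": {"type": "LoadBalancer"}}'

# once the cloud provider has provisioned the LoadBalancer, read the EXTERNAL-IP field
kubectl get svc k8s-triliovault-ingress -n <tvk-namespace>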

Step 2: Assign FQDN for TVK UI

After the access setup is complete, the next step is to ensure that the TVK UI can be accessed via an FQDN. FQDN access is a must to reach the TVK UI service.

When TVK is installed, the host value defaults to spec.rules[0].host: k8s-tvk.com. To change the host value in the spec, edit the ingress resource:

kubectl edit ingress k8s-triliovault-ingress
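
The host value can also be changed with a JSON patch instead of an interactive edit. This is a minimal sketch assuming a placeholder <tvk-namespace> for the namespace where TVK is deployed and a hypothetical FQDN tvk.example.com.

# point the first ingress rule at the desired FQDN
kubectl patch ingress k8s-triliovault-ingress -n <tvk-namespace> --type='json' \
  -p='[{"op": "replace", "path": "/spec/rules/0/host", "value": "tvk.example.com"}]'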

Step 3: Create a DNS record

After the desired FQDN has been set, create a DNS record for it so that the UI can be accessed. If TVK is being leveraged in a local or non-production environment, access via the /etc/hosts file is possible as well.

DNS enabled environments

  1. Create an A-Record in Route53 (AWS) or Google DNS service (GCP) or any other DNS manager of your choice

  2. Map k8s-tvk.com (or whichever host value was specified above) to the {PUBLIC_NODE_IP} for NodePort or the {LB_IP} for LoadBalancer (see the sketch after this list).
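
As an illustration only, here is how an A-record could be created with the AWS CLI for Route53. This is a minimal sketch assuming a placeholder <ZONE_ID> for the hosted zone and the default host k8s-tvk.com; if your LoadBalancer exposes a hostname rather than an IP (common on AWS), create a CNAME or alias record instead.

# upsert an A-record mapping the FQDN to the node or LoadBalancer IP
aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "k8s-tvk.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<PUBLIC_NODE_IP or LB_IP>"}]
      }
    }]
  }'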

Non-DNS or Local environments

For custom environments do the following on your local machine:

Note: This only works for a single system.

  • Edit the hosts file: sudo vi /etc/hosts

  • Create an entry in the /etc/hosts file for the IP, so your file looks like this:

...
127.0.0.1 localhost
....
xx.xx.xx.xx k8s-tvk.com
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Step 4: Launch the TVK UI Console

After completing steps 1-3, access the UI console using one of the following methods:

Access over HTTP

Launch via LoadBalancer

Ports do not need to be specified for LoadBalancers
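
For example, assuming the default host value k8s-tvk.com and a DNS record pointing at the LoadBalancer, the UI is reachable at a URL of this form:

http://k8s-tvk.com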

Launch via NodePort

The service example above shows port 80:31200 for the ingress service, so 31200 is the HTTP NodePort.
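
For example, with the default host value and the NodePort from the service output above, the URL takes this form:

http://k8s-tvk.com:31200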

Access over HTTPS

If the TVK UI is not accessible over HTTPS by default, the SSL certificate needs to be specified as part of the k8s-triliovault-ingress resource.

Note: Users can use their own custom SSL certificate to generate a secret and provide it as part of the ingress resource. Create a new secret ssl-certs using the custom SSL certificate tls.crt and key tls.key in the tvk-namespace namespace where TVK is deployed:

kubectl create secret tls ssl-certs --cert tls.crt --key tls.key -n tvk-namespace
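
If a tls.crt/tls.key pair is not already available, one can be generated first for testing purposes. This is a minimal sketch assuming openssl is installed and the default host k8s-tvk.com; a self-signed certificate will trigger browser warnings and is not suitable for production.

# generate a self-signed certificate and key for the TVK UI host
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=k8s-tvk.com"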

To add the secretName value of the SSL certificate to the ingress spec, edit the ingress resource:

kubectl edit ingress k8s-triliovault-ingress

Add the tls: section below to the ingress resource at the same level as the existing rules: section, and save the updated ingress resource.

spec:
  rules:
  # <keep the existing details for HTTP as they are>
  tls:
  - hosts:
    - k8s-tvk.com
    secretName: k8s-triliovault-ingress-server-certs

k8s-triliovault-ingress-server-certs is a default certificate secret generated during TVK deployment. Users can also provide the custom secret created in the steps above (for example, ssl-certs).

Launch via LoadBalancer

Ports do not need to be specified for LoadBalancers
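
As over HTTP, assuming the default host value and a DNS record pointing at the LoadBalancer, the URL takes this form:

https://k8s-tvk.com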

Launch via NodePort

The service example above shows port 443:30452 for the ingress service, so 30452 is the HTTPS NodePort.
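
For example, with the default host value and the NodePort from the service output above, the URL takes this form:

https://k8s-tvk.com:30452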