Installing the TrilioVault Controller Cluster

Download the TrilioVault binary

Trilio provides a binary installation tool that creates the Kubernetes cluster, downloads the latest version of all images, and ensures a clean state of the whole environment.
triliovault.1.17.tar.gz (50KB, TrilioVault installation binary)
Move the binary into the /usr/bin directory:

```shell
mv triliovault /usr/bin/
```
The binary provides the following options:
  • deploy ==> Fresh deployment; expects only the base VMs to be present
  • redeploy ==> Deletes the entire TrilioVault Controller Cluster, including its Kubernetes base, before redeploying based on the last available configuration
  • cleanup ==> Deletes the entire TrilioVault Controller Cluster, including the Kubernetes base
  • reset ==> Deletes and redeploys the TrilioVault Controller Cluster without changing the Kubernetes base
reset should only be run from the same node that was used for the initial deployment. Running the reset option on any other node will not execute any steps.
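All four actions are selected on the command line. Only the `-a deploy` invocation is shown in this guide, so the sketch below assumes the other actions use the same `-t`/`-a`/`-f` flags; the helper itself is hypothetical and only illustrates the command pattern:

```python
# Hypothetical helper: build the triliovault command line for each action.
# Only "-a deploy" is confirmed by this guide; the remaining actions
# using the same flags is an assumption.
def triliovault_cmd(action, config="deployment.json"):
    valid = {"deploy", "redeploy", "cleanup", "reset"}
    if action not in valid:
        raise ValueError(f"unknown action: {action}")
    return ["triliovault", "-t", "multinode", "-a", action, "-f", config]

for action in ("deploy", "redeploy", "cleanup", "reset"):
    print(" ".join(triliovault_cmd(action)))
```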

Define the deployment

The TrilioVault binary takes a deployment file in JSON format.
The following information needs to be provided inside the deployment.json:
  • number_of_nodes ==> How many nodes the TrilioVault Controller Cluster will consist of. The default and minimum is 3 nodes; the count can be increased in steps of 2 if required.
  • virtual_ip_for_keepalived ==> This IP is used by Kubernetes to verify that all nodes are still working and to keep the Kubernetes master nodes in sync
  • ingress_ip ==> IP under which the TrilioVault Controller Cluster is reachable
  • For each VM that is part of the cluster:
    • node_ip ==> IP under which the VM is reachable for the deployment binary
    • username ==> root
    • password ==> password that allows the provided user to SSH into the VM
The provided user requires root permissions on the VMs.
An example deployment.json can be seen below:

```json
{
    "number_of_nodes": 3,
    "virtual_ip_for_keepalived": "142.44.219.119",
    "ingress_ip": "142.44.219.116",
    "node_details": [
        {
            "node_ip": "142.44.219.120",
            "username": "root",
            "password": "abcd"
        },
        {
            "node_ip": "142.44.219.121",
            "username": "root",
            "password": "abcd"
        },
        {
            "node_ip": "142.44.219.122",
            "username": "root",
            "password": "abcd"
        }
    ]
}
```
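Before starting the deployment it can help to sanity-check the file against the rules above. The following sketch is a hypothetical helper, not part of the TrilioVault tooling; it assumes the node count must be odd, which follows from a minimum of 3 increased in steps of 2:

```python
import json

# Hypothetical sanity check for deployment.json; not part of the
# TrilioVault tooling itself.
def validate_deployment(path):
    with open(path) as f:
        cfg = json.load(f)
    for key in ("number_of_nodes", "virtual_ip_for_keepalived",
                "ingress_ip", "node_details"):
        if key not in cfg:
            raise ValueError(f"missing key: {key}")
    n = cfg["number_of_nodes"]
    # Minimum of 3, increased in steps of 2 => the count must be odd.
    if n < 3 or n % 2 == 0:
        raise ValueError(f"number_of_nodes must be 3, 5, 7, ...: got {n}")
    if len(cfg["node_details"]) != n:
        raise ValueError("node_details must list one entry per node")
    for node in cfg["node_details"]:
        for key in ("node_ip", "username", "password"):
            if key not in node:
                raise ValueError(f"node entry missing key: {key}")
    return cfg
```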

Run the TrilioVault binary

Once the deployment.json has been created, the TrilioVault binary can be started using the following command:

```shell
triliovault -t multinode -a deploy -f deployment.json
```
Wait for the binary to finish.
Once the procedure has finished, check the status of the containers using:

```shell
kubectl get pods
kubectl get pods -n trilio
```
The output will look similar to the following.
The wlm containers will be in the status CrashLoopBackOff at this point. This is the expected behavior. The containers will go into a running state after the configuration has finished.
```
# kubectl get pods
NAME                                           READY   STATUS    RESTARTS        AGE
tvr-ingress-nginx-controller-6d7b7c47f-zzhxq   1/1     Running   1 (3m10s ago)   3h16m
tvr-metallb-controller-79757c94b7-66r5j        1/1     Running   1 (3m10s ago)   3h16m
tvr-metallb-speaker-zxdgh                      1/1     Running   1 (3m10s ago)   3h16m

# kubectl get pods -n trilio
NAME                               READY   STATUS             RESTARTS        AGE
config-api-55879c774c-dmcqt        1/1     Running            1               3h16m
mariadb-0                          1/1     Running            1               3h16m
rabbitmq-0                         1/1     Running            1 (3m10s ago)   3h16m
rabbitmq-1                         1/1     Running            1 (3m10s ago)   3h16m
rabbitmq-2                         1/1     Running            1 (3m10s ago)   3h16m
rabbitmq-ha-policy-h984m           0/1     Completed          1               3h16m
rhvconfigurator-5f65f79897-tprhx   1/1     Running            1 (3m12s ago)   3h16m
wlm-api-58fdcdfc7-jggb8            0/1     CrashLoopBackOff   42 (119s ago)   3h16m
wlm-scheduler-5f58586bd5-n2gxz     0/1     CrashLoopBackOff   42 (112s ago)   3h16m
wlm-workloads-4pfk7                0/1     CrashLoopBackOff   42 (119s ago)   3h17m
wlm-workloads-dgtpr                0/1     CrashLoopBackOff   42 (2m57s ago)   3h17m
wlm-workloads-mnxkf                0/1     CrashLoopBackOff   42 (2m26s ago)   3h17m
```
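When scripting this check, the expected state can be verified by parsing the kubectl output: every pod should be Running or Completed, except the wlm pods, which are allowed to be in CrashLoopBackOff until the configuration has finished. A minimal sketch (a hypothetical helper, not part of the product):

```python
# Hypothetical check: return pods whose status is NOT expected at this
# stage. wlm-* pods in CrashLoopBackOff are tolerated, because this
# guide describes that state as expected before configuration.
def unexpected_pods(kubectl_output):
    bad = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        name, _ready, status = line.split()[:3]
        if status in ("Running", "Completed"):
            continue
        if status == "CrashLoopBackOff" and name.startswith("wlm-"):
            continue  # expected until configuration has finished
        bad.append((name, status))
    return bad
```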