If you plan to use NFS as a backup target, install nfs-common on each Kubernetes node where TrilioVault is running. Skip this step for S3 backup targets.
SSH into each Kubernetes node where Trilio for OpenStack is enabled (control plane and compute nodes), then run:
apt-get install nfs-common -y
1.4] Install Necessary Dependencies
Run the following command on the installation node:
sudo apt update -y && sudo apt install make jq -y
2] Clone Helm Chart Repository
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/
helm dep up
cd ../../../
3] Configure Container Image Tags
Select the appropriate image values file based on your OpenStack-Helm setup. Update the Trilio-OpenStack image tags accordingly.
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/2023.2.yaml
4] Create the trilio-openstack Namespace
To isolate trilio-openstack services, create a dedicated Kubernetes namespace:
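A minimal sketch of this step; the namespace name follows this guide, so adjust `NS` if your deployment differs:

```shell
# Namespace name taken from this guide; change NS if yours differs.
NS=trilio-openstack

# Build the command so it can be reviewed first; apply it with: eval "$CREATE_CMD"
CREATE_CMD="kubectl create namespace $NS"
echo "$CREATE_CMD"
```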
5] Label Kubernetes Nodes for TrilioVault Control Plane
Trilio for OpenStack control plane services should run on Kubernetes nodes labeled as triliovault-control-plane=enabled. It is recommended to use three Kubernetes nodes for high availability.
Steps:
1] Retrieve the OpenStack control plane node names:
kubectl get nodes --show-labels | grep openstack-control-plane
2] Assign the triliovault-control-plane label to the selected nodes:
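The labeling step can be sketched as below; `node1 node2 node3` are placeholder names, so substitute the control-plane nodes found in step 1:

```shell
# Placeholder node names; substitute the nodes retrieved in the previous step.
NODES="node1 node2 node3"

# Print one `kubectl label` command per node; pipe the output to `sh` to apply.
label_cmds() {
  for n in $NODES; do
    echo "kubectl label node $n triliovault-control-plane=enabled"
  done
}
label_cmds
```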
Ensure the correct ca.crt is present in the trilio-ca-cert secret in the openstack namespace.
If the secret was created with a different name, update the reference in the ./get_admin_creds.sh script before executing it.
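If the secret does not yet exist, it can be created along these lines; the CA file path here is an example, so point `CA_FILE` at your cluster's actual CA certificate:

```shell
# Example CA bundle path; replace with the location of your actual ca.crt.
CA_FILE=/etc/ssl/certs/ca.crt

# Build the command for review; apply it with: eval "$SECRET_CMD"
SECRET_CMD="kubectl -n openstack create secret generic trilio-ca-cert --from-file=ca.crt=$CA_FILE"
echo "$SECRET_CMD"
```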
9] Configure Ceph Storage (if used as Nova/Cinder Backend)
If your setup uses Ceph as the storage backend for Nova/Cinder, configure Ceph settings for Trilio.
Manual Approach
1] Edit the Ceph configuration file:
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/ceph.yaml
2] Set rbd_user and keyring (the user should have read/write access to the Nova/Cinder pools).
By default, the cinder/nova user usually has these permissions, but it’s recommended to verify.
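One way to verify those permissions is to inspect the user's capabilities with the Ceph CLI; `cinder` below is a placeholder for whatever `rbd_user` you set in ceph.yaml:

```shell
# The rbd_user configured in ceph.yaml; `cinder` is a placeholder.
RBD_USER=cinder

# Run this on a Ceph admin node and check the caps on the nova/cinder pools.
AUTH_CMD="ceph auth get client.$RBD_USER"
echo "$AUTH_CMD"
```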
3] Copy the contents of /etc/ceph/ceph.conf into the appropriate Trilio template file:
vi ../templates/bin/_triliovault-ceph.conf.tpl
Automated Approach
1] Run the Ceph configuration script:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
./get_ceph.sh
2] Verify that the output is correctly written to:
cat ../values_overrides/ceph.yaml
10] Create Docker Registry Credentials Secret
Trilio images are hosted in a private registry, so you must create an ImagePullSecret in the trilio-openstack namespace.
1] Navigate to the utilities directory:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
2] Run the script with Trilio’s Docker registry credentials to create the Kubernetes secret:
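The script's effect is roughly equivalent to the sketch below; the registry server, secret name, and credentials are placeholders, and the actual script shipped by Trilio may use different names or flags:

```shell
# All values below are placeholders; Trilio's script may differ.
REGISTRY_SERVER=registry.trilio.io          # placeholder registry URL
SECRET_NAME=triliovault-image-pull-secret   # placeholder secret name

# Build the equivalent kubectl command for review.
PULL_CMD="kubectl -n trilio-openstack create secret docker-registry $SECRET_NAME --docker-server=$REGISTRY_SERVER --docker-username=<username> --docker-password=<password>"
echo "$PULL_CMD"
```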
11.1.3] Select the Appropriate OpenStack Helm Version
Use the correct YAML file based on your OpenStack Helm Version:
Antelope → 2023.1.yaml
Bobcat (default) → 2023.2.yaml
11.1.4] Validate values_overrides Configuration
Ensure the correct configurations are used:
Disable Ceph in ceph.yaml if not applicable.
Remove tls_public_endpoint.yaml if TLS is unnecessary.
11.2] Uninstall Existing Trilio for OpenStack Chart
For a fresh install, uninstall any previous deployment by following the OpenStack-Helm Uninstall Guide.
11.3] Run the Installation Script
Execute the installation:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
./install.sh
11.4] Configure DNS for Public Endpoints
11.4.1] Retrieve Ingress External IP
kubectl get service -n openstack | grep LoadBalancer
Example Output:
public-openstack LoadBalancer 10.105.43.185 192.168.2.50 80:30162/TCP,443:30829/TCP
11.4.2] Fetch TrilioVault FQDNs
kubectl -n trilio-openstack get ingress
Example Output:
NAME CLASS HOSTS ADDRESS PORTS AGE
triliovault-datamover nginx triliovault-datamover,triliovault-datamover.trilio-openstack,triliovault-datamover.trilio-openstack.svc.cluster.local 192.168.2.5 80 14h
triliovault-datamover-cluster-fqdn nginx-cluster triliovault-datamover.triliodata.demo 80, 443 14h
triliovault-datamover-namespace-fqdn nginx triliovault-datamover.triliodata.demo 192.168.2.5 80, 443 14h
triliovault-wlm nginx triliovault-wlm,triliovault-wlm.trilio-openstack,triliovault-wlm.trilio-openstack.svc.cluster.local 192.168.2.5 80 14h
triliovault-wlm-cluster-fqdn nginx-cluster triliovault-wlm.triliodata.demo 80, 443 14h
triliovault-wlm-namespace-fqdn nginx triliovault-wlm.triliodata.demo 192.168.2.5 80, 443 14h
If the ingress service doesn’t have an IP assigned, follow these steps:
1] Check the Ingress Controller Deployment
Look for the ingress-nginx-controller deployment, typically in the ingress-nginx or kube-system namespace: