Getting started with Trilio on OpenStack-Helm
1] Prepare for Deployment
1.1] Install Helm CLI Client
Ensure the Helm CLI client is installed on the node from which you are installing Trilio for OpenStack.
curl -O https://get.helm.sh/helm-v3.17.2-linux-amd64.tar.gz
tar -zxvf helm*.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
rm -rf linux-amd64 helm*.tar.gz
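Confirm the client is installed:
helm version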
1.2] (Optional) Install NFS Client Package
If you plan to use NFS as a backup target, install nfs-common
on each Kubernetes node where TrilioVault is running. Skip this step for S3 backup targets.
SSH into each Kubernetes node that carries one of the following labels:
kubectl get nodes --show-labels | grep openstack-control-plane
kubectl get nodes --show-labels | grep openstack-compute
Run this command on the respective nodes:
sudo apt-get install nfs-common -y
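You can confirm the package is installed with:
dpkg -s nfs-common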
1.3] Install Necessary Dependencies
Run the following command on the installation node:
sudo apt update -y && sudo apt install make jq -y
2] Clone Helm Chart Repository
Replace {{ trilio_branch }} with the Trilio release branch you are deploying.
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/
helm dep up
cd ../../../
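helm dep up downloads the chart dependencies into the chart's charts/ directory; list it to confirm the update succeeded:
ls triliovault-cfg-scripts/openstack-helm/trilio-openstack/charts/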
3] Configure Container Image Tags
Select the appropriate image values file based on your OpenStack-Helm setup. Update the Trilio-Openstack image tags accordingly.
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/2023.2.yaml
If your OpenStack-Helm cloud version is 2023.1 (Antelope), the image values file is 2023.1.yaml instead.
4] Create the trilio-openstack Namespace
To isolate trilio-openstack services, create a dedicated Kubernetes namespace:
kubectl create namespace trilio-openstack
kubectl config set-context --current --namespace=trilio-openstack
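Confirm that the current context now defaults to the new namespace:
kubectl config view --minify | grep namespace: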
5] Label Kubernetes Nodes for TrilioVault Control Plane
Trilio for OpenStack control plane services should run on Kubernetes nodes labeled triliovault-control-plane=enabled. Select only nodes that already carry the openstack-control-plane label. For high availability, three Kubernetes nodes are recommended.
Steps:
1] Retrieve the OpenStack control plane node names:
kubectl get nodes --show-labels | grep openstack-control-plane
2] Assign the triliovault-control-plane label to the selected nodes:
kubectl label nodes <NODE_NAME_1> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_2> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_3> triliovault-control-plane=enabled
3] Verify the nodes with the assigned label:
kubectl get nodes --show-labels | grep triliovault-control-plane
6] Configure the Backup Target for Trilio-OpenStack
Backup target storage holds the backup images taken by Trilio, along with the details needed for configuration. Trilio supports the following backup target types:
a) NFS
b) S3
Steps:
If using NFS as the backup target, define its details in the file below:
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/nfs.yaml
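Before editing nfs.yaml, you can confirm the NFS export is reachable from your nodes; <NFS_SERVER_IP> below is a placeholder for your NFS server:
showmount -e <NFS_SERVER_IP>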
If using S3, configure its details in the file below:
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/s3.yaml
If using S3 with TLS enabled and self-signed certificates, store the CA certificate in the file below:
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem
The deployment scripts will automatically place this certificate in the required location.
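As a quick sanity check on the certificate you stored, print its subject and validity dates:
openssl x509 -in triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem -noout -subject -dates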
7] Provide Cloud Admin Credentials in keystone.yaml
The cloud admin user in Keystone must have the admin role on the cloud domain. Update the required credentials in the file below:
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/keystone.yaml
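You can verify the role assignment with the OpenStack CLI before proceeding; <CLOUD_ADMIN_USER> is a placeholder for your admin user:
openstack role assignment list --user <CLOUD_ADMIN_USER> --names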
8] Retrieve and Configure Keystone, Database and RabbitMQ Credentials
Fetch the internal and public domain names of the Kubernetes cluster, then the Keystone, RabbitMQ, and database admin credentials.
a. These credentials are required for the Trilio deployment.
b. Navigate to the utils directory:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
c. Generate the admin credentials file using the previously retrieved domain names:
./get_admin_creds.sh <internal_domain_name> <public_domain_name>
Example:
./get_admin_creds.sh cluster.local triliodata.demo
d. Verify that the credentials file and the nova-compute configuration template were created:
cat ../values_overrides/admin_creds.yaml
cat ../templates/bin/_triliovault-nova-compute.conf.tpl
9] Configure Ceph Storage (if used as Nova/Cinder Backend)
If your setup uses Ceph as the storage backend for Nova/Cinder, configure Ceph settings for Trilio.
Manual Approach
1] Edit the Ceph configuration file:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/
vi ceph.yaml
a) Set rbd_user and keyring. This user needs read/write access to the vms and volumes pools used by the Nova and Cinder backends.
b) By default, the nova user generally has these permissions, but verify before reusing it for TrilioVault.
2] Copy the contents of /etc/ceph/ceph.conf into the Trilio template file, replacing any existing content:
vi ../templates/bin/_triliovault-ceph.conf.tpl
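To confirm the chosen Ceph user has the required capabilities on those pools, inspect its caps; client.nova is shown here as an example, substitute your own user:
ceph auth get client.nova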
Automated Approach
1] Run the Ceph configuration script:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
./get_ceph.sh
2] Verify that the output is correctly written to:
cat ../values_overrides/ceph.yaml
cat ../templates/bin/_triliovault-ceph.conf.tpl
10] Create Docker Registry Credentials Secret
Trilio images are hosted in a private registry. You must create an ImagePullSecret in the trilio-openstack namespace.
1] Navigate to the utilities directory:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
2] Run the script with your Trilio Docker registry credentials to create the Kubernetes secret:
./create_image_pull_secret.sh <TRILIO_REGISTRY_USERNAME> <TRILIO_REGISTRY_PASSWORD>
3] Verify that the secret has been created successfully:
kubectl describe secret triliovault-image-registry -n trilio-openstack
11] Install Trilio for OpenStack Helm Chart
11.1] Review the Installation Script
11.1.1] Open the install.sh Script
The install.sh script installs the Trilio Helm chart in the trilio-openstack namespace.
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
vi install.sh
11.1.2] Configure Backup Target
Modify the script to select the appropriate backup target:
a) NFS Backup Target: the default configuration includes nfs.yaml.
b) S3 Backup Target: replace nfs.yaml with s3.yaml.
Example Configuration for S3:
helm upgrade --install trilio-openstack ./trilio-openstack --namespace=trilio-openstack \
--values=./trilio-openstack/values_overrides/image_pull_secrets.yaml \
--values=./trilio-openstack/values_overrides/keystone.yaml \
--values=./trilio-openstack/values_overrides/s3.yaml \
--values=./trilio-openstack/values_overrides/2023.2.yaml \
--values=./trilio-openstack/values_overrides/admin_creds.yaml \
--values=./trilio-openstack/values_overrides/tls_public_endpoint.yaml \
--values=./trilio-openstack/values_overrides/ceph.yaml \
--values=./trilio-openstack/values_overrides/db_drop.yaml \
--values=./trilio-openstack/values_overrides/ingress.yaml \
--values=./trilio-openstack/values_overrides/triliovault_passwords.yaml
echo -e "Waiting for TrilioVault pods to reach running state"
./trilio-openstack/utils/wait_for_pods.sh trilio-openstack
kubectl get pods
11.1.3] Select the Appropriate OpenStack Helm Version
Use the correct YAML file based on your OpenStack Helm Version:
Antelope → 2023.1.yaml
Bobcat (default) → 2023.2.yaml
11.1.4] Validate values_overrides Configuration
Ensure the correct configurations are used:
Disable Ceph in ceph.yaml if it is not applicable.
Remove tls_public_endpoint.yaml if TLS is unnecessary.
11.2] Run the Installation Script
Execute the installation:
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
./install.sh
11.3] Configure DNS for Public Endpoints
11.3.1] Retrieve Ingress External IP
kubectl get service -n openstack | grep LoadBalancer
Example Output:
public-openstack LoadBalancer 10.105.43.185 192.168.2.50 80:30162/TCP,443:30829/TCP
11.3.2] Fetch TrilioVault FQDNs
kubectl -n trilio-openstack get ingress
Example Output:
root@master:~# kubectl -n trilio-openstack get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
triliovault-datamover nginx triliovault-datamover,triliovault-datamover.trilio-openstack,triliovault-datamover.trilio-openstack.svc.cluster.local 192.168.2.5 80 14h
triliovault-datamover-cluster-fqdn nginx-cluster triliovault-datamover.triliodata.demo 80, 443 14h
triliovault-datamover-namespace-fqdn nginx triliovault-datamover.triliodata.demo 192.168.2.5 80, 443 14h
triliovault-wlm nginx triliovault-wlm,triliovault-wlm.trilio-openstack,triliovault-wlm.trilio-openstack.svc.cluster.local 192.168.2.5 80 14h
triliovault-wlm-cluster-fqdn nginx-cluster triliovault-wlm.triliodata.demo 80, 443 14h
triliovault-wlm-namespace-fqdn nginx triliovault-wlm.triliodata.demo 192.168.2.5 80, 443 14h
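If public DNS records are not yet in place, you can map the TrilioVault FQDNs to the ingress external IP for testing; the values below come from the example outputs above and will differ in your environment:
echo "192.168.2.50 triliovault-wlm.triliodata.demo triliovault-datamover.triliodata.demo" | sudo tee -a /etc/hosts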
If the ingress service doesn’t have an IP assigned, follow these steps:
1] Check the Ingress Controller Deployment
Look for the ingress-nginx-controller deployment, typically in the ingress-nginx, kube-system, or openstack namespace; adjust the -n flag in the commands below to match where your controller actually runs:
kubectl -n openstack get deployment ingress-nginx-controller -o yaml | grep watch-namespace
2] Verify the --watch-namespace Arg
If the controller has a --watch-namespace argument, it means it’s watching only specific namespaces for ingress resources.
3] Update watch-namespace to include trilio-openstack
Edit the deployment to include trilio-openstack in the comma-separated list of namespaces:
kubectl edit deployment ingress-nginx-controller -n ingress-nginx
Example --watch-namespace arg:
--watch-namespace=trilio-openstack,openstack
4] Restart the Controller
This will happen automatically when you edit the deployment, but you can manually trigger it if needed:
kubectl rollout restart deployment ingress-nginx-controller -n openstack
11.4] Verify Installation
11.4.1] Check Helm Release Status
helm status trilio-openstack
11.4.2] Validate Deployed Containers
Ensure correct image versions are used by checking container tags or SHA digests.
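One way to list the image references in use, following the same jsonpath pattern used for Horizon verification later in this guide:
kubectl get pods -n trilio-openstack -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u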
11.4.3] Verify Pod Status
kubectl get pods -n trilio-openstack
Example Output:
NAME READY STATUS RESTARTS AGE
triliovault-datamover-api-5c7fbb949c-2m8dc 1/1 Running 0 21h
triliovault-datamover-api-5c7fbb949c-kxspg 1/1 Running 0 21h
triliovault-datamover-api-5c7fbb949c-z4wkn 1/1 Running 0 21h
triliovault-datamover-db-init-7k7jg 0/1 Completed 0 21h
triliovault-datamover-db-sync-6jkgs 0/1 Completed 0 21h
triliovault-datamover-ks-endpoints-gcrht 0/3 Completed 0 21h
triliovault-datamover-ks-service-nnnvh 0/1 Completed 0 21h
triliovault-datamover-ks-user-td44v 0/1 Completed 0 20h
triliovault-datamover-openstack-compute-node-4gkv8 1/1 Running 0 21h
triliovault-datamover-openstack-compute-node-6lbc4 1/1 Running 0 21h
triliovault-datamover-openstack-compute-node-pqslx 1/1 Running 0 21h
triliovault-wlm-api-7647c4b45c-52449 1/1 Running 0 21h
triliovault-wlm-api-7647c4b45c-h47mw 1/1 Running 0 21h
triliovault-wlm-api-7647c4b45c-rjbvl 1/1 Running 0 21h
triliovault-wlm-cloud-trust-h8xgq 0/1 Completed 0 20h
triliovault-wlm-cron-574ff78486-54rqg 1/1 Running 0 21h
triliovault-wlm-db-init-hvk65 0/1 Completed 0 21h
triliovault-wlm-db-sync-hpl4c 0/1 Completed 0 21h
triliovault-wlm-ks-endpoints-4bsxl 0/3 Completed 0 21h
triliovault-wlm-ks-service-btcb4 0/1 Completed 0 21h
triliovault-wlm-ks-user-gnfdh 0/1 Completed 0 20h
triliovault-wlm-rabbit-init-ws262 0/1 Completed 0 21h
triliovault-wlm-scheduler-669f4758b4-ks7qr 1/1 Running 0 21h
triliovault-wlm-workloads-5ff86448c-mj8p2 1/1 Running 0 21h
triliovault-wlm-workloads-5ff86448c-th6f4 1/1 Running 0 21h
triliovault-wlm-workloads-5ff86448c-zhr4m 1/1 Running 0 21h
11.4.4] Check Job Completion
kubectl get jobs -n trilio-openstack
Example Output:
NAME COMPLETIONS DURATION AGE
triliovault-datamover-db-init 1/1 5s 21h
triliovault-datamover-db-sync 1/1 8s 21h
triliovault-datamover-ks-endpoints 1/1 17s 21h
triliovault-datamover-ks-service 1/1 18s 21h
triliovault-datamover-ks-user 1/1 19s 21h
triliovault-wlm-cloud-trust 1/1 2m10s 20h
triliovault-wlm-db-init 1/1 5s 21h
triliovault-wlm-db-sync 1/1 20s 21h
triliovault-wlm-ks-endpoints 1/1 17s 21h
triliovault-wlm-ks-service 1/1 17s 21h
triliovault-wlm-ks-user 1/1 19s 21h
triliovault-wlm-rabbit-init 1/1 4s 21h
11.4.5] Verify NFS Backup Target (if applicable)
kubectl get pvc -n trilio-openstack
Example Output:
triliovault-nfs-pvc-172-25-0-10-mnt-tvault-42424 Bound triliovault-nfs-pv-172-25-0-10-mnt-tvault-42424 20Gi RWX nfs 6d
11.4.6] Validate S3 Backup Target (if applicable)
Ensure S3 is correctly mounted on all WLM pods.
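One quick check is to inspect the mount table inside a WLM pod; <TRILIOVAULT_WLM_POD> is a placeholder, and the exact mount point of the S3 fuse mount varies by configuration:
kubectl -n trilio-openstack exec <TRILIOVAULT_WLM_POD> -- df -h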
Trilio-OpenStack Helm Chart Installation is Done!
Logs:
1] triliovault-datamover-api service logs
Logs are available on the Kubernetes nodes:
kubectl get nodes --show-labels | grep triliovault-control-plane
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- See logs
vi /var/log/triliovault-datamover-api/triliovault-datamover-api.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-datamover-api pods
kubectl get pods | grep triliovault-datamover-api
-- See logs
kubectl logs <triliovault-datamover-api-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-datamover-api
triliovault-datamover-api-c87899fb7-dq2sd 1/1 Running 0 3d18h
triliovault-datamover-api-c87899fb7-j4fdz 1/1 Running 0 3d18h
triliovault-datamover-api-c87899fb7-nm8pt 1/1 Running 0 3d18h
root@helm1:~# kubectl logs triliovault-datamover-api-c87899fb7-dq2sd
2] triliovault-datamover service logs
Logs are available on the Kubernetes nodes:
kubectl get nodes --show-labels | grep openstack-compute-node
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- See logs
vi /var/log/triliovault-datamover/triliovault-datamover.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-datamover pods
kubectl get pods | grep triliovault-datamover-openstack
-- See logs
kubectl logs <triliovault-datamover-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-datamover-openstack
triliovault-datamover-openstack-compute-node-2krmj 1/1 Running 0 3d19h
triliovault-datamover-openstack-compute-node-9f5w7 1/1 Running 0 3d19h
root@helm1:~# kubectl logs triliovault-datamover-openstack-compute-node-2krmj
3] triliovault-wlm-api, triliovault-wlm-cron, triliovault-wlm-scheduler, triliovault-wlm-workloads service logs
Logs are available on the Kubernetes nodes:
kubectl get nodes --show-labels | grep triliovault-control-plane
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- Log files are available in the following directory:
ls /var/log/triliovault-wlm/
## Sample command output
root@helm4:~# ls -ll /var/log/triliovault-wlm/
total 26576
-rw-r--r-- 1 42424 42424 2079322 Mar 20 07:55 triliovault-wlm-api.log
-rw-r--r-- 1 42424 42424 25000088 Mar 20 00:41 triliovault-wlm-api.log.1
-rw-r--r-- 1 42424 42424 12261 Mar 16 12:40 triliovault-wlm-cron.log
-rw-r--r-- 1 42424 42424 10263 Mar 16 12:36 triliovault-wlm-scheduler.log
-rw-r--r-- 1 42424 42424 87918 Mar 16 12:36 triliovault-wlm-workloads.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-wlm services pods
kubectl get pods | grep triliovault-wlm
-- See logs
kubectl logs <triliovault-wlm-service-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-wlm
triliovault-wlm-api-7b956f7b8-84gtw 1/1 Running 0 3d19h
triliovault-wlm-api-7b956f7b8-85mdk 1/1 Running 0 3d19h
triliovault-wlm-api-7b956f7b8-hpcpt 1/1 Running 0 3d19h
triliovault-wlm-cloud-trust-rdh8n 0/1 Completed 0 3d19h
triliovault-wlm-cron-78bdb4b959-wzrfs 1/1 Running 0 3d19h
triliovault-wlm-db-drop-dhfgj 0/1 Completed 0 3d19h
triliovault-wlm-db-init-snrsr 0/1 Completed 0 3d19h
triliovault-wlm-db-sync-wffk5 0/1 Completed 0 3d19h
triliovault-wlm-ks-endpoints-zvqtf 0/3 Completed 0 3d19h
triliovault-wlm-ks-service-6425q 0/1 Completed 0 3d19h
triliovault-wlm-ks-user-fmgsx 0/1 Completed 0 3d19h
triliovault-wlm-rabbit-init-vsdn6 0/1 Completed 0 3d19h
triliovault-wlm-scheduler-649b95ffd6-bkqxt 1/1 Running 0 3d19h
triliovault-wlm-workloads-6b98679d45-2kjdq 1/1 Running 0 3d19h
triliovault-wlm-workloads-6b98679d45-mxvhp 1/1 Running 0 3d19h
triliovault-wlm-workloads-6b98679d45-v4dn8 1/1 Running 0 3d19h
# kubectl logs triliovault-wlm-api-7b956f7b8-84gtw
# kubectl logs triliovault-wlm-cron-78bdb4b959-wzrfs
# kubectl logs triliovault-wlm-scheduler-649b95ffd6-bkqxt
# kubectl logs triliovault-wlm-workloads-6b98679d45-mxvhp
12] Install Trilio for OpenStack Horizon Plugin
Below are the steps to patch the Horizon deployment in an OpenStack Helm setup to install the Trilio Horizon Plugin.
12.1] Pre-requisites
Horizon is deployed via OpenStack Helm and is running in the openstack namespace.
The Docker registry secret triliovault-image-registry must already exist in the openstack namespace, created during the Trilio installation steps:
kubectl describe secret triliovault-image-registry -n openstack
If it does not already exist, create it with:
kubectl create secret docker-registry triliovault-image-registry \
--docker-server="docker.io" \
--docker-username=<TRILIO_REGISTRY_USERNAME> \
--docker-password=<TRILIO_REGISTRY_PASSWORD> \
-n openstack
12.2] Patch Horizon Deployment
Use the command below to patch the Horizon deployment with the Trilio Horizon Plugin image. Update the image tag as needed for your release.
kubectl -n openstack patch deployment horizon \
--type='strategic' \
-p '{
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "horizon",
"image": "docker.io/trilio/trilio-horizon-plugin-helm:<TRILIOVAULT_HORIZON_PLUGIN_IMAGE_TAG>"
}
],
"imagePullSecrets": [
{
"name": "triliovault-image-registry"
}
]
}
}
}
}'
12.3] Verification
After patching:
Ensure the Horizon pods are restarted and running with the new image:
kubectl get pods -n openstack -l application=horizon,component=server -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n'
Access the Horizon dashboard and verify the TrilioVault section appears in the UI.