Getting started with Trilio on MOSK

1] Prepare for Deployment

1.1] Install Helm CLI Client

Ensure the Helm CLI client is installed on the node from which you are installing Trilio for OpenStack.

curl -O https://get.helm.sh/helm-v3.17.2-linux-amd64.tar.gz
tar -zxvf helm*.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
rm -rf linux-amd64 helm*.tar.gz
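
To confirm the client installed correctly, check its version:

helm version --short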

1.2] [Optional] Install NFS Client Package

If you plan to use NFS as a backup target, install nfs-common on each Kubernetes node where TrilioVault is running. Skip this step for S3 backup targets.

SSH into each Kubernetes node that has one of the following labels:

kubectl get nodes --show-labels | grep openstack-control-plane
kubectl get nodes --show-labels | grep openstack-compute

Run this command on the respective nodes:

sudo apt-get install nfs-common -y
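
To confirm the package installed correctly, you can query dpkg. The showmount check is optional and assumes you have your NFS server address at hand (<NFS_SERVER_IP> is a placeholder):

dpkg -s nfs-common | grep Status
# Optionally, verify the NFS export is reachable (replace <NFS_SERVER_IP>):
showmount -e <NFS_SERVER_IP>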

1.3] Install Necessary Dependencies

Run the following command on the installation node:

sudo apt update -y && sudo apt install make jq -y

2] Clone Helm Chart Repository

Refer to Resources for the release-specific value of the placeholder {{ trilio_branch }}.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/
helm dep up
cd ../../../

3] Configure Container Image Tags

Select the appropriate image values file based on your MOSK setup version. Update the Trilio-OpenStack image tags accordingly.

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/mosk25.1.yaml

4] Create the trilio-openstack Namespace

To isolate trilio-openstack services, create a dedicated Kubernetes namespace:

kubectl create namespace trilio-openstack
kubectl config set-context --current --namespace=trilio-openstack
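
To confirm the namespace exists and is now the default for your current context, you can run:

kubectl get namespace trilio-openstack
kubectl config view --minify --output 'jsonpath={..namespace}'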

5] Label Kubernetes Nodes for TrilioVault Control Plane

Trilio for OpenStack control plane services run on Kubernetes nodes labeled with triliovault-control-plane=enabled. Select only nodes that already carry the openstack-control-plane label. For high availability, three Kubernetes nodes are recommended.

Steps:

1] Retrieve the OpenStack control plane node names:

kubectl get nodes --show-labels | grep openstack-control-plane

2] Assign the triliovault-control-plane label to the selected nodes:

kubectl label nodes <NODE_NAME_1> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_2> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_3> triliovault-control-plane=enabled

3] Verify the nodes with the assigned label:

kubectl get nodes --show-labels | grep triliovault-control-plane
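
As an alternative to the per-node commands in step 2], a shell loop can label every matching node in one pass; this sketch assumes the label carries the value enabled, as shown in section 12.1:

for node in $(kubectl get nodes -l openstack-control-plane=enabled -o name); do
  kubectl label "$node" triliovault-control-plane=enabled --overwrite
done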

6] Configure the Backup Target for Trilio-OpenStack

The backup target is the storage where Trilio writes its backup images. This section covers the details needed to configure it.

Trilio supports the following backup target types:

a) NFS

b) S3

Steps:

1] If using NFS as the backup target, define its details in the following file:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/nfs.yaml

2] If using S3, configure its details in the following file:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/s3.yaml

3] If using S3 with TLS enabled and self-signed certificates, store the CA certificate in the following file:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem

The deployment scripts will automatically place this certificate in the required location.
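
Before deploying, you can sanity-check the certificate with openssl:

openssl x509 -in triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem -noout -subject -issuer -dates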

The relevant backup-target YAML file is passed to the trilio-openstack install command in a later step of this document.

7] Set Up the Internal RabbitMQ Instance

Make sure the OpenStack-Helm repositories are enabled and the OpenStack-Helm plugin is installed. Clone the OpenStack-Helm repository using the following command:

git clone https://github.com/openstack/openstack-helm.git

Set the environment variables used in the subsequent steps:

# If helm osh plugin doesn't exist, install it
helm plugin install https://opendev.org/openstack/openstack-helm-plugin.git

# Set OpenStack release (adjust as needed for the deployment version)
export OPENSTACK_RELEASE=2024.1

# Features enabled for the deployment. This is used to look up values overrides.
export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"

# Directory where values overrides are looked up or downloaded to.
export OVERRIDES_DIR=$(pwd)/overrides
mkdir -p "${OVERRIDES_DIR}"

cd openstack-helm/rabbitmq
helm dependency build
cd ../..

Use the following command to deploy the RabbitMQ service in the trilio-openstack namespace. Provide the appropriate storage class name in the --set volume.class_name field:

helm upgrade --install trilio-rabbitmq openstack-helm/rabbitmq \
    --namespace=trilio-openstack \
    --set pod.replicas.server=1 \
    --set volume.class_name="general" \
    --timeout=600s \
    $(helm osh get-values-overrides -p ${OVERRIDES_DIR} -c rabbitmq ${FEATURES})

helm osh wait-for-pods trilio-openstack
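
You can confirm the RabbitMQ pods are up before moving on; the label selector below assumes the OpenStack-Helm chart's standard application=rabbitmq label:

kubectl get pods -n trilio-openstack -l application=rabbitmq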

8] Retrieve and Configure Keystone, Database and RabbitMQ Credentials

8.1] Fetch Internal and Public Domain Names of the Kubernetes Cluster

Fetch the internal_domain_name and public_domain_name of the Kubernetes cluster where the MOSK cloud is deployed. Get the osdpl object name and describe it to retrieve the domain names, as shown below:

kubectl get osdpl -n openstack

Example command output, where osh-dev is the osdpl object name:

root@ubuntu:~# kubectl get osdpl -n openstack
NAME      OPENSTACK   AGE     DRAFT
osh-dev   caracal     7d19h
root@ubuntu:~# kubectl describe osdpl osh-dev -n openstack | grep domain
  internal_domain_name:             cluster.local
  public_domain_name:               triliodata.demo

Here internal_domain_name=cluster.local and public_domain_name=triliodata.demo

8.2] Fetch Keystone, RabbitMQ, and Database Admin Credentials

a) These credentials are required for Trilio deployment.

b) Navigate to the utils directory:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils

c) Generate the admin credentials file using the previously retrieved domain names:

./get_admin_creds_mosk.sh <internal_domain_name> <public_domain_name>
Example:
./get_admin_creds_mosk.sh cluster.local setup.triliodata.demo

d) Verify that the credentials file is created:

i) Check service FQDNs
ii) Check TLS certificates (if TLS is enabled)

cd ../../../../
cat triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/admin_creds.yaml
cat triliovault-cfg-scripts/openstack-helm/trilio-openstack/templates/bin/_triliovault-nova-compute.conf.tpl

e) The cloud admin user in Keystone must have the admin role on the cloud domain. Update the required credentials:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/keystone.yaml

f) [Important] Verify and Update Network Interface Values

(i) Before proceeding, ensure the correct network interface values are configured for live migration and hypervisor communication.

(ii) Run the following command to inspect the live migration interface:

$ kubectl describe osdpl osh-dev -n openstack | grep live_migration_interface
Example output:
    live_migration_interface:  mcc-lcm

(iii) If the value is mcc-lcm → no change needed.

(iv) If the value is anything else → update it in values.yaml:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values.yaml

Update the following section:

libvirt:
  images_rbd_ceph_conf: /etc/ceph/ceph.conf
  live_migration_interface: "<output>"          # Interface used for live migration traffic
  hypervisor_host_interface: "<output>"         # Interface used for hypervisor communication

Replace "<output>" with the actual network interface name in your environment (e.g., mcc-lcm, tenant, tenant-v).
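
If you are unsure which interface names exist in your environment, you can list them on a control or compute node:

ip -br link show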

9] Configure Ceph Storage (if used as Nova/Cinder Backend)

If your setup uses Ceph as the storage backend for Nova/Cinder, configure Ceph settings for Trilio.

1] Run the Ceph configuration script:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
./get_ceph_mosk.sh

2] Verify that the output is correctly written to:

cat ../values_overrides/ceph.yaml
cat ../templates/bin/_triliovault-ceph.conf.tpl

If the output is incorrect, perform these manual steps:

i) Edit the Ceph configuration file and set rbd_user and keyring (the user should have read/write access to the Nova/Cinder pools). By default, the cinder/nova user usually has these permissions, but it's recommended to verify.

vi ../values_overrides/ceph.yaml

ii) Copy the content of your /etc/ceph/ceph.conf into the following file, replacing any existing content:

vi ../templates/bin/_triliovault-ceph.conf.tpl
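
To verify the Ceph user's permissions, you can inspect its capabilities from a node with Ceph admin access. The client name cinder below is an assumption; substitute the rbd_user you set in ceph.yaml:

# Prints the key and caps for the client; check access to the Nova/Cinder pools
ceph auth get client.cinder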

10] Create Docker Registry Credentials Secret

Trilio images are hosted in a private registry. You must create an ImagePullSecret in the trilio-openstack namespace.

1] Navigate to the utilities directory:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils

2] Run the script with Trilio's Docker registry credentials to create the Kubernetes secret:

./create_image_pull_secret.sh <TRILIO_REGISTRY_USERNAME> <TRILIO_REGISTRY_PASSWORD>

3] Verify that the secret has been created successfully:

kubectl describe secret triliovault-image-registry -n trilio-openstack

11] Install T4O Helm Chart

11.1] Review the Installation Script

11.1.1] Open the install.sh Script

The install.sh script installs the Trilio Helm chart in the trilio-openstack namespace.

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
vi install.sh

11.1.2] Configure Backup Target

Modify the script to select the appropriate backup target:

a) NFS Backup Target: Default configuration includes nfs.yaml.

b) S3 Backup Target: Replace nfs.yaml with s3.yaml.

Example Configuration for S3:

helm upgrade --install trilio-openstack ./trilio-openstack --namespace=trilio-openstack \
     --values=./trilio-openstack/values_overrides/image_pull_secrets.yaml \
     --values=./trilio-openstack/values_overrides/keystone.yaml \
     --values=./trilio-openstack/values_overrides/s3.yaml \
     --values=./trilio-openstack/values_overrides/mosk25.1.yaml \
     --values=./trilio-openstack/values_overrides/admin_creds.yaml \
     --values=./trilio-openstack/values_overrides/tls_public_endpoint.yaml \
     --values=./trilio-openstack/values_overrides/ceph.yaml \
     --values=./trilio-openstack/values_overrides/db_drop.yaml \
     --values=./trilio-openstack/values_overrides/ingress.yaml \
     --values=./trilio-openstack/values_overrides/triliovault_passwords.yaml
     echo -e "Waiting for TrilioVault pods to reach running state"
     ./trilio-openstack/utils/wait_for_pods.sh trilio-openstack
     kubectl get pods

11.1.3] Select the Appropriate OpenStack Helm Version

Use the correct YAML file based on your OpenStack Helm version:

  • MOSK 25.1 → mosk25.1.yaml

11.1.4] Validate values_overrides Configuration

Ensure the correct configurations are used:

  • Disable Ceph in ceph.yaml if not applicable.

  • Remove tls_public_endpoint.yaml if TLS is unnecessary.

11.2] Run the Installation Script

Execute the installation:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
./install.sh

11.3] Fetch Trilio FQDNs

kubectl -n trilio-openstack get ingress

Example Output:

root@master:~# kubectl -n trilio-openstack get ingress
NAME                                   CLASS           HOSTS                                                                                                                   ADDRESS       PORTS     AGE
triliovault-datamover                  nginx           triliovault-datamover,triliovault-datamover.trilio-openstack,triliovault-datamover.trilio-openstack.svc.cluster.local   192.168.2.5   80        14h
triliovault-datamover-cluster-fqdn     nginx-cluster   triliovault-datamover.triliodata.demo                                                                                                 80, 443   14h
triliovault-datamover-namespace-fqdn   nginx           triliovault-datamover.triliodata.demo                                                                                   192.168.2.5   80, 443   14h
triliovault-wlm                        nginx           triliovault-wlm,triliovault-wlm.trilio-openstack,triliovault-wlm.trilio-openstack.svc.cluster.local                     192.168.2.5   80        14h
triliovault-wlm-cluster-fqdn           nginx-cluster   triliovault-wlm.triliodata.demo                                                                                                       80, 443   14h
triliovault-wlm-namespace-fqdn         nginx           triliovault-wlm.triliodata.demo                                                                                         192.168.2.5   80, 443   14h

If an ingress does not have an IP assigned in the ADDRESS column, follow these steps:

Run the following command to patch each ingress with the kubernetes.io/ingress.class annotation:

for i in $(kubectl get ingress -n trilio-openstack -o name); do
  kubectl patch -n trilio-openstack "$i" --type='json' -p='[
    {"op": "remove", "path": "/spec/ingressClassName"},
    {"op": "add", "path": "/metadata/annotations/kubernetes.io~1ingress.class", "value": "openstack-ingress-nginx"}
  ]' || echo "Skipping $i (possibly missing ingressClassName)"
done
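
After patching, re-run the ingress listing and confirm the ADDRESS column is populated:

kubectl -n trilio-openstack get ingress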

11.4] Verify Installation

11.4.1] Check Helm Release Status

helm status trilio-openstack

11.4.2] Validate Deployed Containers

Ensure correct image versions are used by checking container tags or SHA digests.
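
One way to list the image used by every container in the namespace is a jsonpath query:

kubectl get pods -n trilio-openstack \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'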

11.4.3] Verify Pod Status

kubectl get pods -n trilio-openstack

Example Output:

NAME                                                 READY   STATUS      RESTARTS   AGE
triliovault-datamover-api-5c7fbb949c-2m8dc           1/1     Running     0          21h
triliovault-datamover-api-5c7fbb949c-kxspg           1/1     Running     0          21h
triliovault-datamover-api-5c7fbb949c-z4wkn           1/1     Running     0          21h
triliovault-datamover-db-init-7k7jg                  0/1     Completed   0          21h
triliovault-datamover-db-sync-6jkgs                  0/1     Completed   0          21h
triliovault-datamover-ks-endpoints-gcrht             0/3     Completed   0          21h
triliovault-datamover-ks-service-nnnvh               0/1     Completed   0          21h
triliovault-datamover-ks-user-td44v                  0/1     Completed   0          20h
triliovault-datamover-openstack-compute-node-4gkv8   1/1     Running     0          21h
triliovault-datamover-openstack-compute-node-6lbc4   1/1     Running     0          21h
triliovault-datamover-openstack-compute-node-pqslx   1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-52449                 1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-h47mw                 1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-rjbvl                 1/1     Running     0          21h
triliovault-wlm-cloud-trust-h8xgq                    0/1     Completed   0          20h
triliovault-wlm-cron-574ff78486-54rqg                1/1     Running     0          21h
triliovault-wlm-db-init-hvk65                        0/1     Completed   0          21h
triliovault-wlm-db-sync-hpl4c                        0/1     Completed   0          21h
triliovault-wlm-ks-endpoints-4bsxl                   0/3     Completed   0          21h
triliovault-wlm-ks-service-btcb4                     0/1     Completed   0          21h
triliovault-wlm-ks-user-gnfdh                        0/1     Completed   0          20h
triliovault-wlm-rabbit-init-ws262                    0/1     Completed   0          21h
triliovault-wlm-scheduler-669f4758b4-ks7qr           1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-mj8p2            1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-th6f4            1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-zhr4m            1/1     Running     0          21h

11.4.4] Check Job Completion

kubectl get jobs -n trilio-openstack

Example Output:

NAME                                 COMPLETIONS   DURATION   AGE
triliovault-datamover-db-init        1/1           5s         21h
triliovault-datamover-db-sync        1/1           8s         21h
triliovault-datamover-ks-endpoints   1/1           17s        21h
triliovault-datamover-ks-service     1/1           18s        21h
triliovault-datamover-ks-user        1/1           19s        21h
triliovault-wlm-cloud-trust          1/1           2m10s      20h
triliovault-wlm-db-init              1/1           5s         21h
triliovault-wlm-db-sync              1/1           20s        21h
triliovault-wlm-ks-endpoints         1/1           17s        21h
triliovault-wlm-ks-service           1/1           17s        21h
triliovault-wlm-ks-user              1/1           19s        21h
triliovault-wlm-rabbit-init          1/1           4s         21h

11.4.5] Verify NFS Backup Target (if applicable)

kubectl get pvc -n trilio-openstack

Example Output:

NAME                                               STATUS   VOLUME                                            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
triliovault-nfs-pvc-172-25-0-10-mnt-tvault-42424   Bound    triliovault-nfs-pv-172-25-0-10-mnt-tvault-42424   20Gi       RWX            nfs            6d

11.4.6] Validate S3 Backup Target (if applicable)

Ensure S3 is correctly mounted on all WLM pods.
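
A quick check is to inspect the filesystems inside a WLM pod. The deployment name below is inferred from the pod listing above, and the exact S3 fuse mount path varies by configuration:

# Look for the S3 fuse mount in the output
kubectl -n trilio-openstack exec deploy/triliovault-wlm-api -- df -h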

The Trilio-OpenStack Helm chart installation is complete.

12] Install Trilio Horizon Plugin

12.1] Find Horizon Nodes

Run the following command to fetch the hostnames and IP addresses of the Horizon nodes:

kubectl get nodes -o wide --show-labels | grep openstack-control-plane=enabled |  awk '{print $1, $6;}'
# Sample output:
-------------------
root@helm1# kubectl get nodes -o wide --show-labels | grep openstack-control-plane=enabled |  awk '{print $1, $6;}'
helm2 172.25.10.203
helm3 172.25.10.204
helm4 172.25.10.205

12.2] SSH to the Kubernetes nodes listed above and run the following commands. [Run this step on all Horizon nodes]

  • Prerequisites: Gather the following variable values

    1. <TRILIO_DOCKER_REGISTRY_USERNAME>, <TRILIO_DOCKER_REGISTRY_PASSWORD>

    2. <TRILIOVAULT_HORIZON_PLUGIN_IMAGE_TAG>: Check the TrilioVault release notes for the TrilioVault Horizon Docker image tag

  • SSH to each node using the IP addresses/hostnames fetched in the previous step and pull the TrilioVault Horizon image

# ssh to horizon nodes
ssh <HORIZON_NODE_IP>
# Login to TrilioVault Docker Image Registry
docker login docker.io -u <TRILIO_DOCKER_REGISTRY_USERNAME> -p <TRILIO_DOCKER_REGISTRY_PASSWORD>
# Pull TrilioVault Horizon Plugin Image
docker pull docker.io/trilio/trilio-horizon-plugin-helm:<TRILIOVAULT_HORIZON_PLUGIN_IMAGE_TAG>
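
To confirm the image is now present on the node:

docker images | grep trilio-horizon-plugin-helm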

12.3] Edit your 'openstackdeployment.yaml'

  • Get the OpenStack deployment YAML file.

## Get mosk openstack deployment resource name
kubectl -n openstack get osdpl

## Example: Here 'osh-dev' is mosk openstack deployment resource name
kubectl -n openstack get osdpl
NAME      OPENSTACK   AGE    DRAFT
osh-dev   victoria    243d

## Get its resource definition in YAML format.
kubectl -n openstack get osdpl ${OSDPL_RESOURCE_NAME} -o yaml > openstackdeployment.yaml
## Example
kubectl -n openstack get osdpl osh-dev -o yaml > openstackdeployment.yaml

  • Edit your openstackdeployment.yaml and add the following block under the "spec -> services" section. Set the correct image tag for the Horizon plugin image here.

  services:
    dashboard:
      horizon:
        values:
          images:
            tags:
              horizon: docker.io/trilio/trilio-horizon-plugin-helm:<TRILIOVAULT_HORIZON_PLUGIN_IMAGE_TAG>
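
As an alternative to editing the file, the same change can be applied directly with a merge patch; this sketch assumes the osh-dev resource name from the example above. If you patch directly, step 12.4 below is not needed:

kubectl -n openstack patch osdpl osh-dev --type merge -p '{"spec":{"services":{"dashboard":{"horizon":{"values":{"images":{"tags":{"horizon":"docker.io/trilio/trilio-horizon-plugin-helm:<TRILIOVAULT_HORIZON_PLUGIN_IMAGE_TAG>"}}}}}}}}'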

12.4] Update the OpenStack Cloud

kubectl apply -f openstackdeployment.yaml

12.5] Check Deployment Status

The STATE column should show APPLIED.

kubectl -n openstack get osdplst
##Sample output of update completed state:
--------------------------------------------
root@helm1:# kubectl -n openstack get osdplst
NAME      OPENSTACK VERSION   CONTROLLER VERSION   STATE
osh-dev   victoria            0.8.3                APPLIED

  • Verify that the Horizon pods are in the Running state.

kubectl get pods -n openstack | grep horizon
## Sample output
root@helm1:# kubectl get pods -n openstack | grep horizon
horizon-f9d4c747d-8wmzt                                        1/1     Running     0          4d20h
horizon-f9d4c747d-lhplg                                        1/1     Running     0          4d19h

Installation of the TrilioVault Horizon plugin is complete. You can now log in to OpenStack Horizon; you should see a 'Backups' tab in the OpenStack project space. If you are already logged in to Horizon, log out and log back in.

13] Install Dynamic Backup Target

To add a dynamic backup target, refer to Install Dynamic Backup Target.
