

Getting started with Trilio on OpenStack-Helm

1] Prepare for deployment

1.1] Install Helm CLI Client

Ensure the Helm CLI client is installed on the node from which you are installing Trilio for OpenStack.

curl -O https://get.helm.sh/helm-v3.17.2-linux-amd64.tar.gz
tar -zxvf helm*.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
rm -rf linux-amd64 helm*.tar.gz
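
To confirm the client is installed and on your PATH:

helm version --short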

1.2] Remove Existing Trilio for OpenStack Docker Images

Check for existing Trilio for OpenStack Docker images and remove them before proceeding.

## List existing TrilioVault Docker images
docker images | grep trilio
## Remove existing Trilio for OpenStack Docker images
docker image rm <trilio-docker-img-name>

1.3] Install NFS Client Package (Optional)

If you plan to use NFS as a backup target, install nfs-common on each Kubernetes node where TrilioVault is running. Skip this step for S3 backup targets.

## SSH into each Kubernetes node with Trilio for OpenStack enabled (control plane and compute nodes)
apt-get install nfs-common -y
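
Optionally, run a quick sanity check that the NFS export is reachable from the node; the server address below is a placeholder for your environment.

## Verify the NFS server exports are visible from this node
showmount -e <NFS_SERVER_IP>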

1.4] Install Necessary Dependencies

Run the following command on the installation node:

sudo apt update -y && sudo apt install make jq -y

2] Clone Helm Chart Repository

Refer to the release-specific Resources page for the value of the {{ trilio_branch }} placeholder.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/
helm dep up
cd ../../../

3] Configure Container Image Tags

Select the appropriate image values file based on your OpenStack-Helm setup. Update the Trilio-OpenStack image tags accordingly.

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/2023.2.yaml
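
As a quick check (the exact key layout depends on the chart release), you can list the image references currently set in the file:

grep -n 'trilio' triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/2023.2.yaml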

4] Create the trilio-openstack Namespace

To isolate trilio-openstack services, create a dedicated Kubernetes namespace:

kubectl create namespace trilio-openstack
kubectl config set-context --current --namespace=trilio-openstack

5] Label Kubernetes Nodes for TrilioVault Control Plane

Trilio for OpenStack control plane services should run on Kubernetes nodes labeled as triliovault-control-plane=enabled. It is recommended to use three Kubernetes nodes for high availability.

Steps:

1] Retrieve the OpenStack control plane node names:

kubectl get nodes --show-labels | grep openstack-control-plane

2] Assign the triliovault-control-plane label to the selected nodes:

kubectl label nodes <NODE_NAME_1> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_2> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_3> triliovault-control-plane=enabled

3] Verify the nodes with the assigned label:

kubectl get nodes --show-labels | grep triliovault-control-plane

6] Configure the Backup Target for Trilio-OpenStack

Backup target storage holds the backup images taken by Trilio; the details needed to configure it depend on the target type. Trilio supports the following backup target types:

a) NFS

b) S3

Steps:

If using NFS as the backup target, define its details in:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/nfs.yaml

If using S3, configure its details in:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/s3.yaml

If using S3 with TLS enabled and self-signed certificates, store the CA certificate in:

triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem

The deployment scripts will automatically place this certificate in the required location.
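
Before deploying, it can help to confirm the certificate is valid and not expired, using the standard openssl CLI:

openssl x509 -in triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem -noout -subject -enddate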

This YAML file is passed to the trilio-openstack install command in a later step of this document.

7] Provide Cloud Admin Credentials in keystone.yaml

The cloud admin user in Keystone must have the admin role on the cloud domain. Update the required credentials:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/keystone.yaml

8] Retrieve and Configure Keystone, Database and RabbitMQ Credentials

1] Fetch Internal and Public Domain Names of the Kubernetes Cluster.

2] Fetch Keystone, RabbitMQ, and Database Admin Credentials.

a) These credentials are required for Trilio deployment.

b) Navigate to the utils directory:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils

c) Generate the admin credentials file using the previously retrieved domain names:

./get_admin_creds.sh <internal_domain_name> <public_domain_name>
## Example:
./get_admin_creds.sh cluster.local triliodata.demo

3] Verify that the credentials file is created:

cat ../values_overrides/admin_creds.yaml

Please ensure that the correct ca.crt is present inside the secret “trilio-ca-cert” in the openstack namespace. If the secret is created with a different name, make sure to update the reference in the ./get_admin_creds.sh script before executing it.
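
A quick way to inspect the CA certificate held by the secret (assuming the default name trilio-ca-cert in the openstack namespace):

kubectl -n openstack get secret trilio-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -enddate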

9] Configure Ceph Storage (if used as Nova/Cinder Backend)

If your setup uses Ceph as the storage backend for Nova/Cinder, configure Ceph settings for Trilio.

Manual Approach

1] Edit the Ceph configuration file:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/ceph.yaml

2] Set rbd_user and keyring. The user must have read/write access to the Nova/Cinder pools; the cinder/nova user usually has these permissions by default, but it is recommended to verify, as shown after this step. Then copy the contents of /etc/ceph/ceph.conf into the appropriate Trilio template file:

vi ../templates/bin/_triliovault-ceph.conf.tpl
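
To verify the user's capabilities on the Ceph cluster, you can query them directly; client.cinder below is an example name, substitute the user you set as rbd_user:

## Show the caps granted to the rbd_user
ceph auth get client.cinder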

Automated Approach

1] Run the Ceph configuration script:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
./get_ceph.sh

2] Verify that the output is correctly written to:

cat ../values_overrides/ceph.yaml

10] Create Docker Registry Credentials Secret

Trilio images are hosted in a private registry. You must create an ImagePullSecret in the trilio-openstack namespace.

1] Navigate to the utilities directory:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils

2] Run the script with Trilio's Docker registry credentials to create the Kubernetes secret:

./create_image_pull_secret.sh <TRILIO_REGISTRY_USERNAME> <TRILIO_REGISTRY_PASSWORD>

3] Verify that the secret has been created successfully:

kubectl describe secret triliovault-image-registry -n trilio-openstack
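
If you want to double-check the registry URL and username stored in the secret, decode it (jq was installed in step 1.4):

kubectl -n trilio-openstack get secret triliovault-image-registry -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq .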

11] Install Trilio for OpenStack Helm Chart

11.1] Review the Installation Script

11.1.1] Open the install.sh Script

The install.sh script installs the Trilio Helm chart in the trilio-openstack namespace.

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
vi install.sh

11.1.2] Configure Backup Target

Modify the script to select the appropriate backup target:

a) NFS Backup Target: Default configuration includes nfs.yaml.

b) S3 Backup Target: Replace nfs.yaml with s3.yaml.

Example Configuration for S3:

helm upgrade --install trilio-openstack ./trilio-openstack --namespace=trilio-openstack \
     --values=./trilio-openstack/values_overrides/image_pull_secrets.yaml \
     --values=./trilio-openstack/values_overrides/keystone.yaml \
     --values=./trilio-openstack/values_overrides/s3.yaml \
     --values=./trilio-openstack/values_overrides/2023.2.yaml \
     --values=./trilio-openstack/values_overrides/admin_creds.yaml \
     --values=./trilio-openstack/values_overrides/tls_public_endpoint.yaml \
     --values=./trilio-openstack/values_overrides/ceph.yaml \
     --values=./trilio-openstack/values_overrides/db_drop.yaml \
     --values=./trilio-openstack/values_overrides/ingress.yaml \
     --values=./trilio-openstack/values_overrides/triliovault_passwords.yaml
     echo -e "Waiting for TrilioVault pods to reach running state"
     ./trilio-openstack/utils/wait_for_pods.sh trilio-openstack
     kubectl get pods

11.1.3] Select the Appropriate OpenStack Helm Version

Use the correct YAML file based on your OpenStack Helm Version:

  • Antelope → 2023.1.yaml

  • Bobcat (default) → 2023.2.yaml

11.1.4] Validate values_overrides Configuration

Ensure the correct configurations are used:

  • Disable Ceph in ceph.yaml if not applicable.

  • Remove tls_public_endpoint.yaml if TLS is unnecessary.

11.2] Uninstall Existing Trilio for OpenStack Chart

For a fresh install, uninstall any previous deployment by following the OpenStack-Helm Uninstall Guide.

11.3] Run the Installation Script

Execute the installation:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
./install.sh

11.4] Configure DNS for Public Endpoints

11.4.1] Retrieve Ingress External IP

kubectl get service -n openstack | grep LoadBalancer
Example Output:

public-openstack      LoadBalancer   10.105.43.185    192.168.2.50   80:30162/TCP,443:30829/TCP

11.4.2] Fetch TrilioVault FQDNs

kubectl -n trilio-openstack get ingress
Example Output:


root@master:~# kubectl -n trilio-openstack get ingress
NAME                                   CLASS           HOSTS                                                                                                                   ADDRESS       PORTS     AGE
triliovault-datamover                  nginx           triliovault-datamover,triliovault-datamover.trilio-openstack,triliovault-datamover.trilio-openstack.svc.cluster.local   192.168.2.5   80        14h
triliovault-datamover-cluster-fqdn     nginx-cluster   triliovault-datamover.triliodata.demo                                                                                                 80, 443   14h
triliovault-datamover-namespace-fqdn   nginx           triliovault-datamover.triliodata.demo                                                                                   192.168.2.5   80, 443   14h
triliovault-wlm                        nginx           triliovault-wlm,triliovault-wlm.trilio-openstack,triliovault-wlm.trilio-openstack.svc.cluster.local                     192.168.2.5   80        14h
triliovault-wlm-cluster-fqdn           nginx-cluster   triliovault-wlm.triliodata.demo                                                                                                       80, 443   14h
triliovault-wlm-namespace-fqdn         nginx           triliovault-wlm.triliodata.demo                                                                                         192.168.2.5   80, 443   14h
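
Map the TrilioVault FQDNs to the ingress external IP in your DNS server. For a quick test without a DNS server, an /etc/hosts entry on the client machine works; the IP and domain below come from the example outputs above and will differ in your environment.

## Example /etc/hosts entry (use your LoadBalancer IP and FQDNs)
echo "192.168.2.50 triliovault-wlm.triliodata.demo triliovault-datamover.triliodata.demo" | sudo tee -a /etc/hosts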

If the ingress service doesn’t have an IP assigned, follow these steps:

1] Check the Ingress Controller Deployment

Look for the ingress-nginx-controller deployment; in OpenStack-Helm setups it typically runs in the openstack namespace (elsewhere, often ingress-nginx or kube-system):

kubectl -n openstack get deployment ingress-nginx-controller -o yaml | grep watch-namespace

2] Verify the --watch-namespace Arg

If the controller has a --watch-namespace argument, it means it’s watching only specific namespaces for ingress resources.

3] Update watch-namespace to include trilio-openstack

Edit the deployment to include trilio-openstack in the comma-separated list of namespaces:

kubectl edit deployment ingress-nginx-controller -n openstack
## Adjust -n to the namespace where the controller actually runs

Example --watch-namespace arg:

--watch-namespace=trilio-openstack,openstack

4] Restart the Controller

This will happen automatically when you edit the deployment, but you can manually trigger it if needed:

kubectl rollout restart deployment ingress-nginx-controller -n openstack

11.5] Verify Installation

11.5.1] Check Helm Release Status

helm status trilio-openstack

11.5.2] Validate Deployed Containers

Ensure correct image versions are used by checking container tags or SHA digests.
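
For example, the following lists every container image running in the namespace so the tags can be compared against the values file:

kubectl get pods -n trilio-openstack -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u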

11.5.3] Verify Pod Status

kubectl get pods -n trilio-openstack
Example Output:


NAME                                                 READY   STATUS      RESTARTS   AGE
triliovault-datamover-api-5c7fbb949c-2m8dc           1/1     Running     0          21h
triliovault-datamover-api-5c7fbb949c-kxspg           1/1     Running     0          21h
triliovault-datamover-api-5c7fbb949c-z4wkn           1/1     Running     0          21h
triliovault-datamover-db-init-7k7jg                  0/1     Completed   0          21h
triliovault-datamover-db-sync-6jkgs                  0/1     Completed   0          21h
triliovault-datamover-ks-endpoints-gcrht             0/3     Completed   0          21h
triliovault-datamover-ks-service-nnnvh               0/1     Completed   0          21h
triliovault-datamover-ks-user-td44v                  0/1     Completed   0          20h
triliovault-datamover-openstack-compute-node-4gkv8   1/1     Running     0          21h
triliovault-datamover-openstack-compute-node-6lbc4   1/1     Running     0          21h
triliovault-datamover-openstack-compute-node-pqslx   1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-52449                 1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-h47mw                 1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-rjbvl                 1/1     Running     0          21h
triliovault-wlm-cloud-trust-h8xgq                    0/1     Completed   0          20h
triliovault-wlm-cron-574ff78486-54rqg                1/1     Running     0          21h
triliovault-wlm-db-init-hvk65                        0/1     Completed   0          21h
triliovault-wlm-db-sync-hpl4c                        0/1     Completed   0          21h
triliovault-wlm-ks-endpoints-4bsxl                   0/3     Completed   0          21h
triliovault-wlm-ks-service-btcb4                     0/1     Completed   0          21h
triliovault-wlm-ks-user-gnfdh                        0/1     Completed   0          20h
triliovault-wlm-rabbit-init-ws262                    0/1     Completed   0          21h
triliovault-wlm-scheduler-669f4758b4-ks7qr           1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-mj8p2            1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-th6f4            1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-zhr4m            1/1     Running     0          21h

11.5.4] Check Job Completion

kubectl get jobs -n trilio-openstack

Example Output:

NAME                                 COMPLETIONS   DURATION   AGE
triliovault-datamover-db-init        1/1           5s         21h
triliovault-datamover-db-sync        1/1           8s         21h
triliovault-datamover-ks-endpoints   1/1           17s        21h
triliovault-datamover-ks-service     1/1           18s        21h
triliovault-datamover-ks-user        1/1           19s        21h
triliovault-wlm-cloud-trust          1/1           2m10s      20h
triliovault-wlm-db-init              1/1           5s         21h
triliovault-wlm-db-sync              1/1           20s        21h
triliovault-wlm-ks-endpoints         1/1           17s        21h
triliovault-wlm-ks-service           1/1           17s        21h
triliovault-wlm-ks-user              1/1           19s        21h
triliovault-wlm-rabbit-init          1/1           4s         21h

11.5.5] Verify NFS Backup Target (if applicable)

kubectl get pvc -n trilio-openstack

Example Output:

triliovault-nfs-pvc-172-25-0-10-mnt-tvault-42424   Bound   triliovault-nfs-pv-172-25-0-10-mnt-tvault-42424   20Gi   RWX   nfs   6d

11.5.6] Validate S3 Backup Target (if applicable)

Ensure S3 is correctly mounted on all WLM pods.
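
One way to check is to exec into a WLM pod and look for the S3 fuse mount; grepping for triliovault assumes the default mount path naming, which may differ in your release.

kubectl -n trilio-openstack exec <triliovault-wlm-api-pod-name> -- df -h | grep -i triliovault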

Trilio-OpenStack Helm Chart Installation is Done!

Logs:

1] triliovault-datamover-api service logs.

Logs are available on the Kubernetes nodes:

kubectl get nodes --show-labels | grep triliovault-control-plane
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- See logs
vi /var/log/triliovault-datamover-api/triliovault-datamover-api.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-datamover-api pods 
kubectl get pods | grep triliovault-datamover-api
-- See logs 
kubectl logs <triliovault-datamover-api-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-datamover-api
triliovault-datamover-api-c87899fb7-dq2sd            1/1     Running     0          3d18h
triliovault-datamover-api-c87899fb7-j4fdz            1/1     Running     0          3d18h
triliovault-datamover-api-c87899fb7-nm8pt            1/1     Running     0          3d18h
root@helm1:~# kubectl logs triliovault-datamover-api-c87899fb7-dq2sd

2] triliovault-datamover service logs

Logs are available on the Kubernetes nodes:

kubectl get nodes --show-labels | grep openstack-compute-node
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- See logs
vi /var/log/triliovault-datamover/triliovault-datamover.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-datamover pods 
kubectl get pods | grep triliovault-datamover-openstack
-- See logs 
kubectl logs <triliovault-datamover-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-datamover-openstack
triliovault-datamover-openstack-compute-node-2krmj   1/1     Running     0          3d19h
triliovault-datamover-openstack-compute-node-9f5w7   1/1     Running     0          3d19h
root@helm1:~# kubectl logs triliovault-datamover-openstack-compute-node-2krmj

3] triliovault-wlm-api, triliovault-wlm-cron, triliovault-wlm-scheduler, and triliovault-wlm-workloads service logs

Logs are available on the Kubernetes nodes:

kubectl get nodes --show-labels | grep triliovault-control-plane
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- Log files are available in the following directory.
ls /var/log/triliovault-wlm/
## Sample command output
root@helm4:~# ls -ll /var/log/triliovault-wlm/
total 26576
-rw-r--r-- 1 42424 42424  2079322 Mar 20 07:55 triliovault-wlm-api.log
-rw-r--r-- 1 42424 42424 25000088 Mar 20 00:41 triliovault-wlm-api.log.1
-rw-r--r-- 1 42424 42424    12261 Mar 16 12:40 triliovault-wlm-cron.log
-rw-r--r-- 1 42424 42424    10263 Mar 16 12:36 triliovault-wlm-scheduler.log
-rw-r--r-- 1 42424 42424    87918 Mar 16 12:36 triliovault-wlm-workloads.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-wlm services pods
kubectl get pods | grep triliovault-wlm
-- See logs 
kubectl logs <triliovault-wlm-service-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-wlm
triliovault-wlm-api-7b956f7b8-84gtw                  1/1     Running     0          3d19h
triliovault-wlm-api-7b956f7b8-85mdk                  1/1     Running     0          3d19h
triliovault-wlm-api-7b956f7b8-hpcpt                  1/1     Running     0          3d19h
triliovault-wlm-cloud-trust-rdh8n                    0/1     Completed   0          3d19h
triliovault-wlm-cron-78bdb4b959-wzrfs                1/1     Running     0          3d19h
triliovault-wlm-db-drop-dhfgj                        0/1     Completed   0          3d19h
triliovault-wlm-db-init-snrsr                        0/1     Completed   0          3d19h
triliovault-wlm-db-sync-wffk5                        0/1     Completed   0          3d19h
triliovault-wlm-ks-endpoints-zvqtf                   0/3     Completed   0          3d19h
triliovault-wlm-ks-service-6425q                     0/1     Completed   0          3d19h
triliovault-wlm-ks-user-fmgsx                        0/1     Completed   0          3d19h
triliovault-wlm-rabbit-init-vsdn6                    0/1     Completed   0          3d19h
triliovault-wlm-scheduler-649b95ffd6-bkqxt           1/1     Running     0          3d19h
triliovault-wlm-workloads-6b98679d45-2kjdq           1/1     Running     0          3d19h
triliovault-wlm-workloads-6b98679d45-mxvhp           1/1     Running     0          3d19h
triliovault-wlm-workloads-6b98679d45-v4dn8           1/1     Running     0          3d19h
# kubectl logs triliovault-wlm-api-7b956f7b8-84gtw
# kubectl logs triliovault-wlm-cron-78bdb4b959-wzrfs
# kubectl logs triliovault-wlm-scheduler-649b95ffd6-bkqxt
# kubectl logs triliovault-wlm-workloads-6b98679d45-mxvhp

12] Install Trilio for OpenStack Horizon Plugin

Below are the steps to patch the Horizon deployment in an OpenStack Helm setup to install the Trilio Horizon Plugin.

12.1] Pre-requisites

  • Horizon is deployed via OpenStack Helm and is running in the openstack namespace.

  • Docker registry secret triliovault-image-registry must already exist in the openstack namespace from the steps performed during Trilio Installation.

kubectl describe secret triliovault-image-registry -n openstack
  • If it does not already exist, create it with the following command:

kubectl create secret docker-registry triliovault-image-registry \
  --docker-server="docker.io" \
  --docker-username=<TRILIO_REGISTRY_USERNAME> \
  --docker-password=<TRILIO_REGISTRY_PASSWORD> \
  -n openstack

12.2] Patch Horizon Deployment

Use the command below to patch the Horizon deployment with the Trilio Horizon Plugin image. Update the image tag as needed for your release.

kubectl -n openstack patch deployment horizon \
  --type='strategic' \
  -p '{
    "spec": {
      "template": {
        "spec": {
          "containers": [
            {
              "name": "horizon",
              "image": "docker.io/trilio/trilio-horizon-plugin-helm:6.1.0-dev-maint1-1-2023.2"
            }
          ],
          "imagePullSecrets": [
            {
              "name": "triliovault-image-registry"
            }
          ]
        }
      }
    }
  }'
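
The patch triggers a rolling restart of Horizon; you can watch it complete with:

kubectl -n openstack rollout status deployment/horizon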

12.3] Verification

After patching:

  1. Ensure the Horizon pods are restarted and running with the new image:

kubectl get pods -n openstack -l application=horizon,component=server -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n'

  2. Access the Horizon dashboard and verify the TrilioVault section appears in the UI.



