If you plan to use NFS as a backup target, install nfs-common on each Kubernetes node where TrilioVault is running. Skip this step for S3 backup targets.
SSH into each Kubernetes node that has one of the following labels:
kubectl get nodes --show-labels | grep openstack-control-plane
kubectl get nodes --show-labels | grep openstack-compute
Run this command on the respective nodes:
apt-get install nfs-common -y
1.3] Install Necessary Dependencies
Run the following command on the installation node:
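As an illustration, the installation node typically needs git (to clone the chart repository) and the Helm CLI. The exact dependency set is an assumption; consult the Trilio release notes for your version:

```shell
# Assumed dependencies for the installation node (adjust for your distribution):
apt-get update
apt-get install -y git curl

# Install Helm v3 via the official installer script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```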
2] Clone Helm Chart Repository
Refer to the Resources link to get the release-specific value of the placeholder trilio_branch.
3] Configure Container Image Tags
Select the appropriate image values file based on your MOSK setup version. Update the Trilio-OpenStack image tags accordingly.
4] Create the trilio-openstack Namespace
To isolate trilio-openstack services, create a dedicated Kubernetes namespace:
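The namespace can be created with a standard kubectl command, for example:

```shell
# Create a dedicated namespace for trilio-openstack services
kubectl create namespace trilio-openstack
```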
5] Label Kubernetes Nodes for TrilioVault Control Plane
Trilio for OpenStack control plane services should run on Kubernetes nodes labeled with triliovault-control-plane=enabled. Only nodes labeled with openstack-control-plane should be selected. For high availability, it is recommended to use three Kubernetes nodes.
Steps:
1] Retrieve the OpenStack control plane node names:
2] Assign the triliovault-control-plane label to the selected nodes:
3] Verify the nodes with the assigned label:
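The three labeling steps above can be sketched as follows. The `openstack-control-plane=enabled` selector value and the node-name placeholder are assumptions; adjust them to match your cluster:

```shell
# 1] List the OpenStack control plane nodes
kubectl get nodes -l openstack-control-plane=enabled -o name

# 2] Label each selected node (repeat for all three nodes)
kubectl label node <node-name> triliovault-control-plane=enabled

# 3] Verify the label assignment
kubectl get nodes -l triliovault-control-plane=enabled
```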
6] Configure the Backup Target for Trilio-OpenStack
Backup target storage holds the backup images taken by Trilio, along with the details needed for configuration.
Trilio supports the following backup target types:
a) NFS
b) S3
Steps:
1] If using NFS as the backup target, define its details in the file below:
2] If using S3, configure its details in the file below:
3] If using S3 with TLS enabled and self-signed certificates, store the CA certificate in the file below:
The deployment scripts will automatically place this certificate in the required location.
This YAML file will be used in the trilio-openstack 'install' command at a later step in this document.
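The backup-target files referenced above are expected to live alongside the other values overrides. The exact paths below are an assumption based on the repository layout used elsewhere in this document; verify them against your checkout:

```shell
# Assumed locations of the backup-target values files in the cloned repository:
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/nfs.yaml   # NFS details
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/s3.yaml    # S3 details
```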
7] Deploy the RabbitMQ Service
Make sure the OpenStack-Helm repositories are enabled and the OpenStack-Helm plugin is installed. You can also clone the OpenStack-Helm repository using the following command:
Set the environment variables used in the subsequent sections:
Use the following script to deploy the RabbitMQ service in the trilio-openstack namespace:
Provide the appropriate storage class name in the --set volume.class_name field.
8] Retrieve and Configure Keystone, Database and RabbitMQ Credentials
8.1] Fetch Internal and Public Domain Names of the Kubernetes Cluster.
Fetch the 'internal_domain_name' and 'public_domain_name' of the Kubernetes cluster where the MOSK cloud is deployed. Get the 'osdpl' object name and describe it to obtain the domain names, as shown below:
Example command output: Here 'osh-dev' is the osdpl object name.
Here internal_domain_name=cluster.local and public_domain_name=triliodata.demo
8.2] Fetch Keystone, RabbitMQ, and Database Admin Credentials.
a) These credentials are required for Trilio deployment.
b) Navigate to the utils directory:
c) Generate the admin credentials file using the previously retrieved domain names:
e) The cloud admin user in Keystone must have the admin role on the cloud domain. Update the required credentials:
f) [IMP] Verify and Update Network Interface Values
(i) Before proceeding, ensure the correct network interface values are configured for live migration and hypervisor communication.
(ii) Run the following command to inspect the live migration interface:
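One way to inspect the value, assuming it is read from the osdpl resource used earlier in this document:

```shell
# Assumed approach: read the live migration interface from the osdpl resource
kubectl -n openstack get osdpl ${OSDPL_RESOURCE_NAME} -o yaml | grep live_migration_interface
```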
(iii) If the value is mcc-lcm → no change needed.
(iv) If the value is anything else → update it in values.yaml:
Update the following section:
Replace "<output>" with the actual network interface name in your environment (e.g., mcc-lcm, tenant, tenant-v).
9] Configure Ceph Storage (if used as Nova/Cinder Backend)
If your setup uses Ceph as the storage backend for Nova/Cinder, configure Ceph settings for Trilio.
1] Run the Ceph configuration script:
2] Verify that the output is correctly written to:
If the output is incorrect, perform the following manual steps:
i) Edit the Ceph configuration file
Set rbd_user and keyring (the user should have read/write access to the Nova/Cinder pools).
By default, the cinder/nova user usually has these permissions, but it’s recommended to verify.
ii) Copy the contents of /etc/ceph/ceph.conf into the following file, replacing any existing content:
10] Create Docker Registry Credentials Secret
Trilio images are hosted in a private registry. You must create an ImagePullSecret in the trilio-openstack namespace.
1] Navigate to the utilities directory:
2] Run the script with Trilio's Docker registry credentials to create the Kubernetes secret:
3] Verify that the secret has been created successfully:
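The verification can be done with a standard kubectl command; the exact secret name depends on the script, so just check that it appears in the listing:

```shell
# List secrets in the trilio-openstack namespace; the image pull secret
# created by the script should appear here.
kubectl -n trilio-openstack get secrets
```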
11] Install T4O Helm Chart
11.1] Review the Installation Script
11.1.1] Open the install.sh Script
The install.sh script installs the Trilio Helm chart in the trilio-openstack namespace.
11.1.2] Configure Backup Target
Modify the script to select the appropriate backup target:
a) NFS Backup Target: Default configuration includes nfs.yaml.
b) S3 Backup Target: Replace nfs.yaml with s3.yaml.
Example Configuration for S3:
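A hypothetical sketch of the install command with the S3 overrides selected. The file names other than s3.yaml and mosk25.1.yaml are assumptions; keep the entries your install.sh already uses and only swap nfs.yaml for s3.yaml:

```shell
# Sketch: Helm install with the S3 values override instead of nfs.yaml.
helm install trilio-openstack ./trilio-openstack \
  --namespace trilio-openstack \
  --values values_overrides/s3.yaml \
  --values values_overrides/mosk25.1.yaml
```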
11.1.3] Select the Appropriate OpenStack Helm Version
Use the correct YAML file based on your OpenStack Helm Version:
MOSK 25.1 → mosk25.1.yaml
11.1.4] Validate values_overrides Configuration
Ensure the correct configurations are used:
Disable Ceph in ceph.yaml if not applicable.
Remove tls_public_endpoint.yaml if TLS is unnecessary.
11.2] Run the Installation Script
Execute the installation:
11.3] Fetch Trilio FQDNs
Example Output:
If the ingress service doesn’t have an IP assigned in Address column, follow these steps:
Run this command to patch the ingress with the ingressClass annotation:
11.4] Verify Installation
11.4.1] Check Helm Release Status
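The release status can be checked with standard Helm commands; the release name trilio-openstack is an assumption based on the chart name used in this document:

```shell
# Show the status of the Trilio release (release name assumed)
helm status trilio-openstack -n trilio-openstack

# Or list all releases in the namespace
helm list -n trilio-openstack
```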
11.4.2] Validate Deployed Containers
Ensure correct image versions are used by checking container tags or SHA digests.
11.4.3] Verify Pod Status
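Pod status can be checked with:

```shell
# All Trilio pods should reach Running (or Completed for job pods)
kubectl -n trilio-openstack get pods
```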
Example Output:
11.4.4] Check Job Completion
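Job completion can be checked with:

```shell
# All jobs should show full completions (e.g. 1/1)
kubectl -n trilio-openstack get jobs
```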
Example Output:
11.4.5] Verify NFS Backup Target (if applicable)
Example Output:
11.4.6] Validate S3 Backup Target (if applicable)
Ensure S3 is correctly mounted on all WLM pods.
Trilio-OpenStack Helm Chart Installation is Done!
12] Install Trilio Horizon Plugin
12.1] Find horizon nodes
Run the following commands to fetch the hostnames and IP addresses of the Horizon nodes.
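One way to find the nodes, assuming Horizon runs as pods in the openstack namespace:

```shell
# List Horizon pods along with the nodes they are scheduled on
kubectl -n openstack get pods -o wide | grep horizon
```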
12.2] SSH to the Kubernetes nodes listed above and run the following commands. [Run this step on all Horizon nodes]
<TRILIOVAULT_HORIZON_PLUGIN_IMAGE_TAG>: check the TrilioVault release notes for the TrilioVault Horizon Docker image tag.
SSH to each node using the IP addresses/hostnames fetched in the previous step and pull the TrilioVault Horizon image:
12.3] Edit your 'openstackdeployment.yaml'
Get openstack deployment yaml file.
Edit your openstackdeployment.yaml and add the following block under the 'spec->services' block. Set the correct image tag for the Horizon plugin image here.
12.4] Update the openstack cloud
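One common approach, assuming the change is applied by editing the osdpl object directly so the MOSK controller picks it up:

```shell
# Edit the osdpl object in place (resource name as fetched earlier)
kubectl -n openstack edit osdpl ${OSDPL_RESOURCE_NAME}
```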
12.5] Check status of deployment:
STATE should be APPLIED.
Verify that horizon pods are in running state.
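The Horizon pods can be checked with:

```shell
# Horizon pods should be in the Running state after the update
kubectl -n openstack get pods | grep horizon
```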
Installation of the TrilioVault Horizon plugin is done. You can log in to OpenStack Horizon; you should see a 'Backups' tab in the OpenStack project space. If you are already logged in to Horizon, please log out and log in again.
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/
helm dep up
cd ../../../
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/mosk25.1.yaml
# If helm osh plugin doesn't exist, install it
helm plugin install https://opendev.org/openstack/openstack-helm-plugin.git
# Set OpenStack release (adjust as needed for the deployment version)
export OPENSTACK_RELEASE=2024.1
# Features enabled for the deployment. This is used to look up values overrides.
export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"
# Directory where values overrides are looked up or downloaded to.
export OVERRIDES_DIR=$(pwd)/overrides
mkdir -p "${OVERRIDES_DIR}"
cd openstack-helm/rabbitmq
helm dependency build
cd ../..
vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values.yaml
libvirt:
  images_rbd_ceph_conf: /etc/ceph/ceph.conf
  live_migration_interface: "<output>"   # Interface used for live migration traffic
  hypervisor_host_interface: "<output>"  # Interface used for hypervisor communication
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
./get_ceph_mosk.sh
## Get mosk openstack deployment resource name
kubectl -n openstack get osdpl
## Example: here 'osh-dev' is the MOSK OpenStack deployment resource name
kubectl -n openstack get osdpl
NAME      OPENSTACK   AGE    DRAFT
osh-dev   victoria    243d
## Get its resource definition in YAML format.
kubectl -n openstack get osdpl ${OSDPL_RESOURCE_NAME} -o yaml > openstackdeployment.yaml
## Example
kubectl -n openstack get osdpl osh-dev -o yaml > openstackdeployment.yaml
kubectl -n openstack get osdplst
##Sample output of update completed state:
--------------------------------------------
root@helm1:# kubectl -n openstack get osdplst
NAME      OPENSTACK VERSION   CONTROLLER VERSION   STATE
osh-dev   victoria            0.8.3                APPLIED