Multi-IP NFS Backup target mapping file configuration

Introduction

Filename and location:

triliovault-cfg-scripts/common/triliovault_nfs_map_input.yml

This file is used only when a multi-IP/multi-endpoint NFS share is configured as the backup target for Trilio. In all other cases, such as a single-IP NFS share or S3, this file is not used and the regular installation documentation applies.

When a multi-IP/multi-endpoint NFS share is used as the backup target, Trilio mounts exactly one IP/endpoint of that share on any given compute node. Users can therefore distribute the share's IPs/endpoints across the compute nodes.

The 'triliovault_nfs_map_input.yml' file is the tool users employ to distribute/load-balance NFS share endpoints across the compute nodes of a given cloud.

Note: Mapping two IPs/endpoints of the same NFS share to a single compute node is an invalid scenario and serves no purpose, because both endpoints store the data in the same place on the backend.

Examples

Using hostnames

In this example, the user has one NFS share exposed through three IP addresses: 192.168.1.33, 192.168.1.34, and 192.168.1.35. The share directory path is /var/share1.

So, this NFS share supports the following full paths that clients can mount:

192.168.1.33:/var/share1 
192.168.1.34:/var/share1 
192.168.1.35:/var/share1
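
To confirm which paths an endpoint actually exports before filling in the mapping file, you can query it from any client with the standard NFS utilities. A minimal check, assuming the showmount utility is installed and the endpoints are reachable:

showmount -e 192.168.1.33
showmount -e 192.168.1.34
showmount -e 192.168.1.35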

There are 32 compute nodes in the OpenStack cloud. Thirty of the node hostnames follow this naming pattern:

prod-compute-1.trilio.demo 
prod-compute-2.trilio.demo 
prod-compute-3.trilio.demo 
. 
. 
. 
prod-compute-30.trilio.demo

The remaining two node hostnames do not follow any pattern:

compute_bare.trilio.demo 
compute_virtual

The mapping file then looks like the following. Range expressions such as 'prod-compute-[1:10].trilio.demo' expand to prod-compute-1.trilio.demo through prod-compute-10.trilio.demo, so all 32 compute nodes are covered exactly once.

multi_ip_nfs_shares: 
 - "192.168.1.34:/var/share1": ['prod-compute-[1:10].trilio.demo', 'compute_bare.trilio.demo'] 
   "192.168.1.35:/var/share1": ['prod-compute-[11:20].trilio.demo', 'compute_virtual'] 
   "192.168.1.33:/var/share1": ['prod-compute-[21:30].trilio.demo'] 

single_ip_nfs_shares: []

Using IPs

The compute node IPs used here are 172.30.3.11 through 172.30.3.40, plus 172.30.4.40 and 172.30.4.50, for a total of 32 compute nodes.

multi_ip_nfs_shares: 
 - "192.168.1.34:/var/share1": ['172.30.3.[11:20]', '172.30.4.40'] 
   "192.168.1.35:/var/share1": ['172.30.3.[21:30]', '172.30.4.50'] 
   "192.168.1.33:/var/share1": ['172.30.3.[31:40]'] 

single_ip_nfs_shares: []
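
Once Trilio has been deployed with either map, you can check which endpoint was actually mounted on a particular compute node. A simple verification run on the compute node itself, assuming the share path /var/share1 from the examples above:

mount | grep ':/var/share1'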

Getting the correct compute hostnames/IPs

RHOSP or TripleO

Use the following command to get the compute hostnames. Check the 'Name' column and use these exact hostnames in the 'triliovault_nfs_map_input.yml' file.

In the command output below, 'overcloudtrain1-novacompute-0' and 'overcloudtrain1-novacompute-1' are the correct hostnames.

Run this command on the undercloud after sourcing 'stackrc'.

(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
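
To list only the compute node names for copying into the mapping file, the same CLI output can be filtered. For example, assuming the compute node names contain 'compute' as in the output above:

(undercloud) [stack@ucqa161 ~]$ openstack server list -c Name -f value | grep compute
overcloudtrain1-novacompute-0
overcloudtrain1-novacompute-1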

Kolla-Ansible OpenStack

Compute hostnames/IPs must match between the kolla-ansible inventory file and 'triliovault_nfs_map_input.yml'.

If IP addresses are used in the kolla-ansible inventory file, then the same IP addresses must be used in the 'triliovault_nfs_map_input.yml' file too. If the kolla-ansible deploy command looks like the following,

kolla-ansible -i multinode deploy

then the inventory file is 'multinode'. Generally, it is available at /root/multinode.
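
For reference, the compute nodes are listed in the [compute] group of the kolla-ansible inventory; the entries found there are what must be reused in 'triliovault_nfs_map_input.yml'. An illustrative snippet (the hostnames are examples only, not taken from a real inventory):

[compute]
prod-compute-1.trilio.demo
prod-compute-2.trilio.demo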

OpenStack Ansible

Compute hostnames/IPs must match between the OpenStack-Ansible inventory file and 'triliovault_nfs_map_input.yml'.

If IP addresses have been used in the OpenStack-Ansible inventory file, then the same IP addresses must be used in the 'triliovault_nfs_map_input.yml' file too.

The inventory file is generally available at /etc/openstack_deploy/openstack_user_config.yml.
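
In that file, compute nodes are typically defined under the compute_hosts section; the names and 'ip' values found there are what must be reused in 'triliovault_nfs_map_input.yml'. An illustrative sketch (host names and addresses are examples only):

compute_hosts:
  prod-compute-1:
    ip: 172.30.3.11
  prod-compute-2:
    ip: 172.30.3.12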

The OpenStack-Ansible deploy command looks like the following:

openstack-ansible os-tvault-install.yml

OpenStack-Ansible automatically picks up the values that the user has set in openstack_user_config.yml.

Other, more complex examples are available on GitHub at: triliovault-cfg-scripts/common/examples-multi-ip-nfs-map at master · trilioData/triliovault-cfg-scripts