The next step in the installation process is to install the Trilio Nova API extension on each of the Nova API controller nodes. The purpose of the API extension is to route RESTful API calls to a Trilio Data Mover (which will be discussed next).
In case of multiple Controller nodes, these steps need to be repeated on all controller nodes running the Nova API service.
To install the Trilio Nova Extension on each Nova controller node, follow these steps after logging into a controller node.
Once started, the script prompts for the following information:
Trilio IP: this IP will be used to download the Trilio Nova API Extension pip packages.
Request whether the script is running on a Controller or Compute node: to install the Trilio Nova API Extension, choose Controller.
The Trilio Nova API extension is downloaded from the Trilio appliance and installed on the controller node. After installation, the Nova API service is restarted so that it loads the Trilio Nova API extension and can handle Data Mover requests from the Trilio appliance.
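After the restart, the state of the Nova API service can be checked with the usual service tooling; the unit name shown here is only an example and varies by distribution (for instance openstack-nova-api on RHEL or CentOS based systems):
systemctl status openstack-nova-api.service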
Now it is time to install the Trilio Data Mover service on the compute nodes where the Nova Compute service is running. The purpose of the Data Mover service is, as the name suggests, to move the actual disk data to and from the backup target.
The Trilio Data Mover needs to be installed on all Compute Nodes to ensure that every VM running inside the Openstack cluster can be protected.
To install the Trilio Data Mover on a Compute node, follow these steps after logging into a compute node.
At this point it is possible to run the installation either in an automated fashion or in an interactive mode.
The following command will start the interactive installation.
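A rough sketch of this step is shown below; the script name tvault-contego-install.sh is an assumption derived from the answers file name mentioned further down, and the way the script is obtained from the Trilio appliance may differ per release:
# Assumption: the installer script has already been copied from the Trilio appliance
chmod +x tvault-contego-install.sh
./tvault-contego-install.sh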
Once started, the script prompts for the following information:
Trilio IP: this IP will be used to download the Trilio Nova API Extension pip packages.
Request whether the script is running on a Controller or Compute node: to install the Trilio Data Mover service, choose Compute Node.
Path to the compute.filters file: choose the path matching your distribution or provide the exact path.
Acceptance of Trilio changing the sudoers file for the nova user:
nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
Choice between NFS and S3 as the backup target
In case of NFS:
Provide the NFS volume path
Check and change the default NFS options if necessary
In case of S3:
Choose the S3 profile to be used
Provide access credentials according to the chosen profile
The Trilio Data Mover can be installed in an unattended way using a configuration file. The configuration file contains all the data that is required while the script is being executed.
The first step for the automated installation is to generate the answers file. A predefined file can be downloaded from any Trilio VM.
Edit the tvault-contego-install.answers file as required for the compute nodes. An example can be seen below:
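A minimal illustrative sketch of such an answers file is shown below; the key names used here are hypothetical placeholders, the authoritative keys come from the predefined file downloaded from the Trilio VM:
# Hypothetical keys, for illustration only; use the keys from the downloaded file
TVAULT_APPLIANCE_IP=192.168.1.50       # IP of the Trilio appliance
NODE_TYPE=compute                      # this installation runs on a compute node
BACKUP_TARGET_TYPE=NFS                 # NFS or S3
NFS_SHARES=192.168.1.34:/mnt/tvault    # NFS volume path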
Once the answers file has been created, it can be used to run the Data Mover installation unattended, which allows the automation of the installation process.
At the end of the installation, the Data Mover service is deactivated.
To start the service using the installation script, run:
Alternatively, the operating system's service control commands can be used to start the tvault-contego.service
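For example, on systemd-based distributions:
systemctl start tvault-contego.service
systemctl enable tvault-contego.service   # optional: start automatically on boot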
The last component to install is the Trilio Horizon Plugin on the controllers where the Openstack Horizon service is running. The purpose of the Horizon Plugin is to provide the native integration of TrilioVault into the GUI of Openstack.
To install the Trilio Horizon Plugin on each Openstack Horizon controller node, follow these steps after logging into a controller node.
The installation of the Trilio Horizon Plugin will restart the Openstack Horizon service.
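As a rough sketch for a pip based environment; the package name tvault-horizon-plugin and its availability from the Trilio appliance are assumptions and need to be verified for the release in use:
# Assumed package name; installed from the Trilio appliance's package source
pip install tvault-horizon-plugin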
Once the Trilio VM or the cluster of Trilio VMs has been spun up, the actual installation process can begin. This process consists of the following steps:
Install the Trilio dm-api service on the control plane
Install the Trilio datamover service on the compute plane
Install the Trilio Horizon plugin into the Horizon service
How these steps look in detail depends on the Openstack distribution Trilio is installed in. Each supported Openstack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from beginning to end.
The Red Hat Openstack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio integrates natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Depending on whether the RHOSP environment is already installed or is being installed for the first time, different steps are required to deploy Trilio.
If the overcloud is not deployed yet, the Trilio RPMs should be installed on the overcloud image before starting the deployment. The Trilio RPM packages are provided through a yum repository hosted on the Trilio VM.
To inject the Trilio yum repository into the overcloud image, run the following commands:
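A sketch of the injection using virt-customize is shown below; the image file name and the repository location on the Trilio appliance are assumptions and have to be adapted to the local environment:
# Assumed image name and repository location; adjust to your setup
virt-customize -a overcloud-full.qcow2 \
  --run-command 'curl -o /etc/yum.repos.d/trilio.repo http://<TRILIO_IP>/<repo-path>/trilio.repo'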
Afterwards, the overcloud image is created.
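If the customized image only needs to be made available to the undercloud again, the standard TripleO upload command can be used; the image path is an assumption:
# Run as the stack user on the undercloud
openstack overcloud image upload --image-path /home/stack/images/ --update-existing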
If the overcloud is already deployed, it is necessary to prepare the artifacts to install on the overcloud.
All commands need to be run as user 'stack'
First, the GitHub repository needs to be synced with the undercloud.
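A sketch of the sync, assuming the Trilio configuration scripts are published as the triliovault-cfg-scripts repository on GitHub (repository name and branch have to be verified for the release in use):
# Run as the stack user on the undercloud; repository name is an assumption
git clone https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts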
Afterwards, the artifacts are created and pushed to the overcloud nodes using the upload-swift-artifacts tool. This tool is provided on the undercloud; the prepare_artifacts.sh script prepares the artifacts for it.
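As a sketch, the sequence could look as follows; the location of prepare_artifacts.sh inside the synced repository and the name of the generated archive are assumptions:
# Build the artifacts archive and push it to the overcloud nodes
./prepare_artifacts.sh
upload-swift-artifacts -f artifacts.tgz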
Trilio consists of multiple services. Add these services to your roles_data.yaml.
In case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
This service needs to share the same role as the nova-api service.
In case of the pre-defined roles, the nova-api service runs on the role Controller.
In case of custom-defined roles, it is necessary to use the role the nova-api service is using.
Add the following line to the identified role:
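As an illustration, and assuming the Trilio heat templates register this service under the name OS::TripleO::Services::TrilioDatamoverApi (verify the exact name in the synced Trilio templates), the entry in the role's ServicesDefault list would look like this:
    - OS::TripleO::Services::TrilioDatamoverApi   # assumed service name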
This service needs to share the same role as the openstack horizon service.
In case of the pre-defined roles, the openstack horizon service runs on the role Controller.
In case of custom-defined roles, it is necessary to use the role the openstack horizon service is using.
Add the following line to the identified role:
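Again as an illustration, assuming the Trilio heat templates name this service OS::TripleO::Services::TrilioHorizon (verify the exact name in the synced Trilio templates):
    - OS::TripleO::Services::TrilioHorizon   # assumed service name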
This service needs to share the same role as the nova-compute service.
In case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
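As an illustration, assuming the Trilio heat templates name the Data Mover service OS::TripleO::Services::TrilioDatamover (verify the exact name in the synced Trilio templates):
    - OS::TripleO::Services::TrilioDatamover   # assumed service name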
Provide the backup target details, such as the NFS share or S3 bucket details, and other necessary settings in the trilio_env.yaml environment file. This environment file will be used during the overcloud deployment to configure the Trilio components.
Sample trilio_env.yaml
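An illustrative skeleton with an NFS backup target is shown below; the parameter names are placeholders for illustration only, the authoritative names and structure come from the trilio_env.yaml shipped with the Trilio heat templates:
parameter_defaults:
  # Hypothetical parameter names, for illustration only
  BackupTargetType: 'nfs'                   # 'nfs' or 's3'
  NfsShares: '192.168.1.34:/mnt/tvault'     # NFS volume path
  NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'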
Use the following heat environment file and roles data file in the overcloud deploy command:
trilio_env.yaml
roles_data.yaml
To include new environment files, use the '-e' option; for the roles data file, use the '-r' option. An example overcloud deploy command is shown below:
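A sketch of such a command is shown below; the template paths are assumptions, and any '-e' options already used by the existing deployment must be kept:
openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e /home/stack/templates/trilio_env.yaml
# keep all additional -e options of the existing deployment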
The following packages should be installed on nodes with the role that contains the nova-api service:
The following packages should be installed on nodes with the role that contains the openstack horizon service:
The following packages should be installed on nodes with the role that contains the nova-compute service:
On the same nodes that contain the nova-compute service, a new systemd service tvault-contego should have been registered and be running.
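This can be verified on each compute node with the standard systemd tooling:
systemctl status tvault-contego.service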
Furthermore, the following mount should be visible: /var/triliovault-mounts/<hash>
Lastly, log into the Horizon Dashboard as an admin user. Two new tabs should be visible:
Backups
Backups-Admin
If any RPM packages are missing or other verification steps fail, verify that the given steps have been followed.
Trilio components are deployed using Puppet scripts.
If the overcloud deployment fails, the following command provides the list of errors:
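For example, run the following on the undercloud as the stack user (assuming the default stack name overcloud):
openstack stack failures list overcloud --long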
Further commands that can help identify errors:
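As a sketch, the following standard Heat client commands can help narrow down a failure (availability of individual options depends on the RHOSP release):
openstack stack list --nested                 # show nested stacks and their status
openstack stack resource list overcloud       # list resources of the overcloud stack
openstack software deployment list            # list software deployments and their state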
If the Cinder backend is Ceph, it is necessary to manually add the Ceph details to tvault-contego.conf on all compute nodes.
The file can be found here:
/var/lib/config-data/puppet-generated/triliodm/etc/tvault-contego/tvault-contego.conf
Add the following information:
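As an illustrative sketch, such a block typically looks like the following; the values shown are examples and must be replaced with the values used by the local deployment:
[libvirt]
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = openstack
rbd_secret_uuid = <uuid as configured in nova.conf>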
The same block of information can be found in the nova.conf file.
In case the nova user does not have permission to read and use the Ceph conf and keyring files, run the following commands to provide the necessary access:
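A minimal sketch of such commands, assuming ACLs are used to grant the nova user read access; the keyring file name is environment specific and only an example:
setfacl -m u:nova:r /etc/ceph/ceph.conf
setfacl -m u:nova:r /etc/ceph/ceph.client.openstack.keyring   # example keyring name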