Trilio does not provide JuJu Charms to deploy Trilio 4.1 in Canonical OpenStack. At the time of release, the JuJu Charms have not yet been updated to Trilio 4.1. We will update this page once the Charms are available.
The Trilio Ansible OpenStack playbook can be run to uninstall the Trilio services.
cd /opt/openstack-ansible/playbooks
openstack-ansible os-tvault-install.yml --tags "tvault-all-uninstall"
To cleanly remove the Trilio Datamover API container, run the following Ansible playbook.
cd /opt/openstack-ansible/playbooks
openstack-ansible lxc-containers-destroy.yml --limit "DMAPI CONTAINER_NAME"
Remove the tvault-dmapi_hosts and tvault_compute_hosts entries from /etc/openstack_deploy/openstack_user_config.yml
#tvault-dmapi
tvault-dmapi_hosts:
  infra-1:
    ip: 172.26.0.3
  infra-2:
    ip: 172.26.0.4

#tvault-datamover
tvault_compute_hosts:
  infra-1:
    ip: 172.26.0.7
  infra-2:
    ip: 172.26.0.8
Remove the Trilio Datamover API settings from /etc/openstack_deploy/user_variables.yml
# Datamover haproxy setting
haproxy_extra_services:
  - service:
      haproxy_service_name: datamover_service
      haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
      haproxy_ssl: "{{ haproxy_ssl }}"
      haproxy_port: 8784
      haproxy_balance_type: http
      haproxy_backend_options:
        - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
Remove the Trilio Datamover API environment file.
rm /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
Source the cloud admin credentials and delete the Trilio Datamover service endpoints from Keystone.
source cloudadmin.rc
openstack endpoint delete "internal datamover service endpoint_id"
openstack endpoint delete "public datamover service endpoint_id"
openstack endpoint delete "admin datamover service endpoint_id"Go inside galera container.
Login as root user in mysql database engine.
Drop dmapi database.
Drop dmapi user
lxc-attach -n "GALERA CONTAINER NAME"
mysql -u root -p "root password"
DROP DATABASE dmapi;
DROP USER dmapi;Go inside rabbitmq container.
Delete dmapi user.
Delete dmapi vhost.
lxc-attach -n "RABBITMQ CONTAINER NAME"
rabbitmqctl delete_user dmapi
rabbitmqctl delete_vhost /dmapi
Remove the /etc/haproxy/conf.d/datamover_service file.
rm /etc/haproxy/conf.d/datamover_service
Remove the HAproxy configuration entry from the /etc/haproxy/haproxy.cfg file.
frontend datamover_service-front-1
  bind ussuriubuntu.triliodata.demo:8784 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
  option httplog
  option forwardfor except 127.0.0.0/8
  reqadd X-Forwarded-Proto:\ https
  mode http
  default_backend datamover_service-back

frontend datamover_service-front-2
  bind 172.26.1.2:8784
  option httplog
  option forwardfor except 127.0.0.0/8
  mode http
  default_backend datamover_service-back

backend datamover_service-back
  mode http
  balance leastconn
  stick store-request src
  stick-table type ip size 256k expire 30m
  option forwardfor
  option httplog
  option httpchk GET / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck
  server controller_dmapi_container-bf17d5b3 172.26.1.75:8784 check port 8784 inter 12000 rise 1 fall 1
Restart the HAproxy service.
systemctl restart haproxy
Remove the Trilio config-certs directories.
rm -rf /opt/config-certs/rabbitmq
rm -rf /opt/config-certs/s3
List all VMs running on the KVM node.
virsh list
Destroy the Trilio VMs.
virsh destroy <Trilio VM Name or ID>
Undefine the Trilio VMs.
virsh undefine <Trilio VM name>
Delete the TrilioVault VM disk from KVM Host storage.
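The disk paths of the appliance VM can be listed before it is undefined; a sketch using the same VM-name placeholder as the commands above. The `--remove-all-storage` option is a standard virsh alternative that deletes the associated volumes along with the VM definition:

```shell
# List the disk images attached to the appliance VM before removing it
virsh domblklist <Trilio VM name>
# Alternatively, undefine the VM and delete its storage volumes in one step
virsh undefine <Trilio VM name> --remove-all-storage
```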
The uninstallation of Trilio depends on the OpenStack distribution it is installed in. The high-level process is the same for all distributions.
Uninstall the Horizon Plugin or the Trilio Horizon container
Uninstall the datamover-api container
Uninstall the datamover
Delete the Trilio Appliance Cluster
The container needs to be cleaned on all nodes where the triliovault_datamover_api container is running.
The Kolla Openstack inventory file helps to identify the nodes with the service.
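That lookup can be sketched with grep; the inventory content and group name below are illustrative assumptions for demonstration, not your deployment's actual values:

```shell
# Create a tiny sample inventory for demonstration only; in a real
# deployment you would grep your own inventory file (e.g. /root/multinode).
cat > /tmp/multinode.sample <<'EOF'
[control]
controller01

[triliovault-datamover-api:children]
control
EOF

# Show which group of hosts runs the triliovault_datamover_api service
grep -A 2 '^\[triliovault-datamover-api' /tmp/multinode.sample
```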
The following steps need to be done to clean the triliovault_datamover_api container:
Stop the triliovault_datamover_api container.
docker stop triliovault_datamover_api
Remove the triliovault_datamover_api container.
docker rm triliovault_datamover_api
Clean the /etc/kolla/triliovault-datamover-api directory.
rm -rf /etc/kolla/triliovault-datamover-api
Clean the log directory of the triliovault_datamover_api container.
rm -rf /var/log/kolla/triliovault-datamover-api/
The container needs to be cleaned on all nodes where the triliovault_datamover container is running.
The Kolla Openstack inventory file helps to identify the nodes with the service.
The following steps need to be done to clean the triliovault_datamover container:
Stop the triliovault_datamover container.
docker stop triliovault_datamover
Remove the triliovault_datamover container.
docker rm triliovault_datamover
Clean the /etc/kolla/triliovault-datamover directory.
rm -rf /etc/kolla/triliovault-datamover
Clean the log directory of the triliovault_datamover container.
rm -rf /var/log/kolla/triliovault-datamover/
The Trilio Datamover API entries need to be cleaned on all haproxy nodes.
The Kolla Openstack inventory file helps to identify the nodes with the service.
The following steps need to be done to clean the haproxy container:
rm /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
docker restart haproxy
Delete all Trilio-related entries from the Kolla Ansible configuration. Trilio entries can be found in:
/usr/local/share/kolla-ansible/ansible/roles/ ➡️ There is a triliovault role
/etc/kolla/globals.yml ➡️ Trilio entries had been appended at the end of the file
/etc/kolla/passwords.yml ➡️ Trilio entries had been appended at the end of the file
/usr/local/share/kolla-ansible/ansible/site.yml ➡️ Trilio entries had been appended at the end of the file
/root/multinode ➡️ Trilio entries had been appended at the end of this example inventory file
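A quick way to locate the appended entries before editing is a case-insensitive grep over the listed files; a sketch using the paths above (adjust if your kolla-ansible install or inventory lives elsewhere):

```shell
# Show every Trilio-related line, with line numbers, in the files that
# the Trilio installation modified (paths taken from the list above).
grep -n -i trilio /etc/kolla/globals.yml /etc/kolla/passwords.yml \
    /usr/local/share/kolla-ansible/ansible/site.yml /root/multinode
```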
Run the deploy command to replace the Trilio Horizon container with the original Kolla Ansible Horizon container.
kolla-ansible -i multinode deploy
Trilio created a dmapi service with a dmapi user.
openstack service delete dmapi
openstack user delete dmapi
The Trilio Datamover API service has its own database in the OpenStack database.
Log in to the OpenStack database as the root user or a user with similar privileges.
mysql -u root -p
Delete the dmapi database and user.
DROP DATABASE dmapi;
DROP USER dmapi;
List all VMs running on the KVM node.
virsh list
Destroy the Trilio VMs.
virsh destroy <Trilio VM Name or ID>
Undefine the Trilio VMs.
virsh undefine <Trilio VM name>
Delete the TrilioVault VM disk from KVM Host storage.
The following steps need to be run on all nodes that have the Trilio Datamover API service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamoverApi.
Once the role that runs the Trilio Datamover API service has been identified, the following commands will clean the nodes from the service.
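The role lookup can be sketched with grep; the roles_data.yaml content below is a fabricated minimal sample for illustration, not your deployment's actual file:

```shell
# Minimal sample roles_data.yaml for demonstration purposes only
cat > /tmp/roles_data.sample.yaml <<'EOF'
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::HAproxy
    - OS::TripleO::Services::TrilioDatamoverApi
EOF

# Print the role name a few lines above the Trilio Datamover API entry
grep -B 3 'OS::TripleO::Services::TrilioDatamoverApi' /tmp/roles_data.sample.yaml
```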
Run all commands as root or user with sudo permissions.
Stop trilio_dmapi container.
# For RHOSP13
systemctl disable tripleo_trilio_dmapi.service
systemctl stop tripleo_trilio_dmapi.service
docker stop trilio_dmapi
# For RHOSP16 onwards
systemctl disable tripleo_trilio_dmapi.service
systemctl stop tripleo_trilio_dmapi.service
podman stop trilio_dmapi
Remove the trilio_dmapi container.
# For RHOSP13
docker rm trilio_dmapi
docker rm trilio_datamover_api_init_log
docker rm trilio_datamover_api_db_sync
# For RHOSP16 onwards
podman rm trilio_dmapi
podman rm trilio_datamover_api_init_log
podman rm trilio_datamover_api_db_sync
## If present, remove below container as well
podman rm container-puppet-triliodmapi
Clean the Trilio Datamover API service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliodmapi
rm /var/lib/config-data/puppet-generated/triliodmapi.md5sum
rm -rf /var/lib/config-data/triliodmapi*
Clean the Trilio Datamover API service log directory.
rm -rf /var/log/containers/trilio-datamover-api/
The following steps need to be run on all nodes that have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.
Once the role that runs the Trilio Datamover service has been identified, the following commands will clean the nodes from the service.
Run all commands as root or user with sudo permissions.
Stop trilio_datamover container.
# For RHOSP13
docker stop trilio_datamover
# For RHOSP16 onwards
systemctl disable tripleo_trilio_datamover.service
systemctl stop tripleo_trilio_datamover.service
podman stop trilio_datamover
Remove the trilio_datamover container.
# For RHOSP13
docker rm trilio_datamover
# For RHOSP16 onwards
podman rm trilio_datamover
## If present, remove below container as well
podman rm container-puppet-triliodmapi
Unmount the Trilio Backup Target on the compute host.
## Following steps applicable for all supported RHOSP releases.
# Check triliovault backup target mount point
mount | grep trilio
# Unmount it
-- If it's NFS (COPY UUID_DIR from your compute host using above command)
umount /var/lib/nova/triliovault-mounts/<UUID_DIR>
-- If it's S3
umount /var/lib/nova/triliovault-mounts
# Verify that it's unmounted
mount | grep trilio
df -h | grep trilio
# Remove mount point directory after verifying that backup target unmounted successfully.
# Otherwise actual data from backup target may get cleaned.
rm -rf /var/lib/nova/triliovault-mounts
Clean the Trilio Datamover service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliodm/
rm /var/lib/config-data/puppet-generated/triliodm.md5sum
rm -rf /var/lib/config-data/triliodm*
Clean the log directory of the Trilio Datamover service.
rm -rf /var/log/containers/trilio-datamover/
The following steps need to be run on all nodes that have the haproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.
Once the role that runs the haproxy service has been identified, the following commands will clean the nodes from all Trilio resources.
Run all commands as root or user with sudo permissions.
Edit the following file inside the haproxy container and remove all Trilio entries.
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
An example of these entries is given below.
listen trilio_datamover_api
  bind 172.25.3.60:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.25.3.60:8784 transparent
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  option httpchk
  option httplog
  server overcloud-controller-0.internalapi.localdomain 172.25.3.59:8784 check fall 5 inter 2000 rise 2
Restart the haproxy container once all edits have been done.
# For RHOSP13
docker restart haproxy-bundle-docker-0
# For RHOSP16 onwards
podman restart haproxy-bundle-podman-0
Trilio registers services and users in Keystone. Those need to be unregistered and deleted.
openstack service delete dmapi
openstack user delete dmapi
Trilio creates a database for the dmapi service. This database needs to be cleaned.
Log in to the database cluster.
## On RHOSP13, run following command on node where database service runs
docker exec -ti -u root galera-bundle-docker-0 mysql -u root
## On RHOSP16
podman exec -it galera-bundle-podman-0 mysql -u root
Run the following SQL statements to clean the database.
## Clean database
DROP DATABASE dmapi;
## Clean dmapi user
=> List 'dmapi' user accounts
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
+-------+-------------+
| user | host |
+-------+-------------+
| dmapi | 172.25.2.10 |
| dmapi | 172.25.2.8 |
+-------+-------------+
2 rows in set (0.00 sec)
=> Delete those user accounts
MariaDB [mysql]> DROP USER 'dmapi'@'172.25.2.10';
Query OK, 0 rows affected (0.82 sec)
MariaDB [mysql]> DROP USER 'dmapi'@'172.25.2.8';
Query OK, 0 rows affected (0.05 sec)
=> Verify that the dmapi user got cleaned
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
Empty set (0.00 sec)
Remove the following entries from the roles_data.yaml used in the overcloud deploy command.
OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioDatamover
Follow these steps to clean the overcloud deploy command from all Trilio entries.
Remove the trilio_env.yaml entry.
Remove the Trilio endpoint map file and replace it with the original map file if one existed.
Run the cleaned overcloud deploy command.
List all VMs running on the KVM node
virsh list
Destroy the Trilio VMs.
virsh destroy <Trilio VM Name or ID>
Undefine the Trilio VMs.
virsh undefine <Trilio VM name>
Delete the TrilioVault VM disk from KVM Host storage.