...
[ceph]
keyring_ext = .keyring
...
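As a quick sanity check that the extension configured above matches the keyring files actually present on the node (standard shell; the /etc/ceph path is an assumption):

ls /etc/ceph/*.keyring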
_1806_migration_panel.py
_1807_migration_panel_group.py

[root@overcloudtrain5-controller-0 /]# podman ps | grep trilio-
76511a257278 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 12 days ago Up 12 days ago horizon
5c5acec33392 cluster.common.tag/trilio-wlm:pcmklatest /bin/bash /usr/lo... 7 days ago Up 7 days ago triliovault-wlm-cron-podman-0
8dc61a674a7f undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 7 days ago triliovault_datamover_api
a945fbf80554 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 7 days ago triliovault_wlm_scheduler
402c9fdb3647 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 6 days ago triliovault_wlm_workloads
f9452e4b3d14 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 6 days ago triliovault_wlm_api

[root@overcloudtrain5-controller-0 /]# pcs status
Cluster name: tripleo_cluster
Cluster Summary:
* Stack: corosync
* Current DC: overcloudtrain5-controller-0 (version 2.0.5-9.el8_4.3-ba59be7122) - partition with quorum
* Last updated: Mon Jul 24 11:19:05 2023
* Last change: Mon Jul 17 10:38:45 2023 by root via cibadmin on overcloudtrain5-controller-0
* 4 nodes configured
* 14 resource instances configured
Node List:
* Online: [ overcloudtrain5-controller-0 ]
* GuestOnline: [ galera-bundle-0@overcloudtrain5-controller-0 rabbitmq-bundle-0@overcloudtrain5-controller-0 redis-bundle-0@overcloudtrain5-controller-0 ]
Full List of Resources:
* ip-172.30.6.27 (ocf::heartbeat:IPaddr2): Started overcloudtrain5-controller-0
* ip-172.30.6.16 (ocf::heartbeat:IPaddr2): Started overcloudtrain5-controller-0
* Container bundle: haproxy-bundle [cluster.common.tag/openstack-haproxy:pcmklatest]:
* haproxy-bundle-podman-0 (ocf::heartbeat:podman): Started overcloudtrain5-controller-0
* Container bundle: galera-bundle [cluster.common.tag/openstack-mariadb:pcmklatest]:
* galera-bundle-0 (ocf::heartbeat:galera): Master overcloudtrain5-controller-0
* Container bundle: rabbitmq-bundle [cluster.common.tag/openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started overcloudtrain5-controller-0
* Container bundle: redis-bundle [cluster.common.tag/openstack-redis:pcmklatest]:
* redis-bundle-0 (ocf::heartbeat:redis): Master overcloudtrain5-controller-0
* Container bundle: openstack-cinder-volume [cluster.common.tag/openstack-cinder-volume:pcmklatest]:
* openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Started overcloudtrain5-controller-0
* Container bundle: triliovault-wlm-cron [cluster.common.tag/trilio-wlm:pcmklatest]:
* triliovault-wlm-cron-podman-0 (ocf::heartbeat:podman): Started overcloudtrain5-controller-0
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

[root@overcloudtrain5-novacompute-0 heat-admin]# podman ps | grep -i datamover
c750a8d0471f undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 7 days ago triliovault_datamover

[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep triliovault-mounts
172.30.1.9:/mnt/rhosptargetnfs 7.0T 5.1T 2.0T 72% /var/lib/nova/triliovault-mounts/L21udC9yaG9zcHRhcmdldG5mcw==

[root@overcloudtrain5-controller-0 heat-admin]# podman ps | grep horizon
76511a257278 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 12 days ago Up 12 days ago horizon

## Perform either of the following workarounds on all controller nodes where the issue occurs for the horizon pod.
option-1: Restart the memcached service on the controller using systemctl (command: systemctl restart tripleo_memcached.service)
option-2: Restart the memcached pod (command: podman restart memcached)

workloadmgr license-create <license_file>

Learn about artifacts related to Trilio for OpenStack 5.0.0




qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 21M
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

# echo -n 10.10.2.20:/upstream | base64
MTAuMTAuMi4yMDovdXBzdHJlYW0=

# mount --bind <mount-path1> <mount-path2>

# vi /etc/fstab
<mount-path1> <mount-path2> none bind 0 0

virt-v2v: error: inspection could not detect the source guest (or physical
machine).
Assuming that you are running virt-v2v/virt-p2v on a source which is
supported (and not, for example, a blank disk), then this should not
happen.
Inspection field 'i_arch' was 'unknown'.

workloadmgr trust-list

workloadmgr trust-show <trust_id>

workloadmgr trust-create [--is_cloud_trust {True,False}] <role_name>

workloadmgr trust-delete <trust_id>

[trilio]
name=Trilio Repository
baseurl=https://yum.fury.io/trilio-5-0/
enabled=1
gpgcheck=0

deb [trusted=yes] https://apt.fury.io/trilio-5-0/ /

curl -i -X PUT \
-H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
-H "Content-Type:application/json" \
-d \
'{
"metadata": {
"workload_id": "c13243a3-74c8-4f23-b3ac-771460d76130",
"workload_name": "workload-c13243a3-74c8-4f23-b3ac-771460d76130"
}
}' \
'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'
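For illustration only: the X-Auth-Token used in these Barbican requests can be obtained with the standard OpenStack CLI (assumes an admin rc file is sourced):

openstack token issue -f value -c id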
curl -i -X GET \
-H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'

./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>

/tmp/backing_file_update.log

deb [trusted=yes] https://apt.fury.io/trilio-5-0/ /

https://yum.fury.io/trilio-5-0/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-0/
enabled=1
gpgcheck=0

registry.connect.redhat.com/trilio/trilio-datamover:5.0.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.0.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.0.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.0.0-rhosp16.2

registry.connect.redhat.com/trilio/trilio-datamover:5.0.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.0.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.0.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.0.0-rhosp16.1

https://pypi.fury.io/trilio-5-0/

Learn about artifacts related to Trilio for OpenStack 5.2.2

git clone -b 5.2.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.2.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

Learn about artifacts related to Trilio for OpenStack 5.2.0

git clone -b 5.2.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.2.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

Learn about artifacts related to Trilio for OpenStack 5.2.1

git clone -b 5.2.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.2.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
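After any of these clones, it can be worth confirming the checkout really is on the intended release branch before running the scripts; a minimal check with plain git (nothing Trilio-specific):

git branch --show-current   # run inside triliovault-cfg-scripts; should print the release branch, e.g. 5.2.1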






vcenters-list List all the VCenters configured for migration.
migration-plans-list List all the migration_plans of current project.
migration-plan-create Creates a migration plan.
migration-plan-discover-vms Discover the VMs of a migration plan.
migration-plan-get-by-vmid List the migration plan for a given VM ID.
migration-plan-get-import-list Get list of migration_plans to be imported.
migration-plan-import Import all migration plan records from backup store.
migration-plan-modify Modify a migration plan.
migration-plan-show Show details about a migration plan.
migration-plan-delete Remove a migration plan.
migrations-list List all the migrations for the migration plan.
migration-create Execute a migration plan.
migration-resume Resume the migration.
migration-show Show details about a migration of the migration plan.
migration-cancel Cancel the migration.
migration-delete Delete the migration.
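As a minimal illustration of how these subcommands fit together (only list/show style calls are sketched; the plan ID is a placeholder):

workloadmgr vcenters-list
workloadmgr migration-plans-list
workloadmgr migration-plan-show <migration_plan_id>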
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

registry.connect.redhat.com/trilio/trilio-datamover:5.2.2-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.2-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.2-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.2-rhosp17.1

registry.connect.redhat.com/trilio/trilio-datamover:5.2.2-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.2-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.2-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.2-rhosp16.2

registry.connect.redhat.com/trilio/trilio-datamover:5.2.2-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.2-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.2-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.2-rhosp16.1

https://pypi.fury.io/trilio-5-2/

deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

registry.connect.redhat.com/trilio/trilio-datamover:5.2.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.0-rhosp17.1

registry.connect.redhat.com/trilio/trilio-datamover:5.2.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.0-rhosp16.2

registry.connect.redhat.com/trilio/trilio-datamover:5.2.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.0-rhosp16.1

https://pypi.fury.io/trilio-5-2/

deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

registry.connect.redhat.com/trilio/trilio-datamover:5.2.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.1-rhosp17.1

registry.connect.redhat.com/trilio/trilio-datamover:5.2.1-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.1-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.1-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.1-rhosp16.2

registry.connect.redhat.com/trilio/trilio-datamover:5.2.1-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.1-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.1-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.1-rhosp16.1

https://pypi.fury.io/trilio-5-2/

https://pypi.fury.io/trilio-5-1/

git clone -b 5.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

git clone -b 5.2.6 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.2.6 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.6 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

https://pypi.fury.io/trilio-5-2/

git clone -b 5.2.3 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.2.3 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.3 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

https://pypi.fury.io/trilio-5-2/

git clone -b 5.2.5 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.2.5 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.5 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

https://pypi.fury.io/trilio-5-2/

git clone -b 5.2.7 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.7 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

https://pypi.fury.io/trilio-5-2/

trilio_branch : 5.2.6

trilio/trilio-migration-vm2os:5.2.6

trilio_branch : 5.2.3

trilio/trilio-migration-vm2os:5.2.3

trilio_branch : 5.2.5

trilio/trilio-migration-vm2os:5.2.5

trilio_branch : 5.2.7

trilio/trilio-migration-vm2os:5.2.7

triliovault-cfg-scripts/common/triliovault_nfs_map_input.yml

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.2-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.2-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.2-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.2-2023.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.2-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.2-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.2-zed

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.2-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.2-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.2-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.2-zed

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.2-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.2-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.2-2023.2

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.2-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.2-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.2-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.2-2023.2

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.2-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.2-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.2-2023.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.0-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.0-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.0-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.0-zed

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.0-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.0-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.0-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.0-2023.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.0-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.0-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.0-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.0-2023.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.0-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.0-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.0-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.0-zed

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.1-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.1-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.1-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.1-zed

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.1-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.1-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.1-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.1-2023.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.1-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.1-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.1-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.1-2023.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.1-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.1-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.1-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.1-zed

docker.io/trilio/kolla-rocky-trilio-datamover:5.1.0-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.1.0-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.1.0-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.1.0-zed

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.1.0-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.1.0-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.1.0-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.1.0-zed

deb [trusted=yes] https://apt.fury.io/trilio-5-1/ /

https://yum.fury.io/trilio-5-1/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-1/
enabled=1
gpgcheck=0

registry.connect.redhat.com/trilio/trilio-datamover:5.1.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.1.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.1.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.1.0-rhosp16.2

registry.connect.redhat.com/trilio/trilio-datamover:5.1.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.1.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.1.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.1.0-rhosp16.1

<high_watermark> ➡️ Value to set for High Watermark warnings

<project_id> ➡️ Project to assign the quota to

--workload_ids <workload_id> ➡️ Specify the workload IDs to reassign to the new tenant. If not provided, all workloads from the old tenant are reassigned to the new tenant. Specify the option multiple times for multiple workloads.
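Putting that flag in context, a hypothetical reassignment of two workloads using the documented options (all IDs below are placeholders) might look like:

workloadmgr workload-reassign-workloads --new_tenant_id <new_tenant_id> \
  --user_id <user_id> \
  --workload_ids <workload_id_1> --workload_ids <workload_id_2>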
juju run --app trilio-wlm "sudo apt install python3-oslo.messaging=12.1.6-0ubuntu1 -y --allow-downgrades"
juju run --app trilio-wlm "sudo apt-mark hold python3-oslo.messaging"workloadmgr project-quota-type-listworkloadmgr project-quota-type-show <quota_type_id>workloadmgr project-allowed-quota-create --quota-type-id quota_type_id
--allowed-value allowed_value
--high-watermark high_watermark
--project-id project_id

workloadmgr project-allowed-quota-list <project_id>

workloadmgr project-allowed-quota-show <allowed_quota_id>

workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>]
[--high-watermark <high_watermark>]
[--project-id <project_id>]
<allowed_quota_id>

workloadmgr project-allowed-quota-delete <allowed_quota_id>

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git

cp -r triliovault-cfg-scripts/migration-vm2os/nginx /opt
cp triliovault-cfg-scripts/migration-vm2os/env /opt
cp triliovault-cfg-scripts/migration-vm2os/docker-compose.yml /opt
cp triliovault-cfg-scripts/migration-vm2os/vmosmapping.conf /opt

docker compose -f /opt/docker-compose.yml --env-file /opt/env up &

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ed651c084d4 nginx:latest "/docker-entrypoint.…" 5 seconds ago Up 3 seconds 80/tcp, 0.0.0.0:5085->5085/tcp, :::5085->5085/tcp nginx
8ff9f81b9913 trilio/trilio-migration-vm2os:5.2.3-dev-maint3-3 "gunicorn --config g…" 5 seconds ago Up 4 seconds trilio_vm2os
1f91f14ad011 trilio/trilio-migration-vm2os:5.2.3-dev-maint3-3 "celery -A run.celer…" 5 seconds ago Up 4 seconds opt-worker-1
0bb5574ad56b redis:latest "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis

http://192.168.6.25:5085/

workloadmgr workload-get-importworkloads-list [--project_id <project_id>]

workloadmgr workload-importworkloads [--workloadids <workloadid>]

workloadmgr workload-get-orphaned-workloads-list [--migrate_cloud {True,False}]
[--generate_yaml {True,False}]

workloadmgr workload-reassign-workloads
[--old_tenant_ids <old_tenant_id>]
[--new_tenant_id <new_tenant_id>]
[--workload_ids <workload_id>]
[--user_id <user_id>]
[--migrate_cloud {True,False}]
[--map_file <map_file>]

reassign_mappings:
- old_tenant_ids: [] # user can provide a list of old_tenant_ids or workload_ids
  new_tenant_id: new_tenant_id
  user_id: user_id
  workload_ids: [] # user can provide a list of old_tenant_ids or workload_ids
  migrate_cloud: True/False # Set to True to reassign workloads from
                            # other clouds as well. Default is False
- old_tenant_ids: [] # user can provide a list of old_tenant_ids or workload_ids
  new_tenant_id: new_tenant_id
  user_id: user_id
  workload_ids: [] # user can provide a list of old_tenant_ids or workload_ids
  migrate_cloud: True/False # Set to True to reassign workloads from
                            # other clouds as well. Default is False

[root@TVM2 ~]# pcs resource disable wlm-cron
[root@TVM2 ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
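Applying the hint directly, the truncated journal lines above can be shown in full with:

systemctl status wlm-cron -l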
[root@TVM2 ~]# pcs resource show wlm-cron
Resource: wlm-cron (class=systemd type=wlm-cron)
Meta Attrs: target-role=Stopped
Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root 15379 14383 0 08:27 pts/0 00:00:00 grep --color=auto -i workloadmgr-cron
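Before a maintenance window, the global job scheduler is typically switched off from the same kind of container session; a minimal sketch, assuming the CLI offers the disable counterpart of the enable command shown next:

$ podman exec -itu root triliovault_wlm_api /bin/bash
$ source admin.rc
$ workloadmgr disable-global-job-scheduler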
$ podman exec -itu root triliovault_wlm_api /bin/bash
$ source admin.rc
$ workloadmgr enable-global-job-scheduler
Global job scheduler is successfully enabled

192.168.1.33:/var/share1
192.168.1.34:/var/share1
192.168.1.35:/var/share1

prod-compute-1.trilio.demo
prod-compute-2.trilio.demo
prod-compute-3.trilio.demo
.
.
.
prod-compute-30.trilio.demo

compute_bare.trilio.demo
compute_virtual

multi_ip_nfs_shares:
- "192.168.1.34:/var/share1": ['prod-compute-[1:10].trilio.demo', 'compute_bare.trilio.demo']
"192.168.1.35:/var/share1": ['prod-compute-[11:20].trilio.demo', 'compute_virtual']
"192.168.1.33:/var/share1": ['prod-compute-[21:30].trilio.demo']
single_ip_nfs_shares: []

multi_ip_nfs_shares:
- "192.168.1.34:/var/share1": ['172.30.3.[11:20]', '172.30.4.40']
"192.168.1.35:/var/share1": ['172.30.3.[21:30]', '172.30.4.50']
"192.168.1.33:/var/share1": ['172.30.3.[31:40]']
single_ip_nfs_shares: []

(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2 | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0 | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1 | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+

workloadmgr disable-scheduler --workloadids <workloadid>

workloadmgr enable-scheduler --workloadids <workloadid>

workloadmgr scheduler-trust-validate <workload_id>

[root@TVM2 ~]# pcs resource disable wlm-cron
[root@TVM2 ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
[root@TVM2 ~]# pcs resource show wlm-cron
Resource: wlm-cron (class=systemd type=wlm-cron)
Meta Attrs: target-role=Stopped
Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root 15379 14383 0 08:27 pts/0 00:00:00 grep --color=auto -i workloadmgr-cron
$ docker exec -itu root triliovault_wlm_api /bin/bash
$ source admin.rc
$ workloadmgr enable-global-job-scheduler
Global job scheduler is successfully enabled

C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf

C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf.bak

/etc/cloud/cloud.cfg.d/99-migration-disable.cfg

workloadmgr filepath-search [--snapshotids <snapshotid>]
[--end_filter <end_filter>]
[--start_filter <start_filter>]
[--date_from <date_from>]
[--date_to <date_to>]
<vm_id> <file_path>

deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-2024.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-2024.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-2024.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-2024.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-2024.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-2023.2

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-2023.2

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-2023.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-2023.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-zed

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-zed

registry.connect.redhat.com/trilio/trilio-datamover:5.2.6-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.6-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.6-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.6-rhosp17.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.3-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.3-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.3-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.3-2023.2

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.3-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.3-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.3-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.3-2023.2

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.3-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.3-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.3-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.3-2023.1

registry.connect.redhat.com/trilio/trilio-datamover:5.2.3-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.3-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.3-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.3-rhosp17.1

registry.connect.redhat.com/trilio/trilio-datamover:5.2.3-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.3-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.3-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.3-rhosp16.2

registry.connect.redhat.com/trilio/trilio-datamover:5.2.3-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.3-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.3-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.3-rhosp16.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-2024.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-2024.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-2024.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-2024.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-2024.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-2023.2

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-2023.2

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-2023.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-2023.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-zed

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-zed

registry.connect.redhat.com/trilio/trilio-datamover:5.2.5-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.5-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.5-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.5-rhosp17.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.7-2025.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.7-2025.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.7-2025.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.7-2025.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.7-2025.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.7-2025.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.7-2025.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.7-2025.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.7-2024.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.7-2024.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.7-2024.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.7-2024.2

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.7-2024.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.7-2024.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.7-2024.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.7-2024.2

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.7-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.7-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.7-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.7-2024.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.7-2024.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.7-2024.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.7-2024.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.7-2024.1

registry.connect.redhat.com/trilio/trilio-datamover:5.2.7-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.7-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.7-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.7-rhosp17.1

# echo -n 10.10.2.20:/Trilio_Backup | base64
MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==

# echo -n /Trilio_Backup | base64
L1RyaWxpb19CYWNrdXA=

./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>

# echo -n 10.10.2.20:/Trilio_Backup | base64
MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==

# echo -n /Trilio_Backup | base64
L1RyaWxpb19CYWNrdXA=

mkdir -p /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
mount --bind /var/lib/nova/triliovault-mounts/L21udC9yaG9zcHRhcmdldG5mcw\=\=/ /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
chmod 777 /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz

mkdir -p /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
mount --bind /var/lib/nova/triliovault-mounts/L21udC9yaG9zcHRhcmdldG5mcw\=\=/ /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
chmod 777 /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.3-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.3-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.3-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.3-2023.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.3-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.3-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.3-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.3-zed

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.3-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.3-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.3-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.3-zed

juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
juju attach-resource trilio-wlm license=<Path to trilio license file>
juju run --wait trilio-wlm/leader create-license

juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
juju attach-resource trilio-wlm license=<Path to trilio license file>
juju run-action --wait trilio-wlm/leader create-license

juju export-bundle --filename openstack_base_file.yaml

git clone https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts
git checkout {{ trilio_branch }}
cd juju-charms/sample_overlay_bundles

juju deploy --dry-run ./openstack_base_file.yaml --overlay <Trilio bundle path>

juju deploy ./openstack_base_file.yaml --overlay <Trilio bundle path>

juju status | grep -i trilio
trilio-data-mover 5.2.8.14 active 3 trilio-charmers-trilio-data-mover latest/candidate 22 no Unit is ready
trilio-data-mover-mysql-router 8.0.39 active 3 mysql-router 8.0/stable 200 no Unit is ready
trilio-dm-api 5.2.8 active 1 trilio-charmers-trilio-dm-api latest/candidate 17 no Unit is ready
trilio-dm-api-mysql-router 8.0.39 active 1 mysql-router 8.0/stable 200 no Unit is ready
trilio-horizon-plugin 5.2.8.8 active 1 trilio-charmers-trilio-horizon-plugin latest/candidate 10 no Unit is ready
trilio-wlm 5.2.8.15 active 1 trilio-charmers-trilio-wlm latest/candidate 18 no Unit is ready
trilio-wlm-mysql-router 8.0.39 active 1 mysql-router 8.0/stable 200 no Unit is ready
trilio-data-mover-mysql-router/2 active idle 172.20.1.5 Unit is ready
trilio-data-mover/1 active idle 172.20.1.5 Unit is ready
trilio-data-mover-mysql-router/0* active idle 172.20.1.7 Unit is ready
trilio-data-mover/2 active idle 172.20.1.7 Unit is ready
trilio-data-mover-mysql-router/1 active idle 172.20.1.8 Unit is ready
trilio-data-mover/0* active idle 172.20.1.8 Unit is ready
trilio-horizon-plugin/0* active idle 172.20.1.27 Unit is ready
trilio-dm-api/0* active idle 1/lxd/2 172.20.1.29 8784/tcp Unit is ready
trilio-dm-api-mysql-router/0* active idle 172.20.1.29 Unit is ready
trilio-wlm/0* active idle 1 172.20.1.4 8780/tcp Unit is ready
trilio-wlm-mysql-router/0* active idle 172.20.1.4 Unit is ready

chattr -i /etc/resolv.conf

trilio_branch : 5.2.4

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 13:23:25 GMT
Content-Type: application/json
Content-Length: 244
Connection: keep-alive
X-Compute-Request-Id: req-bdfd3fb8-5cbf-4108-885f-63160426b2fa
{
"file_search":{
"created_at":"2020-11-09T13:23:25.698534",
"updated_at":null,
"id":14,
"deleted_at":null,
"status":"executing",
"error_msg":null,
"filepath":"/etc/h*",
"json_resp":null,
"vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
}
}

--policy-fields <key=key-name> ➡️ Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys.
'interval' : '1 hr'
'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots'
'retention_policy_value' : '30'
'fullbackup_interval' : '-1' (Enter the number of incremental snapshots between full backups, from 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS')
For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'

<policy_id> ➡️ Policy to be assigned or removed

{
"file_search":{
"start":<Integer>,
"end":<Integer>,
"filepath":"<Reg-Ex String>",
"date_from":<Date Format: YYYY-MM-DDTHH:MM:SS>,
"date_to":<Date Format: YYYY-MM-DDTHH:MM:SS>,
"snapshot_ids":[
"<Snapshot-ID>"
],
"vm_id":"<VM-ID>"
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 13:24:28 GMT
Content-Type: application/json
Content-Length: 819
Connection: keep-alive
X-Compute-Request-Id: req-d57bea9a-9968-4357-8743-e0b906466063
{
"file_search":{
"created_at":"2020-11-09T13:23:25.000000",
"updated_at":"2020-11-09T13:23:48.000000",
"id":14,
"deleted_at":null,
"status":"completed",
"error_msg":null,
"filepath":"/etc/h*",
"json_resp":"[
{
"ed4f29e8-7544-4e1c-af8a-a76031211926":[
{
"/dev/vda1":[
"/etc/hostname",
"/etc/hosts"
],
"/etc/hostname":{
"dev":"2049",
"ino":"32",
"mode":"33204",
"nlink":"1",
"uid":"0",
"gid":"0",
"rdev":"0",
"size":"1",
"blksize":"1024",
"blocks":"2",
"atime":"1603455255",
"mtime":"1603455255",
"ctime":"1603455255"
},
"/etc/hosts":{
"dev":"2049",
"ino":"127",
"mode":"33204",
"nlink":"1",
"uid":"0",
"gid":"0",
"rdev":"0",
"size":"37",
"blksize":"1024",
"blocks":"2",
"atime":"1603455257",
"mtime":"1431011050",
"ctime":"1431017172"
}
}
]
}
]",
"vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
}
}

workloadmgr policy-list

workloadmgr policy-show <policy_id>

workloadmgr policy-create --policy-fields <key=key-name>
[--display-description <display_description>]
[--metadata <key=key-name>]
<display_name>

workloadmgr policy-update [--display-name <display-name>]
[--display-description <display-description>]
[--policy-fields <key=key-name>]
[--metadata <key=key-name>]
<policy_id>

workloadmgr policy-assign [--add_project <project_id>]
[--remove_project <project_id>]
<policy_id>

workloadmgr policy-delete <policy_id>

deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /

https://yum.fury.io/trilio-5-2/

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0

git clone -b 5.2.4 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

git clone -b 5.2.4 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

git clone -b 5.2.4 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/

https://pypi.fury.io/trilio-5-2/

registry.connect.redhat.com/trilio/trilio-datamover:5.2.4-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.4-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.4-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.4-rhosp17.1

registry.connect.redhat.com/trilio/trilio-datamover:5.2.4-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.4-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.4-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.4-rhosp16.2

registry.connect.redhat.com/trilio/trilio-datamover:5.2.4-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.4-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.4-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.4-rhosp16.1

trilio/trilio-migration-vm2os:5.2.4

region=RegionOne
Network Subnet: 172.21.6/23
| neutron | network | RegionOne |
| | | public: https://172.21.6.20:9696 |
| | | RegionOne |
| | | internal: https://172.21.6.20:9696 |
| | | RegionOne |
| | | admin: https://172.21.6.20:9696 |
| | | | |
| | | |
| nova | compute | RegionOne |
| | | public: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | admin: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | internal: https://172.21.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+
region=RegionTwo
Network Subnet: 172.21.31/23
| neutron | network | RegionTwo |
| | | public: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | internal: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | admin: https://172.31.6.20:9696 |
| | | | |
| | | |
| nova | compute | RegionTwo |
| | | public: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | admin: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | internal: https://172.31.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+

region=RegionOne
Network Subnet: 172.21.6/23
| neutron | network | RegionOne |
| | | public: https://172.21.6.20:9696 |
| | | RegionOne |
| | | internal: https://172.21.6.20:9696 |
| | | RegionOne |
| | | admin: https://172.21.6.20:9696 |
| | | |
| workloadmgr | workloads | RegionOne |
| | | internal: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionOne |
| | | public: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionOne |
| | | admin: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | |
| dmapi | datamover | RegionOne |
| | | internal: https://172.21.6.22:8784/v2 |
| | | RegionOne |
| | | public: https://172.21.6.22:8784/v2 |
| | | RegionOne |
| | | admin: https://172.21.6.22:8784/v2 |
| | | |
| nova | compute | RegionOne |
| | | public: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | admin: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | internal: https://172.21.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+
region=RegionTwo
Network Subnet: 172.21.31/23
| neutron | network | RegionTwo |
| | | public: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | internal: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | admin: https://172.31.6.20:9696 |
| | | |
| workloadmgr | workloads | RegionTwo |
| | | internal: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionTwo |
| | | public: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionTwo |
| | | admin: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | |
| dmapi | datamover | RegionTwo |
| | | internal: https://172.31.6.22:8784/v2 |
| | | RegionTwo |
| | | public: https://172.31.6.22:8784/v2 |
| | | RegionTwo |
| | | admin: https://172.31.6.22:8784/v2 |
| | | |
| nova | compute | RegionTwo |
| | | public: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | admin: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | internal: https://172.31.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+
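These endpoint tables are excerpts of catalog output; for reference, a per-region listing like the above can be pulled with the standard client (the region name is an example):

openstack endpoint list --region RegionOne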

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-2023.1

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.4-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.4-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.4-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.4-2023.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-zed

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.4-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.4-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.4-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.4-zed

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-2024.1

docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-2023.2

docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.4-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.4-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.4-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.4-2023.2

workloadmgr snapshot-list [--workload_id <workload_id>]
[--tvault_node <host>]
[--date_from <date_from>]
[--date_to <date_to>]
[--all {True,False}]

workloadmgr workload-snapshot [--full] [--display-name <display-name>]
[--display-description <display-description>]
<workload_id>

workloadmgr snapshot-show [--output <output>] <snapshot_id>

workloadmgr snapshot-delete <snapshot_id>

workloadmgr snapshot-cancel <snapshot_id>

OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioWlmApi
OS::TripleO::Services::TrilioWlmWorkloads
OS::TripleO::Services::TrilioWlmScheduler
OS::TripleO::Services::TrilioWlmCron

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:29:03 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-9d779802-9c65-463a-973c-39cdffcba82e

openstack image create \
--file <File Manager Image Path> \
--container-format bare \
--disk-format qcow2 \
--public \
--property hw_qemu_guest_agent=yes \
--property tvault_recovery_manager=yes \
--property hw_disk_bus=virtio \
tvault-file-manager

guest-file-read
guest-file-write
guest-file-open
guest-file-close

SELINUX=disabled

yum install python3 lvm2

apt-get update
apt-get install qemu-guest-agent
systemctl enable qemu-guest-agent

Loaded: loaded (/etc/init.d/qemu-guest-agent; generated)

DAEMON_ARGS="-F/etc/qemu/fsfreeze-hook"

Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; disabled; vendor preset: enabled)

systemctl edit qemu-guest-agent

[Service]
ExecStart=
ExecStart=/usr/sbin/qemu-ga -F/etc/qemu/fsfreeze-hook

systemctl restart qemu-guest-agent

apt-get install python3

workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>

workloadmgr snapshot-mounted-list [--workloadid <workloadid>]

workloadmgr snapshot-dismount <snapshot_id>

podman rm -f triliovault_datamover_api
podman rm -f triliovault_datamover_api_db_sync
podman rm -f triliovault_datamover_api_init_log

rm -rf /var/lib/config-data/puppet-generated/triliovaultdmapi
rm /var/lib/config-data/puppet-generated/triliovaultdmapi.md5sum
rm -rf /var/lib/config-data/triliovaultdmapi*
rm -f /var/lib/config-data/triliovault_datamover_api*

rm -rf /var/log/containers/triliovault-datamover-api/

podman rm -f triliovault_wlm_api
podman rm -f triliovault_wlm_api_cloud_trust_init
podman rm -f triliovault_wlm_api_db_sync
podman rm -f triliovault_wlm_api_config_dynamic
podman rm -f triliovault_wlm_api_init_log

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmapi
rm /var/lib/config-data/puppet-generated/triliovaultwlmapi.md5sum
rm -rf /var/lib/config-data/triliovaultwlmapi*
rm -f /var/lib/config-data/triliovault_wlm_api*

rm -rf /var/log/containers/triliovault-wlm-api/

podman rm -f triliovault_wlm_workloads
podman rm -f triliovault_wlm_workloads_config_dynamic
podman rm -f triliovault_wlm_workloads_init_log

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmworkloads
rm /var/lib/config-data/puppet-generated/triliovaultwlmworkloads.md5sum
rm -rf /var/lib/config-data/triliovaultwlmworkloads*

rm -rf /var/log/containers/triliovault-wlm-api/

podman rm -f triliovault_wlm_scheduler
podman rm -f triliovault_wlm_scheduler_config_dynamic
podman rm -f triliovault_wlm_scheduler_init_log

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmscheduler
rm /var/lib/config-data/puppet-generated/triliovaultwlmscheduler.md5sum
rm -rf /var/lib/config-data/triliovaultwlmscheduler*

rm -rf /var/log/containers/triliovault-wlm-scheduler/

podman rm -f triliovault-wlm-cron-podman-0
podman rm -f triliovault_wlm_cron_config_dynamic
podman rm -f triliovault_wlm_cron_init_log

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmcron
rm /var/lib/config-data/puppet-generated/triliovaultwlmcron.md5sum
rm -rf /var/lib/config-data/triliovaultwlmcron*

rm -rf /var/log/containers/triliovault-wlm-cron/

podman rm -f triliovault_datamover

## The following steps apply to all supported RHOSP releases.
# Check triliovault backup target mount point
mount | grep trilio
# Unmount it
-- If it's NFS (copy the UUID_DIR from your compute host using the above command)
umount /var/lib/nova/triliovault-mounts/<UUID_DIR>
-- If it's S3
umount /var/lib/nova/triliovault-mounts
# Verify that it's unmounted
mount | grep trilio
df -h | grep trilio
# Remove the mount point directory after verifying that the backup target unmounted successfully.
# Otherwise, actual data on the backup target may get deleted.
rm -rf /var/lib/nova/triliovault-mounts

rm -rf /var/lib/config-data/puppet-generated/triliovaultdm/
rm /var/lib/config-data/puppet-generated/triliovaultdm.md5sum
rm -rf /var/lib/config-data/triliovaultdm*

rm -rf /var/log/containers/triliovault-datamover/

pcs resource delete triliovault-wlm-cron

listen triliovault_datamover_api
bind 172.30.5.23:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.5.23:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local
listen triliovault_wlm_api
bind 172.30.5.23:13781 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.5.23:8781 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8780 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local
podman restart haproxy-bundle-podman-0
openstack service delete dmapi
openstack user delete dmapi
openstack service delete TrilioVaultWLM
openstack user delete triliovault
podman exec -it galera-bundle-podman-0 mysql -u root
## Clean database
DROP DATABASE dmapi;
## Clean dmapi user
=> List 'dmapi' user accounts
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
+-------+-----------------------------------------+
| user | host |
+-------+-----------------------------------------+
| dmapi | % |
| dmapi | 172.30.5.28 |
| dmapi | overcloudtrain1internalapi.trilio.local |
+-------+-----------------------------------------+
3 rows in set (0.000 sec)
=> Delete those user accounts
MariaDB [(none)]> DROP USER dmapi@'%';
Query OK, 0 rows affected (0.005 sec)
MariaDB [(none)]> DROP USER dmapi@'172.30.5.28';
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> DROP USER dmapi@'overcloudtrain1internalapi.trilio.local';
Query OK, 0 rows affected (0.005 sec)
=> Verify that dmapi user got cleaned
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
Empty set (0.00 sec)
## Clean database
DROP DATABASE workloadmgr;
## Clean workloadmgr user
=> List 'workloadmgr' user accounts
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
+-------------+-----------------------------------------+
| user | host |
+-------------+-----------------------------------------+
| workloadmgr | % |
| workloadmgr | 172.30.5.28 |
| workloadmgr | overcloudtrain1internalapi.trilio.local |
+-------------+-----------------------------------------+
3 rows in set (0.000 sec)
=> Delete those user accounts
MariaDB [(none)]> DROP USER workloadmgr@'%';
Query OK, 0 rows affected (0.012 sec)
MariaDB [(none)]> DROP USER workloadmgr@'172.30.5.28';
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> DROP USER workloadmgr@'overcloudtrain1internalapi.trilio.local';
Query OK, 0 rows affected (0.005 sec)
=> Verify that workloadmgr user got cleaned
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
Empty set (0.000 sec)
{
"mount":{
"mount_vm_id":"15185195-cd8d-4f6f-95ca-25983a34ed92",
"options":{
}
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:44:42 GMT
Content-Type: application/json
Content-Length: 228
Connection: keep-alive
X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
{
"mounted_snapshots":[
{
"snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
"snapshot_name":"snapshot",
"workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
"mounturl":"[\"http://192.168.100.87\"]",
"status":"mounted"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:44:42 GMT
Content-Type: application/json
Content-Length: 228
Connection: keep-alive
X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
{
"mounted_snapshots":[
{
"snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
"snapshot_name":"snapshot",
"workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
"mounturl":"[\"http://192.168.100.87\"]",
"status":"mounted"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 16:03:49 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-abf69be3-474d-4cf3-ab41-caa56bb611e4
{
"mount":
{
"options": null
}
}
--source-platform ➡️ Workload source platform is required. The supported platform is 'openstack'.
--instance <instance-id=instance-uuid> ➡️ Specify an instance to include in the workload. Specify the option multiple times to include multiple instances. instance-id: include the instance with this UUID.
workloadmgr workload-list [--all {True,False}] [--nfsshare <nfsshare>]
workloadmgr workload-create [--display-name <display-name>]
[--display-description <display-description>]
[--source-platform <source-platform>]
[--jobschedule <key=key-name>]
[--metadata <key=key-name>]
[--policy-id <policy_id>]
[--encryption <True/False>]
[--secret-uuid <secret_uuid>]
<instance-id=instance-uuid> [<instance-id=instance-uuid> ...]
workloadmgr workload-show <workload_id> [--verbose <verbose>]
usage: workloadmgr workload-modify [--display-name <display-name>]
[--display-description <display-description>]
[--instance <instance-id=instance-uuid>]
[--jobschedule <key=key-name>]
[--metadata <key=key-name>]
[--policy-id <policy_id>]
<workload_id>
workloadmgr workload-delete [--database_only <True/False>] <workload_id>
workloadmgr workload-unlock <workload_id>
workloadmgr workload-reset <workload_id>
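For example, a minimal workload-create call built from the usage above (display name and instance UUIDs are illustrative only):
workloadmgr workload-create --display-name demo-workload \
    --source-platform openstack \
    instance-id=38b620f1-24ae-41d7-b0ab-85ffc2d7958b \
    instance-id=3fd869b2-16bd-4423-b389-18d19d37c8e0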
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 11:52:56 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-99f51825-9b47-41ea-814f-8f8141157fc7
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:06:01 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-4eb1863e-3afa-4a2c-b8e6-91a41fe37f78
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:31:49 GMT
Content-Type: application/json
Content-Length: 1223
Connection: keep-alive
X-Compute-Request-Id: req-c6f826a9-fff7-442b-8886-0770bb97c491
{
"scheduler_enabled":true,
"trust":{
"created_at":"2020-10-23T14:35:11.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"value":"871ca24f38454b14b867338cb0e9b46c",
"description":"token id for user ccddc7e7a015487fa02920f4d4979779 project c76b3355a164498aa95ddbc960adc238",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":true,
"status":"available",
"metadata":[
{
"created_at":"2020-10-23T14:35:11.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"a3cc9a01-3d49-4ff8-ad8e-b12a7b3c68b0",
"settings_name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
"settings_project_id":"c76b3355a164498aa95ddbc960adc238",
"key":"role_name",
"value":"member"
}
]
},
"is_valid":true,
"scheduler_obj":{
"workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"user_domain_id":"default",
"user":"ccddc7e7a015487fa02920f4d4979779",
"tenant":"c76b3355a164498aa95ddbc960adc238"
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:45:27 GMT
Content-Type: application/json
Content-Length: 30
Connection: keep-alive
X-Compute-Request-Id: req-cd447ce0-7bd3-4a60-aa92-35fc43b4729b
{"global_job_scheduler": true}HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:49:29 GMT
Content-Type: application/json
Content-Length: 31
Connection: keep-alive
X-Compute-Request-Id: req-6f49179a-737a-48ab-91b7-7e7c460f5af0
{"global_job_scheduler": false}HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:50:11 GMT
Content-Type: application/json
Content-Length: 30
Connection: keep-alive
X-Compute-Request-Id: req-ed279acc-9805-4443-af91-44a4420559bc
{"global_job_scheduler": true}workloadmgr workload-service-disable [--reason <reason>] <node_name>workloadmgr workload-service-enable <node_name>workloadmgr setting-create [--description <description>]
[--category <category>]
[--type <type>]
[--is-public {True,False}]
[--is-hidden {True,False}]
[--metadata <key=value>]
<name> <value>
workloadmgr setting-update [--description <description>]
[--category <category>]
[--type <type>]
[--is-public {True,False}]
[--is-hidden {True,False}]
[--metadata <key=value>]
<name> <value>
workloadmgr setting-show [--get_hidden {True,False}] <setting_name>
workloadmgr setting-delete <setting_name>
workloadmgr get-global-job-scheduler
workloadmgr disable-global-job-scheduler
workloadmgr enable-global-job-scheduler
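For example, the smtp_port setting shown in the responses below could be created with a call like this (name, type, and value taken from the example output; treat it as a sketch):
workloadmgr setting-create --type email_settings smtp_port 8080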
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 11:55:43 GMT
Content-Type: application/json
Content-Length: 403
Connection: keep-alive
X-Compute-Request-Id: req-ac16c258-7890-4ae7-b7f4-015b5aa4eb99
{
"settings":[
{
"created_at":"2021-02-04T11:55:43.890855",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"smtp_port",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":null,
"value":"8080",
"description":null,
"category":null,
"type":"email_settings",
"public":false,
"hidden":0,
"status":"available",
"is_public":false,
"is_hidden":false
}
]
}
{
"settings":[
{
"category":null,
"name":<String Setting_name>,
"is_public":false,
"is_hidden":false,
"metadata":{
},
"type":<String Setting type>,
"value":<String Setting Value>,
"description":null
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 12:01:27 GMT
Content-Type: application/json
Content-Length: 380
Connection: keep-alive
X-Compute-Request-Id: req-404f2808-7276-4c2b-8870-8368a048c28c
{
"setting":{
"created_at":"2021-02-04T11:55:43.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"smtp_port",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":null,
"value":"8080",
"description":null,
"category":null,
"type":"email_settings",
"public":false,
"hidden":false,
"status":"available",
"metadata":[
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 12:05:59 GMT
Content-Type: application/json
Content-Length: 403
Connection: keep-alive
X-Compute-Request-Id: req-e92e2c38-b43a-4046-984e-64cea3a0281f
{
"settings":[
{
"created_at":"2021-02-04T11:55:43.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"smtp_port",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":null,
"value":"8080",
"description":null,
"category":null,
"type":"email_settings",
"public":false,
"hidden":0,
"status":"available",
"is_public":false,
"is_hidden":false
}
]
}
{
"settings":[
{
"category":null,
"name":<String Setting_name>,
"is_public":false,
"is_hidden":false,
"metadata":{
},
"type":<String Setting type>,
"value":<String Setting Value>,
"description":null
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 11:49:17 GMT
Content-Type: application/json
Content-Length: 1223
Connection: keep-alive
X-Compute-Request-Id: req-5a8303aa-6c90-4cd9-9b6a-8c200f9c2473
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:21:57 GMT
Content-Type: application/json
Content-Length: 868
Connection: keep-alive
X-Compute-Request-Id: req-fa48f0ad-aa76-42fa-85ea-1e5461889fb3
{
"trust":[
{
"created_at":"2020-11-26T13:10:53.000000",
"updated_at":null,
"deleted_at":null,
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:43:36 GMT
Content-Type: application/json
Content-Length: 868
Connection: keep-alive
X-Compute-Request-Id: req-2151b327-ea74-4eec-b606-f0df358bc2a0
{
"trust":[
{
"created_at":"2021-01-21T11:43:36.140407",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"value":"1c981a15e7a54242ae54eee6f8d32e6a",
"description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":1,
"status":"available",
"is_public":false,
"is_hidden":true,
"metadata":[
]
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:39:12 GMT
Content-Type: application/json
Content-Length: 888
Connection: keep-alive
X-Compute-Request-Id: req-3c2f6acb-9973-4805-bae3-cd8dbcdc2cb4
{
"trust":{
"created_at":"2020-11-26T13:15:29.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"value":"703dfabb4c5942f7a1960736dd84f4d4",
"description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":true,
"status":"available",
"metadata":[
{
"created_at":"2020-11-26T13:15:29.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"86aceea1-9121-43f9-b55c-f862052374ab",
"settings_name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
"settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
"key":"role_name",
"value":"member"
}
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:41:51 GMT
Content-Type: application/json
Content-Length: 888
Connection: keep-alive
X-Compute-Request-Id: req-d838a475-f4d3-44e9-8807-81a9c32ea2a8
{
"scheduler_enabled":true,
"trust":{
"created_at":"2021-01-21T11:43:36.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"value":"1c981a15e7a54242ae54eee6f8d32e6a",
"description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":true,
"status":"available",
"metadata":[
{
"created_at":"2021-01-21T11:43:36.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"d98d283a-b096-4a68-826a-36f99781787d",
"settings_name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
"settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
"key":"role_name",
"value":"member"
}
]
},
"is_valid":true,
"scheduler_obj":{
"workload_id":"209c13fa-e743-4ccd-81f7-efdaff277a1f",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_domain_id":"default",
"user":"adfa32d7746a4341b27377d6f7c61adb",
"tenant":"4dfe98a43bfa404785a812020066b4d6"
}
}
{
"trusts":{
"role_name":"member",
"is_cloud_trust":false
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 10:34:10 GMT
Content-Type: application/json
Content-Length: 7888
Connection: keep-alive
X-Compute-Request-Id: req-9d73e5e6-ca5a-4c07-bdf2-ec2e688fc339
{
"workloads":[
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":"2020-11-09T09:53:30.000000",
"id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"availability_zone":"nova",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"name":"Workload_1",
"description":"no-description",
"interval":null,
"storage_usage":null,
"instances":null,
"metadata":[
{
"created_at":"2020-11-09T09:57:23.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"ee27bf14-e460-454b-abf5-c17e3d484ec2",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"63cd8d96-1c4a-4e61-b1e0-3ae6a17bf533",
"value":"c8468146-8117-48a4-bfd7-49381938f636"
},
{
"created_at":"2020-11-05T10:27:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"22d3e3d6-5a37-48e9-82a1-af2dda11f476",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"value":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2"
},
{
"created_at":"2020-11-09T09:37:20.000000",
"updated_at":"2020-11-09T09:57:23.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"61615532-6165-45a2-91e2-fbad9eb0b284",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"b083bb70-e384-4107-b951-8e9e7bbac380",
"value":"c8468146-8117-48a4-bfd7-49381938f636"
},
{
"created_at":"2020-11-02T13:40:24.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"5a53c8ee-4482-4d6a-86f2-654d2b06e28c",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"backup_media_target",
"value":"10.10.2.20:/upstream"
},
{
"created_at":"2020-11-05T10:27:14.000000",
"updated_at":"2020-11-09T09:57:23.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"5cb4dc86-a232-4916-86bf-42a0d17f1439",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"e33c1eea-c533-4945-864d-0da1fc002070",
"value":"c8468146-8117-48a4-bfd7-49381938f636"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":"2020-11-02T14:10:30.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"506cd466-1e15-416f-9f8e-b9bdb942f3e1",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"hostnames",
"value":"[\"cirros-1\", \"cirros-2\"]"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"093a1221-edb6-4957-8923-cf271f7e43ce",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"pause_at_snapshot",
"value":"0"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"79baaba8-857e-410f-9d2a-8b14670c4722",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"policy_id",
"value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"4e23fa3d-1a79-4dc8-86cb-dc1ecbd7008e",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"preferredgroup",
"value":"[]"
},
{
"created_at":"2020-11-02T14:10:30.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"ed06cca6-83d8-4d4c-913b-30c8b8418b80",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"topology",
"value":"\"\\\"\\\"\""
},
{
"created_at":"2020-11-02T13:40:23.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"4b6a80f7-b011-48d4-b5fd-f705448de076",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"workload_approx_backup_size",
"value":"6"
}
],
"jobschedule":"(dp0\nVfullbackup_interval\np1\nV-1\np2\nsVretention_policy_type\np3\nVNumber of Snapshots to Keep\np4\nsVend_date\np5\nVNo End\np6\nsVstart_time\np7\nV01:45 PM\np8\nsVinterval\np9\nV5\np10\nsVenabled\np11\nI00\nsVretention_policy_value\np12\nV10\np13\nsVtimezone\np14\nVUTC\np15\nsVstart_date\np16\nV11/02/2020\np17\nsVappliance_timezone\np18\nVUTC\np19\ns.",
"status":"locked",
"error_msg":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
}
],
"scheduler_trust":null
}
]
}
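Note that the jobschedule field in the workload responses above and below is returned as a Python pickle (protocol 0) string rather than nested JSON. A minimal decoding sketch (the field content is abbreviated here to its first key; the same approach applies to the full string):
python3 -c 'import pickle,sys; print(pickle.loads(sys.argv[1].encode()))' \
    $'(dp0\nVfullbackup_interval\np1\nV-1\np2\ns.'
# prints: {'fullbackup_interval': '-1'}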
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 10:42:01 GMT
Content-Type: application/json
Content-Length: 120143
Connection: keep-alive
X-Compute-Request-Id: req-b443f6e7-8d8e-413f-8d91-7c30ba166e8c
{
"workloads":[
{
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:10:17.000000",
"id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"user_id":"6ef8135faedc4259baac5871e09f0044",
"project_id":"863b6e2a8e4747f8ba80fdce1ccf332e",
"availability_zone":"nova",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"name":"comdirect_test",
"description":"Daily UNIX Backup 03:15 PM Full 7D Keep 8",
"interval":null,
"storage_usage":null,
"instances":null,
"metadata":[
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":false,
"created_at":"2019-05-16T09:13:54.000000",
"updated_at":null,
"value":"ca544215-1182-4a8f-bf81-910f5470887a",
"version":"3.2.46",
"key":"40965cbb-d352-4618-b8b0-ea064b4819bb",
"deleted_at":null,
"id":"5184260e-8bb3-4c52-abfa-1adc05fe6997"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:30.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"10.10.2.20:/upstream",
"version":"3.2.46",
"key":"backup_media_target",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"02dd0630-7118-485c-9e42-b01d23aa882c"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":false,
"created_at":"2019-05-16T09:13:51.000000",
"updated_at":null,
"value":"51693eca-8714-49be-b409-f1f1709db595",
"version":"3.2.46",
"key":"eb7d6b13-21e4-45d1-b888-d3978ab37216",
"deleted_at":null,
"id":"4b79a4ef-83d6-4e5a-afb3-f4e160c5f257"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"[\"Comdirect_test-2\", \"Comdirect_test-1\"]",
"version":"3.2.46",
"key":"hostnames",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"0cb6a870-8f30-4325-a4ce-e9604370198e"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":false,
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"0",
"version":"3.2.46",
"key":"pause_at_snapshot",
"deleted_at":null,
"id":"5d4f109c-9dc2-48f3-a12a-e8b8fa4f5be9"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"[]",
"version":"3.2.46",
"key":"preferredgroup",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"9a223fbc-7cad-4c2c-ae8a-75e6ee8a6efc"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:11:49.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"\"\\\"\\\"\"",
"version":"3.2.46",
"key":"topology",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"77e436c0-0921-4919-97f4-feb58fb19e06"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:30.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"121",
"version":"3.2.46",
"key":"workload_approx_backup_size",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"79aa04dd-a102-4bd8-b672-5b7a6ce9e125"
}
],
"jobschedule":"(dp1\nVfullbackup_interval\np2\nV7\nsVretention_policy_type\np3\nVNumber of days to retain Snapshots\np4\nsVend_date\np5\nV05/31/2019\np6\nsVstart_time\np7\nS'02:15 PM'\np8\nsVinterval\np9\nV24 hrs\np10\nsVenabled\np11\nI01\nsVretention_policy_value\np12\nI8\nsS'appliance_timezone'\np13\nS'UTC'\np14\nsVtimezone\np15\nVAfrica/Porto-Novo\np16\nsVstart_date\np17\nS'04/24/2019'\np18\ns.",
"status":"locked",
"error_msg":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
}
],
"scheduler_trust":null
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 11:03:55 GMT
Content-Type: application/json
Content-Length: 100
Connection: keep-alive
X-Compute-Request-Id: req-0e58b419-f64c-47e1-adb9-21ea2a255839
{
"workloads":{
"imported_workloads":[
"faa03-f69a-45d5-a6fc-ae0119c77974"
],
"failed_workloads":[
]
}
}
{
"workload_ids":[
"<workload_id>"
],
"upgrade":true
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:40:56 GMT
Content-Type: application/json
Content-Length: 1625
Connection: keep-alive
X-Compute-Request-Id: req-2ad95c02-54c6-4908-887b-c16c5e2f20fe
{
"quota_types":[
{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"display_name":"Workloads",
"display_description":"Total number of workload creation allowed per project",
"status":"available"
},
{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"b7273a06-2e08-11ea-889c-7440bb00b67d",
"display_name":"Snapshots",
"display_description":"Total number of snapshot creation allowed per project",
"status":"available"
},
{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"be323f58-2e08-11ea-889c-7440bb00b67d",
"display_name":"VMs",
"display_description":"Total number of VMs allowed per project",
"status":"available"
},
{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"c61324d0-2e08-11ea-889c-7440bb00b67d",
"display_name":"Volumes",
"display_description":"Total number of volume attachments allowed per project",
"status":"available"
},
{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
"display_name":"Storage",
"display_description":"Total storage (in Bytes) allowed per project",
"status":"available"
}
]
}
# Export virtual environment path depending on your setup
export venv_path="/opt/kolla-venv"
source $venv_path/bin/activate
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/
# For Zed to Caracal
mkdir -p /usr/local/share/kolla-ansible/ansible/roles/triliovault
# For Epoxy
mkdir -p $venv_path/share/kolla-ansible/ansible/roles/triliovault
# For Rocky and Ubuntu Zed and Antelope
cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
# For Rocky and Ubuntu Bobcat and Caracal
cp -R ansible/roles/triliovault-bobcat/* /usr/local/share/kolla-ansible/ansible/roles/triliovault/
# For Rocky and Ubuntu Epoxy
cp -R ansible/roles/triliovault-epoxy/* $venv_path/share/kolla-ansible/ansible/roles/triliovault/
## For Rocky and Ubuntu
- Take backup of globals.yml
cp /etc/kolla/globals.yml /opt/
- Append Trilio global variables to globals.yml for Zed
cat ansible/triliovault_globals_zed.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Antelope
cat ansible/triliovault_globals_2023.1.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Bobcat
cat ansible/triliovault_globals_2023.2.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Caracal
cat ansible/triliovault_globals_2024.1.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Epoxy
cat ansible/triliovault_globals_2025.1.yml >> /etc/kolla/globals.yml
cd ansible
./scripts/generate_password.sh
## For Rocky and Ubuntu
- Take backup of passwords.yml
cp /etc/kolla/passwords.yml /opt/
- Append Trilio global variables to passwords.yml
cat triliovault_passwords.yml >> /etc/kolla/passwords.yml
# For Rocky and Ubuntu
- Take backup of site.yml for Zed to Caracal
cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
- Take backup of site.yml for Epoxy
cp $venv_path/share/kolla-ansible/ansible/site.yml /opt/
- Append Trilio site variables to site.yml for Zed
cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Antelope
cat ansible/triliovault_site_2023.1.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Bobcat
cat ansible/triliovault_site_2023.2.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Caracal
cat ansible/triliovault_site_2024.1.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Epoxy
cat ansible/triliovault_site_2025.1.yml >> $venv_path/share/kolla-ansible/ansible/site.yml
For example:
If your inventory file path is '/root/multinode', use the following command.
cat ansible/triliovault_inventory.txt >> /root/multinode
cd triliovault-cfg-scripts/common/
pip3 install -U pyyaml
python ./generate_nfs_map.py
cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
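The generate_nfs_map.py script above reads the compute-host-to-NFS-share mapping from triliovault_nfs_map_input.yml in the same directory. An illustrative sketch of such an input file follows; the exact keys are defined by the sample input file shipped in triliovault-cfg-scripts, so adjust hostnames and shares to your environment:
cat > triliovault_nfs_map_input.yml <<'EOF'
# Hypothetical mapping: each compute host paired with the NFS share it should mount
triliovault_nfs_map:
  compute-host-1: 10.10.2.20:/upstream
  compute-host-2: 10.10.2.21:/upstream2
EOF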
1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
3. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
4. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-wlm:{{ triliovault_tag }}
## EXAMPLE from Kolla Ubuntu OpenStack
docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-wlm:{{ triliovault_tag }}
nova_libvirt_default_volumes:
- "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
- "/etc/localtime:/etc/localtime:ro"
- "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
- "/lib/modules:/lib/modules:ro"
- "/run/:/run/:shared"
- "/dev:/dev"
- "/sys/fs/cgroup:/sys/fs/cgroup"
- "kolla_logs:/var/log/kolla/"
- "libvirtd:/var/lib/libvirt"
- "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
- "
{% raw %}
{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
{% endraw %}
"
- "nova_libvirt_qemu:/etc/libvirt/qemu"
- "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }
- "/var/trilio:/var/trilio:shared"nova_compute_default_volumes:
- "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
- "/etc/localtime:/etc/localtime:ro"
- "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
- "/lib/modules:/lib/modules:ro"
- "/run:/run:shared"
- "/dev:/dev"
- "kolla_logs:/var/log/kolla/"
- "
{% raw %}
{% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
- "libvirtd:/var/lib/libvirt"
- "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
- "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
{% endraw %}
"
- "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
- "/var/trilio:/var/trilio:shared"nova_compute_ironic_default_volumes:
- "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
- "/etc/localtime:/etc/localtime:ro"
- "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
- "kolla_logs:/var/log/kolla/"
- "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
- "/var/trilio:/var/trilio:shared"mkdir -p /etc/kolla/config/horizonchown <DEPLOYMENT_USER>:<DEPLOYMENT_GROUP> /etc/kolla/config/horizonchown root:root /etc/kolla/config/horizonecho 'from openstack_dashboard.settings import HORIZON_CONFIG
HORIZON_CONFIG["customization_module"] = "trilio_dashboard.overrides"' >> /etc/kolla/config/horizon/custom_local_settingsecho 'from openstack_dashboard.settings import HORIZON_CONFIG
HORIZON_CONFIG["customization_module"] = "trilio_dashboard.overrides"' >> /etc/kolla/config/horizon/_9999-custom-settings.pyansible -i <kolla inventory file path> control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io" --become
# For Zed to Caracal
kolla-ansible -i <kolla inventory file path> pull --tags triliovault
# For Epoxy
kolla-ansible pull -i <kolla inventory file path> --tags triliovault
# For Zed to Caracal
kolla-ansible -i <kolla inventory file path> deploy
# For Epoxy
kolla-ansible deploy -i <kolla inventory file path>
[root@controller ~]# docker ps | grep datamover-api
9bf847ec4374 trilio/kolla-rocky-trilio-datamover-api:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover_api
[root@controller ~]# ssh compute "docker ps | grep datamover"
2b590ab33dfa trilio/kolla-rocky-trilio-datamover:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover
[root@controller ~]# docker ps | grep horizon
1333f1ccdcf1 trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours (healthy) horizon
[root@controller ~]# docker ps -a | grep wlm
fedc17b12eaf trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Exited (0) 23 hours ago wlm_cloud_trust
60bc1f0d0758 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_cron
499b8ca89bd6 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_scheduler
7e3749026e8e trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_workloads
932a41bf7024 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_api
# For Zed to Caracal
ansible-playbook -i {{ kolla inventory file }} /usr/local/share/kolla-ansible/ansible/roles/triliovault/tasks/wlm_cloud_trust.yml -e "@/etc/kolla/globals.yml"
# For Epoxy
ansible-playbook -i {{ kolla inventory file }} $venv_path/share/kolla-ansible/ansible/roles/triliovault/tasks/wlm_cloud_trust.yml -e "@/etc/kolla/globals.yml"
ssh controller
docker logs wlm_cloud_trust
docker ps -a | grep trilio
docker logs triliovault_datamover_api
docker logs triliovault_datamover
docker logs triliovault_wlm_api
docker logs triliovault_wlm_scheduler
docker logs triliovault_wlm_cron
docker logs triliovault_wlm_workloads
docker logs wlm_cloud_trust
docker ps | grep horizon
/var/log/kolla/triliovault-wlm-api/triliovault-wlm-api.log
/var/log/kolla/triliovault-wlm-cron/triliovault-wlm-cron.log
/var/log/kolla/triliovault-wlm-scheduler/triliovault-wlm-scheduler.log
/var/log/kolla/triliovault-wlm-workloads/triliovault-wlm-workloads.log
/var/log/kolla/triliovault-datamover-api/triliovault-datamover-api.log
/var/log/kolla/triliovault-datamover/triliovault-datamover.log
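To follow one of the service logs listed above in real time, a plain tail works (path taken from the list above):
tail -f /var/log/kolla/triliovault-wlm-api/triliovault-wlm-api.log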
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:44:43 GMT
Content-Type: application/json
Content-Length: 342
Connection: keep-alive
X-Compute-Request-Id: req-5bf629fe-ffa2-4c90-b704-5178ba2ab09b
{
"quota_type":{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"display_name":"Workloads",
"display_description":"Total number of workload creation allowed per project",
"status":"available"
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:51:51 GMT
Content-Type: application/json
Content-Length: 24
Connection: keep-alive
X-Compute-Request-Id: req-08c8cdb6-b249-4650-91fb-79a6f7497927
{
"allowed_quotas":[
{
}
]
}
{
"allowed_quotas":[
{
"project_id":"<project_id>",
"quota_type_id":"<quota_type_id>",
"allowed_value":"<integer>",
"high_watermark":"<Integer>"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:01:39 GMT
Content-Type: application/json
Content-Length: 766
Connection: keep-alive
X-Compute-Request-Id: req-e570ce15-de0d-48ac-a9e8-60af429aebc0
{
"allowed_quotas":[
{
"id":"262b117d-e406-4209-8964-004b19a8d422",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"allowed_value":5,
"high_watermark":4,
"version":"4.0.115",
"quota_type_name":"Workloads"
},
{
"id":"68e7203d-8a38-4776-ba58-051e6d289ee0",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
"allowed_value":-1,
"high_watermark":-1,
"version":"4.0.115",
"quota_type_name":"Storage"
},
{
"id":"ed67765b-aea8-4898-bb1c-7c01ecb897d2",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"be323f58-2e08-11ea-889c-7440bb00b67d",
"allowed_value":50,
"high_watermark":25,
"version":"4.0.115",
"quota_type_name":"VMs"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:15:07 GMT
Content-Type: application/json
Content-Length: 268
Connection: keep-alive
X-Compute-Request-Id: req-d87a57cd-c14c-44dd-931e-363158376cb7
{
"allowed_quotas":{
"id":"262b117d-e406-4209-8964-004b19a8d422",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"allowed_value":5,
"high_watermark":4,
"version":"4.0.115",
"quota_type_name":"Workloads"
}
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:24:04 GMT
Content-Type: application/json
Content-Length: 24
Connection: keep-alive
X-Compute-Request-Id: req-a4c02ee5-b86e-4808-92ba-c363b287f1a2
{"allowed_quotas": [{}]}{
"allowed_quotas":{
"project_id":"c76b3355a164498aa95ddbc960adc238",
"allowed_value":"20000",
"high_watermark":"18000"
}
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:33:09 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 29 Oct 2020 14:55:40 GMT
Content-Type: application/json
Content-Length: 3480
Connection: keep-alive
X-Compute-Request-Id: req-a2e49b7e-ce0f-4dcb-9e61-c5a4756d9948
{
"workloads":[
{
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"id":"8ee7a61d-a051-44a7-b633-b495e6f8fc1d",
"name":"worklaod1",
"snapshots_info":"",
"description":"no-description",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"status":"available",
"created_at":"2020-10-26T12:07:01.000000",
"updated_at":"2020-10-29T12:22:26.000000",
"scheduler_trust":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
}
]
},
{
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"id":"a90d002a-85e4-44d1-96ac-7ffc5d0a5a84",
"name":"workload2",
"snapshots_info":"",
"description":"no-description",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"status":"available",
"created_at":"2020-10-20T09:51:15.000000",
"updated_at":"2020-10-29T10:03:33.000000",
"scheduler_trust":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
}
]
}
]
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Thu, 29 Oct 2020 15:42:02 GMT
Content-Type: application/json
Content-Length: 703
Connection: keep-alive
X-Compute-Request-Id: req-443b9dea-36e6-4721-a11b-4dce3c651ede
{
"workload":{
"project_id":"c76b3355a164498aa95ddbc960adc238",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
"name":"API created",
"snapshots_info":"",
"description":"API description",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"status":"creating",
"created_at":"2020-10-29T15:42:01.000000",
"updated_at":"2020-10-29T15:42:01.000000",
"scheduler_trust":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
}
]
}
}
retention_policy_type
retention_policy_value
interval
{
"workload":{
"name":"<name of the Workload>",
"description":"<description of workload>",
"workload_type_id":"<ID of the chosen Workload Type",
"source_platform":"openstack",
"instances":[
{
"instance-id":"<Instance ID>"
},
{
"instance-id":"<Instance ID>"
}
],
"jobschedule":{
"retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
"retention_policy_value":"<Integer>"
"timezone":"<timezone>",
"start_date":"<Date format: MM/DD/YYYY>"
"end_date":"<Date format MM/DD/YYYY>",
"start_time":"<Time format: HH:MM AM/PM>",
"interval":"<Format: Integer hr",
"enabled":"<True/False>"
},
"metadata":{
<key>:<value>,
"policy_id":"<policy_id>"
}
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 12:08:42 GMT
Content-Type: application/json
Content-Length: 1536
Connection: keep-alive
X-Compute-Request-Id: req-afb76abb-aa33-427e-8219-04fc2b91bce0
{
"workload":{
"created_at":"2020-10-29T15:42:01.000000",
"updated_at":"2020-10-29T15:42:18.000000",
"id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"availability_zone":"nova",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"name":"API created",
"description":"API description",
"interval":null,
"storage_usage":{
"usage":0,
"full":{
"snap_count":0,
"usage":0
},
"incremental":{
"snap_count":0,
"usage":0
}
},
"instances":[
{
"id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
"name":"cirros-4",
"metadata":{
}
},
{
"id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
"name":"cirros-3",
"metadata":{
}
}
],
"metadata":{
"hostnames":"[]",
"meta":"data",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"preferredgroup":"[]",
"workload_approx_backup_size":"6"
},
"jobschedule":{
"retention_policy_type":"Number of Snapshots to Keep",
"end_date":"15/27/2020",
"start_time":"3:00 PM",
"interval":"5",
"enabled":false,
"retention_policy_value":"10",
"timezone":"UTC+2",
"start_date":"10/27/2020",
"fullbackup_interval":"-1",
"appliance_timezone":"UTC",
"global_jobscheduler":true
},
"status":"available",
"error_msg":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
}
],
"scheduler_trust":null
}
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 12:31:42 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-674a5d71-4aeb-4f99-90ce-7e8d3158d137
retention_policy_type
retention_policy_value
interval
{
"workload":{
"name":"<name>",
"description":"<description>"
"instances":[
{
"instance-id":"<instance_id>"
},
{
"instance-id":"<instance_id>"
}
],
"jobschedule":{
"retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
"retention_policy_value":"<Integer>",
"timezone":"<timezone>",
"start_time":"<HH:MM AM/PM>",
"end_date":"<MM/DD/YYYY>",
"interval":"<Integer hr>",
"enabled":"<True/False>"
},
"metadata":{
"meta":"data",
"policy_id":"<policy_id>"
}
}
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:31:00 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:41:55 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:52:30 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 12:58:38 GMT
Content-Type: application/json
Content-Length: 266
Connection: keep-alive
X-Compute-Request-Id: req-ed391cf9-aa56-4c53-8153-fd7fb238c4b9
{
"snapshots":[
{
"id":"1ff16412-a0cd-4e6a-9b4a-b5d4440fffc4",
"created_at":"2020-11-02T14:03:18.000000",
"status":"available",
"snapshot_type":"full",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"name":"snapshot",
"description":"-",
"host":"TVM1"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 13:58:38 GMT
Content-Type: application/json
Content-Length: 283
Connection: keep-alive
X-Compute-Request-Id: req-fb8dc382-e5de-4665-8d88-c75b2e473f5c
{
"snapshot":{
"id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"created_at":"2020-11-04T13:58:37.694637",
"status":"creating",
"snapshot_type":"full",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"name":"API taken 2",
"description":"API taken description 2",
"host":""
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:07:18 GMT
Content-Type: application/json
Content-Length: 6609
Connection: keep-alive
X-Compute-Request-Id: req-f88fb28f-f4ce-4585-9c3c-ebe08a3f60cd
{
"snapshot":{
"id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"created_at":"2020-11-04T13:58:37.000000",
"updated_at":"2020-11-04T14:06:03.000000",
"finished_at":"2020-11-04T14:06:03.000000",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"available",
"snapshot_type":"full",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"instances":[
{
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"name":"cirros-2",
"status":"available",
"metadata":{
"availability_zone":"nova",
"config_drive":"",
"data_transfer_time":"0",
"object_store_transfer_time":"0",
"root_partition_type":"Linux",
"trilio_ordered_interfaces":"192.168.100.80",
"vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.80\", \"config_drive\": \"\"}",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"workload_name":"Workload_1"
},
"flavor":{
"vcpus":"1",
"ram":"512",
"disk":"1",
"ephemeral":"0"
},
"security_group":[
{
"name":"default",
"security_group_type":"neutron"
}
],
"nics":[
{
"mac_address":"fa:16:3e:cf:10:91",
"ip_address":"192.168.100.80",
"network":{
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
"name":"robert_internal",
"cidr":null,
"network_type":"neutron",
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
"name":"robert_internal",
"cidr":"192.168.100.0/24",
"ip_version":4,
"gateway_ip":"192.168.100.1"
}
}
}
],
"vdisks":[
{
"label":null,
"resource_id":"fa888089-5715-4228-9e5a-699f8f9d59ba",
"restore_size":1073741824,
"vm_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"volume_id":"51491d30-9818-4332-b056-1f174e65d3e3",
"volume_name":"51491d30-9818-4332-b056-1f174e65d3e3",
"volume_size":"1",
"volume_type":"iscsi",
"volume_mountpoint":"/dev/vda",
"availability_zone":"nova",
"metadata":{
"readonly":"False",
"attached_mode":"rw"
}
}
]
},
{
"id":"e33c1eea-c533-4945-864d-0da1fc002070",
"name":"cirros-1",
"status":"available",
"metadata":{
"availability_zone":"nova",
"config_drive":"",
"data_transfer_time":"0",
"object_store_transfer_time":"0",
"root_partition_type":"Linux",
"trilio_ordered_interfaces":"192.168.100.176",
"vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.176\", \"config_drive\": \"\"}",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"workload_name":"Workload_1"
},
"flavor":{
"vcpus":"1",
"ram":"512",
"disk":"1",
"ephemeral":"0"
},
"security_group":[
{
"name":"default",
"security_group_type":"neutron"
}
],
"nics":[
{
"mac_address":"fa:16:3e:cf:4d:27",
"ip_address":"192.168.100.176",
"network":{
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
"name":"robert_internal",
"cidr":null,
"network_type":"neutron",
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
"name":"robert_internal",
"cidr":"192.168.100.0/24",
"ip_version":4,
"gateway_ip":"192.168.100.1"
}
}
}
],
"vdisks":[
{
"label":null,
"resource_id":"c8293bb0-031a-4d33-92ee-188380211483",
"restore_size":1073741824,
"vm_id":"e33c1eea-c533-4945-864d-0da1fc002070",
"volume_id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
"volume_name":"365ad75b-ca76-46cb-8eea-435535fd2e22",
"volume_size":"1",
"volume_type":"iscsi",
"volume_mountpoint":"/dev/vda",
"availability_zone":"nova",
"metadata":{
"readonly":"False",
"attached_mode":"rw"
}
}
]
}
],
"name":"API taken 2",
"description":"API taken description 2",
"host":"TVM1",
"size":44171264,
"restore_size":2147483648,
"uploaded_size":44171264,
"progress_percent":100,
"progress_msg":"Snapshot of workload is complete",
"warning_msg":null,
"error_msg":null,
"time_taken":428,
"pinned":false,
"metadata":[
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"16fc1ce5-81b2-4c07-ac63-6c9232e0418f",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"backup_media_target",
"value":"10.10.2.20:/upstream"
},
{
"created_at":"2020-11-04T13:58:37.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"5a56bbad-9957-4fb3-9bbc-469ec571b549",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"cancel_requested",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:29.000000",
"updated_at":"2020-11-04T14:05:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"d36abef7-9663-4d88-8f2e-ef914f068fb4",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"data_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"c75f9151-ef87-4a74-acf1-42bd2588ee64",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"hostnames",
"value":"[\"cirros-1\", \"cirros-2\"]"
},
{
"created_at":"2020-11-04T14:05:29.000000",
"updated_at":"2020-11-04T14:05:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"02916cce-79a2-4ad9-a7f6-9d9f59aa8424",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"object_store_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"96efad2f-a24f-4cde-8e21-9cd78f78381b",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"pause_at_snapshot",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"572a0b21-a415-498f-b7fa-6144d850ef56",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"policy_id",
"value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"dfd7314d-8443-4a95-8e2a-7aad35ef97ea",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"preferredgroup",
"value":"[]"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"2e17e1e4-4bb1-48a9-8f11-c4cd2cfca2a9",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"topology",
"value":"\"\\\"\\\"\""
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"33762790-8743-4e20-9f50-3505a00dbe76",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"workload_approx_backup_size",
"value":"6"
}
],
"restores_info":""
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:18:36 GMT
Content-Type: application/json
Content-Length: 56
Connection: keep-alive
X-Compute-Request-Id: req-82ffb2b6-b28e-4c73-89a4-310890960dbc
{"task": {"id": "a73de236-6379-424a-abc7-33d553e050b7"}}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:26:44 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-47a5a426-c241-429e-9d69-d40aed0dd68d
{
"snapshot":{
"is_scheduled":<true/false>,
"name":"<name>",
"description":"<description>"
}
}
Include defaults.yaml in the overcloud deploy command with the `-e` option, as shown below.
Create triliovault_nfs_map_input.yml in the current directory and provide the compute host and NFS share/IP map.
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Include this environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option, as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as a backup target.
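A minimal sketch of that deploy invocation (keep all of your existing overcloud deploy arguments; the environment file path matches the location listed above):
openstack overcloud deploy --templates \
    <your existing deploy arguments> \
    -e triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml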
workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
$ aws s3 sync s3://production-s3-bucket/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/ s3://dr-site-s3-bucket-bucket/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/
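A quick spot-check that the workload directory arrived on the DR bucket (bucket and workload names as in the sync example above):
aws s3 ls s3://dr-site-s3-bucket-bucket/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/ --recursive | head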
# qemu-img info --backing-chain bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Workload_1 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | 4224d3acfd394cc08228cc8072861a35 | 329880dedb4cd357579a3279835f392 |
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 | 329880dedb4cd357579a3279835f392 |
+------------+--------------------------------------+----------------------------------+----------------------------------+
# openstack project list --domain <target_domain>
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 01fca51462a44bfa821130dce9baac1a | project1 |
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |
| 9139e694eb984a4a979b5ae8feb955af | project3 |
+----------------------------------+----------+
# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role | User | Group | Project | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| project1 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova |
| created_at | 2019-04-18T02:19:39.000000 |
| description | Test Linux VMs |
| error_msg | None |
| id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
| instances | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id": |
| | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}] |
| interval | None |
| jobschedule | True |
| name | Test Linux |
| project_id | 2fc4e2180c2745629753305591aeb93b |
| scheduler_trust | None |
| status | available |
| storage_usage | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
| | "snap_count": 13}} |
| updated_at | 2019-11-15T02:32:43.000000 |
| user_id | 72e65c264a694272928f5d84b73fe9ce |
| workload_type_id | f82ce76f-17fe-438b-aa37-7a023058e50d |
+-------------------+------------------------------------------------------------------------------------------------------+
# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| Created At | Name | ID | Workload ID | Snapshot Type | Status | Host |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | full | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | Value |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| ip_address | 172.20.20.20 |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:74:58:bb |
| | |
| ip_address | 172.20.20.13 |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:6b:46:ae |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------------+--------------------------------------------------+
| Vdisks | Value |
+-------------------+--------------------------------------------------+
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a |
| volume_name | 0027b140-a427-46cb-9ccf-7895c7624493 |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 0027b140-a427-46cb-9ccf-7895c7624493 |
| availability_zone | nova |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | 8007ed89-6a86-447e-badb-e49f1e92f57a |
| volume_name | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| availability_zone | nova |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
+-------------------+--------------------------------------------------+
{
u'description':u'<description of the restore>',
u'oneclickrestore':False,
u'restore_type':u'selective',
u'type':u'openstack',
u'name':u'<name of the restore>',
u'openstack':{
u'instances':[
{
u'name':u'<name instance 1>',
u'availability_zone':u'<AZ instance 1>',
u'nics':[ #####Leave empty for network topology restore
],
u'vdisks':[
{
u'id':u'<old disk id>',
u'new_volume_type':u'<new volume type name>',
u'availability_zone':u'<new cinder volume AZ>'
}
],
u'flavor':{
u'ram':<RAM in MB>,
u'ephemeral':<GB of ephemeral disk>,
u'vcpus':<# vCPUs>,
u'swap':u'<GB of Swap disk>',
u'disk':<GB of boot disk>,
u'id':u'<id of the flavor to use>'
},
u'include':<True/False>,
u'id':u'<old id of the instance>'
} #####Repeat for each instance in the snapshot
],
u'restore_topology':<True/False>,
u'networks_mapping':{
u'networks':[ #####Leave empty for network topology restore
]
}
}
}
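Note that the CLI renders this template with Python-style u'' strings; the file passed via --filename must be plain, valid JSON (double quotes, no ##### comments, placeholders filled in). A quick syntax check before submitting, assuming the file is saved as restore.json as in the command below:
# Parse the edited restore file; any JSON syntax error is reported with its position
python3 -m json.tool restore.json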
# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| Created At | Name | ID | Snapshot ID | Size | Status |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at | 2019-09-24T12:44:38.000000 |
| description | - |
| error_msg | None |
| finished_at | 2019-09-24T12:46:07.000000 |
| host | Upstream2 |
| id | 5b4216d0-4bed-460f-8501-1589e7b45e01 |
| instances | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata": |
| | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}] |
| name | OneClick Restore |
| progress_msg | Restore from snapshot is complete |
| progress_percent | 100 |
| project_id | 8e16700ae3614da4ba80a4e57d60cdb9 |
| restore_options | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
| | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]}, |
| | "type": "openstack", "name": "OneClick Restore"} |
| restore_type | restore |
| size | 41126400 |
| snapshot_id | 5928554d-a882-4881-9a5c-90e834c071af |
| status | available |
| time_taken | 89 |
| updated_at | 2019-09-24T12:44:38.000000 |
| uploaded_size | 41126400 |
| user_id | d5fbd79f4e834f51bfec08be6d3b2ff2 |
| warning_msg | None |
| workload_id | 02b1aca2-c51a-454b-8c0f-99966314165e |
+------------------+------------------------------------------------------------------------------------------------------+
aws s3 sync s3://production-s3-bucket/ s3://dr-site-s3-bucket/
# qemu-img info --backing-chain bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
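The same check can be run across every image file of a workload to confirm that no incremental lost its backing file during the transfer. A minimal sketch, assuming the backup target is mounted under /var/triliovault-mounts as in the backing file path above:
# Verify the full backing chain of every qcow2 file in the workload directory
cd /var/triliovault-mounts/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
find . -type f | while read -r img; do
    if qemu-img info "$img" 2>/dev/null | grep -q 'file format: qcow2'; then
        qemu-img info --backing-chain "$img" >/dev/null 2>&1 || echo "broken chain: $img"
    fi
done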
# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 | 329880dedb4cd357579a3279835f392 |
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 | 329880dedb4cd357579a3279835f392 |
+------------+--------------------------------------+----------------------------------+----------------------------------+
# openstack project list --domain <target_domain>
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 01fca51462a44bfa821130dce9baac1a | project1 |
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |
| 9139e694eb984a4a979b5ae8feb955af | project3 |
+----------------------------------+----------+
# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role | User | Group | Project | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| project1 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova |
| created_at | 2019-04-18T02:19:39.000000 |
| description | Test Linux VMs |
| error_msg | None |
| id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
| instances | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id": |
| | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}] |
| interval | None |
| jobschedule | True |
| name | Test Linux |
| project_id | 2fc4e2180c2745629753305591aeb93b |
| scheduler_trust | None |
| status | available |
| storage_usage | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
| | "snap_count": 13}} |
| updated_at | 2019-11-15T02:32:43.000000 |
| user_id | 72e65c264a694272928f5d84b73fe9ce |
| workload_type_id | f82ce76f-17fe-438b-aa37-7a023058e50d |
+-------------------+------------------------------------------------------------------------------------------------------+
# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| Created At | Name | ID | Workload ID | Snapshot Type | Status | Host |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | full | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | Value |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| ip_address | 172.20.20.20 |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:74:58:bb |
| | |
| ip_address | 172.20.20.13 |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:6b:46:ae |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------------+--------------------------------------------------+
| Vdisks | Value |
+-------------------+--------------------------------------------------+
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a |
| volume_name | 0027b140-a427-46cb-9ccf-7895c7624493 |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 0027b140-a427-46cb-9ccf-7895c7624493 |
| availability_zone | nova |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | 8007ed89-6a86-447e-badb-e49f1e92f57a |
| volume_name | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| availability_zone | nova |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
+-------------------+--------------------------------------------------+
{
u'description':u'<description of the restore>',
u'oneclickrestore':False,
u'restore_type':u'selective',
u'type':u'openstack',
u'name':u'<name of the restore>',
u'openstack':{
u'instances':[
{
u'name':u'<name instance 1>',
u'availability_zone':u'<AZ instance 1>',
u'nics':[ #####Leave empty for network topology restore
],
u'vdisks':[
{
u'id':u'<old disk id>',
u'new_volume_type':u'<new volume type name>',
u'availability_zone':u'<new cinder volume AZ>'
}
],
u'flavor':{
u'ram':<RAM in MB>,
u'ephemeral':<GB of ephemeral disk>,
u'vcpus':<# vCPUs>,
u'swap':u'<GB of Swap disk>',
u'disk':<GB of boot disk>,
u'id':u'<id of the flavor to use>'
},
u'include':<True/False>,
u'id':u'<old id of the instance>'
} #####Repeat for each instance in the snapshot
],
u'restore_topology':<True/False>,
u'networks_mapping':{
u'networks':[ #####Leave empty for network topology restore
]
}
}
}
# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| Created At | Name | ID | Snapshot ID | Size | Status |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at | 2019-09-24T12:44:38.000000 |
| description | - |
| error_msg | None |
| finished_at | 2019-09-24T12:46:07.000000 |
| host | Upstream2 |
| id | 5b4216d0-4bed-460f-8501-1589e7b45e01 |
| instances | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata": |
| | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}] |
| name | OneClick Restore |
| progress_msg | Restore from snapshot is complete |
| progress_percent | 100 |
| project_id | 8e16700ae3614da4ba80a4e57d60cdb9 |
| restore_options | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
| | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]}, |
| | "type": "openstack", "name": "OneClick Restore"} |
| restore_type | restore |
| size | 41126400 |
| snapshot_id | 5928554d-a882-4881-9a5c-90e834c071af |
| status | available |
| time_taken | 89 |
| updated_at | 2019-09-24T12:44:38.000000 |
| uploaded_size | 41126400 |
| user_id | d5fbd79f4e834f51bfec08be6d3b2ff2 |
| warning_msg | None |
| workload_id | 02b1aca2-c51a-454b-8c0f-99966314165e |
+------------------+------------------------------------------------------------------------------------------------------+
--display-description <display-description> ➡️ Optional description for the restore.
oneclickrestore <True/False> ➡️ Whether the restore is a One Click Restore. Setting this to True overrides all other settings and starts a One Click Restore.
Nics ➡️ List of OpenStack Neutron ports that shall be attached to the instance. Each Neutron port consists of:
new_volume_type ➡️ The volume type to use for the restored volume. Leave empty for volume type None.
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 13:56:08 GMT
Content-Type: application/json
Content-Length: 1399
Connection: keep-alive
X-Compute-Request-Id: req-4618161e-64e4-489a-b8fc-f3cb21d94096
{
"policy_list":[
{
"id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":"2020-10-26T12:52:22.000000",
cd /home/stack
source stackrc
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
chmod +x *.sh
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/vddk.tar.gz
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
'OS::TripleO::Services::TrilioDatamoverApi'
'OS::TripleO::Services::TrilioWlmApi'
'OS::TripleO::Services::TrilioWlmWorkloads'
'OS::TripleO::Services::TrilioWlmScheduler'
'OS::TripleO::Services::TrilioWlmCron'
'OS::TripleO::Services::TrilioDatamover'
parameter_defaults:
ContainerImagePrepare:
- push_destination: false
set:
namespace: registry.redhat.io/...
...
...
ContainerImageRegistryCredentials:
registry.redhat.io:
myuser: 'p@55w0rd!'
registry.connect.redhat.com:
myuser: 'p@55w0rd!'
ContainerImageRegistryLogin: true
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.1
## Run the following command to find UNDERCLOUD_REGISTRY_HOSTNAME; in the example below it is 'undercloud2-161.ctlplane.trilio.local'
$ openstack tripleo container image list | grep keystone
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.1 |
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.1 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloud2-161.ctlplane.trilio.local 5.0.14-rhosp16.1
## Verify changes
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp16.1
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1 |
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.2
## Run the following command to find UNDERCLOUD_REGISTRY_HOSTNAME; in the example below it is 'undercloudqa162.ctlplane.trilio.local'
$ openstack tripleo container image list | grep keystone
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.2 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa162.ctlplane.trilio.local 5.0.14-rhosp16.2
## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp16.2' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp16.2
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 |
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp17.1
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa17.ctlplane.trilio.local 5.2.2-rhosp17.1
## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp17.1' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultDatamoverApiImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultWlmImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerHorizonImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp17.1
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1 |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1 |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1 |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1 |
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./generate_passwords.sh
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/passwords.yaml
source <OVERCLOUD_RC_FILE>
vi /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
openstack role add --user <cloud-Admin-UserName> --domain <Cloud-Admin-DomainName> admin
# Example
openstack role add --user admin --domain default admin
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts
./create_wlm_ids_conf.sh
cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/triliovault_wlm_ids.conf
modprobe nbd nbds_max=128
lsmod | grep nbd
modprobe fuse
lsmod | grep fuse
source stackrc
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
./upload_puppet_module.sh
## The output of the above command looks like the following
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-B1bp1Bk/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object | container | etag |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 17ed9cb7a08f67e1853c610860b8ea99 |
+-----------------------+---------------------+----------------------------------+
Upload complete
## The above command creates the following file
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/
./upload_puppet_module.sh
## The output of the above command looks like the following
Creating tarball...
Tarball created.
renamed '/tmp/puppet-modules-MUIyvXI/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
[stack@uc17-1 scripts]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
parameter_defaults:
DeployArtifactURLs:
- /var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz
## The above command creates the following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/defaults.yaml
openstack overcloud deploy --stack overcloudtrain5 --templates \
--libvirt-type qemu \
--ntp-server 192.168.1.34 \
-e /home/stack/templates/node-info.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/templates/ceph-config.yaml \
-e /home/stack/templates/cinder_size.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml \
-e /home/stack/templates/configure-barbican.yaml \
-e /home/stack/templates/multidomain_horizon.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
-e /home/stack/templates/tls-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_everywhere_dns.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/defaults.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/passwords.yaml \
-r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED
podman logs <trilio-container-name>
tail -f /var/log/containers/<trilio-container-name>/<trilio-container-name>.log
cd triliovault-cfg-scripts/common/
(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2 | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0 | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1 | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
$ cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
# TriliovaultMultiIPNfsMap maps the Datamover and WLM nodes (compute and controller nodes) to their NFS shares.
parameter_defaults:
TriliovaultMultiIPNfsMap:
overcloudtrain4-controller-0: 172.30.1.11:/rhospnfs
overcloudtrain4-controller-1: 172.30.1.11:/rhospnfs
overcloudtrain4-controller-2: 172.30.1.11:/rhospnfs
overcloudtrain4-novacompute-0: 172.30.1.12:/rhospnfs
    overcloudtrain4-novacompute-1: 172.30.1.13:/rhospnfs
sudo pip3 install PyYAML==5.1
python3 ./generate_nfs_map.py
grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
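Before adding the map to the deploy command, it is worth confirming that the appended entries still parse as valid YAML. A minimal sketch, run from the same triliovault-cfg-scripts/common/ directory and assuming the PyYAML package installed above:
# A parse failure here points at a malformed line in the merged map file
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" \
    ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml && echo "trilio_nfs_map.yaml OK"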
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
listen triliovault_datamover_api
bind 172.30.4.53:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.4.53:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
option forwardfor
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtraindev2-controller-0.internalapi.trilio.local 172.30.4.57:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtraindev2-controller-0.internalapi.trilio.local
retries 5
timeout http-request 10m
timeout queue 10m
timeout connect 10m
timeout client 10m
timeout server 10m
timeout check 10m
balance roundrobin
maxconn 50000
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/services/triliovault-datamover-api.yaml
tripleo::haproxy::trilio_datamover_api::options:
'retries': '5'
'maxconn': '50000'
'balance': 'roundrobin'
'timeout http-request': '10m'
'timeout queue': '10m'
'timeout connect': '10m'
'timeout client': '10m'
'timeout server': '10m'
'timeout check': '10m'
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
TrilioDatamoverOptVolumes:
- <mount-dir-on-compute-host>:<mount-dir-inside-the-datamover-container>
## For example, `/mnt/mount-on-host` below is a directory mounted on the Compute host that you want to mount at `/mnt/mount-inside-container` inside the Datamover container
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-on-host
## Then provide that mount in the format below
TrilioDatamoverOptVolumes:
- /mnt/mount-on-host:/mnt/mount-inside-container
[root@overcloudtrain5-novacompute-0 heat-admin]# podman exec -itu root triliovault_datamover bash
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-inside-container
workloadmgr restore-list [--snapshot_id <snapshot_id>]
workloadmgr restore-show [--output <output>] <restore_id>
workloadmgr restore-delete <restore_id>
workloadmgr restore-cancel <restore_id>
workloadmgr snapshot-oneclick-restore [--display-name <display-name>]
[--display-description <display-description>]
<snapshot_id>
workloadmgr snapshot-selective-restore [--display-name <display-name>]
[--display-description <display-description>]
[--filename <filename>]
<snapshot_id>
workloadmgr snapshot-inplace-restore [--display-name <display-name>]
[--display-description <display-description>]
[--filename <filename>]
<snapshot_id>
{
oneclickrestore: False,
restore_type: selective,
type: openstack,
openstack:
{
instances:
[
{
include: True,
id: 890888bc-a001-4b62-a25b-484b34ac6e7e,
name: cdcentOS-1,
availability_zone:,
nics: [],
vdisks:
[
{
id: 4cc2b474-1f1b-4054-a922-497ef5564624,
new_volume_type:,
availability_zone: nova
}
],
flavor:
{
ram: 512,
ephemeral: 0,
vcpus: 1,
swap:,
disk: 1,
id: 1
}
}
],
restore_topology: True,
networks_mapping:
{
networks: []
}
}
}
'instances':[
{
'name':'cdcentOS-1-selective',
'availability_zone':'US-East',
'nics':[
{
'mac_address':'fa:16:3e:00:bd:60',
'ip_address':'192.168.0.100',
'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
'network':{
'subnet':{
'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
},
'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
}
}
],
'vdisks':[
{
'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
'new_volume_type':'ceph',
'availability_zone':'nova'
}
],
'flavor':{
'ram':2048,
'ephemeral':0,
'vcpus':1,
'swap':'',
'disk':20,
'id':'2'
},
'include':True,
'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
}
]
restore_topology:True
restore_topology:False
{
'oneclickrestore':False,
'openstack':{
'instances':[
{
'name':'cdcentOS-1-selective',
'availability_zone':'US-East',
'nics':[
{
'mac_address':'fa:16:3e:00:bd:60',
'ip_address':'192.168.0.100',
'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
'network':{
'subnet':{
'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
},
'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
}
}
],
'vdisks':[
{
'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
'new_volume_type':'ceph',
'availability_zone':'nova'
}
],
'flavor':{
'ram':2048,
'ephemeral':0,
'vcpus':1,
'swap':'',
'disk':20,
'id':'2'
},
'include':True,
'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
}
],
'restore_topology':False,
'networks_mapping':{
'networks':[
{
'snapshot_network':{
'subnet':{
'id':'8b609440-4abf-4acf-a36b-9a0fa70c383c'
},
'id':'8b871820-f92e-41f6-80b4-00555a649b4c'
},
'target_network':{
'subnet':{
'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
},
'id':'d5047e84-077e-4b38-bc43-e3360b0ad174',
'name':'internal'
}
}
]
}
},
'restore_type':'selective',
'type':'openstack'
}
{
'oneclickrestore':False,
'restore_type':'inplace',
'type':'openstack',
'openstack':{
'instances':[
{
'restore_boot_disk':True,
'include':True,
'id':'ba8c27ab-06ed-4451-9922-d919171078de',
'vdisks':[
{
'restore_cinder_volume':True,
'id':'04d66b70-6d7c-4d1b-98e0-11059b89cba6'
}
]
}
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 14:18:42 GMT
Content-Type: application/json
Content-Length: 2160
Connection: keep-alive
X-Compute-Request-Id: req-0583fc35-0f80-4746-b280-c17b32cc4b25
{
"policy":{
"id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":"2020-10-26T12:52:22.000000",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"status":"available",
"name":"Gold",
"description":"",
"field_values":[
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"retention_policy_value",
"value":"10"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"interval",
"value":"5"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"79070c67-9021-4220-8a79-648ffeebc144",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"retention_policy_type",
"value":"Number of Snapshots to Keep"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"fullbackup_interval",
"value":"-1"
}
],
"metadata":[
],
"policy_assignments":[
{
"created_at":"2020-10-26T12:53:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"3e3f1b12-1b1f-452b-a9d2-b6e5fbf2ab18",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"policy_name":"Gold",
"project_name":"admin"
},
{
"created_at":"2020-10-29T15:39:13.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"policy_name":"Gold",
"project_name":"robert"
}
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:14:01 GMT
Content-Type: application/json
Content-Length: 338
Connection: keep-alive
X-Compute-Request-Id: req-57175488-d267-4dcb-90b5-f239d8b02fe2
{
"policies":[
{
"created_at":"2020-10-29T15:39:13.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"policy_name":"Gold",
"project_name":"robert"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:24:03 GMT
Content-Type: application/json
Content-Length: 1413
Connection: keep-alive
X-Compute-Request-Id: req-05e05333-b967-4d4e-9c9b-561f1a7add5a
{
"policy":{
"id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:24:01.000000",
"status":"available",
"name":"CLI created",
"description":"CLI created",
"metadata":[
],
"field_values":[
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"767ae42d-caf0-4d36-963c-9b0e50991711",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"interval",
"value":"4 hr"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_value",
"value":"10"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_type",
"value":"Number of Snapshots to Keep"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"fullbackup_interval",
"value":"-1"
}
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:32:13 GMT
Content-Type: application/json
Content-Length: 1515
Connection: keep-alive
X-Compute-Request-Id: req-9104cf1c-4025-48f5-be92-1a6b7117bf95
{
"policy":{
"id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:24:01.000000",
"status":"available",
"name":"API created",
"description":"API created",
"metadata":[
],
"field_values":[
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"767ae42d-caf0-4d36-963c-9b0e50991711",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"interval",
"value":"8 hr"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_value",
"value":"20"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_type",
"value":"Number of days to retain Snapshots"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"fullbackup_interval",
"value":"7"
}
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:46:23 GMT
Content-Type: application/json
Content-Length: 2318
Connection: keep-alive
X-Compute-Request-Id: req-169a53e4-b1c9-4bd1-bf68-3416d177d868
{
"policy":{
"id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:24:01.000000",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"status":"available",
"name":"API created",
"description":"API created",
"field_values":[
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"767ae42d-caf0-4d36-963c-9b0e50991711",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"interval",
"value":"8 hr"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_value",
"value":"20"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_type",
"value":"Number of days to retain Snapshots"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"fullbackup_interval",
"value":"7"
}
],
"metadata":[
],
"policy_assignments":[
{
"created_at":"2020-11-17T09:46:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"4794ed95-d8d1-4572-93e8-cebd6d4df48f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"project_id":"cbad43105e404c86a1cd07c48a737f9c",
"policy_name":"API created",
"project_name":"services"
},
{
"created_at":"2020-11-17T09:46:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"68f187a6-3526-4a35-8b2d-cb0e9f497dd8",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"policy_name":"API created",
"project_name":"robert"
}
]
},
"failed_ids":[
]
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:56:03 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
{
"workload_policy":{
"field_values":{
"fullbackup_interval":"<-1 for never / 0 for always / Integer>",
"retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
"interval":"<Integer hr>",
"retention_policy_value":"<Integer>"
},
"display_name":"<String>",
"display_description":"<String>",
"metadata":{
<key>:<value>
}
}
}
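The body above can be submitted directly to the workloadmgr service to create the policy. A minimal curl sketch, assuming a placeholder endpoint http://wlm_backend:8780 (the host name matches the response links in this section; the port is an assumption), a policy path of /v1/$TENANT_ID/workload_policy, a valid Keystone token in $TOKEN, and the filled-in template saved as policy.json (path and variable names are illustrative, not taken from this document):

# curl -s -X POST "http://wlm_backend:8780/v1/$TENANT_ID/workload_policy" \
    -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d @policy.json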
"policy":{
"field_values":{
"fullbackup_interval":"<-1 for never / 0 for always / Integer>",
"retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
"interval":"<Integer hr>",
"retention_policy_value":"<Integer>"
},
"display_name":"String",
"display_description":"String",
"metadata":{
<key>:<value>
}
}
}
{
"policy":{
"remove_projects":[
"<project_id>"
],
"add_projects":[
"<project_id>",
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 11:28:43 GMT
Content-Type: application/json
Content-Length: 4308
Connection: keep-alive
X-Compute-Request-Id: req-0bc531b6-be6e-43b4-90bd-39ef26ef1463
{
"restores":[
{
"id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
"created_at":"2020-11-05T10:17:40.000000",
"updated_at":"2020-11-05T10:17:40.000000",
"finished_at":"2020-11-05T10:27:20.000000",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"available",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
}
],
"name":"OneClick Restore",
"description":"-",
"host":"TVM2",
"size":2147483648,
"uploaded_size":2147483648,
"progress_percent":100,
"progress_msg":"Restore from snapshot is complete",
"warning_msg":null,
"error_msg":null,
"time_taken":580,
"restore_options":{
"name":"OneClick Restore",
"oneclickrestore":true,
"restore_type":"oneclick",
"openstack":{
"instances":[
{
"name":"cirros-2",
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"availability_zone":"nova"
},
{
"name":"cirros-1",
"id":"e33c1eea-c533-4945-864d-0da1fc002070",
"availability_zone":"nova"
}
]
},
"type":"openstack",
"description":"-"
},
"metadata":[
{
"created_at":"2020-11-05T10:27:20.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"91ab2495-1903-4d75-982b-08a4e480835b",
"restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
"key":"data_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-05T10:27:20.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"e0e01eec-24e0-4abd-9b8c-19993a320e9f",
"restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
"key":"object_store_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-05T10:27:20.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"eb909267-ba9b-41d1-8861-a9ec22d6fd84",
"restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
"key":"restore_user_selected_value",
"value":"Oneclick Restore"
}
]
},
{
"id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
"created_at":"2020-11-04T14:37:31.000000",
"updated_at":"2020-11-04T14:37:31.000000",
"finished_at":"2020-11-04T14:45:27.000000",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"error",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
}
],
"name":"OneClick Restore",
"description":"-",
"host":"TVM2",
"size":2147483648,
"uploaded_size":2147483648,
"progress_percent":100,
"progress_msg":"",
"warning_msg":null,
"error_msg":"Failed restoring snapshot: Error creating instance e271bd6e-f53e-4ebc-875a-5787cc4dddf7",
"time_taken":476,
"restore_options":{
"name":"OneClick Restore",
"oneclickrestore":true,
"restore_type":"oneclick",
"openstack":{
"instances":[
{
"name":"cirros-2",
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"availability_zone":"nova"
},
{
"name":"cirros-1",
"id":"e33c1eea-c533-4945-864d-0da1fc002070",
"availability_zone":"nova"
}
]
},
"type":"openstack",
"description":"-"
},
"metadata":[
{
"created_at":"2020-11-04T14:45:27.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"be6dc7e2-1be2-476b-9338-aed986be3b55",
"restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
"key":"data_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-04T14:45:27.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"2e4330b7-6389-4e21-b31b-2503b5441c3e",
"restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
"key":"object_store_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-04T14:45:27.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"561c806b-e38a-496c-a8de-dfe96cb3e956",
"restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
"key":"restore_user_selected_value",
"value":"Oneclick Restore"
}
]
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:04:45 GMT
Content-Type: application/json
Content-Length: 2639
Connection: keep-alive
X-Compute-Request-Id: req-30640219-e94e-4651-9b9e-49f5574e2a7f
{
"restore":{
"id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
"created_at":"2020-11-05T10:17:40.000000",
"updated_at":"2020-11-05T10:17:40.000000",
"finished_at":"2020-11-05T10:27:20.000000",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"available",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"snapshot_details":{
"created_at":"2020-11-04T13:58:37.000000",
"updated_at":"2020-11-05T10:27:22.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"snapshot_type":"full",
"display_name":"API taken 2",
"display_description":"API taken description 2",
"size":44171264,
"restore_size":2147483648,
"uploaded_size":44171264,
"progress_percent":100,
"progress_msg":"Creating Instance: cirros-2",
"warning_msg":null,
"error_msg":null,
"host":"TVM1",
"finished_at":"2020-11-04T14:06:03.000000",
"data_deleted":false,
"pinned":false,
"time_taken":428,
"vault_storage_id":null,
"status":"available"
},
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"instances":[
{
"id":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2",
"name":"cirros-2",
"status":"available",
"metadata":{
"config_drive":"",
"instance_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"production":"1"
}
},
{
"id":"b083bb70-e384-4107-b951-8e9e7bbac380",
"name":"cirros-1",
"status":"available",
"metadata":{
"config_drive":"",
"instance_id":"e33c1eea-c533-4945-864d-0da1fc002070",
"production":"1"
}
}
],
"networks":[
],
"subnets":[
],
"routers":[
],
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
}
],
"name":"OneClick Restore",
"description":"-",
"host":"TVM2",
"size":2147483648,
"uploaded_size":2147483648,
"progress_percent":100,
"progress_msg":"Restore from snapshot is complete",
"warning_msg":null,
"error_msg":null,
"time_taken":580,
"restore_options":{
"name":"OneClick Restore",
"oneclickrestore":true,
"restore_type":"oneclick",
"openstack":{
"instances":[
{
"name":"cirros-2",
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"availability_zone":"nova"
},
{
"name":"cirros-1",
"id":"e33c1eea-c533-4945-864d-0da1fc002070",
"availability_zone":"nova"
}
]
},
"type":"openstack",
"description":"-"
},
"metadata":[
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:21:07 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-0e155b21-8931-480a-a749-6d8764666e4d
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 15:13:30 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-98d4853c-314c-4f27-bd3f-f81bda1a2840
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:30:56 GMT
Content-Type: application/json
Content-Length: 992
Connection: keep-alive
X-Compute-Request-Id: req-7e18c309-19e5-49cb-a07e-90dd368fddae
{
"restore":{
"id":"3df1d432-2f76-4ebd-8f89-1275428842ff",
"created_at":"2020-11-05T14:30:56.048656",
"updated_at":"2020-11-05T14:30:56.048656",
"finished_at":null,
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"restoring",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
}
],
"name":"One Click Restore",
"description":"One Click Restore",
"host":"",
"size":0,
"uploaded_size":0,
"progress_percent":0,
"progress_msg":null,
"warning_msg":null,
"error_msg":null,
"time_taken":0,
"restore_options":{
"openstack":{
},
"type":"openstack",
"oneclickrestore":true,
"vmware":{
},
"restore_type":"oneclick"
},
"metadata":[
]
}
}
{
"restore":{
"options":{
"openstack":{
},
"type":"openstack",
"oneclickrestore":true,
"vmware":{},
"restore_type":"oneclick"
},
"name":"One Click Restore",
"description":"One Click Restore"
}
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 09:53:31 GMT
Content-Type: application/json
Content-Length: 1713
Connection: keep-alive
X-Compute-Request-Id: req-84f00d6f-1b12-47ec-b556-7b3ed4c2f1d7
{
"restore":{
"id":"778baae0-6c64-4eb1-8fa3-29324215c43c",
"created_at":"2020-11-09T09:53:31.037588",
"updated_at":"2020-11-09T09:53:31.037588",
"finished_at":null,
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"restoring",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
}
],
"name":"API",
"description":"API Created",
"host":"",
"size":0,
"uploaded_size":0,
"progress_percent":0,
"progress_msg":null,
"warning_msg":null,
"error_msg":null,
"time_taken":0,
"restore_options":{
"openstack":{
"instances":[
{
"vdisks":[
{
"new_volume_type":"iscsi",
"id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
"availability_zone":"nova"
}
],
"name":"cirros-1-selective",
"availability_zone":"nova",
"nics":[
],
"flavor":{
"vcpus":1,
"disk":1,
"swap":"",
"ram":512,
"ephemeral":0,
"id":"1"
},
"include":true,
"id":"e33c1eea-c533-4945-864d-0da1fc002070"
},
{
"include":false,
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe"
}
],
"restore_topology":false,
"networks_mapping":{
"networks":[
{
"snapshot_network":{
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
},
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26"
},
"target_network":{
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
},
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
"name":"internal"
}
}
]
}
},
"restore_type":"selective",
"type":"openstack",
"oneclickrestore":false
},
"metadata":[
]
}
}
{
"restore":{
"name":"<restore name>",
"description":"<restore description>",
"options":{
"openstack":{
"instances":[
{
"name":"<new name of instance>",
"include":<true/false>,
"id":"<original id of instance to be restored>"
"availability_zone":"<availability zone>",
"vdisks":[
{
"id":"<original ID of Volume>",
"new_volume_type":"<new volume type>",
"availability_zone":"<Volume availability zone>"
}
],
"nics":[
{
'mac_address':'<mac address of the pre-created port>',
'ip_address':'<IP of the pre-created port>',
'id':'<ID of the pre-created port>',
'network':{
'subnet':{
'id':'<ID of the subnet of the pre-created port>'
},
'id':'<ID of the network of the pre-created port>'
}
],
"flavor":{
"vcpus":<Integer>,
"disk":<Integer>,
"swap":<Integer>,
"ram":<Integer>,
"ephemeral":<Integer>,
"id":<Integer>
}
}
],
"restore_topology":<true/false>,
"networks_mapping":{
"networks":[
{
"snapshot_network":{
"subnet":{
"id":"<ID of the original Subnet ID>"
},
"id":"<ID of the original Network ID>"
},
"target_network":{
"subnet":{
"id":"<ID of the target Subnet ID>"
},
"id":"<ID of the target Network ID>",
"name":"<name of the target network>"
}
}
]
}
},
"restore_type":"selective",
"type":"openstack",
"oneclickrestore":false
}
}
}
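A filled-in restore body like the template above is submitted against the snapshot to be restored. A rough curl sketch, assuming the same placeholder endpoint http://wlm_backend:8780 and a restore path of /v1/$TENANT_ID/snapshots/$SNAPSHOT_ID, with the completed template saved as restore.json (endpoint, path, and variable names are illustrative assumptions); the one-click and in-place bodies shown in this section would be posted the same way:

# curl -s -X POST "http://wlm_backend:8780/v1/$TENANT_ID/snapshots/$SNAPSHOT_ID" \
    -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d @restore.json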
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 12:53:03 GMT
Content-Type: application/json
Content-Length: 1341
Connection: keep-alive
X-Compute-Request-Id: req-311fa97e-0fd7-41ed-873b-482c149ee743
{
"restore":{
"id":"0bf96f46-b27b-425c-a10f-a861cc18b82a",
"created_at":"2020-11-09T12:53:02.726757",
"updated_at":"2020-11-09T12:53:02.726757",
"finished_at":null,
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"restoring",
"restore_type":"restore",
"snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
}
],
"name":"API",
"description":"API description",
"host":"",
"size":0,
"uploaded_size":0,
"progress_percent":0,
"progress_msg":null,
"warning_msg":null,
"error_msg":null,
"time_taken":0,
"restore_options":{
"restore_type":"inplace",
"type":"openstack",
"oneclickrestore":false,
"openstack":{
"instances":[
{
"restore_boot_disk":true,
"include":true,
"id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
"vdisks":[
{
"restore_cinder_volume":true,
"id":"f6b3fef6-4b0e-487e-84b5-47a14da716ca"
}
]
},
{
"restore_boot_disk":true,
"include":true,
"id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
"vdisks":[
{
"restore_cinder_volume":true,
"id":"53204f34-019d-4ba8-ada1-e6ab7b8e5b43"
}
]
}
]
}
},
"metadata":[
]
}
}
{
"restore":{
"name":"<restore-name>",
"description":"<restore-description>",
"options":{
"restore_type":"inplace",
"type":"openstack",
"oneclickrestore":false,
"openstack":{
"instances":[
{
"restore_boot_disk":<Boolean>,
"include":<Boolean>,
"id":"<ID of the instance the volumes are attached to>",
"vdisks":[
{
"restore_cinder_volume":<boolean>,
"id":"<ID of the Volume to restore>"
}
]
}
]
}
}
}
}
# mount <NFS B2-IP/NFS B2-FQDN>:/<VOL-Path> /mnt
workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
/…/workload_<id>/workload_db <<< Contains User ID and Project ID of Workload owner
/…/workload_<id>/workload_vms_db <<< Contains VM IDs and VM Names of all VMs actively protected by the Workload
# cp -R /mnt/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# chown -R nova:nova /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# chmod -R 644 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
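The qemu-img output above shows only the immediate backing file. To confirm that the entire incremental chain is readable after the data has been moved, qemu-img can walk the whole chain in a single call (run against the same file as above):

# qemu-img info --backing-chain bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778

Every image in the chain is printed in turn, so a missing or unreadable backing file surfaces here before any restore is attempted.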
# echo -n 10.10.2.20:/upstream_source | base64
MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# echo -n 10.20.3.22:/upstream_target | base64
MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
# mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# vi /etc/fstab
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl none bind 0 0
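Before restarting any Trilio services, it is worth confirming that the old mount path now resolves to the new share. A quick check against the bind mount created above (findmnt is assumed to be available on the node):

# findmnt /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl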
# source {customer admin rc file}
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
# openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 | 329880dedb4cd357579a3279835f392 |
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 | 329880dedb4cd357579a3279835f392 |
+------------+--------------------------------------+----------------------------------+----------------------------------+
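The reassignment step below needs the workload IDs from this list. One way to pull just the ID column, as a rough sketch (the awk parsing is illustrative, not part of the product CLI):

# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True | awk -F'|' 'NR>3 && NF>1 {gsub(/ /,"",$3); print $3}'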
# openstack project list --domain <target_domain>
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 01fca51462a44bfa821130dce9baac1a | project1 |
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |
| 9139e694eb984a4a979b5ae8feb955af | project3 |
+----------------------------------+----------+
# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role | User | Group | Project | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| project1 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova |
| created_at | 2019-04-18T02:19:39.000000 |
| description | Test Linux VMs |
| error_msg | None |
| id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
| instances | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id": |
| | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}] |
| interval | None |
| jobschedule | True |
| name | Test Linux |
| project_id | 2fc4e2180c2745629753305591aeb93b |
| scheduler_trust | None |
| status | available |
| storage_usage | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
| | "snap_count": 13}} |
| updated_at | 2019-11-15T02:32:43.000000 |
| user_id | 72e65c264a694272928f5d84b73fe9ce |
| workload_type_id | f82ce76f-17fe-438b-aa37-7a023058e50d |
+-------------------+------------------------------------------------------------------------------------------------------+
# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| Created At | Name | ID | Workload ID | Snapshot Type | Status | Host |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | full | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | Value |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| ip_address | 172.20.20.20 |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:74:58:bb |
| | |
| ip_address | 172.20.20.13 |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:6b:46:ae |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------------+--------------------------------------------------+
| Vdisks | Value |
+-------------------+--------------------------------------------------+
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a |
| volume_name | 0027b140-a427-46cb-9ccf-7895c7624493 |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 0027b140-a427-46cb-9ccf-7895c7624493 |
| availability_zone | nova |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | 8007ed89-6a86-447e-badb-e49f1e92f57a |
| volume_name | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| availability_zone | nova |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
+-------------------+--------------------------------------------------+
{
u'description':u'<description of the restore>',
u'oneclickrestore':False,
u'restore_type':u'selective',
u'type':u'openstack',
u'name':u'<name of the restore>',
u'openstack':{
u'instances':[
{
u'name':u'<name instance 1>',
u'availability_zone':u'<AZ instance 1>',
u'nics':[ #####Leave empty for network topology restore
],
u'vdisks':[
{
u'id':u'<old disk id>',
u'new_volume_type':u'<new volume type name>',
u'availability_zone':u'<new cinder volume AZ>'
}
],
u'flavor':{
u'ram':<RAM in MB>,
u'ephemeral':<GB of ephemeral disk>,
u'vcpus':<# vCPUs>,
u'swap':u'<GB of Swap disk>',
u'disk':<GB of boot disk>,
u'id':u'<id of the flavor to use>'
},
u'include':<True/False>,
u'id':u'<old id of the instance>'
} #####Repeat for each instance in the snapshot
],
u'restore_topology':<True/False>,
u'networks_mapping':{
u'networks':[ #####Leave empty for network topology restore
]
}
}
}
# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
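A selective restore can run for a while. One way to follow it from the CLI is to poll restore-show until the status leaves "restoring"; a small sketch, assuming the restore ID returned by the command above (the loop and the field parsing are illustrative, not part of the product CLI):

RESTORE_ID=<restore id>
while true; do
    # pull the "status" row out of the restore-show table
    STATUS=$(workloadmgr restore-show "$RESTORE_ID" | awk '$2 == "status" {print $4}')
    echo "$RESTORE_ID: $STATUS"
    [ "$STATUS" != "restoring" ] && break
    sleep 30
done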
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| Created At | Name | ID | Snapshot ID | Size | Status |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at | 2019-09-24T12:44:38.000000 |
| description | - |
| error_msg | None |
| finished_at | 2019-09-24T12:46:07.000000 |
| host | Upstream2 |
| id | 5b4216d0-4bed-460f-8501-1589e7b45e01 |
| instances | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata": |
| | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}] |
| name | OneClick Restore |
| progress_msg | Restore from snapshot is complete |
| progress_percent | 100 |
| project_id | 8e16700ae3614da4ba80a4e57d60cdb9 |
| restore_options | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
| | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]}, |
| | "type": "openstack", "name": "OneClick Restore"} |
| restore_type | restore |
| size | 41126400 |
| snapshot_id | 5928554d-a882-4881-9a5c-90e834c071af |
| status | available |
| time_taken | 89 |
| updated_at | 2019-09-24T12:44:38.000000 |
| uploaded_size | 41126400 |
| user_id | d5fbd79f4e834f51bfec08be6d3b2ff2 |
| warning_msg | None |
| workload_id | 02b1aca2-c51a-454b-8c0f-99966314165e |
+------------------+------------------------------------------------------------------------------------------------------+
# workloadmgr workload-delete <workload_id>
# source {customer admin rc file}
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
# openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
# vi /etc/workloadmgr/workloadmgr.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
vault_storage_nfs_export = <NFS-IP/NFS-FQDN>:/<VOL-1-Path>,<NFS-IP/NFS-FQDN>:/<VOL-2-Path>
# systemctl restart wlm-workloads
# vi /etc/tvault-contego/tvault-contego.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
vault_storage_nfs_export = <NFS_B1-IP/NFS-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS-FQDN>:/<VOL-B2-Path>
# systemctl restart tvault-contego
# vi /etc/workloadmgr/workloadmgr.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS-FQDN>:/<VOL-B2-Path>
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
# systemctl restart wlm-workloads
# vi /etc/tvault-contego/tvault-contego.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS-FQDN>:/<VOL-B2-Path>
vault_storage_nfs_export = <NFS-IP/NFS-FQDN>:/<VOL-1-Path>
# systemctl restart tvault-contego
# source {customer admin rc file}
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
# openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
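As a final sanity check after editing the configuration files and restarting the services, the active NFS export list and the service state can be confirmed on each node (file paths and unit names as used above):

# grep vault_storage_nfs_export /etc/workloadmgr/workloadmgr.conf /etc/tvault-contego/tvault-contego.conf
# systemctl status wlm-workloads tvault-contego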
