Continuous Restore
This document provides step-by-step instructions on how to set up continuous restore for an application. We assume the user already has two running clusters with T4K installed on each cluster; let's call them TVK1 and TVK2. The TVK1 ID is e5e22c8a0-de58-4b20-8962-7261755b1173 and the TVK2 ID is 23d8f984-d7e8-4c52-b3c2-e6f9af5bbf13. The event target is a shared backup target between TVK1 and TVK2. It can be NFS or S3; the choice has no bearing on this discussion. In this example, we use an S3-based event target.

Create an event target on both clusters:
```yaml
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  annotations:
    trilio.io/event-target: "true"
  name: s3-event-target
  namespace: default
spec:
  objectStoreCredentials:
    bucketName: aj-test-s3
    credentialSecret:
      name: s3-cred-secret
      namespace: default
    region: us-east-1
  type: ObjectStore
  vendor: AWS
```
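The manifest above references an existing secret, s3-cred-secret, that holds the S3 credentials on each cluster. Assuming that secret is already in place and that your kubeconfig has a context per cluster (the context names tvk1 and tvk2 below are placeholders), the same manifest can be applied to both clusters from the CLI:

```bash
# Apply the event target manifest to both clusters.
# "tvk1" and "tvk2" are placeholder kubeconfig context names; substitute your own.
kubectl --context tvk1 apply -f s3-event-target.yaml
kubectl --context tvk2 apply -f s3-event-target.yaml

# Confirm the Target resource was created in the default namespace on each cluster.
kubectl --context tvk1 get target s3-event-target -n default
kubectl --context tvk2 get target s3-event-target -n default
```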
T4K 3.0.0 introduced a new flag in the target creation wizard in the UI. You can set this flag to True when creating an event target.

create target wizard
Trilio recommends creating the event target in the default namespace. When the event target resource is created in TVK1 and TVK2, the target controller automatically spawns two services, a syncher and a service manager, in the default namespace.
syncher and manager pods
The event target controller also creates a new service account in the default namespace.

Service Account
Make sure pods and service accounts are created on both clusters.
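This can be verified quickly from the CLI on each cluster (the context names are placeholders, and the exact pod names will vary):

```bash
# Verify the syncher and service manager pods in the default namespace on each cluster.
kubectl --context tvk1 get pods -n default
kubectl --context tvk2 get pods -n default

# Verify the service account created by the event target controller.
kubectl --context tvk1 get serviceaccounts -n default
kubectl --context tvk2 get serviceaccounts -n default
```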
The syncher service populates state information onto the event target, which lets the clusters discover each other. The /service-info/heartbeats directory on the event target should look like the example below.
heartbeats of both clusters
The event target object on each cluster lists the clusters it has discovered on the event target. For example:


event target object with discovered remote cluster
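The discovered clusters can also be checked from the CLI by dumping the Target resource; the remote cluster information is reported in the object itself (the exact field layout may vary between T4K versions):

```bash
# Dump the event target object and look for the discovered remote cluster entry.
kubectl --context tvk1 get target s3-event-target -n default -o yaml
```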
Create a new backup plan with the continuous restore feature enabled.

Create BackupPlan with Continuous Restore
In the above example, the user chose the remote cluster with the ID 23d8f984-d7e8-4c52-b3c2-e6f9af5bbf13 and selected ap as the continuous restore policy. The policy YAML file is shown below:
continuous restore policy
The above policy states that T4K maintains three consistent sets on the remote cluster.
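Assuming the policy and backup plan are represented by T4K's Policy and BackupPlan custom resources, both can be inspected from the CLI on the source cluster (the namespaces shown are placeholders):

```bash
# Inspect the continuous restore policy referenced by the backup plan.
kubectl --context tvk1 get policy -n default

# Inspect the backup plan that uses the policy and targets the remote cluster.
kubectl --context tvk1 get backupplan -n my-app-ns
```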
The syncher service on the local cluster creates the following entry.

The manager on the remote cluster recognizes, based on the data on the event target, that the source cluster e5e22c8a0-de58-4b20-8962-7261755b1173 is referring to it as its backup cluster. It then spawns a watcher and a continuous restore service on the remote cluster.
watcher and continuous restore service on remote cluster
Similarly, the manager spawns the continuous restore responder and watcher services on the local cluster.

Watcher and Responder on local cluster
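As before, the newly spawned services can be confirmed from the CLI (context names are placeholders, and pod names will vary):

```bash
# The remote cluster should now run the watcher and continuous restore pods.
kubectl --context tvk2 get pods -n default

# The local cluster should now run the watcher and continuous restore responder pods.
kubectl --context tvk1 get pods -n default
```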
Once the continuous restore service and the continuous restore responder reconcile the backup plan between the clusters, the backup plan is shown in the UI as follows:

Once a new backup is successfully generated on the source cluster, a new consistent set is automatically created on the remote cluster, as shown below:

consistent set on remote cluster

inbound consistent sets on remote cluster
Once the consistent set is successfully created, the backup summary on the source displays the consistent set information as shown below:

Consistent Set Summary on Local Cluster
Users can restore from a consistent set. The current release of the UI only restores from the latest consistent set. Users can choose the consistent set they want to restore from when using the CLI.
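The exact resource kind that represents consistent sets, and the Restore fields that reference one, depend on the installed T4K version; a safe way to find them from the CLI is to list the Trilio CRDs on the remote cluster and inspect the corresponding objects:

```bash
# Discover the Trilio custom resource definitions installed on the remote cluster,
# including the kind that represents consistent sets.
kubectl --context tvk2 get crds | grep -i trilio
```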

Click through the restore wizard.

The restore operation can be monitored from the CLI.
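For example, assuming the restore is represented by a T4K Restore custom resource in the application namespace (my-app-ns is a placeholder):

```bash
# Watch the Restore custom resource until it reports completion.
kubectl --context tvk2 get restore -n my-app-ns -w

# Inspect a specific restore for detailed status and events.
kubectl --context tvk2 describe restore <restore-name> -n my-app-ns
```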

The application pod is running as shown here.
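A final CLI check on the remote cluster (the namespace is a placeholder):

```bash
# Confirm the restored application pod is up and Running.
kubectl --context tvk2 get pods -n my-app-ns
```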
