Introduction to ETCD

Etcd is the persistent data store for Kubernetes. It is a simple, fast, and secure distributed key-value store that records the state of every resource in a Kubernetes cluster. It acts as a backend for service discovery and as the cluster database, running on multiple servers in the cluster at the same time to monitor changes and to store the state and configuration data that the Kubernetes control plane needs to access.

RKE Cluster Backup & Disaster Recovery (DR)

RKE clusters can be configured to take snapshots of etcd. In a disaster scenario, you can restore these snapshots. Snapshots can also be stored outside the cluster, for example on S3 storage, so that backups remain available for restore even if a server is lost.
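For context, a one-off snapshot of this kind can also be taken directly with the RKE CLI. The sketch below only assembles the command so it can be reviewed before running it; the bucket name, endpoint, and credential variables are placeholders, not values from this document.

```shell
# Hedged sketch: build (not run) the RKE CLI command for a one-off etcd
# snapshot shipped to S3. Bucket, endpoint, and credential variables are
# placeholders; run the printed command from the node that holds cluster.yml.
snapshot_cmd() {
  # $1 = snapshot name, $2 = S3 bucket, $3 = S3 endpoint
  echo "rke etcd snapshot-save --config cluster.yml --name $1" \
       "--s3 --bucket-name $2 --s3-endpoint $3" \
       "--access-key \$S3_ACCESS_KEY --secret-key \$S3_SECRET_KEY"
}
snapshot_cmd "manual-$(date +%Y%m%d)" my-etcd-backups s3.amazonaws.com
```

Printing the command first makes it easy to audit the flags before the snapshot actually runs against the cluster.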

ETCD backup and restore using rke-etcd-backup-restore

The plugin helps users perform ETCD backup and restore of RKE1 clusters, and lets them store the snapshot on S3 storage created using a T4K target.

Important Notes & Plugin Prerequisites

⚠️ Important notes for the plugin
  1. Do not switch off any node in the cluster while a restore is in progress, and do not abort a restore task midway; otherwise you may lose cluster accessibility.

  2. Restore only works on the same cluster from which the backup was taken.

  3. Restore only works if the cluster is accessible and at least one etcd node in the cluster is up and running.

  4. The plugin is supported on RKE1 clusters (the local cluster created on the Rancher server is not supported).

  5. The plugin supports ETCD backup and restore of downstream clusters created on the Rancher server, NOT imported clusters.

  6. The backup T4K target URL should use a DNS name, not an IP address.

Plugin prerequisites

  1. krew - kubectl plugin manager. Install from here.

  2. kubectl - Kubernetes command-line tool. Install from here.

  3. Trilio for Kubernetes and backup target. Install from here.
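The client-side prerequisites above can be sanity-checked with a small helper. This is only a local PATH check (krew installs itself as the `kubectl-krew` plugin binary); it does not verify the in-cluster T4K installation or the backup target.

```shell
# Hedged helper: report whether each client-side prerequisite binary is on
# PATH. krew installs itself as the kubectl-krew plugin binary. This does not
# verify the in-cluster T4K installation or the backup target.
check_prereq() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}
for bin in kubectl kubectl-krew; do
  check_prereq "$bin"
done
```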

ETCD backup restore use cases
  1. An etcd backup ensures that the cluster can be restored if an upgrade failure occurs.

  2. You have deleted something critical in the cluster by mistake.

  3. You have lost the majority of your control plane hosts, leading to ETCD quorum loss.

Installation, Upgrade, Removal of Plugins

  • Add the T4K custom plugin index to krew:

    kubectl krew index add tvk-interop-plugin https://github.com/trilioData/tvk-interop-plugins.git
  • Installation:

    kubectl krew install tvk-interop-plugin/rke-etcd-backup-restore
  • Upgrade:

    kubectl krew upgrade rke-etcd-backup-restore
  • Removal:

    kubectl krew uninstall rke-etcd-backup-restore


Bash or ZSH shells:

  set -ex; cd "$(mktemp -d)" &&
  OS="$(uname)" &&
  if [[ -z ${TVK_RKE_ETCD_BACKUP_RESTORE_VERSION} ]]; then TVK_RKE_ETCD_BACKUP_RESTORE_VERSION=$(curl -s https://api.github.com/repos/trilioData/tvk-interop-plugins/releases/latest | grep -oP '"tag_name": "\K(.*)(?=")'); fi &&
  echo "Installing version=${TVK_RKE_ETCD_BACKUP_RESTORE_VERSION}" &&
  package_name="rke-etcd-backup-restore-${OS}.tar.gz" &&
  curl -fsSLO "https://github.com/trilioData/tvk-interop-plugins/releases/download/${TVK_RKE_ETCD_BACKUP_RESTORE_VERSION}/${package_name}" &&
  tar zxvf "${package_name}" && sudo mv rke-etcd-backup-restore /usr/local/bin/kubectl-rke_etcd_backup_restore

Verify installation with: kubectl rke-etcd-backup-restore --help


ETCD backup and restore on a Rancher cluster. Usage:

   kubectl rke-etcd-backup-restore [-h] [-backup] [-restore]
      [--target-name TARGET_NAME] [--target-namespace TARGET_NAMESPACE]
      --rancher-url RANCHER_URL --bearer-token BEARER_TOKEN
      --cluster-name CLUSTER_NAME [--log-location LOG_LOC]

  • -backup: Flag to notify that a backup is to be taken.

  • -restore: Flag to notify that a restore is to be performed.

  • --target-name: The name of the single datastore (T4K target name) on which the etcd backup is to be stored.

  • --target-namespace: Namespace in which the T4K target is created.

  • --rancher-url: Rancher server URL.

  • --bearer-token: Token to access the Rancher server. More info here.

  • --cluster-name: Cluster name to perform the backup/restore on.

  • --log-location: Log file name along with the path where the logs should be saved. Default: /tmp/etcd-ocp-backup.log

Arguments Details:

  • -backup: Flag to notify the plugin to perform a backup.

  • -restore: Flag to notify the plugin to perform a restore.

  • --target-name: T4K target name. The target should already be created and in the Available state; currently the S3 target type is supported. This is the target where the backups are stored. This argument is mandatory if the -backup flag is provided.

  • --target-namespace: Namespace in which the T4K target resides. This argument is mandatory if the -backup flag is provided.

  • --rancher-url: The URL through which the Rancher server can be accessed, in the form "https://<rancher server ip>/". This argument is mandatory.

  • --bearer-token: Token provided by the Rancher server to access its clusters/APIs without using a password. More info about how to get a bearer token can be found at https://rancher.com/docs/rancher/v2.5/en/user-settings/api-keys/. The scope of the API key should be "No scope", since the plugin needs access to the complete scope of the Rancher server. This argument is mandatory.

  • --cluster-name: A Rancher server hosts many RKE clusters, so specify the name of the cluster for which the ETCD backup is to be taken. This argument is mandatory.

  • --log-location: The log file location. Default: /tmp/etcd-ocp-backup.log
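Given the notes above (an https URL for --rancher-url, and a DNS name rather than an IP for the target URL), a minimal pre-flight check might look like the following. The function name and validation rules are an assumption distilled from those notes, not part of the plugin itself.

```shell
# Hedged pre-flight check distilled from the plugin notes: the URL must use
# https and a DNS host name rather than a dotted-quad IP address.
is_valid_url() {
  case "$1" in
    https://*) ;;          # https scheme required
    *) return 1 ;;
  esac
  host=${1#https://}       # strip scheme,
  host=${host%%/*}         # path,
  host=${host%%:*}         # and port
  # reject raw IPv4 addresses; anything else is treated as a DNS name
  if echo "$host" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'; then
    return 1
  fi
  return 0
}
is_valid_url "https://rancher.example.com/" && echo "rancher URL looks OK"
```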


A user may specify more than one option with each command execution. For example, to create a backup with a configured target name and associated namespace, and to set the cluster API URL with the associated bearer token, execute the following single command:

kubectl rke-etcd-backup-restore -backup --target-name <target-name> --target-namespace <target-namespace> --rancher-url <https://rancher server ip/> --bearer-token <bearer_token> --cluster-name <cluster_name>

Then, to restore from the same cluster API URL with the associated bearer token, execute the following single command:

kubectl rke-etcd-backup-restore -restore --rancher-url <https://rancher server ip/> --bearer-token <bearer_token> --cluster-name <cluster_name>
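The backup invocation can also be assembled from variables so the full command line is reviewed (or scheduled) before it touches the cluster. The builder function name and placeholder values below are illustrative, not part of the plugin.

```shell
# Hedged convenience wrapper: assemble the documented backup invocation from
# arguments so it can be inspected before execution. Values are placeholders.
build_backup_cmd() {
  # $1 target, $2 target namespace, $3 rancher URL, $4 token, $5 cluster
  echo "kubectl rke-etcd-backup-restore -backup" \
       "--target-name $1 --target-namespace $2" \
       "--rancher-url $3 --bearer-token $4 --cluster-name $5"
}
cmd=$(build_backup_cmd demo-target trilio-ns https://rancher.example.com/ TOKEN demo-cluster)
echo "$cmd"        # review the command first...
# eval "$cmd"      # ...then execute it once the values are real
```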

Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster.

Important Additional Information

  1. If you restore a backup taken with a different T4K version than the one currently installed, the operation fails and cluster accessibility is lost. The workaround is to delete the current T4K installation and then retry the restore.

  2. Supported GLIBC version: 2.27.

  3. The plugin is tested with RKE1 on Kubernetes v1.21.9.
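To compare your machine against the supported GLIBC version, the local glibc version can be printed as follows. getconf itself is POSIX, but the GNU_LIBC_VERSION variable is glibc-specific, so an ldd fallback is included for other layouts.

```shell
# Hedged check for note 2: print the local glibc version so it can be
# compared against the supported 2.27.
getconf GNU_LIBC_VERSION 2>/dev/null \
  || ldd --version 2>/dev/null | head -n 1 \
  || echo "glibc version not detected"
```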