Restore Flags Guide
Overview
TrilioVault for Kubernetes (TVK) provides a comprehensive set of restore flags that give you granular control over how applications and data are restored from backups. These flags allow you to customize restore behavior for different scenarios, including disaster recovery, cross-cluster migrations, and application updates.
This guide provides detailed information about each restore flag, including its purpose, behavior, and examples.
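All of the flags in this guide are set on the TVK Restore custom resource. The minimal sketch below shows roughly how a few of them fit together; the names and namespaces are placeholders, and the exact placement of fields under spec can vary between TVK releases, so verify against the Restore CRD in your installation.

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: web-app-restore            # placeholder name
  namespace: restore-ns            # placeholder namespace
spec:
  source:
    type: Backup
    backup:
      name: web-app-backup         # placeholder backup reference
      namespace: backup-ns
  # Flags described in this guide (verify placement for your TVK version)
  skipIfAlreadyExists: true        # preserve resources that already exist
  cleanupOnFailure: true           # roll back partial changes if the restore fails
  protectRestoredApp: true         # recreate Target/BackupPlan/Policy after a successful restore
```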
Restore Flags
skipIfAlreadyExists
Purpose: Skips restoring resources that already exist in the restore namespace.
Behavior:
Checks if a resource with the same name already exists
If it exists, skips restoration and logs a warning
Only applicable to metadata resources (Deployments, Services, ConfigMaps, Secrets, Pods, Custom Resources, etc.), not data resources (PV/PVC)
Useful when you want to preserve existing resources
Scenario: Restoring to a namespace where some resources already exist.
Existing Resources in Namespace:
Deployment: web-app (exists with 2 replicas, custom configuration)
Service: web-app-service (does not exist)
ConfigMap: app-config (exists with different values)
Backup Contains:
Deployment: web-app (3 replicas, original configuration)
Service: web-app-service
ConfigMap: app-config (original values)
After Restore with skipIfAlreadyExists: true:
Restored Resources:
⚠️ Deployment: web-app - SKIPPED (already exists, preserved with 2 replicas)
✅ Service: web-app-service - RESTORED (did not exist)
⚠️ ConfigMap: app-config - SKIPPED (already exists, preserved with current values)
Result: Existing resources are preserved, and only missing resources are restored. The existing deployment continues running with its current configuration (2 replicas), and the service is newly created.
patchIfAlreadyExists
Purpose: Enables Trilio to patch existing metadata resources during restore using a 3-way merge, ensuring the cluster's current modifications are preserved while applying changes from the backup.
Behavior:
If a resource already exists → it is patched, not recreated
Works only for metadata resources (Deployments, Services, ConfigMaps, Secrets, Pods, CRs)
Does not apply to data resources (PVCs/PVs/VolumeSnapshots)
Prevents pre-restore validation failures when resources already exist
Merge Strategy:
Tracks 3 states: Original (backup) ⬌ Modified (desired) ⬌ Current (cluster)
Preserves local edits that don't conflict with backup
Patch types: strategic merge patch for built-in Kubernetes resources; JSON merge patch for CRDs and unregistered resources
Scenario: Restoring to a namespace where a deployment exists but was accidentally modified.
Existing Deployment in Cluster:
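For illustration, a hypothetical manifest of the drifted deployment; the replica count, image tag, and memory request are assumptions consistent with the result described below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 5                      # drifted from the backed-up value
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.20          # accidentally upgraded image
        resources:
          requests:
            memory: "256Mi"        # user-added request, not present in the backup
```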
Backup Contains (Original):
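A matching hypothetical sketch of the deployment as captured in the backup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.18          # original image version in the backup
```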
After Restore with patchIfAlreadyExists: true:
Patched Deployment:
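The 3-way merge would then produce roughly the following (hypothetical values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                      # restored from backup
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.18          # restored from backup
        resources:
          requests:
            memory: "256Mi"        # local, non-conflicting edit preserved
```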
Result: The deployment is patched to restore the original replica count (3) and image version (nginx:1.18) from backup, while preserving the user-added memory request. The resource is not recreated, maintaining its history and relationships.
Note: skipIfAlreadyExists and patchIfAlreadyExists are mutually exclusive. You cannot use both flags simultaneously.
retainHelmReleaseName
Purpose: Retains the original Helm release name during restore instead of generating a new unique name.
Behavior:
By default, a new unique release name is generated to avoid conflicts
When enabled, uses the original release name from backup
Applies to both standalone Helm charts and Helm-based operators
Special case: Automatically retained for local subchart dependencies
Scenario: Restoring a Helm-managed application.
Backup Contains:
Helm Release: my-web-app (original name)
Resources managed by Helm:
Deployment: my-web-app-web (name includes release name)
Service: my-web-app-service
PVC: my-web-app-data (name includes release name)
After Restore with retainHelmReleaseName: false (default):
Restored Resources:
✅ Helm Release: my-web-app-a1b2c3 - NEW NAME GENERATED
✅ Deployment: my-web-app-a1b2c3-web - RESTORED (new name)
✅ Service: my-web-app-a1b2c3-service - RESTORED (new name)
✅ PVC: my-web-app-a1b2c3-data - RESTORED (new name)
Result: Application is restored with a new release name to avoid conflicts. Resource names are updated to reflect the new release name.
After Restore with retainHelmReleaseName: true:
Restored Resources:
✅ Helm Release: my-web-app - ORIGINAL NAME RETAINED
✅ Deployment: my-web-app-web - RESTORED (original name)
✅ Service: my-web-app-service - RESTORED (original name)
✅ PVC: my-web-app-data - RESTORED (original name)
Result: Application is restored with the exact same release name and resource names as the backup, maintaining complete compatibility with existing BackupPlans or dependencies that reference the original release name.
restoreStorageClass
Purpose: Controls whether StorageClass resources are restored during the restore operation.
Behavior:
When true: Restores StorageClass resources from backup if they are not already present on the cluster
When false or nil: Skips StorageClass restoration
Default behavior: false (StorageClasses are not restored by default)
StorageClasses are cluster-scoped resources that define storage provisioners
If the backed-up StorageClass was the cluster default, it is not restored as the default
Scenario: Restoring to a new cluster that doesn't have the custom StorageClass.
Backup Contains:
StorageClass: fast-ssd (provisioner: kubernetes.io/aws-ebs, parameters: type=gp3)
PVC: app-data (storageClassName: fast-ssd, 100Gi)
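For reference, a StorageClass matching the description above might look like this; the reclaim policy and binding mode are assumptions, since only the provisioner and parameters are given.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
reclaimPolicy: Delete                     # assumption; not stated in the backup description
volumeBindingMode: WaitForFirstConsumer   # assumption
```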
Cluster State Before Restore:
❌ StorageClass: fast-ssd - NOT PRESENT
❌ PVC: app-data - NOT PRESENT
After Restore with restoreStorageClass: true:
Restored Resources:
✅ StorageClass: fast-ssd - RESTORED (cluster-scoped)
✅ PVC: app-data - RESTORED (can now bind because the StorageClass exists)
After Restore with restoreStorageClass: false (default):
Restored Resources:
❌ StorageClass: fast-ssd - SKIPPED
⚠️ PVC: app-data - RESTORED but may remain in Pending state (StorageClass missing)
Result: With the flag enabled, both the StorageClass and PVC are restored, ensuring the PVC can bind properly. Without the flag, the PVC may fail to bind if the StorageClass doesn't exist in the cluster.
restoreVMMACAddress
Purpose: Preserves Virtual Machine MAC addresses during restore instead of removing them.
Behavior:
By default, MAC addresses are removed to prevent conflicts
When enabled, original MAC addresses from backup are preserved
Applies to VirtualMachine and VirtualMachinePool resources
Handles multiple network interfaces
Scenario: Restoring a Virtual Machine with a licensed application that depends on MAC address.
Backup Contains VirtualMachine:
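A minimal KubeVirt sketch showing where the MAC address is captured; the VM name and MAC value are hypothetical.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: licensed-app-vm                    # hypothetical VM name
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
            macAddress: "02:42:ac:11:00:05"   # captured in the backup
      networks:
      - name: default
        pod: {}
```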
After Restore with restoreVMMACAddress: false (default):
Restored VirtualMachine:
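With the default behavior, the restored interface comes back without the MAC (sketch):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: licensed-app-vm
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
            # macAddress removed; KubeVirt auto-assigns a new MAC
      networks:
      - name: default
        pod: {}
```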
Result: VM is restored but MAC addresses are removed. KubeVirt auto-assigns new MAC addresses. If the application license is tied to the original MAC address, the license may not work.
After Restore with restoreVMMACAddress: true:
Restored VirtualMachine:
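With the flag enabled, the backed-up MAC is carried over unchanged (sketch):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: licensed-app-vm
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
            macAddress: "02:42:ac:11:00:05"   # preserved from the backup
      networks:
      - name: default
        pod: {}
```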
Result: VM is restored with the exact same MAC addresses as the backup. The licensed application recognizes the MAC address and continues to work correctly.
Note: Cannot be used together with custom MAC address transformations.
imageRestore
Purpose: Enables restoration of container images from backup to a target registry.
Behavior:
Extracts images from qcow2 format stored in backup
Pushes images to the specified target registry if an image with the same SHA does not already exist in the original or target registry
Updates resource specifications to use restored images
Handles image existence checks and tag generation
Scenario: Restoring an application where the original registry is not accessible (air-gapped environment).
Backup Contains:
Deployment with image: quay.io/original-repo/web-app:v1.2.3 (original registry not accessible)
Service: web-app-service
ConfigMap: app-config
Target Registry Configuration:
Registry: docker.io
Repository: my-company/restored-images
After Restore with imageRestore: true:
Step 1: Image Restore Phase
✅ Image extracted from qcow2 backup
✅ Image pushed to: docker.io/my-company/restored-images/web-app:v1.2.3
✅ Image SHA verified and updated
Step 2: Metadata Restore Phase
✅ Deployment - RESTORED with updated image reference:
Before: quay.io/original-repo/web-app:v1.2.3
After: docker.io/my-company/restored-images/web-app:v1.2.3
✅ Service: web-app-service - RESTORED
✅ ConfigMap: app-config - RESTORED
Result: The application is restored with images available in the new registry. Pods can pull images from docker.io/my-company/restored-images/web-app:v1.2.3 instead of the inaccessible original registry.
Required Configuration: When imageRestore: true, you must provide imageRegistry configuration with registry, repository, and authentication secret.
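A sketch of the kind of registry configuration this implies; the nested field names (imageRegistry, registry, repository, authSecret) are assumptions based on the description above, so check the Restore CRD in your TVK release for the exact schema.

```yaml
spec:
  imageRestore: true
  imageRegistry:                           # field names are assumptions; verify against your Restore CRD
    registry: docker.io
    repository: my-company/restored-images
    authSecret:                            # secret holding registry credentials
      name: registry-creds
      namespace: restore-ns
```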
overrideImageIfExist
Purpose: Controls whether to override existing images in the target registry when an image with the same name and tag already exists.
Behavior:
When true: Overwrites existing images that have the same name and tag
When false: Generates incremental tags (e.g., tag-1, tag-2) to avoid conflicts
Works together with the imageRestore flag
Updates the image SHA after restore
Scenario: Restoring an image that already exists in the target registry with a different SHA.
Backup Contains:
Image: web-app:v1.2.3 with SHA abc123... (from backup)
Target Registry State:
Image exists: docker.io/my-repo/web-app:v1.2.3 with SHA xyz789... (different content)
After Restore with overrideImageIfExist: false (default):
Image Restore:
✅ System detects existing image with different SHA
✅ Generates new tag: web-app:v1.2.3-1
✅ Pushes image to: docker.io/my-repo/web-app:v1.2.3-1 with SHA abc123...
Metadata Restore:
✅ Deployment updated to use: docker.io/my-repo/web-app:v1.2.3-1
Result: The existing image is preserved, and a new version is created with an incremental tag. Both versions coexist in the registry.
After Restore with overrideImageIfExist: true:
Image Restore:
✅ System detects existing image
✅ Overwrites existing image: docker.io/my-repo/web-app:v1.2.3
✅ New SHA: abc123... (replaces xyz789...)
Metadata Restore:
✅ Deployment updated to use: docker.io/my-repo/web-app:v1.2.3
Result: The existing image is overwritten with the backup version. The old image content is replaced, and the deployment uses the restored image version.
skipTLSVerify
Purpose: Skips TLS certificate verification when pushing/pulling images to/from registries.
Behavior:
Bypasses certificate chain and hostname verification
Allows connections to registries with self-signed or untrusted certificates
Applies to both authentication and image operations
Configured in the ImageRegistry specification
Scenario: Restoring images to an internal registry with self-signed certificates.
Backup Contains:
Deployment with image: quay.io/public/web-app:v1.2.3
Target Registry:
Registry: https://internal-registry.company.local (self-signed certificate)
Repository: restored-images
After Restore with skipTLSVerify: false (default):
Image Restore Attempt:
❌ Registry authentication fails: x509: certificate signed by unknown authority
❌ Image push fails: TLS verification error
❌ Restore fails with certificate errors
Result: Restore cannot complete because the registry's self-signed certificate is not trusted.
After Restore with skipTLSVerify: true:
Image Restore:
✅ Registry authentication succeeds (TLS verification skipped)
✅ Image pushed to: https://internal-registry.company.local/restored-images/web-app:v1.2.3
✅ Image SHA verified
Metadata Restore:
✅ Deployment - RESTORED with updated image:
Before: quay.io/public/web-app:v1.2.3
After: https://internal-registry.company.local/restored-images/web-app:v1.2.3
Result: Images are successfully restored to the internal registry despite self-signed certificates. The application can pull images from the internal registry.
Security Note: This flag should only be used in development/test environments. For production, use proper CA certificates instead of skipping verification.
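Building on the registry sketch shown for imageRestore, the flag would sit inside the same ImageRegistry specification; the field placement is an assumption, and the setting should be limited to dev/test registries.

```yaml
spec:
  imageRestore: true
  imageRegistry:                           # field names are assumptions; verify against your Restore CRD
    registry: https://internal-registry.company.local
    repository: restored-images
    skipTLSVerify: true                    # accept the registry's self-signed certificate (dev/test only)
    authSecret:
      name: internal-registry-creds
      namespace: restore-ns
```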
Action Flags
cleanupOnFailure
Purpose: Automatically cleans up partially restored resources when a restore operation fails, ensuring the cluster returns to its pre-restore state.
Behavior:
Triggers automatically when the restore status becomes Failed
Deletes newly created resources that were restored
Reverts existing resources that were patched during restore (using last-applied-configuration annotation)
Deletes PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) created during restore
Deletes datamover jobs that were created for data restore
Cleans up OLM resources for operator applications
Can be enabled even after restore has failed (one-way flag: can be set to true, but cannot be changed back to false)
Does not trigger for validation phase failures (RestoreValidation, RestoreTargetValidation)
Scenario: A restore operation fails partway through, leaving some resources created and some existing resources modified.
Initial Cluster State:
Deployment: web-app (exists with 2 replicas, image: nginx:1.20)
Service: web-app-service (does not exist)
ConfigMap: app-config (exists with current values)
Backup Contains:
Deployment: web-app (3 replicas, image: nginx:1.18)
Service: web-app-service
ConfigMap: app-config (backup values)
PVC: web-data (100Gi)
Restore Process (with patchIfAlreadyExists: true):
✅ Deployment web-app - PATCHED (replicas: 2→3, image: nginx:1.20→nginx:1.18)
✅ Service web-app-service - CREATED
✅ ConfigMap app-config - PATCHED (values updated)
❌ PVC web-data - FAILED (storage quota exceeded)
Restore Status: Failed
After Restore Failure with cleanupOnFailure: false (default):
Cluster State:
⚠️ Deployment: web-app - MODIFIED (still has 3 replicas and nginx:1.18 from the restore)
⚠️ Service: web-app-service - CREATED (orphaned resource)
⚠️ ConfigMap: app-config - MODIFIED (still has backup values)
❌ PVC: web-data - NOT CREATED (failed)
Result: Cluster is in an inconsistent state. Partially restored resources remain, and existing resources were modified but the restore didn't complete.
After Restore Failure with cleanupOnFailure: true:
Cleanup Process:
✅ Service web-app-service - DELETED (newly created resource)
✅ Deployment web-app - REVERTED (back to 2 replicas, nginx:1.20 using last-applied-config)
✅ ConfigMap app-config - REVERTED (back to original values using last-applied-config)
✅ Datamover jobs - DELETED (if any were created)
Final Cluster State:
✅ Deployment: web-app - REVERTED (back to original: 2 replicas, nginx:1.20)
✅ Service: web-app-service - DELETED (never existed originally)
✅ ConfigMap: app-config - REVERTED (back to original values)
✅ No PVC created (restore failed before creation)
Result: Cluster is returned to its exact pre-restore state. All changes made during the failed restore are undone, and newly created resources are removed.
Important Notes:
The flag can be enabled at any time, even after a restore has already failed
Once set to true, it cannot be changed back to false (prevents accidental disabling)
Cleanup uses the last-applied-configuration annotation to revert patched resources to their original state
If a resource was created during restore (not patched), it is deleted
Data volumes (PVCs/PVs) are deleted with background propagation policy to ensure complete cleanup
protectRestoredApp
Purpose: Automatically creates protecting resources (Target, BackupPlan, Hooks, Policies, Secrets) after a successful restore.
Behavior:
Runs after successful application restore
Recreates Target, BackupPlan, Hooks, Policies, and Target Secrets
Handles resource conflicts intelligently (creates with new names if conflicts exist)
Updates dependencies automatically (e.g., updates Target reference in BackupPlan)
Scenario: Restoring an application and automatically setting up backup protection.
Backup Contains:
Application: Deployment web-app, Service web-app-svc, PVC web-data
Original BackupPlan: web-app-backup-plan (references Target s3-target)
Original Target: s3-target (S3 bucket configuration)
Original Policy: retention-policy (30-day retention)
After Application Restore (Before Protection Phase):
✅ Deployment: web-app - RESTORED
✅ Service: web-app-svc - RESTORED
✅ PVC: web-data - RESTORED
❌ BackupPlan: web-app-backup-plan - NOT PRESENT
❌ Target: s3-target - NOT PRESENT
❌ Policy: retention-policy - NOT PRESENT
After Restore with protectRestoredApp: true (Protection Phase Complete):
Restored Application:
✅ Deployment: web-app - RESTORED
✅ Service: web-app-svc - RESTORED
✅ PVC: web-data - RESTORED
Created Protecting Resources:
✅ Target: s3-target - CREATED (from backup)
✅ Policy: retention-policy - CREATED (from backup)
✅ BackupPlan: web-app-backup-plan - CREATED (references the restored Target and Policy)
Result: The application is fully restored and immediately protected. You can create a new Backup using the restored BackupPlan without manually creating Target, Policy, or BackupPlan resources.
restoreNamespaceLabelsAnnotations
Purpose: Restores namespace labels and annotations from backup to the restored namespace during restore operations. This ensures that external systems and integrations that depend on namespace metadata continue to function correctly after restore.
Behavior:
Captures namespace labels and annotations from backup
Applies them to the restored namespace using one of three strategies: Override, Append, or Replace
Optional feature, disabled by default
Supports both single namespace and cluster restore operations
For cluster restores, can be configured globally or per-namespace
For detailed information about restoration strategies, configuration examples, and use cases, see Restore Namespace Labels and Annotations.
Advanced Flags
patchCRD
Purpose: Patches existing CustomResourceDefinitions (CRDs) instead of skipping them.
Behavior:
Detects existing CRDs with the same name
Performs a 3-way merge patch to update CRD specifications
Validates CRD compatibility (scope and version) before patching
Ensures CRD changes are applied safely
Scenario: A CRD exists in the cluster but needs to be updated with schema changes from backup.
Existing CRD in Cluster:
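For illustration, a hypothetical CRD trimmed to the relevant schema; the group, kind, and field names are invented for this example.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webapps.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: WebApp
    plural: webapps
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              image:
                type: string       # no replicas field yet
```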
Backup Contains (Updated CRD):
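The backed-up CRD carries an extended schema for the same version; only the changed fragment is shown (hypothetical):

```yaml
# Schema fragment of the backed-up CRD (other fields unchanged)
properties:
  spec:
    type: object
    properties:
      image:
        type: string
      replicas:
        type: integer              # new field added in the backup
```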
After Restore with patchCRD: true:
Patched CRD:
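After the 3-way merge patch, the cluster's CRD serves the extended schema (fragment):

```yaml
# Schema fragment of the patched CRD in the cluster
properties:
  spec:
    type: object
    properties:
      image:
        type: string
      replicas:
        type: integer              # now usable by Custom Resources
```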
Result: The CRD is patched to include the new replicas field from the backup, allowing Custom Resources to use this new field. Existing Custom Resources continue to work, and new ones can use the updated schema.
Note: patchCRD and skipOperatorResources are mutually exclusive. You cannot use both flags simultaneously.
onlyMetadata
Purpose: Restores only the metadata components of an application, skipping data volumes.
Behavior:
Restores Kubernetes resources like Deployments, Services, ConfigMaps, Secrets, Pods, Custom Resources, etc.
Skips PersistentVolumeClaims (PVCs), PersistentVolumes (PVs), and VolumeSnapshots
Useful when you want to restore application configuration without restoring data
StorageClass: Can be restored if restoreStorageClass: true is set
Backup Contains:
Deployment: web-app (3 replicas)
Service: web-app-service
ConfigMap: app-config
Secret: app-secret
PVC: web-app-data (10Gi)
After Restore with onlyMetadata: true:
Restored Resources:
✅ Deployment: web-app (3 replicas) - RESTORED
✅ Service: web-app-service - RESTORED
✅ ConfigMap: app-config - RESTORED
✅ Secret: app-secret - RESTORED
❌ PVC: web-app-data - SKIPPED (data resource)
Result: Application structure is restored, but pods will be in Pending state because the PVC is missing. You can manually create the PVC or restore it separately.
onlyData
Purpose: Restores only the data volume components, skipping metadata resources.
Behavior:
Restores PersistentVolumeClaims (PVCs), PersistentVolumes (PVs), and VolumeSnapshots
Skips all metadata resources like Deployments, Services, ConfigMaps, etc.
Useful when you want to restore data volumes without recreating the application structure
StorageClass: Can be restored if restoreStorageClass: true is set
Operator resources: When onlyData: true, operator resources and custom resources are skipped
Backup Contains:
Deployment: database-app (2 replicas)
Service: database-service
ConfigMap: db-config
PVC: database-data (50Gi) with data
PVC: database-logs (20Gi) with data
After Restore with onlyData: true:
Restored Resources:
❌ Deployment: database-app - SKIPPED (metadata resource)
❌ Service: database-service - SKIPPED (metadata resource)
❌ ConfigMap: db-config - SKIPPED (metadata resource)
✅ PVC: database-data (50Gi) - RESTORED with data
✅ PVC: database-logs (20Gi) - RESTORED with data
Result: Data volumes are restored and available, but the application is not running. You need to deploy the application separately to use the restored data.
omitMetadata
Purpose: Removes labels and annotations from resources during restore.
Behavior:
Strips all labels and annotations from restored resources
Preserves the resource name
The flag is skipped for resources that are part of a Helm release, because Helm needs the metadata to manage those resources
Backup Contains Deployment with:
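For illustration, hypothetical metadata on the backed-up deployment; the labels and annotations are invented for this example, and the spec is unchanged and omitted here.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
    team: platform                           # illustrative label
  annotations:
    deployment.kubernetes.io/revision: "4"
    build.example.com/commit: a1b2c3d        # illustrative annotation
```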
After Restore with omitMetadata: true:
Restored Deployment:
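After restore, only the resource name remains in metadata (sketch):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                              # name preserved; labels and annotations stripped
# spec restored unchanged (omitted here for brevity)
```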
Result: The deployment is restored with a clean metadata section, without any labels or annotations from the backup. The resource name and spec remain intact.
skipOperatorResources
Purpose: The skipOperatorResources flag skips restoring operator infrastructure (Deployments, Services, CRDs, etc.) and only restores Custom Resources (CRs) and their managed resources. Use this when the operator is already installed and you only need to restore the application.
Behavior:
Skips operator Deployments, Services, ConfigMaps, and other infrastructure components
Restores only Custom Resources (CRs) that the operator manages
Assumes the operator is already installed in the target cluster
Also skips operator-related data volumes if present
Scenario: MySQL Operator is already installed in the cluster, but the MySQL database instance needs to be restored.
Backup Contains:
Operator Deployment: mysql-operator (infrastructure)
Operator Service: mysql-operator-service (infrastructure)
CRD: mysqlclusters.mysql.oracle.com (infrastructure)
Custom Resource: MySQLCluster named production-db (application)
PVC: mysql-data (50Gi) (data)
Cluster State Before Restore:
✅ Operator Deployment: mysql-operator - ALREADY EXISTS
✅ Operator Service: mysql-operator-service - ALREADY EXISTS
✅ CRD: mysqlclusters.mysql.oracle.com - ALREADY EXISTS
❌ Custom Resource: production-db - NOT PRESENT
❌ PVC: mysql-data - NOT PRESENT
After Restore with skipOperatorResources: true:
Restored Resources:
⏭️ Operator Deployment: mysql-operator - SKIPPED (operator infrastructure)
⏭️ Operator Service: mysql-operator-service - SKIPPED (operator infrastructure)
⏭️ CRD: mysqlclusters.mysql.oracle.com - SKIPPED (operator infrastructure)
✅ Custom Resource: MySQLCluster named production-db - RESTORED (application)
✅ PVC: mysql-data (50Gi) - RESTORED (data)
Result: Only the MySQL database Custom Resource and its data are restored. The existing operator detects the restored CR and creates the necessary StatefulSet, Services, and Pods to run the database instance.
disableIgnoreResources
Purpose: Disables the default ignore list, allowing normally ignored resources to be restored.
Behavior:
By default, certain resources are ignored (Namespace, APIService, OpenShift Console resources)
When enabled, these resources are restored
Applies to both validation and restore phases
Useful for advanced restore scenarios
Scenario: Restoring a backup that includes Namespace resources.
Backup Contains:
Namespace: production (normally ignored)
Deployment: web-app in namespace production
Service: web-app-svc in namespace production
ConfigMap: app-config in namespace production
After Restore with disableIgnoreResources: false (default):
Restored Resources:
❌ Namespace: production - IGNORED (default behavior)
⚠️ Deployment: web-app - RESTORED (but the namespace must exist)
⚠️ Service: web-app-svc - RESTORED (but the namespace must exist)
⚠️ ConfigMap: app-config - RESTORED (but the namespace must exist)
Result: Resources are restored, but you must create the namespace manually first; the restore fails if the namespace does not exist.
After Restore with disableIgnoreResources: true:
Restored Resources:
✅ Namespace: production - RESTORED (normally ignored, now restored)
✅ Deployment: web-app - RESTORED in namespace production
✅ Service: web-app-svc - RESTORED in namespace production
✅ ConfigMap: app-config - RESTORED in namespace production
Result: The namespace is restored along with all resources, creating a complete restore including the namespace itself.
useOCPNamespaceUIDRange
Purpose: Ensures that restored data on OpenShift uses the UID range defined by the target namespace's SCC (Security Context Constraints) instead of the original UIDs stored in the backup.
Behavior:
OpenShift-specific flag (only works on OpenShift clusters)
Creates helper PVCs in install namespace during primitive restore
Uses namespace SCC UID range annotation for file ownership
Ensures data ownership complies with OpenShift security policies
Scenario: Restoring data to an OpenShift namespace with a different UID range than the source.
Backup Contains:
PVC: app-data (100Gi) with files owned by UID 1000 (from the source namespace)
Source Namespace (Backup):
UID Range: 1000/10000 (annotation: openshift.io/sa.scc.uid-range: 1000/10000)
Files in the PVC owned by: UID 1000
Target Namespace (Restore):
UID Range: 2000/10000 (annotation: openshift.io/sa.scc.uid-range: 2000/10000)
SCC requires files to be owned by UIDs in the range 2000-11999
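The UID range comes from the namespace annotation quoted above; on the target namespace it would look roughly like this (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-restore                          # hypothetical target namespace
  annotations:
    openshift.io/sa.scc.uid-range: 2000/10000   # UID range enforced by the SCC
```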
After Restore with useOCPNamespaceUIDRange: false (default):
Restored PVC:
✅ PVC: app-data - RESTORED
⚠️ File ownership: UID 1000 (original from backup)
❌ Problem: Pods in target namespace cannot access files (UID mismatch with SCC)
After Restore with useOCPNamespaceUIDRange: true:
Restored PVC:
✅ Helper PVC created in install namespace
✅ DataOwnerUpdate job updates file ownership to UID 2000 (target namespace range)
✅ PVC: app-data - RESTORED with files owned by UID 2000
✅ Pods can access files (UID matches SCC requirements)
Result: Data is restored with file ownership that complies with the target namespace's SCC UID range, ensuring pods can access the data without permission errors.
resourcesReadyWaitSeconds
Purpose: Configures the wait time (in seconds) for restored application resources to become ready before proceeding.
Behavior:
Default: 600 seconds (10 minutes)
Minimum: 0 seconds
Maximum: 1200 seconds (20 minutes)
Waits for Deployments, StatefulSets, Pods, etc. to become ready
Checks replica availability every 5 seconds
Scenario: Restoring an operator application where the operator needs time to start before Custom Resources can be created.
Backup Contains:
Operator Deployment: mysql-operator (needs to start and register webhooks)
Custom Resource: MySQLCluster named production-db
Restore Process with resourcesReadyWaitSeconds: 300 (5 minutes):
Step 1: Operator Resources Restored
✅ Deployment: mysql-operator - RESTORED (but pods not ready yet)
Step 2: Wait Phase (5 minutes)
⏳ System waits for the mysql-operator deployment pods to become ready
⏳ Checks every 5 seconds: are all replicas available?
⏳ After 2 minutes: Pods are ready, webhooks are registered
Step 3: Custom Resource Restored
✅ Custom Resource: MySQLCluster named production-db - RESTORED
✅ Webhook validation succeeds (operator is ready)
✅ Operator creates StatefulSet and manages the database
Result: The system waits for the operator to be ready before attempting to restore Custom Resources, preventing webhook validation failures.
If resourcesReadyWaitSeconds: 0 or too short:
⚠️ Custom Resource restore may fail with webhook errors
⚠️ Operator webhooks not ready to validate CR creation
Best Practices
Use skipIfAlreadyExists or patchIfAlreadyExists when restoring to namespaces with existing resources
Enable protectRestoredApp for production restores to ensure immediate backup capability
Enable cleanupOnFailure for production restores to automatically clean up on failure
Set an appropriate resourcesReadyWaitSeconds for applications with webhooks or slow startup times
Use onlyMetadata or onlyData for granular control over what gets restored
Enable restoreStorageClass when migrating between clusters with different storage configurations
Use skipOperatorResources when operators are already installed in the target cluster
Avoid skipTLSVerify in production - use proper CA certificates instead
Use restoreNamespaceLabelsAnnotations when restoring applications that depend on namespace metadata (service mesh, policies, etc.)