Release Notes

Known issues and limitations with the current release are provided in this section.

3.1.3

Bugs fixed in this release:

  • Reverted UI modifications on the Backup overview page. Clicking a Backup row now opens the Backup details popup.

  • Resolved a user interface (UI) issue in the BackupPlan creation form by removing the selected application count from the tab heading.

3.1.2

Bugs fixed in this release:

  • Fixed an issue where the control plane was crashing while using Azure blob storage.

  • Fixed an issue where YAML loading failed while browsing the target.

  • Fixed the phase timings reported during Backups.

  • Fixed UI placement issues after editing BackupPlan.

3.1.1

Features Added:

  • Added backup and recovery support for virtual machines running on upstream Kubernetes with KubeVirt.

Bugs fixed in this release:

  • Added support to back up Helm charts containing sub-charts from the OCI repository.

  • Fixed an issue where Resources created by the OLM operator were not backed up.

  • Fixed an issue where the Exclude Resources section was not showing all resources from the selected namespace while creating a backup plan.

  • Handled an issue where backup failed when a VM had a disk attached as a datavolume under spec.template.spec.volume[0].datavolume.

3.1.0

Features Added:

  • Added backup and recovery support for virtual machines running on OpenShift.

  • Added multi-language support for UI. Initially, only Spanish is supported in addition to English.

  • Updated Encryption support so users can specify the encryption secret while browsing their backups.

  • The continuous restore feature now supports restores through a ContinuousRestorePlan. Previously, restores were supported using ConsistentSet only.

Bugs fixed in this release:

  • Removed license type, CPU/Node resource, and cluster identifier validations from the license.

  • Handled a PVC name transformation issue during restores.

  • Handled an issue where the Agreement button occasionally got stuck.

  • Fixed the Protecting restored application feature, which was breaking with object storage.

Known issues in this release:

  • Transformation while performing a restore is not working properly if there is a "/" (forward slash) in the key.

Example: in the path /metadata/annotations/storageclass.kubernetes.io/is-default-class, the key storageclass.kubernetes.io/is-default-class itself contains a forward slash.
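For illustration, the kind of transform entry that triggers this issue can be sketched as a JSON-patch-style snippet. The `jsonPatches` field name and structure below are assumptions, not the exact T4K schema; note that under RFC 6901, a "/" inside a key would normally be escaped as "~1":

```yaml
# Hypothetical transform entry; field names are illustrative only.
# The annotation key itself contains a "/", so this pointer path is ambiguous:
#   /metadata/annotations/storageclass.kubernetes.io/is-default-class
# Under RFC 6901 escaping, the "/" inside the key becomes "~1":
jsonPatches:
  - op: replace
    path: "/metadata/annotations/storageclass.kubernetes.io~1is-default-class"
    value: "false"
```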

Known Limitations in this release:

  • Continuous Restore is not supported for OpenShift Virtualization.

3.0.4

Bugs fixed in this release:

  • Updated dex container image to fix the login related issues.

  • Added support to back up Helm charts containing sub-charts from the OCI repository.

  • PVC name transformation support has been added to the restore process.

  • Fixed an issue that occurred while accepting the agreement after login.

  • Fixed an issue where Resources created by the OLM operator were not backed up.

3.0.3

Bugs fixed in this release:

  • A race condition that could be triggered when multiple backups were scheduled in parallel resulted in failed backups.

  • Some clusterBackupPlans carried over from a 2.10.x installation and upgraded to a 3.0.x release would fail to run.

  • Helm application backups were failing when Helm's revision history was set to the default of 10.

  • ClusterRole and ClusterRoleBindings related to Trilio backup operations were not getting cleaned up.

  • There was a permissions issue with the NFS sync pod.

  • Increased the default memory limit for data jobs from 5Gi to 10Gi.

Known issues in this release:

  • Transformation while performing a restore is not working properly if there is a "/" (forward slash) in the key.

Example: in the path /metadata/annotations/storageclass.kubernetes.io/is-default-class, the key storageclass.kubernetes.io/is-default-class itself contains a forward slash.

3.0.2

3.0.2 is a maintenance release, released on December 13, 2022, and includes the following fixes:

  • Backups fail for robin.io.

  • Dex auth is not working.

  • Creating an Azure target always comes back as Unavailable.

  • Continuous restore fails for a Helm release with transform.

  • Master encryption secret not present in the case of OCP.

  • Inbound and outbound dashboard issues.

  • The cleanup policy is not working.

  • T4K 3.0.1 breaks the oc delete ns command.

3.0.0

Continuous Restore

Continuous restore is a brand-new feature in the 3.0.0 release. It significantly enhances the disaster recovery of applications in terms of time to recovery and scale. With continuous restore, T4K can restore applications' data from their latest backup images to one or more clusters. Users can specify the number of clusters, and the number of latest backups to restore at each cluster, in the application's backupplan.

Use T4K Pay As You Go Model on AWS

T4K is available on the AWS marketplace with a pay-as-you-go model. You can install T4K on your clusters from the AWS marketplace and pay based on the number of CPUs used per hour. There are no wasted resources; you only pay for the hours you use T4K. There is no need to pay upfront; install T4K and start backing up your applications.

Metadata Encryption In T4K

Now T4K supports metadata encryption as well. Earlier releases only supported encryption for application data. 3.0.0 extends the encryption to application metadata.

Observability in T4K

The T4K UI has been enhanced to display T4K logs, so users can view and analyze backup and restore activity from the T4K UI.

Installation Method change for OpenShift

In previous releases, there was no Trilio operator on the OCP side to watch T4K resources and apply configuration-level changes to T4K; configuration and installation happened directly through the OLM operator. The OLM operator limits the configuration that can be done through the OCP UI, and it does not retain any configuration made on the CSV after an upgrade. This release brings up our operator through OLM and creates T4K through a TVM CR.

2.10.6

Release Date

2022-09-23

What's New

  1. T4K is now available on the AWS marketplace with BYOL option.

Bugs Fixed

  1. PersistentVolumeClaims were not getting excluded from backups. This has been fixed in this release.

Known Issues

  1. When a backup is taken from one cluster, the backup target is replicated to a second cluster, and a restore is then attempted, the restore may fail. This failure is triggered by the PackageManifest resource. The workaround is to exclude the PackageManifest:

excludeResources:
  gvkSelector:
    - groupVersionKind:
        group: "packages.operators.coreos.com"
        kind: "PackageManifest"
        version: "v1"

2.10.5

Release Date

2022-09-02

What's New

  1. T4K can now accept annotations as input. These annotations can be passed to all jobs and deployments. This feature was implemented specifically to skip sidecar injection by a service mesh like Istio or a security platform like Portshift.

  2. Added support to specify the log level for the data mover. Previously, debugging required running a temporary data mover pod with the log level enabled. Users can now specify the logging level in the k8s-triliovault-config ConfigMap: in case of data mover failures, specify datamoverLogLevel: Debug in the k8s-triliovault-config ConfigMap and check the logs.

  3. In the case of non-CSI volumes, T4K backs up the PV and PVC configuration. Previously, transformation of the PV configuration was not supported when restoring such a configuration. PV transformation support has now been added.
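The data mover log-level setting described above can be sketched as follows. The ConfigMap name and the datamoverLogLevel key come from the release note; the namespace and the exact placement under .data are assumptions:

```yaml
# Sketch only: name and key are from the release note; the
# namespace and placement under .data are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-triliovault-config
  namespace: trilio-system   # assumed install namespace
data:
  datamoverLogLevel: Debug   # enable debug logging for the data mover
```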

Bugs Fixed

  1. Fixed invalid encryption key issues. A webhook now validates the encryption key to avoid backup failures caused by an invalid encryption key.

  2. The cluster backup plan continuously cycled between the InProgress and Unavailable states if no namespace matching the labelSelectors was present. This has been fixed.

  3. Fixed the ambiguous license error message.

  4. On OCP 4.11, the triliovault-dex pod failed because of recent Kubernetes changes in handling service accounts and tokens. This has been fixed.

Known Issues:

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where many objects exist, for which additional processing power is needed. Users may easily adjust their environment’s memory and CPU limits as a workaround for this issue. Please refer to the troubleshooting section to find more details on achieving this.

  • Data browsing for encrypted backups is not yet supported.

2.10.4

Release Date

2022-08-20

Bugs Fixed

  • Restore from a backup using the target browser functionality was not working. It is fixed in this release.

2.10.3

Release Date

2022-08-17

What's New

  • UI Customization

    1. Logo: the application logo (places affected: login page, left navbar).

    2. Favicon: the icon shown on the browser tab.

    3. CSS: the whole CSS (theme, font, alignment) of the app can be changed by targeting element classes and writing custom CSS.

    4. Login page text: custom text is supplied in a JSON file, and the login page text is changed based on its contents.

  • Introduced a new option on the S3 target spec to ignore client certificate validation

  • DEX support is enhanced to support multiple connector types in a generic format

  • Added support for custom resource transformation during a restore operation

Bugs Fixed

  • Cinder snapshots were not getting deleted when Kubernetes was configured to use the OpenStack Cinder CSI snapshot driver.

  • Target PVCs were getting backed up when the backup plan was using gvkSelector. Now T4K will exclude Target PVCs from backup.

  • Container image backups were failing with immutable backup targets. This has been fixed.

  • NFS Target PVC storage classes were created with random names, making it difficult to exclude the storage from the storage quota. 2.10.3 creates the storage class with a well-known name.

  • Increased resource limits of the control plane and web backend pods.

Known Issues

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where many objects exist, for which additional processing power is needed. Users may easily adjust their environment’s memory and CPU limits as a workaround for this issue. Please refer to the troubleshooting section to find more details on achieving this.

  • Data browsing for encrypted backups is not yet supported.

2.10.2

Release Date

2022-08-01

What's New

  • No new features are provided as part of this release.

Bugs Fixed

  • T4K versions <= 2.10.1 had a timeout issue (OpenShift login not happening because of an application timeout). This has been handled by increasing the timeout to 300 seconds. The timeout value will be configurable in a future release.

  • With previous T4K versions, the exporter was failing while serving metrics. This has been fixed.

  • With previous T4K versions, image backups were failing with service account issues. This has been fixed.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a large number of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment’s memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

  • Data browsing for encrypted backups is not yet supported.

  • Image backups are failing with immutable targets. This will be handled in an upcoming release.

  • The restore operation along with container images might fail in one case. If a user restores container images to a different registry/repository, the image field must be updated in all resources at restore time. However, if a user restores container images to a different registry/repository and has an operator restoring with custom resources that create resources like deployments, jobs, pods, and statefulsets, the logic to update the image field in those resources is not supported yet. So, resources created by custom resources will end up using the old images, i.e. not the restored container images.

2.10.0

Release Date

2022-07-11

What's New

  • The T4K UI has been updated for the backup and restore workflows. Form wizards have been added to make workflows simpler and easy to access for users.

  • Backup and restore support has been added for a user application's container images. For any application in Kubernetes, the container images are an important building block on which the entire application relies to run. These images are pulled from a registry for the containers to use. Previously, we supported backup of the application, but the container images were not backed up. Given that a registry image could be deprecated and deleted, there was always the potential that you could not restore your backup and run the restored application; our backups depended on external registries still having the images used in the application. This image backup and restore feature enables backups to be self-reliant, so they can be restored in any environment without depending on the original registries.

  • Data browsing support has been added in this release, enabling users to browse any application's backed-up data. Previously, target browser support was added in T4K, which enabled users to browse the backups stored on the target. As part of backups, we store the data part of the application on a disk, but until now, browsing the content of the data disk was not supported by the target browser. From this release, support has been added to browse the content of the data disk. Users can view the backed-up application's data as part of data browsing.

Bugs Fixed

  • The cleanup sequence has been updated to fix a bug causing the VolumeSnapshot to not be deleted. This is specific to Cinder storage.

  • With several Kubernetes distributions, a login issue was experienced whilst using kubeconfig. This issue has been fixed from this version.

  • A fix has been provided for an issue where configmaps for backups were not being cleaned up after backups.

  • In previous versions, incremental backup worked for the addition/deletion of files/directories, but if a file was updated, the complete file was backed up in incremental backups. This bug is now fixed, so that only the changed data of the file is backed up.

  • An intermittent issue which caused the backup-cleanup pod to get stuck in the ContainerCreating state has now been fixed.

  • In previous versions, the order of clusters in a multi-cluster scenario was random. This has been fixed, so that clusters are displayed in alphabetical order.

  • Previously, an incorrect error message was displayed during licensing; this has now been fixed so that the message shows the CPU count.

Known Issues/Limitations

  • Data browsing for encrypted backups is not yet supported.

  • There is one case where the restore operation with the container images restore can fail. If a user is restoring container images in a different registry/repository, then the image field must be updated in all the resources at the time of the restore. However, if a user is restoring container images in a different registry/repository, with an operator restoring with custom resources (which create resources like deployments, jobs, pods, and statefulsets), then the logic to update the image field in these resources is not supported yet. So, resources created by custom resources will end up using old images; i.e. not the restored container images.

2.9.4

Release Date

2022-06-27

What's New

  • No new features are provided as part of this release.

Bugs Fixed

  • A fix has been provided for an issue where the metamover snapshot job occasionally failed with a 'backupScope is not supported' error displayed.

  • A fix relating to the offline installer has been implemented. Previously it was failing to push images if the local registry container was not in a running state. The fix causes the trilio-registry container to be deleted and then recreated, if it is not in a running state.

  • Support has been added for multi-namespace metrics. An additional Kind field has been added to the metrics to differentiate between backup plans associated with cluster backups and those associated with normal backups.

  • The resource restore sequence has been fixed as per Helm standard practice.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a lot of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment's memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

2.9.3

Release Date

2022-06-16

What's New

  • Support has been added for NFS-based PVs, and it can also be used for other non-CSI PVs. The user must specify the provisioner in the k8s-triliovault-config ConfigMap; T4K will then not take data backup of the PVs created by that provisioner.

  • The T4K configuration map has been updated to improve structure and make it more generic.

  • In the UI, it is now possible to create a namespace in all restore workflows.

  • A Darksite installer has been added.

  • PriorityClass support has been added to enable T4K pod scheduling decisions to be made according to user requirements.

  • Logging has been enhanced across all components in T4K, including addition of an effective logging structure and configurable logging level.

  • A stack has been added for Observability. Logging and monitoring can now be executed easily with Grafana.
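The provisioner setting mentioned in the first What's New item above (for skipping data backup of NFS/non-CSI PVs) might look roughly like this. Only the ConfigMap name comes from the release note; the key name and value format are purely illustrative, as the note does not specify them:

```yaml
# Hypothetical sketch; only the ConfigMap name is from the release note.
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-triliovault-config
data:
  # Illustrative key name: PVs created by this provisioner are
  # backed up as configuration only, without a data backup.
  nonCSIProvisioners: "nfs.example.com/nfs-provisioner"
```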

Bugs Fixed

  • A workaround has been added for an rsync connection timeout. Previously, during incremental data backup, rsync disconnected and the user experienced a connection timeout error. If this issue occurs, the rsync operation is now re-attempted during incremental backup of the data.

  • An LDAP connector SSL issue has been fixed.

  • An issue has been fixed relating to a restored application not working. A flag has been added to decide whether to use the NS UID range or to restore the original UID during the restore operation.

  • Multiple minor issue fixes and improvements were made to the UI.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a lot of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment's memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

  • If a user specifies the cleanupOnFailure flag in the restore CR, the cleanup job is sometimes created infinitely. We are working on this issue with priority and will create a patch as soon as it is fixed.

  • Multi-namespace support is not yet available in the Observability stack, but it is currently in progress.

2.9.2

Release Date

2022-05-13

What's New

  • No new features are provided as part of this release.

Bugs Fixed

  • In previous versions of T4K, the Dex pod was going into a CrashLoopBackOff state on OCP IBM cloud. This issue has now been fixed.

  • Previously, if the user application had symlinks in the data that pointed to an external data source, then backups failed. This symlink behaviour has now been fixed.

  • The cleanupOnFailure flag in restore was previously causing an issue where an infinite number of cleanup pods were being created. This issue is now fixed.

  • In the case of Helm application backups, if a Helm application uses subcharts with local repositories, restore of such a backup is only allowed into a namespace with the same name; the restore is therefore restricted. Previously there was no warning to advise users of this restriction, but a UI warning has now been added to make this clearer.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a lot of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment's memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

  • If a user specifies the cleanupOnFailure flag in the restore CR, the cleanup job is sometimes created infinitely. We are working on this issue with priority and will create a patch as soon as it is fixed.

2.9.1

Release Date

2022-05-06

What's New

  • No new features are provided as part of this release.

Bugs Fixed

  • Previously, in the case of swift storage with s3proxy, get_object_lock_configuration API was not throwing an exception if object lock was not enabled on the storage. Normally, the API should return an error in response to this scenario, but instead it was returning a success with an empty response. This issue has now been fixed in this version.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a lot of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment's memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

  • If a user specifies the cleanupOnFailure flag in the restore CR, the cleanup job is sometimes created infinitely. We are working on this issue with priority and will create a patch as soon as it is fixed.

2.9.0

Release Date

2022-05-05

What's New

  • From this version forward, documentation will only be released for major versions, e.g. 2.8.0 and 2.9.0. However, release notes will be published and added to this page for all versions.

  • A new flag has been added to Backup that enables a user to specify how Helm resources are backed up. In previous versions of T4K, users were not able to back up Helm applications as Helm applications in namespace backups. Instead, Helm applications were backed up by resources, like label-based backups, and after restoring from such backups, Helm consistency was not maintained. From this version, a RetainHelmApps flag is provided for namespace/multi-namespace backups, which can be specified in backup plans and is subsequently propagated to the associated backups. If this flag is set to true, T4K backs up Helm applications the Helm way during namespace-level backup. In other words, after restoring this kind of backup, the user will find the Helm application exactly as it is supposed to be. To use this feature, it is recommended that the user create a new namespace/multi-namespace backupplan.

  • A limit on the number of queued backups has now been introduced. With previous versions of T4K, no such limit existed.

  • The Mirantis MKE symbol is now added for Mirantis clusters.
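As a rough sketch, the RetainHelmApps flag described above might appear in a namespace backupplan like this. The apiVersion, metadata, and exact field casing/placement are assumptions, not the real CRD schema:

```yaml
# Hypothetical sketch; only the flag name comes from the release note.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan
  namespace: demo-app
spec:
  retainHelmApps: true   # back up Helm applications "the Helm way"
```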

Bugs Fixed

  • Previously, some customers were experiencing olm-operator restarts. This issue has now been fixed.

  • Previously, in some cases, the restore CR update was failing, causing subsequent failures in the overall restore operation. This issue has now been fixed.

  • An issue related to Namespace backups has been fixed. Previously one of the T4K resources was being included as a part of Namespace backup.

  • Previously, following T4K upgrade, proxy variables were not being propagated. This has been fixed from this version.

  • In previous versions, mutation and validation configurations were not being updated in some upgrade scenarios. This has now been fixed.

  • For multi-cluster scenarios, previously the secondary added cluster entered an inactive state. This issue has now been fixed.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a lot of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment's memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

  • If a user specifies the cleanupOnFailure flag in the restore CR, the cleanup job is sometimes created infinitely. We are working on this issue with priority and will create a patch as soon as it is fixed.

2.8.2

Release Date

2022-04-19

What's New

  • No new features are provided as part of this release.

Bugs Fixed

  • In the previous version, installation issues were experienced by some customers. These issues have now been resolved in v2.8.2.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a lot of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment's memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

2.8.1

Release Date

2022-04-18

What's New

  • A status filter and creationTimestamp/completionTimestamp ordering filters have been added for backups in the target browser. These filters are required for selecting the latest backup during Disaster Recovery plan creation.

  • Support has been added for reusing restore transforms. Previously created transforms can now be reused while creating a new restore.

  • Previously in the UI, the primary cluster was displayed in the side panel with the name Primary. From 2.8.1, the TVM instance name is displayed in the navigation for the primary cluster.

  • The More/Less link has been removed from the Tile view of cluster management. The additional details are now displayed by default in a scrollable container.

  • All time zone calculations have shifted from the Asia time zone to UTC.

  • With previous T4K versions, backup and restore performance was slow, especially when there were a large number of files. From 2.8.1, backup and restore performance has been improved.

  • When users are prompted to select a Hook, Target, or Policy option when performing tasks like Backupplan creation, all existing options are listed first. This means that users must scroll to the bottom of the respective dropdown list to see the Create New option. Create New is now fixed to the top of each of these dropdown lists, making it easier for users.

Bugs Fixed

  • Previously, Preflight created the volumesnapshotclass with a fixed name, which caused issues with multiple storage classes. From 2.8.1, Preflight adds a random suffix, so that a separate snapshot class is created for each storage class.

  • In the previous version of T4K, metadata restore took too long when restoring the instance template. From 2.8.1, this has been fixed and a timeout has been added, which enables the restore process to proceed. The timeout is user configurable: use the ResourcesReadyWaitSeconds field to configure the timeout while performing a restore.

  • In previous versions, an issue in the T4K OpenShift operator was causing reinstallation of the operator. This has now been fixed in v2.8.1.

  • Previously, the Restore controller was not adding the start time stamp and the completion time stamp to some restore failures, which led to failed migration. A fix has been added, which adds the start timestamp and completion timestamp in all cases.

  • Previously, if retention policies were created without specifying latest, latest was assigned 1, which caused some retention issues. latest should be 2, so that the current backup is not merged with the previous backup. This issue has been fixed in this version.
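The ResourcesReadyWaitSeconds timeout mentioned in the fixes above might be set in a restore spec roughly as follows. The apiVersion, metadata, and field casing/placement are assumptions; only the field name comes from the release note:

```yaml
# Hypothetical sketch; only the field name is from the release note.
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: app-restore
spec:
  resourcesReadyWaitSeconds: 300   # max seconds to wait for restored resources
```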

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a lot of objects exist, for which additional processing power is needed. As a workaround for this issue, users may easily adjust their environment's memory and CPU limits. Please refer to the troubleshooting section to find more details on achieving this.

2.8.0

Release Date

2022-03-31

What's New

  • Improved Helm sub-chart handling:

    Improvements have been made to how Trilio handles Helm sub-charts that do not conform to Helm standards, including checking of local sub-charts. If a Helm application contains sub-charts hosted either locally or on an FTP server, the backup will not push the corresponding sub-charts to the target. The snapshot process also displays a warning in the backup CR. When restoring such an application, restores are restricted to the same namespace and the transformation feature is disabled.

  • T4K Preflight Tool:

    • T4K Pre-flight as a Helm hook with VolumeSnapshot CRD handling: Preflight can now be run as a pre-install Helm hook in the Trilio operator. Configuration defaults can be found here: Preflight Checks Plugin

    • Preflight installs volume snapshot CRDs if not present on your cluster environment (based on k8s server version). Additionally, preflight installs volume snapshot class if not already present.

    • Preflight tool bundled with the upstream operator: previously it was a separate entity that the user needed to run before installing T4K. It can now be run as a pre-install Helm hook with the upstream operator. Preflight installs VolumeSnapshot CRDs if not already present, as well as a VolumeSnapshotClass if not already present on the cluster.

  • OpenShift Virtualization Support: T4K now supports backup and restore of OpenShift Virtualization VMs.

  • Specify restored resources wait time: in previous T4K versions, users were not able to specify the wait time used to assess the status of restored resources. This led to dependent restored resources failing. T4K 2.8.0 gives users the option to specify this wait time in Restore and ClusterRestore operations. T4K then checks the restored resources' status and waits no longer than the maximum wait time specified. If restored resources fail to run in the specified time, a warning is logged and the restore process continues.

  • T4K UI:

    • Net calculated immutability is now displayed in the UI in relation to backupplans. While creating a backupplan, when Scheduling Policy, Retention Policy and Bucket Retention are selected, a RetainUntilDate field is displayed in the UI. RetainUntilDate is the date until which the complete backup chain is retained.

    • New columns have been added to the namespace list within the T4K UI console's Backup and Recovery page. The new columns display the backup summary, number of pods and number of PVs for each namespace. A further column has been added to display the backup/restore summary for each backupplan.

    • For the T4K UI console login screen, the Sign in using kubeconfig/credentials button is now disabled if the user has not selected an associated file. Secondly, for some distributions, selecting a credentials file is disallowed. In these cases, the Sign in using kubeconfig/credentials button is not visible. Thirdly, following log in to the T4K UI, a user can choose tiles view, which displays all clusters along with a summary of each.

    • A table displaying the top 10 backups and a second table displaying the top 10 backupplans, along with various filters for different parameters, has been added to the Monitoring page in the T4K UI.

    • For all tables in the T4K UI, the pagination is now sticky; i.e. the user does not have to scroll down to see the page numbers. Also, all table columns are now sortable. Clicking on the column header will sort column data.

    • Under the namespace Details view in the T4K UI, objects are now grouped together based on API group.

    • On the Backupplans page in the T4K UI, when creating a backupplan, a widget has been added to display the net immutability; i.e. the 'retain until' date based on the selected scheduling and retention policy.

    • When creating a restore in the T4K UI, support has been added for enabling target browsing when the user is unable to add a restore transform.

    • When creating a backup in the T4K UI, if a user selects applications from different namespaces, validation has been added to block the selection and show an appropriate warning.

    • In the Backup and Recovery section in the T4K UI, a Cleanup policy page has been added to list/create cleanup policies. Also, in the applications objects list, users can now search in the Application Type dropdown.

    • Under the Disaster Recovery section in the T4K UI, support has been added for searching by instance name and pagination.

Bugs Fixed

  • In VMware Tanzu, for container networking through Antrea, the NetworkPolicyStats list API call would cause an issue with T4K deployment if the featureGate was not enabled. Since this resource has no bearing on the application captures, it has been added to T4K's BackupIgnoreList and is no longer considered in the backup.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a large number of objects exist, for which additional processing power is needed. As a workaround, users can adjust their environment's memory and CPU limits. Please refer to the troubleshooting section for more details.

2.7.1

Release Date

2022-03-08

What's New

  • Better Helm application restoration: The method for restoring Helm applications has changed slightly, so that it now relies on Helm's own internal process. Previously, some hooks were missing following restoration of Helm applications. The new approach executes hooks correctly by considering pre- and post-hooks during the restoration process.

  • Persistent Volume (PV) Resizing: Logic has been introduced to determine if a user has resized the PV before making a backup. Previously, if the PV was resized, the next incremental backup would fail with the error: "No space left on device." Now, when a user resizes the PV following a backup, this logic forces the next backup to be a full backup.

  • T4K instance name support: Users should now provide a T4K instance name. Previously, during one-click deployment, the following pre-install error was displayed: Error: INSTALLATION FAILED: failed pre-install: warning: Hook pre-install k8s-triliovault-operator/templates/TVMCustomResource.yaml failed: TrilioVaultManager.triliovault.trilio.io "triliovault-manager" is invalid: spec.tvkInstanceName: Invalid value: "null": spec.tvkInstanceName in body must be of type string: "null". Now, as part of the TVM CR, you can set tvkInstanceName in the spec. Alternatively, in the case of OCP, you can edit the k8s-triliovault-config configmap to provide the instance name. This clears the error and enables users to proceed with installation.
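
    As a rough illustration of the fix above, the instance name can be set in the TVM CR spec. The API group and the spec.tvkInstanceName field come from the error message; the version suffix and the overall shape of the CR below are assumptions, not the documented schema:

    ```yaml
    # Hedged sketch of a TrilioVaultManager CR with the instance name set.
    # The group (triliovault.trilio.io) and spec.tvkInstanceName come from the
    # error message above; the version and other fields are assumptions.
    apiVersion: triliovault.trilio.io/v1
    kind: TrilioVaultManager
    metadata:
      name: triliovault-manager
    spec:
      tvkInstanceName: my-t4k-instance   # any non-empty string clears the "null" error
    ```

    On OCP, the same value can instead be supplied by editing the k8s-triliovault-config configmap, as noted above.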

Bugs Fixed

  • Backup failures following upgrade: A bug has been fixed to solve backup failures experienced by some users following execution of an upgrade. In all cases, old pre-upgrade backups stuck in the Coalescing status were causing newer post-upgrade backups to fail. This only happened for backups where retention was running, which caused the backup controller and retention logic to conflict. This conflict has now been removed and backup no longer becomes stuck in the Coalescing state.

  • Failed Deployment: A bug has been fixed to address an issue faced by some users when trying to install T4K 2.7.0 on RKE (v1.20.14) as upstream. In these instances, the TVM CR installed T4K but displayed the wrong status, 'ReleaseFailed'. This issue has now been resolved and the TVM CR displays the correct 'Deployed' state.

Known Issues/Limitations

  • A known issue exists in which some users may experience Trilio pods going into an OOMKilled state. This can happen in environments where a large number of objects exist, for which additional processing power is needed. As a workaround, users can adjust their environment's memory and CPU limits. Please refer to the troubleshooting section for more details.

2.7.0

Release Date

2022-02-11

What's New

  1. Redesign of UI Navigation: The T4K management console has been redesigned to make it easier to use. For an overview of the new design and the navigation, refer to this UI video.

  2. Ingress Wildcard (*) Support: To access the UI, a hostname is no longer mandatory for the ingress resource. Instead, the UI can be accessed directly from the external IP of your ingress.

  3. Backup Queuing Mechanism: A backup queue mechanism has been added to this version, which successfully removes a known limitation item from the list below. This enables a user to create multiple backups, even if their schedules overlap. These backup jobs are then queued internally to prevent issues.

Bugs Fixed

  1. When more than one backup is scheduled for the same time, the day-zero bug no longer causes any issues around the backing chain.

  2. Previously, successful cron jobs for backup schedules left behind three successful jobs. This has now been fixed, so that it no longer keeps any successful jobs.

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment requires more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section to find more details on increasing these limits in your environment.

  2. Helm charts with restore hooks have a logical error in terms of execution, as per the detailed issue filed here.

2.6.7

Release Date

2022-01-24

What's New

  1. No new features are provided as part of this release.

Bugs Fixed

  1. TK-4965 - Out of Space Issue on Datamover Pod. Trilio now leverages the PV size instead of snapshot size for backup file creation.

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

  2. Helm charts with restore hooks have a logical error in terms of execution as per the issue filed here

  3. When multiple scheduling policies intersect with each other at the exact same time, two incremental backups can be triggered simultaneously, resulting in an orphaned backup. You can avoid this bug by preventing backup schedules from colliding; for example, if you define a daily schedule and a weekly schedule, make sure their trigger times do not collide.
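
  One way to avoid the collision described above is to stagger the cron expressions of the two policies so they never fire in the same minute. The layout below is an illustrative sketch, not T4K's documented policy schema; only the cron strings themselves are standard syntax:

  ```yaml
  # Hedged sketch: non-colliding daily and weekly schedules.
  # Field names are assumptions; the cron expressions are standard.
  dailyPolicy:
    schedule: "0 2 * * *"   # every day at 02:00
  weeklyPolicy:
    schedule: "0 3 * * 0"   # Sundays at 03:00, offset so it never overlaps the daily run
  ```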

2.6.6

Release Date

2022-01-12

What's New

  1. No new features are provided as part of this release

Bugs Fixed

  1. TK-4957 - Fix for BuildConfig EnvVars parsing error

  2. Fixed extra zeros being appended to the Rancher app version during one-click installation

  3. Updated the default ingressController service type from NodePort to LoadBalancer for one-click installation

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

  2. Helm charts with restore hooks have a logical error in terms of execution as per the issue filed here

  3. When multiple scheduling policies intersect with each other at the exact same time, two incremental backups can be triggered simultaneously, resulting in an orphaned backup. You can avoid this bug by preventing backup schedules from colliding; for example, if you define a daily schedule and a weekly schedule, make sure their trigger times do not collide.

2.6.5

Release Date

2022-01-11

What's New

  1. Added One-Click install support.

  2. Added Azure Blob as a target for backup and restore.

  3. Dynamic handling of the default ingress-controller custom TLS cert, with provisions to fall back to HTTP if any error occurs.

Bugs Fixed

  1. TK-4903 - Fixed an issue with Multi-Namespace restores using transformations.

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

  2. Helm charts with restore hooks have a logical error in terms of execution as per the issue filed here

  3. When multiple scheduling policies intersect with each other at the exact same time, two incremental backups can be triggered simultaneously, resulting in an orphaned backup. You can avoid this bug by preventing backup schedules from colliding; for example, if you define a daily schedule and a weekly schedule, make sure their trigger times do not collide.

2.6.4

Release Date

2021-12-23

What's New

  1. Support for GKE/EKS UI authentication using GCP/AWS credentials.

Bugs Fixed

  1. MixedContent bug while attaching two clusters in T4K: When a user tries to connect two clusters where one is running on HTTPS and the other on HTTP, the UI now shows proper instructions on how to allow the cluster to be added.

  2. TK-4788 Adding T4K cluster on Red Hat OpenShift using Dex (Sign in via OpenShift)

  3. TK-4761 Backup failure when using the BuildConfig resource within OpenShift is fixed.

  4. Upstream Operator support for Kubernetes v1.22

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

  2. Helm charts with restore hooks have a logical error in terms of execution as per the issue filed here

  3. When multiple scheduling policies intersect with each other at the exact same time, two incremental backups can be triggered simultaneously, resulting in an orphaned backup. You can avoid this bug by preventing backup schedules from colliding; for example, if you define a daily schedule and a weekly schedule, make sure their trigger times do not collide.

2.6.3

Release Date

2021-11-29

What's New

  1. No new features are provided as part of this release

Bugs Fixed

  1. TK-4610 - User created from Rancher server is not able to log in to Trilio UI via kubeconfig file

  2. TK-4640 - "backupPlan not found for provided details" error while viewing YAML of the backup resources

  3. TK-4601/4606/4636 - Automatic configuration of HTTPS access for OCP environments

  4. TK-4633 - Fixed bugs around filter scope query param in web-backend

  5. TK-4629 - Fixed flakiness issue across view YAML in metadata details & hooks actions button.

  6. TK-4631 - Fixed restore flags migration issues.

  7. TK-4689 - Ignored attribute and ACL failures while uploading data to the target

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

  2. Helm charts with restore hooks have a logical error in terms of execution as per the issue filed here

2.6.2

Release Date

2021-11-22

What's New

  1. No new features are provided as part of this release

Bugs Fixed

  1. TK-4634 - Fix for namespace listing issue. Show only the namespaces to which the user has access.

  2. TK-4606 - Fix around minion ingress so as not to revert the workaround suggested to the customer for HTTPS when upgrading to the new 2.6.2 version

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

2.6.1

Release Date

2021-11-14

What's New

  1. No new features are provided as part of this release

Bugs Fixed

  1. TK-4605 - After automatic upgrade to 2.6 web-backend pod fails to start

    1. Default memory and CPU limits have been increased on pods to align with heavier customer environments.

Known Issues/Limitations

  1. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

2.6.0

Release Date

2021-11-11

What's New

  1. TLS Proxy Support: Trilio can run behind a TLS proxy and will utilize the proxy certificates to make a secure connection to the target.

  2. Ingress Controller Optional: Deploying an ingress controller as part of the deployment of Trilio is optional. Users can select to install the Trilio ingress controller or use an existing ingress controller available within the cluster to support T4K.

    For OpenShift environments, Trilio uses the built-in ingress controller. As a result, users can install T4K from OperatorHub and access the UI by simply clicking the route created for it.

  3. Encryption and Retention: Full support for encrypted backups including retentions.

  4. OnlyMeta and OnlyData Support: Ability to restore only metadata or data components as part of a restore operation.

  5. ClusterBackupPlan Namespace Selector: Users can capture namespaces based on labels.

  6. Dex Support for MCM: Multi-cluster management authentication is supported using Dex (OIDC/LDAP, etc.)

  7. Dex Login Custom Port Support: You can now also access the T4K UI using NodePort and port-forwarding endpoints by specifying a custom URL in the triliovault-dex secret.

  8. Preview YAML for Helm and custom transform: Users can preview YAML files as part of transformation workflows along with the ability to track deltas.

  9. Dark Mode: Trilio Management Console now supports Dark Mode
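
  The ClusterBackupPlan namespace selector (item 5 above) can be sketched as a label selector. The apiVersion and field names below are assumptions based on the release note, not the documented schema:

  ```yaml
  # Hedged sketch: capture all namespaces carrying a given label in one
  # multi-namespace backup plan. Field names are illustrative assumptions.
  apiVersion: triliovault.trilio.io/v1
  kind: ClusterBackupPlan
  metadata:
    name: labelled-ns-plan            # hypothetical name
  spec:
    backupNamespaceSelector:          # assumed field; selects namespaces by label
      matchLabels:
        backup: enabled               # any namespace labelled backup=enabled is captured
  ```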

Bugs Fixed

  1. TK-3843 - In Namespace and Application Summary Ingress shows twice

  2. TK-3780 - UI: only GVK cannot be provided for excludeResources in Restore form [advanced configuration]

  3. TK-3861 - Alignment issue for Restore Transformation Accordion, once transform component added

  4. TK-3999 - 8 EB threshold capacity value shows "9223372036854775807" & dropdown is empty for edit NFS & AWS target

  5. TK-4138 - gutter icon for navigation is visible in Backup metadata details

  6. TK-4139 - Autoscaler is observed duplicate in Resources

  7. TK-4158 - Search Backupplan dropdown shows no options when time range is Hour & Day - Listing of backup plans is not visible in the dropdown

  8. TK-4160 - Edit transformation should not ask to update "Transform Name"

  9. TK-4094 - Edit In Progress Backup Plan gives an error - "the object has been modified; please apply your changes to the latest version and try again"

Known Issues/Limitations

  1. TK-4477 - Kubeconfig files for the Trilio management console created via the Rancher server do not work for cloud-provider deployments.

  2. For HTTPS access to the management console in OpenShift environments, the certificates need to be provided in the ingress resource

  3. TK-4094 - Backup summary shows a count of 0 when a namespace is associated with a multi-namespace backup; the summary displays correctly when multiple namespaces are selected.

  4. If Pods go into an OOMKilled state, the environment needs more than the default CPU and Memory values on the Trilio Pods. Please refer to the troubleshooting section for details on increasing these limits in your environment.

2.5.4 - OpenShift Only

Release Date

2021-11-01

What's New

  1. No new features as part of this release.

  2. This release is specific to OpenShift environments only. Please use version 2.5.3 for all other distributions

Bugs Fixed

  1. Fix to ensure that management console settings in OCP are not reset upon an upgrade.

Known Issues/Limitations

  1. TK-3843 - In Namespace and Application Summary Ingress shows twice

  2. TK-3780 - UI: only GVK cannot be provided for excludeResources in Restore form [advanced configuration]

  3. TK-3861 - Alignment issue for Restore Transformation Accordion, once transform component added

  4. TK-3999 - 8 EB threshold capacity value shows "9223372036854775807" & dropdown is empty for edit NFS & AWS target

  5. TK-4138 - gutter icon for navigation is visible in Backup metadata details

  6. TK-4139 - Autoscaler is observed duplicate in Resources

  7. TK-4158 - Search Backupplan dropdown shows no options when time range is Hour & Day - Listing of backup plans is not visible in the dropdown

  8. TK-4160 - Edit transformation should not ask to update "Transform Name"

  9. TK-4094 - Edit In Progress Backup Plan gives an error -"the object has been modified; please apply your changes to the latest version and try again"

2.5.3

Release Date

2021-11-01

What's New

  1. No new features as part of this release.

Bugs Fixed

  1. OCP issues with respect to web-backend pod restarting.

Known Issues/Limitations

  1. TK-3843 - In Namespace and Application Summary Ingress shows twice

  2. TK-3780 - UI: only GVK cannot be provided for excludeResources in Restore form [advanced configuration]

  3. TK-3861 - Alignment issue for Restore Transformation Accordion, once transform component added

  4. TK-3999 - 8 EB threshold capacity value shows "9223372036854775807" & dropdown is empty for edit NFS & AWS target

  5. TK-4138 - gutter icon for navigation is visible in Backup metadata details

  6. TK-4139 - Autoscaler is observed duplicate in Resources

  7. TK-4158 - Search Backupplan dropdown shows no options when time range is Hour & Day - Listing of backupplans is not visible in the dropdown

  8. TK-4160 - Edit transformation should not ask to update "Transform Name"

  9. TK-4094 - Edit In Progress Backup Plan gives an error -"the object has been modified; please apply your changes to the latest version and try again"

2.5.2

Release Date

2021-10-05

What's New

  1. No new features as part of this release.

Bugs Fixed

  1. This release contains a fix to allow T4K to run behind a proxy server. T4K now sits completely behind the proxy and performs operations in a secure way.

Known Issues/Limitations

  1. TK-3843 - In Namespace and Application Summary Ingress shows twice

  2. TK-3780 - UI: only GVK cannot be provided for excludeResources in Restore form [advanced configuration]

  3. TK-3861 - Alignment issue for Restore Transformation Accordion, once transform component added

  4. TK-3999 - 8 EB threshold capacity value shows "9223372036854775807" & dropdown is empty for edit NFS & AWS target

  5. TK-4138 - gutter icon for navigation is visible in Backup metadata details

  6. TK-4139 - Autoscaler is observed duplicate in Resources

  7. TK-4158 - Search Backupplan dropdown shows no options when time range is Hour & Day - Listing of backupplans is not visible in the dropdown

  8. TK-4160 - Edit transformation should not ask to update "Transform Name"

  9. TK-4094 - Edit In Progress Backupplan gives an error -"the object has been modified; please apply your changes to the latest version and try again"

2.5.1

Release Date

2021-09-12

What's New

  1. No new features as part of this release.

Bugs Fixed

  1. S3 Targets that did not support object locking were failing upon creation.

Known Issues/Limitations

  1. TK-3843 - In Namespace and Application Summary Ingress shows twice

  2. TK-3780 - UI: only GVK cannot be provided for excludeResources in Restore form [advanced configuration]

  3. TK-3861 - Alignment issue for Restore Transformation Accordion, once transform component added

  4. TK-3999 - 8 EB threshold capacity value shows "9223372036854775807" & dropdown is empty for edit NFS & AWS target

  5. TK-4138 - gutter icon for navigation is visible in Backup metadata details

  6. TK-4139 - Autoscaler is observed duplicate in Resources

  7. TK-4158 - Search Backupplan dropdown shows no options when time range is Hour & Day - Listing of backupplans is not visible in the dropdown

  8. TK-4160 - Edit transformation should not ask to update "Transform Name"

  9. TK-4094 - Edit In Progress Backupplan gives an error -"the object has been modified; please apply your changes to the latest version and try again"

2.5.0

Release Date

2021-09-08

What's New

  1. Immutability: Trilio provides the ability to create immutable backups at the application level. Once a backup is taken and stored on an immutable target, it cannot be altered (overwritten/deleted) until the retention period set through T4K expires.

  2. Encryption/Decryption: Added support to encrypt backups to protect them from malicious users. T4K encrypts the backup data at the application level using an encryption key, and the backup can only be restored with the same key. Even if an attacker gets hold of the backup data, they cannot see the actual data without the key.

  3. Multi Namespace Backup/Restore: Earlier, T4K could protect only a single namespace with a single backup plan. Now, you can protect multiple namespaces using new CRDs - ClusterBackupPlan, ClusterBackup, and ClusterRestore.

  4. NFS target without privilege: Earlier, T4K required the NFS target to be mounted with privileged permissions; privileged access has now been removed for NFS targets. This allows T4K to run in a restricted environment without escalated permissions.

  5. Inclusion/Exclusion: You can specify resources to be included or excluded while taking a backup. Resources can be specified either by GVKO or by the Kind directly.

  6. LDAP & OIDC Support: T4K supports 8 tested popular public OIDC providers, including Google, GitHub, Azure, Okta, GitLab, and LinkedIn; ideally, any OIDC provider will work.

  7. OpenShift Authentication Support: This is enabled by default on the OpenShift platform with the T4K operator. Trilio auto-configures the management console based on the authentication mechanisms available within OpenShift.

  8. Protect Restored Application: Recreate targets, policy, hooks, and backupplans to protect the restored application. Now, the application is protected no matter the Kubernetes cluster it is restored into.

  9. Failed Restore Cleaner: Restore now allows you to specify, via a flag, that resources created by the restore should be cleaned up if the restore fails.

  10. Krew Plugin for target browser: T4K now has a kubectl plugin to get the backup information from the target using the kubectl CLI.

  11. Target Credentials as Secrets: Earlier, T4K kept plain authentication information inside the target CR. Now, target credentials can be saved as a Secret and referenced from the target CR for better security.

  12. Scheduling as a Policy: You can now define the backup schedule as a separate Policy object and use the same policy in different backupplans.

  13. Azure blob storage support: Added support to use Azure blob storage as a target. NFS 3.0 is used to access the Azure blob storage.

  14. S3 SSL Certificate: Now we support authenticating S3 targets using SSL certificates.

  15. ReadOnlyRootFilesystem: All the containers that are running as backup and restore processes are running with a readOnlyRootFilesystem.

  16. Sharing of Resources: To enable simplified management of Trilio resources in a multi-cluster environment, users can share/clone resources like targets, hooks, policies, etc. across namespaces in the same cluster or across clusters themselves.
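
  Scheduling as a Policy (item 12 above) can be sketched as a standalone Policy object referenced from a backupplan. The apiVersion, kinds, and field names below are illustrative assumptions based on the release note, not the documented schema:

  ```yaml
  # Hedged sketch: one reusable scheduling policy shared by backupplans.
  # All names and fields here are assumptions for illustration only.
  apiVersion: triliovault.trilio.io/v1
  kind: Policy
  metadata:
    name: daily-0200                  # hypothetical policy name
  spec:
    type: Schedule
    scheduleConfig:
      schedule: "0 2 * * *"           # standard cron: every day at 02:00
  ---
  apiVersion: triliovault.trilio.io/v1
  kind: BackupPlan
  metadata:
    name: app-plan                    # hypothetical plan name
  spec:
    backupConfig:
      schedulePolicy:
        name: daily-0200              # the same policy can be referenced by other plans
  ```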

Bugs Fixed

  1. Multiple bugs with respect to the management console and backend APIs have been fixed

  2. MinIO S3 targets are now supported.

Known Issues and Limitations

  1. TK-3843 - In Namespace and Application Summary Ingress shows twice

  2. TK-3780 - UI: only GVK cannot be provided for excludeResources in Restore form [advanced configuration]

  3. TK-3861 - Alignment issue for Restore Transformation Accordion, once transform component added

  4. TK-3999 - 8 EB threshold capacity value shows "9223372036854775807" & dropdown is empty for edit NFS & AWS target

  5. TK-4138 - gutter icon for navigation is visible in Backup metadata details

  6. TK-4139 - Autoscaler is observed duplicate in Resources

  7. TK-4158 - Search Backupplan dropdown shows no options when time range is Hour & Day - Listing of backupplans is not visible in the dropdown

  8. TK-4160 - Edit transformation should not ask to update "Transform Name"

  9. TK-4094 - Edit In Progress Backupplan gives an error - "the object has been modified; please apply your changes to the latest version and try again"

2.1.0

Release Date

2021-05-04

What's New

  1. Architecture Change: Earlier, T4K shipped CustomResourceDefinitions (CRDs) at two different scopes: Cluster and Namespaced. Now, only Namespaced-scope CRDs are shipped. However, T4K still supports both the "Cluster" and "Namespaced" installation modes as before.

  2. Multi Cluster Management: T4K now supports managing multiple T4K clusters/installations through the UI.

  3. Velero Integration: T4K now ships with Velero integration. You can now list and see the details of your existing Velero backups, restores, and targets directly in the T4K UI.

  4. Analytics Controller: T4K now comes with an analytics controller which populates running stats in trilio resources providing extra information about individual resources.

  5. Helm2 & v1alpha1 CRD Deprecation: Helm 2 support is now deprecated in T4K, and you will not be able to back up/restore Helm 2 applications. The v1alpha1 version of the T4K CRDs is also deprecated.

  6. Krew Plugins for preflight and log-collector: The shell-based pre-flight checks and the Go-based log-collector tool have been migrated to Krew-based plugins. Additional information around usage of these tools is available here

  7. Datamover Performance: This release includes performance improvements for NFS-based targets

  8. Lightweight webhook & OLM bundle: T4K has simplified the installation process by moving resource creation to its Helm chart and the OLM bundle, making the webhook init process lightweight

  9. View/Download YAML: Ability to view raw YAML files for Backup, Restore, Backupplan, Target, Hooks, Retention Policy.

  10. DR Plan: A Disaster Recovery Plan feature that supports recovering your cluster state via a single workflow in case of any disaster.

  11. Tooltips for all UI forms

  12. Performance and resource analysis via StormForge Experiments - a purpose-built solution to analyze the resource requirements for T4K pods.

Bugs Fixed

  1. Multiple bugs with respect to the management console and backend APIs have been fixed.

Known Issues/Limitations

  1. Multi-namespace backup in the same workflow not supported yet.

  2. Custom certificates for on-premise S3 targets not supported.

  3. MinIO targets are not supported, as MinIO does not support empty folders. This will be fixed in the next release.

2.0.5

Release Date

2021-04-22

What's New

  1. No new features as part of this release.

Bugs Fixed

  1. This release includes a patch fix enabling backup and restore of clusterIP Services without using a transformation on Kubernetes 1.20+.

Known Issues/Limitations

  1. OLM Based Upgrade/Install

    1. Clustered

      1. The v2.0.4 version will be automatically upgraded to v2.0.5 via OLM if the subscription is set to automatic. If not, manually approve the subscription upgrade.

      2. v1.1.1 or v2.0.0 to v2.0.5 - either incrementally upgrade to the next version up to v2.0.3 or uninstall the current operator and reinstall the 2.0.5 operator from OperatorHub.

    2. Namespaced

      1. Install - Use a custom catalog source to install T4K at a namespace scope.

      2. Upgrade - Uninstall the current operator and install the 2.0.5 operator via custom catalog source.

    3. Clustered <-> namespaces

      1. Same as before - full reinstall

  2. In 2.1.0 (early May), all clustered installations will have to go through full reinstalls, as namespace-to-cluster and vice-versa upgrades will be supported.

  3. Multi-namespace backup in the same workflow not supported yet.

  4. Custom certificates for on-premise S3 targets not supported.

  5. MinIO targets are not supported as MinIO does not support empty folders.

2.0.4

Release Date

2021-04-01

What's New

  1. No new features as part of this release.

Bugs Fixed

  1. Issue related to cross-cluster migration and custom CRDs failing is now fixed.

  2. 504 Gateway timeout Error Toast Message on UI on Launch of Target Browser

  3. There was a Red Hat OperatorHub issue where Trilio images were not loading for 2.0.3; this has been fixed with 2.0.4. As a result, 2.0.2 customers can upgrade to 2.0.4 via OLM.

Known Issues/Limitations

  1. OLM Based Upgrade/Install

    1. Clustered

      1. The v2.0.2 version will be automatically upgraded to v2.0.4 via OLM if the subscription is set to automatic. If not, manually approve the subscription upgrade.

      2. v1.1.1 or v2.0.0 to v2.0.4 - either incrementally upgrade to the next version up to v2.0.3 or uninstall the current operator and reinstall the 2.0.4 operator from OperatorHub.

    2. Namespaced

      1. Install - Use a custom catalog source to install T4K at a namespace scope.

      2. Upgrade - Uninstall the current operator and install the 2.0.4 operator via custom catalog source.

    3. Clustered <-> namespaces

      1. Same as before - full reinstall

  2. In 2.1.0 (early May), all clustered installations will have to go through full reinstalls, as namespace-to-cluster and vice-versa upgrades will be supported.

  3. Multi-namespace backup in the same workflow not supported yet.

  4. Custom certificates for on-premise S3 targets not supported.

  5. MinIO targets are not supported as MinIO does not support empty folders.

  6. Kubernetes 1.20 adds a new field, clusterIPs, to the Service resource, which is auto-populated from the clusterIP field. Trilio wipes out the clusterIP field from a Service resource before a restore, but the clusterIPs field cannot be non-empty while clusterIP is empty. As a result, the restore validation fails. Trilio will fix this issue in a subsequent release; until then, restore transformations can be used to work around it. A video explaining the transform can be found here
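
  As a rough illustration of the transformation workaround above, a restore transform could remove spec.clusterIPs from Services before validation. The layout below is a generic JSON-patch-style sketch and an assumption, not T4K's documented transform schema:

  ```yaml
  # Hedged sketch: strip the auto-populated clusterIPs field during restore so
  # validation on Kubernetes 1.20+ does not fail. Field names are assumptions.
  transformComponents:
    Service:
      - transformName: drop-cluster-ips   # hypothetical name
        paths:
          - op: remove
            path: /spec/clusterIPs        # standard JSON Pointer to the field
  ```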

2.0.3

Release Date

2021-03-08

What's New

  1. No new features as part of this release.

Bugs Fixed

  1. Issue related to cross-cluster migration and custom CRDs failing is now fixed.

  2. 504 Gateway timeout Error Toast Message on UI on Launch of Target Browser

Known Issues/Limitations

  1. OLM Based Upgrade/Install

    1. Clustered

      1. The v2.0.2 version will be automatically upgraded to v2.0.3 via OLM if the subscription is set to automatic. If not, manually approve the subscription upgrade.

      2. v1.1.1 or v2.0.0 to v2.0.3 - either incrementally upgrade to the next version up to v2.0.3 or uninstall the current operator and reinstall the 2.0.3 operator from OperatorHub.

    2. Namespaced

      1. Install - Use a custom catalog source to install T4K at a namespace scope.

      2. Upgrade - Uninstall the current operator and install the 2.0.3 operator via custom catalog source.

    3. Clustered <-> Namespaced

      1. Same as before - a full reinstall is required.

  2. In 2.1.0 (late March), all clustered installations will have to go through a full reinstall, as namespace-to-cluster and cluster-to-namespace upgrades will then be supported.

  3. Multi-namespace backup in the same workflow not supported yet.

  4. Custom certificates for on-premise S3 targets not supported.

  5. MinIO targets are not supported as MinIO does not support empty folders.

  6. Kubernetes 1.20 adds a new field, clusterIPs, to the Service resource; it is auto-populated from the clusterIP field. Trilio wipes the clusterIP field from a Service resource before a restore, but validation does not allow clusterIPs to be non-empty while clusterIP is empty. As a result, the restore validation fails. Trilio will fix this issue in a subsequent release; until then, restore transformations can be used to work around it. A video explaining the transform can be found here.

2.0.2

Release Date

2021-01-30

What's New

  1. No new features as part of this release.

Bugs Fixed

  1. OCP dynamic forms - certain mandatory fields could not be entered. All mandatory fields are now supported.

  2. Restore by location from the Target Browser was not loading metadata for transformations and exclusions.

Known Issues/Limitations

  1. OLM Based Upgrade/Install

    1. Clustered

      1. The v2.0.1 version will be automatically upgraded to v2.0.2 via OLM if the subscription is set to automatic. If not, manually approve the subscription upgrade.

      2. v1.1.1 or v2.0.0 to v2.0.2 - either upgrade incrementally, one version at a time, up to v2.0.2, or uninstall the current operator and reinstall the 2.0.2 operator from OperatorHub.

    2. Namespaced

      1. Install - Use a custom catalog source to install T4K at a namespace scope.

      2. Upgrade - Uninstall the current operator and install the 2.0.2 operator via custom catalog source.

    3. Clustered <-> Namespaced

      1. Same as before - a full reinstall is required.

  2. In 2.1.0 (late March), all clustered installations will have to go through a full reinstall, as namespace-to-cluster and cluster-to-namespace upgrades will then be supported.

  3. Multi-namespace backup in the same workflow not supported yet.

  4. Custom certificates for on-premise S3 targets not supported.

  5. MinIO targets are not supported as MinIO does not support empty folders.

2.0.1

Release Date

2021-01-14

What's New

Management Console:

  1. Added a namespace/application toggle for changing the monitoring view.

  2. Added filters for the backup view on the landing page under the monitoring panel.

  3. Added filters for the Backup overview page.

  4. Added a status log with information specific to each PV.

  5. Added in-place delete backup and delete restore actions to the backup tab under the monitoring panel on the landing page.

  6. Consolidated views for the backup/restore summary and overview pages will be part of an upcoming release.

  7. Added a reusable resources view to manage BackupPlans, retention policies, targets, and hooks.

  8. Added validation and checks for all workflows.

  9. Added help screens for the landing page and the Backup/Restore overview pages.

Backend:

  1. Ability to configure resource settings for Trilio Pods.

Bugs Fixed

  1. Resolved multiple UI bugs related to user experience and cosmetics.

  2. Fixed an issue where the post-hook was being triggered after upload.

Known Issues/Limitations

  1. OLM Based Upgrade/Install

    1. Clustered

      1. The v2.0.0 version will not be automatically upgraded to v2.0.1, even if auto publish is enabled, because the images for 2.0.0 were being served from the namespace channel. The channel must be changed back to cluster to receive the update. A video demonstrating the process is provided on the Upgrade page.

    2. Clustered <-> Namespaced

      1. Same as before - a full reinstall is required.

  2. In 2.1.0, all clustered installations will have to go through a full reinstall, as namespace-to-cluster and cluster-to-namespace upgrades will then be supported.

  3. Multi-namespace backup in the same workflow not supported yet.

  4. Custom certificates for on-premise S3 targets not supported.

  5. MinIO targets are not supported as MinIO does not support empty folders.

2.0.0

Release Date

2020-11-16

What's New

  1. Trilio for Kubernetes Management Console

  2. Namespace Level Backups

  3. Restore Enhancements

    1. Transformation

    2. Exclusions

    3. PatchCRD

    4. Skip Operator Resources

    5. Omit Metadata

    6. Restore Hooks

  4. Target Browser

  5. Support for Helm Charts with sub-charts for helm based backup.

Bugs Fixed

  1. Metadata for Helm resources now available in Status sub-resource

Known Issues/Limitations

  1. OLM Based Upgrade

    1. Upgrading from 1.x -> 2.0

      1. Namespace install -> Namespace install

        1. User will have to use custom catalog source for upgrade

        2. Uninstall current Operator

        3. Reinstall Operator via Catalog Source

      2. Clustered install -> Clustered install

        1. Instead of using upgrade mechanism via OLM, do the following:

          1. Uninstall current operator

          2. Reinstall 2.0 from OperatorHub

      3. Clustered <-> namespaces

        1. Same as before - full reinstall

  2. In 2.1.0 (end of January), all clustered installations will have to go through a full reinstall, as namespace-to-cluster and cluster-to-namespace upgrades will then be supported.

  3. Custom certificates for on-premise S3 targets not supported.

  4. MinIO targets are not supported as MinIO does not support empty folders.

1.1.1

2020-08-25

What's New

  1. No new features.

Bugs Fixed

  1. Fixed a licensing bug where master nodes were being counted for licensing.

  2. Fixed an issue where the cron schedule in a BackupPlan could not be updated.

Known Issues/Limitations

Same as 1.1.0

  1. Sub-charts are not supported for Helm backups.

  2. Custom certificates for on-premise S3 targets not supported.

  3. Metadata information for Helm apps not supported.

  4. Dynamic Forms in OCP should not be used for T4K administration as the forms feature is still under development and does not provide the level of nesting needed by T4K CRDs.

  5. CRDs not deleted when uninstalling - OpenShift does not delete CRDs as part of Operator uninstall. Since T4K supports namespace and clustered installation, this may cause unexpected behavior with the application. Customers must ensure that the CRDs are deleted after Operator uninstall.

1.1.0

Release Date

2020-08-17

What's New

  1. Migration capability from 1.0.0 and v1alpha1 versions.

    1. Customers using Kubernetes lower than v1.15 must enable the CustomResourceWebhookConversion feature gate on the cluster

  2. Restore By Location

Bugs Fixed

  1. Snapshot issue where backups were getting stuck at 6%

Known Issues/Limitations

  1. Sub-charts are not supported for Helm backups.

  2. Custom certificates for on-premise S3 targets not supported.

  3. Metadata information for Helm apps not supported.

  4. Dynamic Forms in OCP should not be used for T4K administration as the forms feature is still under development and does not provide the level of nesting needed by T4K CRDs.

  5. CRDs not deleted when uninstalling - OpenShift does not delete CRDs as part of Operator uninstall. Since T4K supports namespace and clustered installation, this may cause unexpected behavior with the application. Customers must ensure that the CRDs are deleted after Operator uninstall.

1.0.0

Release Date

2020-07-14

What's New:

  1. IBM CloudPak Certified

  2. IBM MultiCloud Management Certified

  3. Migrated from v1alpha1 to v1 API version

  4. Licensing Enabled.

  5. Label Backup - ability to provide multiple label selectors, where the corresponding logic is OR'ed instead of AND'ed.

  6. Upstream Operator Backup - Support for Multiple CRDs

  7. Added support for pre-backup validations

  8. Upstream Operator Backup - Ability to select Operator resources by providing helm release name

  9. Backup Metadata information - Ability to view metadata information of a backup in terms of objects/resources captured.

  10. Grafana Dashboards - 10 Grafana dashboards, pivoted on Backups/Restores/Targets/Applications (BackupPlans), that provide an overview, summary, and details of T4K resources.
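The OR'ed label-selector behavior in item 5 can be sketched as follows (a minimal illustration, not Trilio's implementation; selector matching is simplified here to exact key/value equality):

```python
# Minimal sketch of OR'ed label-selector semantics (illustrative only).
# A resource is selected if ANY of the provided selectors matches,
# rather than requiring ALL of them to match (AND'ed).

def matches(labels: dict, selector: dict) -> bool:
    """True if every key/value pair in the selector is present in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def selected(labels: dict, selectors: list) -> bool:
    """OR'ed: a single matching selector is enough to include the resource."""
    return any(matches(labels, s) for s in selectors)

pod_labels = {"app": "web", "tier": "frontend"}
chosen = selected(pod_labels, [{"app": "db"}, {"tier": "frontend"}])
# The second selector matches, so the pod is included; AND'ed semantics
# (all(...) instead of any(...)) would have excluded it.
```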

Bugs Fixed:

  1. NFS Timeout issue - Issue with NFS target creation where mount validation would fail due to a timing issue.

  2. Backup by Labels - back up only top-level items, as nested items were being double-counted during restore, causing it to fail.

Known Issues/Limitations:

  1. Restore by location not supported.

  2. Upgrades are not supported directly and will be supported as a patch before the next released version.

  3. Sub-charts are not supported for Helm backups.

  4. Custom certificates for on-premise S3 targets not supported.

  5. Metadata information for Helm apps not supported.

  6. Dynamic Forms in OCP should not be used for T4K administration as the forms feature is still under development and does not provide the level of nesting needed by T4K CRDs.

  7. CRDs not deleted when uninstalling - OpenShift does not delete CRDs as part of Operator uninstall. Since T4K supports namespace and clustered installation, this may cause unexpected behavior with the application. Customers must ensure that the CRDs are deleted after Operator uninstall.

0.2.3 - 0.2.5

  1. Issues post uninstall:

    1. MutatingWebhookConfiguration and ValidatingWebhookConfiguration are not cleaned up.

    2. The Cleaner cronjob is not cleaned up.

    3. Cleaner cronjob will fail to create jobs due to RBAC revocation

    4. If scheduled backups are defined for any BackupPlan, the corresponding cronjob will not be cleaned up unless the BackupPlan is deleted. This will lead to failed jobs due to RBAC revocation.

  2. Helm charts with sub-charts are not yet supported.

  3. If there is a problem with a Target after it is created (a change in IP, or removal of keys in use), the state of the Target will not change and will still show Available.

  4. OCP UI forms should not be used for resource creation. Instead, the YAML/JSON editor should be used.

  5. Listing CRD resources does not show the correct status in the OCP UI for operator details.

  6. OLM does not consider the scope of the CRDs it is installing.

    1. Problem: If Trilio is installed at cluster scope and then deleted, and the user then tries to install at namespaced scope, OLM allows it. On operator deletion, CRDs are not deleted, so even though the user performed a namespaced installation, the underlying CRDs remain installed at cluster scope, leading to issues with application functioning.
