T4K GitHub Runner


Why?

Infrastructure as Code (IaC) is central to Kubernetes: the goal is to streamline operations from development to production as much as possible. To do that, you want runners that can automate tasks around application development and delivery. Trilio has built a GitHub runner for T4K that can do the following.

Assumptions

  1. Source and destination Kubernetes clusters are set up.

  2. T4K is installed on the source and destination clusters with appropriate licenses.

  3. A backup location is created and available for use.

High-level Steps with Flowchart

Pre-requisites

To get started with the T4K runner, ensure the following prerequisites are met:

  1. Install T4K on the source and destination clusters with appropriate licenses.

  2. Ensure a backup location is created and available for use.

System Setup

  1. actions-runner-controller uses cert-manager to manage the certificates of its admission webhook. Install cert-manager using the command below:

    $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml

  2. Install the custom resources and actions-runner-controller with kubectl. This creates the actions-runner-system namespace in the Kubernetes cluster and deploys the required resources.

    $ kubectl apply -f https://github.com/actions-runner-controller/actions-runner-controller/releases/download/v0.18.2/actions-runner-controller.yaml

  3. Set up authentication so that actions-runner-controller can authenticate with GitHub using a PAT (Personal Access Token). A Personal Access Token can be used to register a self-hosted runner with actions-runner-controller.

    1. Log in to a GitHub account that has admin privileges for the repository, and create a personal access token with the appropriate scope listed below:

      1. Required scope for repository runners: repo (Full control)

  4. Deploy the token as a secret to the Kubernetes cluster:

    $ GITHUB_TOKEN="<token>"

    $ kubectl create secret generic controller-manager -n actions-runner-system --from-literal=github_token=${GITHUB_TOKEN}

  5. Deploy a repository runner

    1. Create a manifest file including Runner resource as follows:

      # runner.yaml
      apiVersion: actions.summerwind.dev/v1alpha1
      kind: Runner
      metadata:
        name: example-runner
      spec:
        repository: summerwind/actions-runner-controller
        env: []

    2. Apply the created manifest file to your Kubernetes cluster:

      $ kubectl apply -f runner.yaml
      runner.actions.summerwind.dev/example-runner created

    3. You can see that the Runner resource has been created:

      $ kubectl get runners
      NAME             REPOSITORY                             STATUS
      example-runner   summerwind/actions-runner-controller   Running

    4. You can also see that the runner pod is running:

      $ kubectl get pods
      NAME             READY   STATUS    RESTARTS   AGE
      example-runner   2/2     Running   0          1m

    5. The runner you created has been registered with your repository.

Executing the Runners

  1. Create a workflow with two jobs: Backup and Restore.

  2. User Inputs:

    1. kubeconfig files for source and destination clusters

    2. Load the user input variables from the ".env" file:

      1. Backup target name and namespace on source cluster

      2. Namespace to be backed up

      3. Backup target name and namespace on destination cluster

      4. Namespace to be used for restore

  3. Workflow triggers: set appropriate triggers for the job to run.

  4. The workflow run is made up of the Backup and Restore jobs, which run sequentially.

  5. Backup job details:

    1. runs-on: self-hosted

    2. Set up the environment:

      1. Install kubectl

      2. Load kubeconfig files for the source and destination clusters

      3. Load the user inputs:

        1. Backup target name and namespace on source cluster

        2. Namespace to be backed up

        3. Backup target name and namespace on destination cluster

        4. Namespace to be used for restore

    3. Create a backup plan.

    4. Perform the backup.

    5. Capture the backup location; it will be needed for the restore on the destination cluster.

  6. Restore job details:

    1. runs-on: self-hosted

    2. Set up the environment:

      1. Install kubectl

      2. Load kubeconfig files for the source and destination clusters

      3. Load the user inputs:

        1. Backup target name and namespace on source cluster

        2. Namespace to be backed up

        3. Backup target name and namespace on destination cluster

        4. Namespace to be used for restore

    3. Perform the restore using the backup location captured by the Backup job.
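The two sequential jobs described above can be sketched as a GitHub Actions workflow file. This is an illustrative example only: the secret names (SRC_KUBECONFIG, DST_KUBECONFIG), environment variable names, and script paths (run-backup.sh, run-restore.sh) are hypothetical placeholders and not part of T4K; only runs-on: self-hosted and the needs: dependency between the jobs reflect requirements stated above.

```yaml
# .github/workflows/t4k-backup-restore.yaml (hypothetical sketch)
name: t4k-backup-restore

on:
  workflow_dispatch:            # trigger manually; set triggers appropriate to your workflow

env:
  # User inputs, as listed above; variable names are illustrative
  SRC_TARGET_NAME: demo-target
  SRC_TARGET_NAMESPACE: trilio-system
  BACKUP_NAMESPACE: my-app
  DST_TARGET_NAME: demo-target-dst
  DST_TARGET_NAMESPACE: trilio-system
  RESTORE_NAMESPACE: my-app-restored

jobs:
  backup:
    runs-on: self-hosted        # runs on the T4K GitHub runner deployed earlier
    steps:
      - uses: actions/checkout@v2
      - name: Load source cluster kubeconfig
        run: |
          mkdir -p "$HOME/.kube"
          echo "${{ secrets.SRC_KUBECONFIG }}" > "$HOME/.kube/config"
      - name: Create backup plan and perform backup
        run: ./scripts/run-backup.sh     # hypothetical script applying the T4K backup resources

  restore:
    runs-on: self-hosted
    needs: backup               # runs sequentially, only after the Backup job succeeds
    steps:
      - uses: actions/checkout@v2
      - name: Load destination cluster kubeconfig
        run: |
          mkdir -p "$HOME/.kube"
          echo "${{ secrets.DST_KUBECONFIG }}" > "$HOME/.kube/config"
      - name: Restore from the captured backup location
        run: ./scripts/run-restore.sh    # hypothetical script applying the T4K restore resources
```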

Extending the runner

  1. The workflow can be extended to support additional destination clusters, covering several environments as part of the test cases.

  2. Different storage classes that may be in use in the source and destination clusters are supported via T4K transforms.

  3. Automation activities: the runners can be extended to handle an array of applications instead of specific ones.
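When extending the setup to more environments or applications, a single Runner may not be enough. The actions-runner-controller project referenced above also provides a RunnerDeployment resource for managing a pool of identical runners; a minimal sketch (the name and replica count are illustrative):

```yaml
# runnerdeployment.yaml (illustrative)
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  replicas: 2                 # run two identical self-hosted runners
  template:
    spec:
      repository: summerwind/actions-runner-controller
```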

Conclusion

With the proliferation of software technologies, automation is key to success, and automation at scale is only possible through IaC. With the rise and mainstream adoption of Kubernetes, microservices and IaC are the future: any technology entering the IT market must enable operations and control via code. Trilio is a purpose-built solution providing point-in-time backup and recovery orchestration for cloud-native applications. Trilio supports the application lifecycle from Day 0 to Day 2 by enabling the respective personas to perform their objectives leveraging IaC.

Ref. Link: https://github.com/actions-runner-controller/actions-runner-controller
(Figure: High-Level Data Flow for T4K GitHub Runner)