TVK GitHub Runner

Why?

IaC (Infrastructure as Code) is central to Kubernetes: it streamlines operations from development to production as much as possible. You want runners that can automate tasks around application development and delivery. TVK has built a runner for GitHub Actions that can do the following.

Assumptions

  1. Source and destination Kubernetes clusters are set up

  2. TVK is installed on the source and destination clusters with appropriate licenses

  3. A backup location is created and available for use

High-level Steps with Flowchart

[Figure: High-level data flow for the TVK GitHub Runner]

Pre-requisites

To get started with the TVK runner, ensure the following prerequisites are met:

  1. Install TVK on the source and destination clusters with appropriate licenses

  2. Ensure a backup location is created and available for use.

System Setup

Ref. Link - https://github.com/actions-runner-controller/actions-runner-controller

  1. actions-runner-controller uses cert-manager for certificate management of its admission webhook. Install cert-manager using the command below:

    $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml

  2. Install the custom resources and actions-runner-controller with kubectl. This will create the actions-runner-system namespace in the Kubernetes cluster and deploy the required resources.

    $ kubectl apply -f https://github.com/actions-runner-controller/actions-runner-controller/releases/download/v0.18.2/actions-runner-controller.yaml

  3. Set up authentication for actions-runner-controller to authenticate with GitHub using a PAT (Personal Access Token). Personal Access Tokens can be used by actions-runner-controller to register a self-hosted runner.

    1. Log-in to a GitHub account that has admin privileges for the repository, and create a personal access token with the appropriate scopes listed below:

      1. Required Scopes for Repository Runners : repo (Full control)

  4. Deploy the token as a secret to the Kubernetes cluster:

    $ GITHUB_TOKEN="<token>"

    $ kubectl create secret generic controller-manager -n actions-runner-system --from-literal=github_token=${GITHUB_TOKEN}

  5. Deploy a repository runner

    1. Create a manifest file including a Runner resource as follows:

      # runner.yaml
      apiVersion: actions.summerwind.dev/v1alpha1
      kind: Runner
      metadata:
        name: example-runner
      spec:
        repository: summerwind/actions-runner-controller
        env: []

    2. Apply the created manifest file to your Kubernetes cluster

      $ kubectl apply -f runner.yaml
      runner.actions.summerwind.dev/example-runner created

    3. You can see that the Runner resource has been created

      $ kubectl get runners

      NAME             REPOSITORY                             STATUS
      example-runner   summerwind/actions-runner-controller   Running

    4. You can also see that the runner pod is running

      $ kubectl get pods

      NAME             READY   STATUS    RESTARTS   AGE
      example-runner   2/2     Running   0          1m

    5. The runner you created is now registered with your repository
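
If you need more than a single runner, actions-runner-controller also provides a RunnerDeployment resource that manages a replicated pool of identical runners. A minimal sketch is shown below (the repository name is a placeholder; adjust replicas to your workload):

```yaml
# runner-deployment.yaml -- sketch: manages a pool of identical self-hosted runners
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment
spec:
  replicas: 2              # number of runner pods to keep registered
  template:
    spec:
      repository: summerwind/actions-runner-controller  # placeholder repository
```

Apply it the same way as the single Runner manifest (kubectl apply -f runner-deployment.yaml).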

Executing the Runners

  1. Create a workflow with two jobs: Backup and Restore

  2. User Inputs:

    1. kubeconfig files for source and destination clusters

    2. Load the user input variables from the ".env" file:

      1. Backup target name and namespace on source cluster

      2. Namespace to be backed up

      3. Backup target name and namespace on destination cluster

      4. Namespace to be used for restore

  3. Workflow triggers - set appropriate triggers for the job to run

  4. The workflow run is made up of Backup and Restore jobs that run sequentially

  5. Backup job details:

    1. runs-on: self-hosted

    2. Set up the environment

      1. install kubectl

      2. Load kubeconfig files for source and destination clusters

      3. Load the user inputs -

        1. Backup target name and namespace on source cluster

        2. Namespace to be backed up

        3. Backup target name and namespace on destination cluster

        4. Namespace to be used for restore

    3. Create a backup plan

    4. Perform backup

    5. Capture the backup location that will be needed for the restore on destination cluster

  6. Restore job details:

    1. runs-on: self-hosted

    2. Set up the environment

      1. install kubectl

      2. Load kubeconfig files for source and destination clusters

      3. Load the user inputs -

        1. Backup target name and namespace on source cluster

        2. Namespace to be backed up

        3. Backup target name and namespace on destination cluster

        4. Namespace to be used for restore

    3. Perform restore using the location of the backup
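
The steps above can be sketched as a GitHub Actions workflow. The job layout (two sequential self-hosted jobs chained with needs:) follows the description; the script names, paths, and .env variable names are illustrative assumptions, not part of TVK itself:

```yaml
# .github/workflows/tvk-backup-restore.yaml -- sketch only; scripts and paths are placeholders
name: tvk-backup-restore
on:
  workflow_dispatch:                    # manual trigger; set whatever triggers fit your flow

jobs:
  backup:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Set up environment
        run: |
          # install kubectl and point at the source cluster (placeholder paths)
          ./scripts/install-kubectl.sh
          echo "KUBECONFIG=$PWD/kubeconfigs/source" >> $GITHUB_ENV
      - name: Create backup plan and perform backup
        run: |
          source .env                   # BACKUP_TARGET, BACKUP_TARGET_NS, BACKUP_NS, ...
          ./scripts/create-backupplan.sh
          ./scripts/run-backup.sh       # captures the backup location for the restore job

  restore:
    runs-on: self-hosted
    needs: backup                       # run only after the backup job succeeds
    steps:
      - uses: actions/checkout@v2
      - name: Perform restore on destination cluster
        run: |
          source .env                   # RESTORE_TARGET, RESTORE_TARGET_NS, RESTORE_NS, ...
          export KUBECONFIG=$PWD/kubeconfigs/destination
          ./scripts/run-restore.sh      # consumes the captured backup location
```

Passing the captured backup location between the two jobs can be done with a workflow artifact or a job output, depending on how your backup script records it.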

Extending the runner

  1. The workflow can be extended to support additional destination clusters to cover several environments as part of the test cases.

  2. Different storage classes that may be in use in the source and destination clusters are supported via TVK transforms.

  3. Automation activities: the runners can be extended to handle an array of applications instead of specific ones.
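
One way to fan the restore job out across several destination clusters (point 1) or several applications (point 3) is a GitHub Actions matrix strategy; the cluster and application names below are placeholders:

```yaml
# Sketch: run the restore job once per destination cluster and application
jobs:
  restore:
    runs-on: self-hosted
    strategy:
      matrix:
        cluster: [staging, production]   # placeholder destination cluster names
        app: [app-a, app-b]              # placeholder application namespaces
    steps:
      - name: Restore ${{ matrix.app }} on ${{ matrix.cluster }}
        run: |
          export KUBECONFIG=$PWD/kubeconfigs/${{ matrix.cluster }}
          ./scripts/run-restore.sh "${{ matrix.app }}"   # placeholder script
```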

Conclusion

With the proliferation of software technologies, automation is key to success, and automation is only possible through IaC. With the rise and mainstream adoption of Kubernetes, microservices and IaC are the future. Any technology entering the IT market must enable operations and control via code. TrilioVault is a purpose-built solution providing point-in-time orchestration for cloud-native applications. Trilio supports the application lifecycle from Day 0 to Day 2 by enabling the respective personas to achieve their objectives by leveraging IaC.