Azure Pipelines is a cloud-based continuous integration and deployment (CI/CD) platform for building, testing and deploying applications to cloud environments like Azure, AWS, or GCP through Kubernetes services, Azure Web Apps, and Azure Functions. Azure DevOps pipelines can build container images, push the images to a container registry and deploy these images to Kubernetes environments of our choice. 

Setting up a pipeline that leverages a custom container registry or deploys to multi-cloud Kubernetes environments requires cloud-specific expertise and is time-consuming. For instance, building a pipeline from scratch requires:

  1. Containerizing a service – Creating Dockerfile and Kubernetes YAML files
  2. Preparing the agent pool – Agent pools are the environments within which the pipeline scripts are executed, including the container builds. Either Microsoft-hosted or self-hosted agents can be used. In either scenario, the agents need to be configured to build and push the Docker images
  3. Deployment Configuration – Only Azure Kubernetes Service is supported as the target Kubernetes environment.

Introducing gopaddle

gopaddle is a no-code platform that helps build, deploy and maintain cloud-native applications across hybrid environments. It automatically generates Dockerfiles and Kubernetes YAML files through an intelligent scaffolding process. gopaddle exposes APIs that can be extended and integrated with other pipeline tools like Azure DevOps.

No-Code and Multi-Cloud Capabilities with Azure DevOps

By integrating gopaddle with Azure DevOps pipelines, developers get no-code automation along with the flexibility of deploying to multiple cloud platforms. The following table illustrates the additional capabilities developers get by integrating both solutions:

The comparison below uses two columns: Azure DevOps (without gopaddle) and Azure DevOps (with gopaddle).

Releases and Distributions
  • Without gopaddle: Supports multi-branch triggers with validation. Multi-strategy builds for different OSes – Linux, macOS and Windows are supported.
  • With gopaddle: Supports multi-branch, parallel builds across multiple releases and distributions. Customized build strategy for individual releases and distributions.

Build Environments
  • Without gopaddle: Agent pools – Microsoft-hosted and self-hosted agents are supported. Agent pools can be Linux, macOS or Windows VMs, or Linux/Windows containers.
  • With gopaddle: On-demand – builds run inside your Kubernetes cluster as isolated build jobs with a specified CPU/memory capacity. This requires no additional effort and standardizes the environments across build and deployment workflows. It uses a shared Kubernetes environment and can save cost, as the environment scales up/down automatically based on build jobs.

Registries
  • Without gopaddle: Azure Container Registry, Google Container Registry (via Google service accounts) and Docker Hub (username/password-based authentication) are supported. For artifacts other than container images, Azure Artifacts or a NuGet repository can be used.
  • With gopaddle: Additional support for a variety of Docker registries – gopaddle supports Azure Container Registry, AWS Elastic Container Registry, Google Container Registry, Docker Hub private registries, and on-premise container registries like GitHub Registry.

Source Control Repos
  • Without gopaddle: Azure Repos/TFS, Bitbucket Cloud, GitHub Cloud, GitHub Enterprise, external Git repositories, and Subversion. Check the service connection types here – https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml
  • With gopaddle: Additional support for a variety of source control repositories – gopaddle supports GitHub Cloud & on-prem, GitLab Cloud & on-prem, and Bitbucket Cloud.

Multi-Arch Build
  • Without gopaddle: Multi-arch builds require Linux agents along with a QEMU emulator.
  • With gopaddle: Native support for multi-arch builds – gopaddle supports Intel x86_64 (Linux only), AMD64 (Linux only), ARM64 (Linux only) and multi-arch (Linux only).

Deploy to K8s
  • Without gopaddle: Has integrations with Azure Kubernetes Service. Other Kubernetes services are supported through service connections (using username/password or token-based authentication) along with scripts that deploy to Kubernetes clusters using cloud-specific APIs, kubectl or the Kubernetes APIs. In either scenario, scripting is required to deploy applications to Kubernetes.
  • With gopaddle: Works seamlessly with Azure AKS, Google GKE, AWS EKS, HPE Ezmeral Runtime Enterprise, Huawei Kubernetes Service, MicroK8s, and any CNCF-compatible Kubernetes environment.

Workflow

Having explored the advantages of using gopaddle with the Azure DevOps pipeline, let us take a look at the process of integrating these two platforms. We will explore this in the context of a .NET Core application hosted in a GitHub repository. To build a pipeline for this application, we need to initialize the project and create Docker/Kubernetes artifacts like the Dockerfile and the Kubernetes YAML files. We can leverage gopaddle’s intelligent scaffolding process to generate these artifacts.

The workflow below explains the step-by-step process of initializing the project.

Project initialization workflow

a. Subscribe to gopaddle from here.

b. Provision a Kubernetes cluster: Follow the steps here to provision a Kubernetes cluster on AWS/Google/Azure. If you are provisioning a GKE cluster, you need at least the capacity specified below to build the eShopOnWeb container in this cluster.

Type - N1-Standard-2
Disk - 40GB
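
If you prefer to create the GKE cluster yourself with the gcloud CLI before registering it with gopaddle, a cluster of roughly this size could be provisioned as follows (the cluster name and zone below are placeholders):

# Provision a single-node GKE cluster matching the capacity above
gcloud container clusters create eshop-build-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-2 \
  --disk-size 40 \
  --num-nodes 1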

Note: If you are using Azure Container Registry and AKS, then you need to register the Azure Cloud account. Registering an Azure Cloud account requires Single Sign-On (SSO) to your Azure account. This requires an OAuth application to be created in your Azure Cloud account which can authenticate our SSO requests. You can find more information here on creating an OAuth application and registering an Azure Cloud account with gopaddle.

c. Create an Allocation policy in the gopaddle UI, with a minimum capacity for building the eShopOnWeb container.

CPU Requests    -  500 millicore
CPU Limits      -  1500 millicore
Memory Requests -  4 G
Memory Limits   -  7 G

With the above node size and the allocation capacity, it takes approximately 4-5 mins to generate a Docker image. If you would like to speed up the build process, then you need to increase the node capacity and the allocation policy.

d. Add a container registry to gopaddle by following the steps here. This registry will be used to push/pull the Docker images during the build and the deployment process. 

e. Download and install gpctl command-line utility by following the steps here.

f. Fork and Clone the repository locally. 

git clone https://github.com/dotnet-architecture/eShopOnWeb

Note: We need to fork this project to our own GitHub account, as the Azure DevOps pipeline requires access to commit a pipeline script to this repo.

g. Initialise the project from your local desktop: Create 3 scripts for building, starting, and checking the health of the application.

buildScript.sh

#!/bin/bash
cd src/Web
dotnet restore
dotnet publish -c Release -o out

startScript.sh

#!/bin/bash
cd src/Web/out
dotnet Web.dll

healthCheck.sh

#!/bin/sh
curl http://localhost:5000/

h. Export ENVs in your local machine.

export ASPNETCORE_ENVIRONMENT=Development
export ASPNETCORE_URLS=http://+:5000

i. Initialise the project using gpctl.

gpctl login --emailID=<emailId> --password=<password> --endPoint=https://portal.gopaddle.io
gpctl init --startScript=./startScript.sh --buildScript=./buildScript.sh --healthCheck=./healthCheck.sh

Note: During the init process, choose the Kubernetes cluster to build and deploy the application and choose the Docker registry to use for pushing the image.

Once the gpctl init completes, gopaddle creates a .gp file with a list of resource IDs that can be used to build the pipeline.

Creating Azure DevOps Pipeline

Now that we have initialized the project, it is time for us to create a pipeline in Azure DevOps. 

Azure DevOps pipeline

From the Azure DevOps subscription, let us perform these steps.

a. Create a new pipeline.

b. Select the forked GitHub repository and the branch. Once you select the repo and the branch, you will be redirected to the GitHub sign-in page, or, if you have already logged in to GitHub, you will see the list of repositories in your GitHub account. You can choose the repository and authenticate the request from Azure DevOps.

List of repositories in Github account

c. Create an Azure DevOps pipeline of type ‘Starter pipeline’.

Azure DevOps Pipeline - Starter pipeline

Once the pipeline is created, a pipeline script file named azure-pipelines.yml gets added to the project root folder in the GitHub repository.

d. Edit this pipeline script and modify the contents based on the template script from here.

e. Create a secure variable to access the gopaddle API token by creating a variable group named gp-api-key under the Library option in the Azure DevOps portal. Store the gopaddle API token under this group with the variable key GP_API_TOKEN and the value set to the API token from the gpctl init output.

f. Replace the contents of the variables section in the pipeline script with the values from the .gp file generated during the gpctl init process.
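
For reference, here is a minimal sketch of how that variables section might look once filled in. The gp-api-key group and GP_API_TOKEN come from the previous step; the other key names are hypothetical placeholders standing in for the resource IDs found in your .gp file:

variables:
- group: gp-api-key                  # exposes GP_API_TOKEN to the pipeline
- name: gp_project_id                # hypothetical key – copy the actual ID from the .gp file
  value: '<project-id-from-.gp-file>'
- name: gp_service_id                # hypothetical key – copy the actual ID from the .gp file
  value: '<service-id-from-.gp-file>'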

A Closer Look at the Pipeline Script

The pipeline has two stages – building the application, and performing a rolling update once the build completes. The pipeline can be extended to include more steps, like running a regression suite or sending an email to the project stakeholders.

As soon as code is committed to the project, Azure DevOps detects the change and triggers the pipeline. When the pipeline script executes, it calls the gopaddle API to trigger a container build. The container build process builds the container image in the pre-configured Kubernetes cluster and pushes the Docker image to the container registry. The pipeline script waits until the build process is complete. The second script picks up the build information from the previous step – such as the build ID and the commit description – and initiates a rolling update using the gopaddle API.
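
The sketch below shows only the overall shape of those two steps; the endpoints, payloads and authorization header are placeholders, and the real API calls should be taken from the template script referenced earlier:

# Structural sketch only – the URLs, payloads and auth header format are placeholders.
steps:
- bash: |
    # Step 1: trigger a container build via the gopaddle API and wait for it
    curl -s -X POST "https://portal.gopaddle.io/<build-trigger-endpoint>" \
         -H "Authorization: $GP_API_TOKEN" \
         -H "Content-Type: application/json"
    # ...poll the build status here until the build completes...
  displayName: 'gopaddle container build'
  env:
    GP_API_TOKEN: $(GP_API_TOKEN)   # secret variables must be mapped explicitly into scripts

- bash: |
    # Step 2: initiate a rolling update with the build information from the previous step
    curl -s -X PUT "https://portal.gopaddle.io/<rolling-update-endpoint>" \
         -H "Authorization: $GP_API_TOKEN" \
         -H "Content-Type: application/json"
  displayName: 'gopaddle rolling update'
  env:
    GP_API_TOKEN: $(GP_API_TOKEN)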

Demo

You can find a brief demo of the pipeline integration: 

As illustrated, the modernization process – generating the Dockerfile and the Kubernetes YAML files – takes about 10 minutes (most of this time is spent on building the container image and deploying the application), and creating the pipeline takes about another 10 minutes.

Through a simple integration, we get the benefit of modernizing the application and creating an end-to-end CI/CD pipeline by integrating gopaddle and Azure DevOps pipelines.

Configurator

Configurator is an open-source project that version controls and keeps Kubernetes ConfigMaps and Secrets in sync with the deployments. Configurator uses CRDs to create CustomConfigMaps and CustomSecrets, which in turn create ConfigMaps and Secrets with a postfix. Whenever a change is detected in a CustomConfigMap or a CustomSecret, Configurator automatically generates a new ConfigMap with a new postfix. This acts as a version-control system for ConfigMaps. A change in a ConfigMap not only creates a new ConfigMap version but also rolls out a new deployment version across all the deployments using that ConfigMap. This enables both rolling updates and rollbacks of ConfigMaps in sync with the deployment versions.

This blog will focus on the following topics:

  • Installing Configurator using the helm chart.
  • Customizing Configurator helm chart based on requirements.
  • Contributing back to the Configurator project.

System Requirements

Make sure that you have installed helm on your machine and that you are connected to a Kubernetes cluster. The chart is qualified for helm version > v3 and Kubernetes version v1.20.8. Follow the documentation from this link to install helm: https://helm.sh/docs/helm/helm_version/

helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

Installing Configurator using helm chart

Follow the below steps to directly deploy the Configurator helm package. Make sure that a namespace ‘configurator’ already exists in your cluster. If not, create a namespace with the following command.

kubectl create namespace configurator

Add the Configurator helm repository by executing the following command:

helm repo add gopaddle_configurator https://gopaddle-io.github.io/configurator/helm/

Once the command is executed, verify the repository by running the command below. You should see the gopaddle_configurator repo in the list.

helm repo list

The output must be similar to this:

NAME                   URL
hashicorp              https://helm.releases.hashicorp.com
gopaddle_configurator  https://gopaddle-io.github.io/configurator/helm/

Once you’ve verified the repo, install the helm chart with the following command: helm install <release_name> <repo_name/chart_name>

helm install release1.0.0 gopaddle_configurator/configurator

This installs the Configurator CRDs and the controller in the ‘configurator’ namespace. After you install the helm chart, verify by listing the resources in the corresponding namespace using the following commands.

kubectl get pods -n configurator
kubectl get crds -n configurator
kubectl get serviceaccounts -n configurator
kubectl get clusterrolebindings -n configurator

The configurator is now ready for use. Here is a reference blog on how to use configurator with the deployments: https://blog.gopaddle.io/2021/04/01/strange-things-you-never-knew-about-kubernetes-configmaps-on-day-one/

Customizing Configurator helm chart based on requirements

Sometimes, you may wish to change the Configurator image name, Docker repository, image tag or even include other service charts along with Configurator. Modifying the Configurator helm is pretty straightforward. Make sure you’ve cloned the Configurator GitHub project before proceeding with the next steps.

To clone the project, run the following command:

git clone https://github.com/gopaddle-io/configurator.git

The helm package needs to be unpacked before the chart can be modified. The packaged chart (a .tgz file) is present at the path configurator/helm in the Configurator project. Choose this approach when you want to modify the helm chart configuration. Unpack the file with the following command.

tar -zxvf <path to .tgz file>

This will extract the contents of the chart into a folder. Once extracted, the helm chart’s file tree looks like this:

configurator
├── charts
├── Chart.yaml
├── crds
│ ├── crd-customConfigMap.yaml
│ └── crd-customSecret.yaml
├── templates
│ ├── configurator-clusterrolebinding.yaml
│ ├── configurator-clusterrole.yaml
│ ├── configurator-deployment.yaml
│ ├── configurator-serviceaccount.yaml
│ └── tests
└── values.yaml

The crds directory contains the custom resource definition files – crd-customConfigMap.yaml and crd-customSecret.yaml. The templates directory contains the resources’ YAML files; in our case it contains the roles, the role bindings and the Configurator service definitions. The charts directory is empty by default; this folder can be used to add your application charts that use the Configurator custom resources. The Chart.yaml file contains information about the chart, such as its name, description and type. The values.yaml file declares the default values that are passed into the templates:

# Default values for my_chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
replicas: 1
namespace: configurator
image: gopaddle/configurator:latest

You can edit the values.yaml file to suit your requirements, such as changing the namespace, the replica count, the image name, the Docker repository or the image tag. Make sure that the namespace used in values.yaml exists in the cluster before you do a helm install. Once the necessary configuration is done, execute the following command to install the chart into your cluster: helm install <release_name> <chart_name>

helm install release1.0.0 configurator

This will install the helm chart inside the cluster with the new configurations.
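
Alternatively, if you only need to override a few values, you can keep values.yaml untouched and pass the overrides at install time. This is a sketch that assumes the templates consume the keys exactly as declared in values.yaml (the image name below is an example):

helm install release1.0.0 configurator \
  --set image=myrepo/configurator:v1.1 \
  --set replicaCount=2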

Contributing back to the configurator project

To contribute the helm changes back to the Configurator project, you need to package the helm chart with the following command:

helm package <path to helm chart>

This command packages the chart into a .tgz file. After packaging the chart, raise a pull request for code review and merge.
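
For example, packaging the chart directory produces a versioned archive (the exact file name depends on the version in Chart.yaml):

helm package configurator
# Successfully packaged chart and saved it to: ./configurator-<version>.tgz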

You can take a look at this open-source project @ https://github.com/gopaddle-io/configurator.git.

For any queries on how to use or how to contribute to the project, you can reach us on our discord server — https://discord.gg/dr24Z4BmP8


Configurator is a version control and a sync service that keeps Kubernetes ConfigMaps and Secrets in sync with the deployments. It enables both rolling update and rollback of deployments and statefulsets along with their configuration state.

You can take a look at this open-source project @ https://github.com/gopaddle-io/configurator.git.

In this blog, I would like to walk you through the steps for using a custom Docker repository while building Configurator.

As a pre-requisite, you need to have golang and the Docker CLI installed on your machine. You also need a Kubernetes cluster (version 1.16+). Install the kubectl command-line tool and connect to the Kubernetes cluster.

Pre-requisite

Fork the project and clone it to your local machine:

git clone https://github.com/<your-githubhandle>/configurator.git

Check whether golang is configured

$ go version
go version go1.13.3 linux/amd64
....

$ echo $GOPATH
/home/user/codebase
....

$ echo $GOHOME
/home/user/codebase

Check whether Docker CLI works

$  sudo docker run hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete 
Digest: sha256:7d91b69e04a9029b99f3585aaaccae2baa80bcf318f4a5d2165a9898cd2dc0a1
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Verify if Kubernetes connectivity works

$ kubectl cluster-info

Kubernetes control plane is running at https://35.224.198.88
GLBCDefaultBackend is running at https://35.224.198.88/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://35.224.198.88/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.224.198.88/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Configuring the Docker repository

Once the pre-requisites are met, we can start configuring the Docker registry in order to build Configurator and push it to your private repository.

Log in to your Docker Hub account.

docker login --username <docker-hub-userName> --password <docker-hub-password>

  • Configure the Docker Hub repository name and the image tag in the Makefile. Edit the Makefile and change the DOCKER_IMAGE_REPO and DOCKER_IMAGE_TAG variables to the Docker repository and the tag name with which you prefer to push the newly built Docker image.
$ cd configurator
$ vi Makefile

ifndef DOCKER_IMAGE_REPO
  DOCKER_IMAGE_REPO=demogp/demo-configurator
endif

ifndef DOCKER_IMAGE_TAG
  DOCKER_IMAGE_TAG=v1.0
endif

  • Now edit the configurator-deployment.yaml file and change the Docker repository and the image name from which the Configurator controller image is to be pulled.
$ cd deploy/
$ vi configurator-deployment.yaml
....
....
    spec:
      containers:
      - image: demogp/demo-configurator:v1.0
        imagePullPolicy: Always
        name: configurator
      serviceAccountName: configurator-controller

The repository configurations are complete.

Build and Deploy Configurator

  • Move to the root of the project directory and execute the make command mentioned below.
$ cd ../
$ make clean build push deploy
....
....
rm -f configurator
docker rmi demogp/demo-configurator:v1.0
Error: No such image: demogp/demo-configurator:v1.0
Makefile:16: recipe for target 'clean-configurator' failed
make: [clean-configurator] Error 1 (ignored)
go mod vendor
....
....
go build -o configurator . 
go: downloading github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e
go: downloading github.com/robfig/cron v1.2.0
go: downloading github.com/google/go-cmp v0.5.2
....
docker build . -t demogp/demo-configurator:v1.0
Sending build context to Docker daemon  78.48MB
Step 1/6 : FROM golang
latest: Pulling from library/golang
4c25b3090c26: Pull complete 
1acf565088aa: Pull complete 
b95c0dd0dc0d: Pull complete 
5cf06daf6561: Pull complete 
4541a887d2a0: Pull complete 
dcac0686adef: Pull complete 
9717d2820c6a: Pull complete 
Digest: sha256:634cda4edda00e59167e944cdef546e2d62da71ef1809387093a377ae3404df0
Status: Downloaded newer image for golang:latest
 ---> 8735189b1527
Step 2/6 : MAINTAINER Bluemeric <info@bluemeric.com>
 ---> Running in 1a41655fda14
Removing intermediate container 1a41655fda14
 ---> ffbd8038390d
Step 3/6 : RUN mkdir /app/
 ---> Running in d24ca3cc6c44
Removing intermediate container d24ca3cc6c44
 ---> ae25de38a5fc
Step 4/6 : WORKDIR /app/
 ---> Running in 86ede46c4736
Removing intermediate container 86ede46c4736
 ---> 3a6c8e408e7b
Step 5/6 : Add configurator /app/
 ---> 3c99e28f20d4
Step 6/6 : CMD ["./configurator"]
 ---> Running in 714c9a7524d0
Removing intermediate container 714c9a7524d0
 ---> c63e68e4ceb2
Successfully built c63e68e4ceb2
Successfully tagged demogp/demo-configurator:v1.0
docker push demogp/demo-configurator:v1.0
The push refers to repository [docker.io/demogp/demo-configurator]
04b1dc245435: Pushed 
acf8d8aa9ae0: Pushed 
4538c63ee03d: Mounted from library/golang 
84140b757a05: Mounted from library/golang 
9444aade22b2: Mounted from library/golang 
9889ce9dc2b0: Mounted from library/golang 
21b17a30443e: Mounted from library/golang 
05103deb4558: Mounted from library/golang 
a881cfa23a78: Mounted from library/golang 
v1.0: digest: sha256:3f21ea83d6a215705bd3bf7d2e9f3ceef55cb6ba05ceb8964848f823b8f2aa16 size: 2215
kubectl create ns configurator		
namespace/configurator created
kubectl apply -f deploy/configurator-serviceaccount.yaml
serviceaccount/configurator-controller created
kubectl apply -f deploy/configurator-clusterrole.yaml
clusterrole.rbac.authorization.k8s.io/configurator created
kubectl apply -f deploy/configurator-clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/Configurator created
kubectl apply -f deploy/crd-customConfigMap.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/customconfigmaps.configurator.gopaddle.io created
kubectl apply -f deploy/crd-customSecret.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/customsecrets.configurator.gopaddle.io created
kubectl apply -f deploy/configurator-deployment.yaml
deployment.apps/configurator-controller created

The build target ‘build’ builds the Configurator controller and creates a new Docker image, ‘push’ pushes the image to the Docker registry, and ‘deploy’ deploys the Configurator CRDs and the controller to the Kubernetes cluster. Once the build is complete, you can see the Configurator image in your Docker Hub account.

Configurator image on dockerhub

How to validate the deployment?

Execute the kubectl commands below to validate that the deploy target has successfully installed Configurator in your Kubernetes environment.

$ kubectl get ns
NAME              STATUS   AGE
configurator      Active   2m22s
....

$ kubectl get crds -n configurator
NAME                                             CREATED AT
customconfigmaps.configurator.gopaddle.io        2021-08-24T07:45:45Z
customsecrets.configurator.gopaddle.io           2021-08-24T07:45:47Z
....

$ kubectl get pods -n configurator
NAME                                       READY   STATUS    RESTARTS   AGE
configurator-controller-666d6794bb-4lm6c   1/1     Running   0          6m52s


$ kubectl get clusterrolebinding | grep Configurator
Configurator     ClusterRole/configurator 10m

Removing Configurator

To remove Configurator from the cluster and clean up the controller binary and the local Docker image, you can run the remove and clean targets as shown below.

$ make remove clean
....
....
kubectl delete -f deploy/configurator-deployment.yaml
deployment.apps "configurator-controller" deleted
kubectl delete -f deploy/crd-customConfigMap.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io "customconfigmaps.configurator.gopaddle.io" deleted
kubectl delete -f deploy/crd-customSecret.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io "customsecrets.configurator.gopaddle.io" deleted
kubectl delete -f deploy/configurator-clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io "Configurator" deleted
kubectl delete -f deploy/configurator-clusterrole.yaml
clusterrole.rbac.authorization.k8s.io "configurator" deleted
kubectl delete -f deploy/configurator-serviceaccount.yaml
serviceaccount "configurator-controller" deleted
kubectl delete ns configurator
namespace "configurator" deleted
....
....
rm -f configurator
docker rmi demogp/demo-configurator:v1.0
Untagged: demogp/demo-configurator:v1.0
Deleted: sha256:1f997b671507d230e3e685d434b3e9c678b4cf356ea044448b73ae489794ae24
Deleted: sha256:dec6aeb58347abf3832e747d4478d6493ed1da39639f5ba10dacb372281f59a2
Deleted: sha256:0e2e52831fa3e6475b347c40369b9cc3a41e2aaabd232480a244c69a90ab9cf3
Deleted: sha256:4851458a100d5c34297813abc157b15baf1f25bfbbdf9c1cca8e232b03f31103
Deleted: sha256:07f715e9deed52886e73de55a223dff83baa071f25264bfad677e8644f377fd7
Deleted: sha256:1fbf81f2d59e63c727e4b97b7a139de6d1fbf89f6715f8533f4c1e3f018a7f92
Deleted: sha256:0fed8f83cbe4268f8bd2692972ff3310fb88975a829ae7365662a7f5f8efd525

For any queries on how to use or how to contribute to the project, you can reach us on the discord server – https://discord.gg/dr24Z4BmP8

gopaddle v4.1 is a minor update with a few bug fixes and minor enhancements.

In this minor release we have introduced a gopaddle command line utility – gpctl to scaffold applications and import existing Kubernetes workloads to gopaddle. We have added support for Huawei Cloud and features that help in advanced configurations for service deployments and Docker builds.

gpctl import

Imports a pre-existing Kubernetes project with YAML files to gopaddle.

  • Use cases:
    • Migrate from GitOps to GUI based Cloud Native Governance platform
    • Migrate from one cloud platform to another by building a reusable gopaddle template.
  • Support Matrix:
    • Supported on Ubuntu 20.04
    • Supports github, bitbucket and gitlab source control repositories
    • If the Kubernetes YAML files in the Kubernetes project have references to Docker images, then they can be linked during the gpctl import. gpctl supports Docker Hub public & private repositories, AWS ECR, Google GCR, Azure ACR, and any Quay-based private repositories.

gpctl init (Alpha Version)

Initialize a microservice from a source control repository and deploy it to Kubernetes in minutes.

  • Use cases:
    • Code to Kubernetes using a single command
    • Automatically create a Docker file by profiling the microservice
    • Automatically generate Kubernetes YAML files
    • Deploy the service on Kubernetes and get the end point to access the service
  • Support Matrix:
    • Supported on Ubuntu 18.04 or later
    • Only GitHub-based projects are supported
    • Java, NodeJS, Python and any type of Linux-based workload that does not bring up a terminal


Huawei Cloud Support

Use a pre-existing Huawei Cloud cluster as an external cluster by registering it via a bastion host or directly using its private IP address.

  • Use case:
    • Use Huawei Cloud for building docker containers
    • Use Huawei Cloud for deploying applications
    • Launch Stateful services on Huawei Cloud using Elastic Volume Service (EVS)
    • Use Huawei LoadBalancer to access services via Domain Name and Ingress rules
    • Launch Services on Huawei Cloud and access them using Public Elastic IP
  • Open Issues
    • Container Terminal doesn’t work when the Cluster is registered via Bastion Host

Shared Persistent Volumes

Use Shared persistent volumes across services within an application.

Dockerfile Custom Path and Build Arguments

Use a custom Dockerfile path and build arguments (ARG) while creating a Docker Project in gopaddle.

  • Use cases:
    • If the Docker project uses a Dockerfile in a location other than the project root, the custom Dockerfile path can be provided at the time of adding the build scripts to a container
    • If the container build requires certain environment variables to be set, they can be provided using the build arguments at the time of adding the build scripts to a container (see the docker CLI equivalent after this list)
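
For reference, these two settings correspond to the standard docker CLI options shown below; the Dockerfile path, argument name and image tag are examples only:

# Build with a Dockerfile outside the project root and pass a build argument
docker build -f docker/Dockerfile.web \
  --build-arg ASPNETCORE_ENVIRONMENT=Development \
  -t myregistry/web:latest .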

Version Control ConfigMaps and Secrets

Configurator integration: Use configurator to update and maintain configMaps and secrets in gopaddle

  • Use cases:
    • Keep services and configMaps/secrets in sync
    • Version control configMaps/secrets
    • Perform rolling updates or rollbacks on configMaps/secrets along with Deployments and Stateful sets

Custom Ingress

Use an Ingress controller for deployments. Define a custom ingress controller like any other service in gopaddle and add it to the deployment templates.

  • Use Case:
    • Use a custom ingress controller other than the default controller provided by gopaddle

Azure Autoscaling

Define autoscaling triggers for Azure node pools.

Node and Service Affinity Rules

Deploying on Kubernetes is one complex task, but dealing with surprises during maintenance is another. I can speak from my own experience of maintaining our gopaddle platform deployments on Kubernetes for more than a year now. Even with careful planning and collective knowledge from within the team, there are still hidden challenges in keeping the deployments intact. We get to learn those hidden challenges only by running production systems on Kubernetes for a while. This blog shares one such lesson we learned. I would like to share our experience with ConfigMaps and the open-source solution we built to overcome some of the trouble with ConfigMaps. For the rest of the blog I am going to focus on ConfigMaps, but Secrets have the same set of challenges. Though I have referenced deployments throughout the blog, the given scenario and solution exist for Kubernetes Secrets as well.

ConfigMaps and Secrets are often overlooked topics when it comes to cloud-native deployments, but they can add unforeseen challenges during application maintenance. Let me first introduce what ConfigMaps are.

ConfigMaps are Kubernetes resources that are used to store application configurations. They enable build-time and run-time attribute segregation in a cloud-native deployment. For instance, by using ConfigMaps you do not have to package application configurations along with your container images, so changing application configurations does not require the entire application to be rebuilt. We leverage ConfigMaps to keep our applications 12-factor compatible. In a nutshell, ConfigMaps are:

  • Collection of regular files or key/value pairs
  • Can be used to set Environment variables inside a container (using ValueFrom: ConfigMapRef to refer to values defined in ConfigMaps)
  • Can be mounted as directories inside containers (using VolumeMounts/MountPath keywords) and all the files within ConfigMaps get mounted inside the container on the mountPath provided.
  • Shared across deployments/replicas
  • Confined to a namespace
  • Created from files, literals, kustomize configMapGenerator
  • Replace all files within the mount path: since ConfigMaps are mounted inside the container on a given mount path, any existing files and folders within the container at that path will not be available.

Here is an example of how ConfigMaps are defined and referenced inside a deployment specification.

Example of defining and using ConfigMaps inside deployment specifications
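
The original post illustrates this with an image. As a representative sketch (the names app-config and demo are illustrative), a ConfigMap key can be consumed both as an environment variable and as files on a mount path:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  application.properties: |
    FOO=Bar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: busybox
        command: ['sh', '-c', 'env; ls /config; sleep 3600']
        env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:       # set an env var from a ConfigMap key
              name: app-config
              key: LOG_LEVEL
        volumeMounts:
        - name: config-volume
          mountPath: /config       # all keys appear as files under /config
      volumes:
      - name: config-volume
        configMap:
          name: app-config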

Hidden challenges of ConfigMaps

Some of the challenges with ConfigMaps are realized as soon as they are mounted inside the container, but some are unearthed only during maintenance. The following are some challenges we have observed:

  1. Can’t execute files in ConfigMaps: starting from K8s 1.9.6, ConfigMaps are mounted as read-only files by default. Hence you may not be able to execute or run these files. Say, if you are planning on executing these files as the container EntryPoint or CMD args, the container may crash during startup as it cannot execute these read-only files. If you have recently upgraded the cluster version, your deployments may break due to this. Please check this K8s issue for information on how to configure ReadOnlyAPIDataVolumes to mount ConfigMaps as read-write files.
  2. Deployments and ConfigMaps are loosely coupled, i.e., they follow different lifecycles, but updating the contents of a ConfigMap is automatically reflected inside the Pods. More often than not, the applications running inside the containers need a restart to pick up the new changes, yet the applications are unaware of those changes. The changes are noticed only during a scale up/down or a Pod restart event, when the application inside the container gets restarted.
  3. No versioning/no rollbacks of ConfigMaps: i.e., when deployments are rolled back, the contents of the ConfigMaps are not rolled back.

The last two issues are a result of the mutable nature of ConfigMaps.

ConfigMaps are mutable

ConfigMaps are mutable, i.e., they can be edited. Every time a change is made, it is the same ConfigMap that gets updated; there are no revisions. Let me illustrate this with an example.

ConfigMaps are mutable

We have a ConfigMap that is referenced in two different deployments. When you change the ConfigMap, the contents of the ConfigMap change inside both deployments. When the deployments are rolled back, they still point to the current content of the ConfigMap. This can cause a problem when your application expects one thing but actually sees something else. Deployments do not maintain any state regarding the ConfigMap changes.

Workarounds

ConfigMaps – Workarounds
  1. Smart apps: Applications can be designed in such a way that they constantly poll for changes in the ConfigMaps. This approach still cannot address the rollback issue.
  2. Induced rolling update: Another common approach is to hash the contents of the ConfigMap into the deployment specification. When the ConfigMap changes, the hash changes and that automatically triggers a rolling update on the deployment. But even in this case, rolling back a deployment does not roll back to the previous content of the ConfigMap. Here is a reference to how a ConfigMap hash can be used.
  3. Immutable ConfigMaps: The next option is to mark ConfigMaps as immutable. This feature was introduced in Kubernetes 1.19. When it is turned on, you cannot update a ConfigMap and thus you avoid all the associated problems (see the sketch after this list).
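
A minimal sketch of workarounds 2 and 3; the names and the hash value are illustrative:

# Workaround 3: an immutable ConfigMap (Kubernetes 1.19+). Any change requires
# creating a new ConfigMap and pointing the deployment at it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1
immutable: true
data:
  LOG_LEVEL: "info"
---
# Workaround 2: induced rolling update – embed a hash of the ConfigMap contents
# in the pod template so that a content change (and hence a new hash) triggers a
# rollout. Rolling back the deployment still does not restore the old ConfigMap.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
      annotations:
        checksum/config: "9f86d081884c7d65"   # e.g. sha256 of the ConfigMap, injected by your CI or templating step
    spec:
      containers:
      - name: demo
        image: busybox
        command: ['sh', '-c', 'sleep 3600']
        envFrom:
        - configMapRef:
            name: app-config-v1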

Versioning the ConfigMaps

The ideal solution to keep the deployments and the ConfigMaps in sync is to version control the ConfigMaps and reference them in the deployments.

Versioning ConfigMaps
  • In the above example, when the contents of ConfigMap version 1 are updated, a new ConfigMap version 2 is created. When the deployment specification is updated with ConfigMap version 2, it automatically triggers a rolling update and creates a new deployment version. When the deployment is rolled back, the rolled-back version references ConfigMap v1. Thus ConfigMaps and deployments go hand in hand.
  • To make this work, we need to:
    • Version ConfigMaps whenever a change is committed to a ConfigMap
    • Automatically update the ConfigMap version in the deployment specifications wherever it is referenced
    • Purge unused ConfigMaps periodically – since ConfigMaps are shared resources across deployments, and since each deployment may have a different revision history limit, we must check all the revisions of all the deployments within the namespace to know whether a ConfigMap is still in use and purge accordingly

Introducing Configurator

Configurator is an open source solution from gopaddle that makes use of Custom Resource Definitions (CRDs) for ConfigMaps/Secrets and an operator to automate the above mentioned steps. ConfigMaps and Secrets are now defined as CustomConfigMaps and CustomSecrets which are custom resources constantly monitored by the Configurator.

How it solves the problem

When a new CustomConfigMap or CustomSecret resource is created, it generates a ConfigMap or a Secret with a postfix. This ConfigMap, along with the postfix, needs to be added to the deployments/statefulsets initially.

From then on, if any change to the CustomConfigMap or CustomSecret is detected, Configurator automatically updates all the deployments/statefulsets referencing that specific ConfigMap/Secret to use the new postfix. Configurator relies heavily on the configMapName in the CustomConfigMap and the labels in the deployment/statefulset specifications.

In the above example you can see that example-customConfigMap.yaml creates a CustomConfigMap with the configMapName testconfig. As soon as the CustomConfigMap is created, it automatically creates a ConfigMap testconfig-sn8ya. We need to manually add the ConfigMap name testconfig and the postfix sn8ya to the deployment’s metadata.labels as testconfig: sn8ya, and also use the ConfigMap name testconfig-sn8ya in the volumes section.

From now on, the user does not have to manage ConfigMaps directly.

Any change required in the ConfigMap or Secret needs to be done through the CustomConfigMap or CustomSecret.

When the CustomConfigMap is updated with new content in the data section, it automatically generates a new ConfigMap testconfig-10jov and updates the deployment with the new ConfigMap name under the volumes section and the metadata.labels section.

Configurator purges unused ConfigMaps and Secrets every 5 minutes. It scans the ReplicaSets or ControllerRevisions of all the deployments and statefulsets in the namespace and checks whether the metadata label for the ConfigMap still exists. If there are no references, it purges that ConfigMap version.

How to use it?

Download the YAML files from our repository here and install them in your cluster.

kubectl apply -f deploy/crd-customConfigMap.yaml
kubectl apply -f deploy/crd-customSecret.yaml
kubectl create ns configurator
kubectl apply -f deploy/configurator-clusterrole.yaml
kubectl apply -f deploy/configurator-clusterrolebinding.yaml
kubectl apply -f deploy/configurator-serviceaccount.yaml
kubectl apply -f deploy/configurator-deployment.yaml

Once the configurator is deployed into the cluster, start creating CustomConfigMap or CustomSecret.

 example-customConfigMap.yaml

apiVersion: "configurator.gopaddle.io/v1alpha1"
kind: CustomConfigMap
metadata:
 name: configtest
 namespace: test
spec:
  configMapName: testconfig
  data:
   application.properties: |
    FOO=Bar

Create CustomConfigMap in cluster

kubectl apply -f example-customConfigMap.yaml

List the ConfigMaps

kubectl get configMap -n test
NAME               DATA   AGE
testconfig-sn8ya   1      7s

Copy the ConfigMap name and add it to the deployment.yaml file, both in the volumes section and as a metadata label. For the metadata label, split the ConfigMap name and the postfix and add them as a key/value pair.

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deployment
  labels:
    testconfig: sn8ya
    app: busybox
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy: 
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
        volumeMounts:
        - mountPath: /test
          name: test-config
      volumes:
      - name: test-config
        configMap:
          name: testconfig-sn8ya

Edit the CustomConfigMap and list the ConfigMaps. You can see a new ConfigMap with a new postfix.

kubectl edit ccm configtest -n test
kubectl get cm -n test
NAME               DATA   AGE
testconfig-10jov   1      10s
testconfig-sn8ya   1      111s

Now check the deployment. You can see that it has been updated with the new ConfigMap name and metadata label.

kubectl get deployment busybox-deployment -n test -o yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deployment
  labels:
    testconfig: 10jov
    app: busybox
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy: 
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
        volumeMounts:
        - mountPath: /test
          name: test-config
      volumes:
      - name: test-config
        configMap:
          name: testconfig-10jov

Give configurator a try and share your feedback with us. If you are interested in contributing to the project, you can reach out to us. The project can be cloned from https://github.com/gopaddle-io/configurator
