Configurator

Configurator is an open-source project that version controls Kubernetes ConfigMaps and Secrets and keeps them in sync with deployments. Configurator uses CRDs to create CustomConfigMaps and CustomSecrets, which in turn create ConfigMaps and Secrets with a postfix. Whenever a change is detected in a CustomConfigMap or a CustomSecret, Configurator automatically generates a new ConfigMap with a new postfix. This acts as a version control system for ConfigMaps. A change in a ConfigMap not only creates a new ConfigMap version but also rolls out a new deployment version across all the deployments using that ConfigMap. This enables both rolling update and rollback of ConfigMaps in sync with the deployment versions.
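To make this concrete, here is a minimal sketch of a CustomConfigMap resource. The apiVersion and spec fields below are assumptions for illustration; the authoritative schema lives in the project's CRD files (crds/crd-customConfigMap.yaml):

apiVersion: configurator.gopaddle.io/v1alpha1   # assumed API version; check the CRD in your cluster
kind: CustomConfigMap
metadata:
  name: demo-customconfigmap
  namespace: default
spec:
  configMapName: demo-config    # assumed field: name of the ConfigMap Configurator generates
  data:
    app.properties: |
      log.level=info

Applying a resource like this would lead Configurator to generate a ConfigMap named with a postfix (for example demo-config-<postfix>), and every subsequent change would generate a new postfixed version.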

This blog focuses on the following topics:

  • Installing Configurator using the helm chart.
  • Customizing Configurator helm chart based on requirements.
  • Contributing back to the Configurator project.

System Requirements

Make sure that you have installed helm on your machine and that you are connected to a Kubernetes cluster. The chart is qualified for helm version v3+ and Kubernetes version v1.20.8. Follow the documentation at this link to install helm: https://helm.sh/docs/helm/helm_version/

helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

Installing Configurator using helm chart

Follow the steps below to deploy the Configurator helm package directly. Make sure that a namespace named ‘configurator’ already exists in your cluster. If not, create the namespace with the following command.

kubectl create namespace configurator

Add the configurator helm repository by executing the following command:

helm repo add gopaddle_configurator https://gopaddle-io.github.io/configurator/helm/

Once the command is executed, verify the repository by running the command below. You must see the gopaddle_configurator repo in the list.

helm repo list

The output must be similar to this:

NAME                   URL
hashicorp              https://helm.releases.hashicorp.com
gopaddle_configurator  https://gopaddle-io.github.io/configurator/helm/

Once you’ve verified the repo, install the helm chart with the following command: helm install <release_name> <repo_name/chart_name>

helm install release1.0.0 gopaddle_configurator/configurator

This installs the Configurator CRDs and the controller in the ‘configurator’ namespace. After you install the helm chart, verify the installation by listing the resources with the following commands. Note that CRDs and cluster role bindings are cluster-scoped, so they are not tied to the namespace.

kubectl get pods -n configurator
kubectl get crds
kubectl get serviceaccounts -n configurator
kubectl get clusterrolebindings

Configurator is now ready for use. Here is a reference blog on how to use Configurator with deployments: https://blog.gopaddle.io/2021/04/01/strange-things-you-never-knew-about-kubernetes-configmaps-on-day-one/

Customizing Configurator helm chart based on requirements

Sometimes you may wish to change the Configurator image name, Docker repository, or image tag, or even include other service charts along with Configurator. Modifying the Configurator helm chart is pretty straightforward. Make sure you’ve cloned the Configurator GitHub project before proceeding with the next steps.

To clone the project, run the following command:

git clone https://github.com/gopaddle-io/configurator.git

The helm package needs to be unpacked before the chart can be modified. The .tgz archive is present at the path configurator/helm in the Configurator project. Choose this option when you want to modify the helm chart configuration. Extract the archive with the following command.

tar -zxvf <path to .tgz file>

This will extract the contents of the chart into a folder. Once extracted, the helm chart’s file tree will look like this:

configurator
├── charts
├── Chart.yaml
├── crds
│ ├── crd-customConfigMap.yaml
│ └── crd-customSecret.yaml
├── templates
│ ├── configurator-clusterrolebinding.yaml
│ ├── configurator-clusterrole.yaml
│ ├── configurator-deployment.yaml
│ ├── configurator-serviceaccount.yaml
│ └── tests
└── values.yaml

The crds directory contains the custom resource definition files, crd-customConfigMap.yaml and crd-customSecret.yaml. The templates directory contains the resource YAML files; in our case, the roles, role bindings and the Configurator service definitions. The charts directory is empty by default; it can be used to add your own application charts that use the Configurator custom resources. The Chart.yaml file contains information about the chart, such as its name, description and type.
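For reference, a Helm v3 Chart.yaml typically looks like the sketch below; the field values here are illustrative and not copied from the project:

apiVersion: v2
name: configurator
description: A Helm chart for the Configurator controller and CRDs
type: application
version: 1.0.0        # chart version (illustrative)
appVersion: "1.0.0"   # application version (illustrative)

The values.yaml file holds the default values that the templates consume; the chart ships with the following: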

# Default values for my_chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
replicas: 1
namespace: configurator
image: gopaddle/configurator:latest

You can edit the values.yaml file to suit your requirements, such as changing the namespace, replica count, image name, Docker repository or image tag. Make sure that the namespace used in values.yaml exists in the cluster before you do a helm install. Once the necessary configuration is done, execute the following command to install the chart into your cluster: helm install <release_name> <chart_name>

helm install release1.0.0 configurator

This will install the helm chart inside the cluster with the new configurations.
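Alternatively, helm can override individual values at install time without editing the file. A sketch using the keys from the values.yaml above:

helm install release1.0.0 ./configurator \
  --set namespace=configurator \
  --set replicaCount=2 \
  --set image=<your-repo>/configurator:<tag>

Note that --set only takes effect where the templates actually reference these values; keys not used by the templates are ignored.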

Contributing back to the Configurator project

To contribute the helm changes back to the Configurator project, package the helm chart with the following command:

helm package <path to helm chart>
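For example, assuming the modified chart was extracted into a ./configurator directory, the following creates a versioned archive in the current directory (the exact file name depends on the version in Chart.yaml):

helm package ./configurator
Successfully packaged chart and saved it to: ./configurator-1.0.0.tgz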

This command packages the chart into a .tgz file. After packaging the chart, raise a pull request for code review and merge.

You can take a look at this open-source project @ https://github.com/gopaddle-io/configurator.git.

For any queries on how to use or how to contribute to the project, you can reach us on our discord server — https://discord.gg/dr24Z4BmP8


Configurator is a version control and sync service that keeps Kubernetes ConfigMaps and Secrets in sync with deployments. It enables both rolling updates and rollbacks of deployments and StatefulSets along with their configuration state.


In this blog, I would like to walk you through the steps for using a custom Docker repository while building Configurator.

As a pre-requisite, you need golang and the Docker CLI installed on your machine. You also need a Kubernetes cluster (version 1.16+). Install the kubectl command and connect it to the Kubernetes cluster.

Pre-requisite checks

Fork the project and clone it to your local machine:

git clone https://github.com/<your-githubhandle>/configurator.git

Check whether golang is configured

$ go version
go version go1.13.3 linux/amd64
....

$ echo $GOPATH
/home/user/codebase
....

$ echo $GOHOME
/home/user/codebase

Check whether Docker CLI works

$  sudo docker run hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete 
Digest: sha256:7d91b69e04a9029b99f3585aaaccae2baa80bcf318f4a5d2165a9898cd2dc0a1
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Verify if Kubernetes connectivity works

$ kubectl cluster-info

Kubernetes control plane is running at https://35.224.198.88
GLBCDefaultBackend is running at https://35.224.198.88/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://35.224.198.88/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.224.198.88/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Configuring the Docker repository

Once the pre-requisites are met, we can start configuring the Docker registry in order to build Configurator and push it to your private repository.

Log in to your Docker Hub account:

docker login --username <docker-hub-userName> --password <docker-hub-password>
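Note that passing the password as a flag leaves it in your shell history and process list. The Docker CLI also supports reading the password from stdin; for example (the credentials file path here is hypothetical):

cat ~/dockerhub-password.txt | docker login --username <docker-hub-userName> --password-stdin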

  • Configure the Docker Hub repository name and the image tag in the Makefile. Edit the Makefile and change the DOCKER_IMAGE_REPO and DOCKER_IMAGE_TAG variables to the Docker repository and tag with which you prefer to push the newly built image. (You can also override these per invocation; see the note after these steps.)
$ cd configurator
$ vi Makefile

ifndef DOCKER_IMAGE_REPO
  DOCKER_IMAGE_REPO=demogp/demo-configurator
endif

ifndef DOCKER_IMAGE_TAG
  DOCKER_IMAGE_TAG=v1.0
endif

  • Now edit the configurator-deployment.yaml file and change the Docker repository and image name from which the configurator controller image should be pulled.
$ cd deploy/
$ vi configurator-deployment.yaml
....
....
    spec:
      containers:
      - image: demogp/demo-configurator:v1.0
        imagePullPolicy: Always
        name: configurator
      serviceAccountName: configurator-controller

The repository configurations are complete.
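Because the Makefile wraps both variables in ifndef guards, you can also override them per invocation without editing any file; a sketch using the targets from the next step:

make DOCKER_IMAGE_REPO=<your-repo>/configurator DOCKER_IMAGE_TAG=v1.1 clean build push deploy

Keep in mind that the image reference in configurator-deployment.yaml must still match whatever repository and tag you push.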

Build and Deploy Configurator

  • Move to the root of the project directory and execute the make command mentioned below.
$ cd ../
$ make clean build push deploy
....
....
rm -f configurator
docker rmi demogp/demo-configurator:v1.0
Error: No such image: demogp/demo-configurator:v1.0
Makefile:16: recipe for target 'clean-configurator' failed
make: [clean-configurator] Error 1 (ignored)
go mod vendor
....
....
go build -o configurator . 
go: downloading github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e
go: downloading github.com/robfig/cron v1.2.0
go: downloading github.com/google/go-cmp v0.5.2
....
docker build . -t demogp/demo-configurator:v1.0
Sending build context to Docker daemon  78.48MB
Step 1/6 : FROM golang
latest: Pulling from library/golang
4c25b3090c26: Pull complete 
1acf565088aa: Pull complete 
b95c0dd0dc0d: Pull complete 
5cf06daf6561: Pull complete 
4541a887d2a0: Pull complete 
dcac0686adef: Pull complete 
9717d2820c6a: Pull complete 
Digest: sha256:634cda4edda00e59167e944cdef546e2d62da71ef1809387093a377ae3404df0
Status: Downloaded newer image for golang:latest
 ---> 8735189b1527
Step 2/6 : MAINTAINER Bluemeric <info@bluemeric.com>
 ---> Running in 1a41655fda14
Removing intermediate container 1a41655fda14
 ---> ffbd8038390d
Step 3/6 : RUN mkdir /app/
 ---> Running in d24ca3cc6c44
Removing intermediate container d24ca3cc6c44
 ---> ae25de38a5fc
Step 4/6 : WORKDIR /app/
 ---> Running in 86ede46c4736
Removing intermediate container 86ede46c4736
 ---> 3a6c8e408e7b
Step 5/6 : Add configurator /app/
 ---> 3c99e28f20d4
Step 6/6 : CMD ["./configurator"]
 ---> Running in 714c9a7524d0
Removing intermediate container 714c9a7524d0
 ---> c63e68e4ceb2
Successfully built c63e68e4ceb2
Successfully tagged demogp/demo-configurator:v1.0
docker push demogp/demo-configurator:v1.0
The push refers to repository [docker.io/demogp/demo-configurator]
04b1dc245435: Pushed 
acf8d8aa9ae0: Pushed 
4538c63ee03d: Mounted from library/golang 
84140b757a05: Mounted from library/golang 
9444aade22b2: Mounted from library/golang 
9889ce9dc2b0: Mounted from library/golang 
21b17a30443e: Mounted from library/golang 
05103deb4558: Mounted from library/golang 
a881cfa23a78: Mounted from library/golang 
v1.0: digest: sha256:3f21ea83d6a215705bd3bf7d2e9f3ceef55cb6ba05ceb8964848f823b8f2aa16 size: 2215
kubectl create ns configurator		
namespace/configurator created
kubectl apply -f deploy/configurator-serviceaccount.yaml
serviceaccount/configurator-controller created
kubectl apply -f deploy/configurator-clusterrole.yaml
clusterrole.rbac.authorization.k8s.io/configurator created
kubectl apply -f deploy/configurator-clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/Configurator created
kubectl apply -f deploy/crd-customConfigMap.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/customconfigmaps.configurator.gopaddle.io created
kubectl apply -f deploy/crd-customSecret.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/customsecrets.configurator.gopaddle.io created
kubectl apply -f deploy/configurator-deployment.yaml
deployment.apps/configurator-controller created

Build target ‘build’ builds the configurator controller and creates a new Docker image, ‘push’ pushes the image to the Docker registry, and ‘deploy’ deploys the Configurator CRDs and the controller to the Kubernetes cluster. Once the build is complete, you can see the Configurator image in your Docker Hub account.

(Screenshot: Configurator image on Docker Hub)

How to validate the deployment?

Execute the kubectl commands below to validate that the deploy task has successfully installed Configurator in your Kubernetes environment.

$ kubectl get ns
NAME              STATUS   AGE
configurator      Active   2m22s
....

$ kubectl get crds -n configurator
NAME                                             CREATED AT
customconfigmaps.configurator.gopaddle.io        2021-08-24T07:45:45Z
customsecrets.configurator.gopaddle.io           2021-08-24T07:45:47Z
....

$ kubectl get pods -n configurator
NAME                                       READY   STATUS    RESTARTS   AGE
configurator-controller-666d6794bb-4lm6c   1/1     Running   0          6m52s


$ kubectl get clusterrolebinding | grep Configurator
Configurator     ClusterRole/configurator 10m

Removing Configurator

To remove the deployed resources from the cluster and clean up the controller binary and the local Docker image, run the remove and clean targets as below.

$ make remove clean
....
....
kubectl delete -f deploy/configurator-deployment.yaml
deployment.apps "configurator-controller" deleted
kubectl delete -f deploy/crd-customConfigMap.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io "customconfigmaps.configurator.gopaddle.io" deleted
kubectl delete -f deploy/crd-customSecret.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io "customsecrets.configurator.gopaddle.io" deleted
kubectl delete -f deploy/configurator-clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io "Configurator" deleted
kubectl delete -f deploy/configurator-clusterrole.yaml
clusterrole.rbac.authorization.k8s.io "configurator" deleted
kubectl delete -f deploy/configurator-serviceaccount.yaml
serviceaccount "configurator-controller" deleted
kubectl delete ns configurator
namespace "configurator" deleted
....
....
rm -f configurator
docker rmi demogp/demo-configurator:v1.0
Untagged: demogp/demo-configurator:v1.0
Deleted: sha256:1f997b671507d230e3e685d434b3e9c678b4cf356ea044448b73ae489794ae24
Deleted: sha256:dec6aeb58347abf3832e747d4478d6493ed1da39639f5ba10dacb372281f59a2
Deleted: sha256:0e2e52831fa3e6475b347c40369b9cc3a41e2aaabd232480a244c69a90ab9cf3
Deleted: sha256:4851458a100d5c34297813abc157b15baf1f25bfbbdf9c1cca8e232b03f31103
Deleted: sha256:07f715e9deed52886e73de55a223dff83baa071f25264bfad677e8644f377fd7
Deleted: sha256:1fbf81f2d59e63c727e4b97b7a139de6d1fbf89f6715f8533f4c1e3f018a7f92
Deleted: sha256:0fed8f83cbe4268f8bd2692972ff3310fb88975a829ae7365662a7f5f8efd525

For any queries on how to use or how to contribute to the project, you can reach us on the discord server – https://discord.gg/dr24Z4BmP8

gopaddle v4.1 is a minor update with a few bug fixes and minor enhancements.

In this minor release, we have introduced a gopaddle command line utility, gpctl, to scaffold applications and import existing Kubernetes workloads into gopaddle. We have also added support for Huawei Cloud, along with features that help with advanced configurations for service deployments and Docker builds.

gpctl import

Imports a pre-existing Kubernetes project with YAML files to gopaddle.

  • Use cases:
    • Migrate from GitOps to GUI based Cloud Native Governance platform
    • Migrate from one cloud platform to another by building a reusable gopaddle template.
  • Support Matrix:
    • Supported on Ubuntu 20.04
    • Supports github, bitbucket and gitlab source control repositories
    • If the Kubernetes YAML files in the project have references to Docker images, they can be linked during the gpctl import. gpctl supports Docker Hub public and private repositories, AWS ECR, Google GCR, Azure ACR, and any Quay-based private repositories.

gpctl init (Alpha Version)

Initialize a microservice from a source control repository and deploy it to Kubernetes in minutes.

  • Use cases:
    • Code to Kubernetes using a single command
    • Automatically create a Dockerfile by profiling the microservice
    • Automatically generate Kubernetes YAML files
    • Deploy the service on Kubernetes and get the end point to access the service
  • Support Matrix:
    • Supported on Ubuntu 18.04 or later
    • Only github based projects are supported
    • Java, NodeJS, Python and any type of Linux-based workloads that do not bring up a terminal


Huawei Cloud Support

Use a pre-existing Huawei Cloud cluster as an external cluster by registering it via a Bastion Host or directly using its private IP address.

  • Use case:
    • Use Huawei Cloud for building Docker containers
    • Use Huawei Cloud for deploying applications
    • Launch Stateful services on Huawei Cloud using Elastic Volume Service (EVS)
    • Use Huawei LoadBalancer to access services via Domain Name and Ingress rules
    • Launch Services on Huawei Cloud and access them using Public Elastic IP
  • Open Issues
    • Container Terminal doesn’t work when the Cluster is registered via Bastion Host

Shared Persistent Volumes

Use shared persistent volumes across services within an application.

Dockerfile Custom Path and Build Arguments

Use custom Dockerfile paths and build ARGs while creating a Docker project in gopaddle.

  • Use cases:
    • If the Docker project uses a Dockerfile in a location other than the project root, the custom Dockerfile path can be provided at the time of adding the build scripts to a container
    • If the container build requires certain environment variables to be set, they can be provided using the build arguments at the time of adding the build scripts to a container

Version Control ConfigMaps and Secrets

Configurator integration: use Configurator to update and maintain ConfigMaps and Secrets in gopaddle

  • Use cases:
    • Keep services and ConfigMaps/Secrets in sync
    • Version control ConfigMaps/Secrets
    • Perform rolling updates or rollbacks on ConfigMaps/Secrets along with Deployments and StatefulSets

Custom Ingress

Use an Ingress controller for deployments. Define a custom ingress controller like any other service in gopaddle and add it to the deployment templates.

  • Use Case:
    • Use a custom ingress controller other than the default controller provided by gopaddle

Azure Autoscaling

Define autoscaling triggers for Azure node pools.

Node and Service Affinity Rules
