Architecting a development lifecycle for Kubernetes-based deployments

Evgeni Aizikovich
Mar 27, 2022

It is a relatively simple task to write a piece of software, Dockerize it, and deploy it on Kubernetes. However, when working in a group of dozens of R&D members on a complex system composed of multiple microservices, things tend to get more complicated: you have to care about many aspects besides "just" coding. After three years of running a large-scale SaaS product, developed by a fairly large group of developers and deployed on Kubernetes, I would like to share in this article how we took care of all development-related aspects, wrapped together in a process I call the "development lifecycle".

The development lifecycle is an endless loop, during which every developer on the team:

  1. Gets and deploys to a local environment the latest versions of all microservices, using the latest images and deployment YAMLs stored in the common artifacts repository.
  2. Develops his own microservice(s) on that same local environment, where it is able to interact with the rest of the deployed microservices.
  3. When ready to deliver, creates a PR (consisting of code and deployment YAMLs):
  • The PR changes are reviewed and approved. The CI/CD environment runs all required checks, such as automation tests and security scans.
  • On successfully passing all checks, the CI/CD environment uploads the new image version, together with the updated deployment YAMLs, to the common artifacts repository. Assuming you have everything automated, zero-downtime migrations, and sufficient automation coverage, the latest version of everything is also pushed to production.

4. Repeats all over again from step #1.

Local Development

The process of local development that worked very well for my group is for every developer to have his own Kubernetes cluster. Initially, the developer deploys the latest versions of all microservices to that cluster (step #1), using the YAMLs of all microservices stored in a common artifacts repository (we used Artifactory). For convenience, we combined all YAMLs into one single file, so it could be "applied" to Kubernetes using a single "apply" operation.
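As an illustration, step #1 can then boil down to two commands. This is only a sketch; the Artifactory URL and file name are placeholders:

curl -fSL -o all-microservices.yaml "https://artifactory.example.com/artifactory/deployments/latest/all-microservices.yaml"
kubectl apply -f all-microservices.yaml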

From this moment, the developer can develop his own microservice on the same environment (step #2). Developing on a Kubernetes environment is very convenient, since a considerable (and inseparable) part of microservice development is everything related to the Kubernetes deployment (e.g. deployments, mounts, roles, secrets, etc.), which is often developed in parallel with the code that depends on it.

Organizing development in this manner allows using exactly the same APIs everywhere, testing end-to-end flows during the development phase, saving development time, and increasing overall product quality and security. From my experience, when code is developed directly on a Kubernetes environment, it forces the developer to make everything right from the very beginning, and when he releases his code, 99% of the time it is ready to be deployed on the automation environment and then on production without any additional changes.

Setting up local Kubernetes

You could create a separate cloud account for every developer and deploy in it a full-blown Kubernetes cluster and all the other required services. If you can afford that, and you have a low-latency, high-speed network between your IDE and the cloud, go for it.

In my organization, we decided that a more cost-effective approach would be to create a personal Ubuntu VM for every developer on a corporate vCenter. In the long run it is cheaper (since we already had that vCenter in place), and uploading images over the local gigabit network is much faster.

Then, on that VM, we deployed MicroK8s. MicroK8s is a lightweight Kubernetes distribution that runs directly on the host OS without any additional virtualization layers. From my experience, it works just great; I can't recall even a single time we had any issue with it.

Although MicroK8s comes with a built-in container registry, I prefer to have a separate Docker installation because of the Docker Remote API: we need it for uploading our images from the IDE to the local Kubernetes.

On Ubuntu, run the following commands to install Docker:

sudo apt -y install docker.io
sudo systemctl enable docker.service

Next, configure Docker to trust the local registries:

sudo tee /etc/docker/daemon.json > /dev/null <<EOM
{
  "insecure-registries": ["localhost:5000", "localhost:32000"]
}
EOM

We need a convenient way to deploy our images to the Docker registry; we will do that using the Docker Remote API. Configure the Remote API on any port you like; in the example below I am using 4243:

sudo sed -i 's/-H fd:\/\//-H fd:\/\/ -H tcp:\/\/0.0.0.0:4243/g' /lib/systemd/system/docker.service
sudo systemctl daemon-reload
sudo service docker restart
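You can verify the Remote API is listening by calling the Docker Engine API's /version endpoint over plain HTTP (from your workstation, substitute the VM's hostname for localhost):

curl http://localhost:4243/version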

Deploy the Docker registry:

sudo docker run -d -p 5000:5000 --restart=always --name "registry" registry:2
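To confirm the registry is up, you can query its catalog (a standard Docker Registry v2 API endpoint); at this point it should return an empty repository list:

curl http://localhost:5000/v2/_catalog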

Now, you are ready to install MicroK8s:

sudo snap install microk8s --classic

Configure MicroK8s: enable the DNS plugin and disable HA, since you don't need it while working on a single node:

sudo microk8s.disable ha-cluster --force
sudo microk8s status --wait-ready
sudo microk8s.enable dns
sudo microk8s status --wait-ready

Add kubectl alias and configure iptables:

sudo snap alias microk8s.kubectl kubectl
sudo iptables -P FORWARD ACCEPT

Configure MicroK8s to trust the local Docker registry:

sudo sed -i '/\[plugins.cri.registry.mirrors\]/a \ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]\n \ \ \ \ \ \ \ \ \ endpoint = ["http:\/\/localhost:5000"]' /var/snap/microk8s/current/args/containerd-template.toml
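MicroK8s generates its containerd configuration from this template on startup, so restart it for the change to take effect:

sudo microk8s.stop
sudo microk8s.start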

At this point you should have a running, ready-to-go MicroK8s. Run

 microk8s.inspect 

and verify that it reports no issues. Now you can start using "kubectl" just as you would on any regular Kubernetes environment.
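A quick sanity check that the cluster responds (if your user lacks permission, prefix the commands with sudo):

kubectl get nodes
kubectl get pods --all-namespaces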

Deploying your pods on MicroK8s

In order to deploy your microservice from the IDE to the local MicroK8s, you basically need to perform three steps (a sketch follows the list):

  1. Build a Docker image with your code and upload it to the local Docker registry (running on the VM). To do that, use the Docker Remote API we configured in the previous section.
  2. Copy your YAMLs to the VM. Your deployment YAML should reference the container image you just uploaded in the previous step. I suggest templating it using Helm; this way, on production you can inject the real image name (pointing to the automation/production repository).
  3. "Apply" your YAMLs to Kubernetes using kubectl.
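Put together, the three steps might look like the following sketch. The VM hostname (dev-vm), image and chart names, and the assumption that the Helm chart exposes an "image" value are all illustrative:

# 1. Build on the VM's Docker daemon via the Remote API and push to the local registry:
docker -H tcp://dev-vm:4243 build -t localhost:5000/my-service:dev .
docker -H tcp://dev-vm:4243 push localhost:5000/my-service:dev
# 2. Render the deployment YAMLs with the local image name and copy them to the VM:
helm template my-service ./chart --set image=localhost:5000/my-service:dev > my-service.yaml
scp my-service.yaml dev-vm:~/
# 3. Apply the YAMLs on the cluster:
ssh dev-vm "kubectl apply -f ~/my-service.yaml"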

In our group, I developed a dedicated Gradle plugin that performs all these steps automatically at a single button press. Unfortunately, I'm not allowed to share its source code, but you can write your own, or prepare some kind of script instead; it should be fairly simple. Internally, my Gradle plugin used the Gradle Docker Plugin for handling Docker API calls; you are welcome to use it as well.

After you have your pod running on Kubernetes, you can remote debug it in the way I described in my other article.

Closing the loop

When a developer is ready to submit his artifacts (usually consisting of code + YAMLs), he creates a PR (step #3). At this point, the CI/CD pipeline manager can create an automation environment, i.e. deploy all the latest microservices from production using the production deployment YAMLs (which are the same as, or very similar to, the YAMLs the developer was using during development), plus the microservice submitted as part of the PR. This way, automation tests exercise both the code and the deployment YAMLs.
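A minimal sketch of that environment-creation step, assuming the production YAML bundle is downloadable from the artifacts repository (the URLs, file names, and test task below are placeholders):

# Deploy the current production state, then overlay the microservice from the PR:
curl -fSL -o production.yaml "https://artifactory.example.com/artifactory/deployments/latest/all-microservices.yaml"
kubectl apply -f production.yaml
kubectl apply -f pr-artifacts/my-service.yaml
# Run the automation test suite against the assembled environment:
./gradlew automationTest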

On successfully passing all automation tests (and, optionally, security scanning), the CI/CD pipeline manager should upload the microservice image and its YAMLs to the common artifacts repository, where they will be accessible to all developers. Of course, the CI/CD pipeline manager should inject the appropriate container image name and optionally combine all generated YAMLs together with the existing ones into a single YAML file describing the whole deployment. In addition, in the case of continuous deployment, the CI/CD pipeline manager can "apply" on production as well.
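Uploading to Artifactory, for example, can be done with a simple authenticated PUT against its REST API (the URL, path, and credentials are placeholders):

curl -u "$CI_USER:$CI_TOKEN" -T all-microservices.yaml "https://artifactory.example.com/artifactory/deployments/latest/all-microservices.yaml"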

From this moment, any developer "applying" the latest deployment YAML from the common artifacts repository on his local environment will get the latest versions of all microservices deployed, including the one that was just built (step #4).
