Architecting a development lifecycle for Kubernetes-based deployments

  1. Getting and deploying the latest versions of all microservices to a local environment, using the latest images and deployment YAMLs from the common artifacts repository.
  2. Developing their own microservice(s) on the same local environment, where they can interact with the rest of the deployed microservices.
  3. When ready to deliver, creating a PR (which consists of code and deployment YAMLs):
  • The PR changes are reviewed and approved, and the CI/CD environment runs all required checks, such as automation tests and security scans.
  • Once all checks pass, the CI/CD environment uploads the new image version, together with the updated deployment YAMLs, to the common artifacts repository. Assuming everything is automated, migrations incur no downtime, and automation coverage is sufficient, the latest version is then pushed to production.
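These delivery steps map naturally onto CI stages. The sketch below uses GitLab-CI-style YAML purely as an illustration; the stage names, image name, registry URL, and upload script are all hypothetical placeholders, not taken from this article:

```yaml
# Hypothetical pipeline sketch; every name and URL here is a placeholder.
stages: [test, scan, publish, deploy]

automation-tests:
  stage: test
  script:
    - make test                      # required checks on the PR

security-scan:
  stage: scan
  script:
    - make scan

publish-artifacts:                   # runs after the PR is approved and merged
  stage: publish
  script:
    - docker build -t registry.example.com/mysvc:$CI_COMMIT_SHA .
    - docker push registry.example.com/mysvc:$CI_COMMIT_SHA      # new image version
    - ./upload-deployment-yamls.sh   # placeholder: publish updated YAMLs

deploy-production:
  stage: deploy
  script:
    - helm upgrade --install mysvc ./chart --set image.tag=$CI_COMMIT_SHA
```

Whatever CI system you use, the essential contract is the same: checks gate the merge, and a successful merge publishes both the image and its matching deployment YAMLs as one unit.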

Local Development

Setting up local Kubernetes

sudo apt -y install docker.io
sudo systemctl enable docker.service
sudo tee /etc/docker/daemon.json > /dev/null <<EOM
{
  "insecure-registries": ["localhost:5000", "localhost:32000"]
}
EOM
sudo sed -i 's/-H fd:\/\//-H fd:\/\/ -H tcp:\/\//' /lib/systemd/system/docker.service
sudo systemctl daemon-reload
sudo service docker restart
sudo docker run -d -p 5000:5000 --restart=always --name "registry" registry:2
sudo snap install microk8s --classic
sudo microk8s.disable ha-cluster --force
sudo microk8s status --wait-ready
sudo microk8s.enable dns
sudo microk8s status --wait-ready
sudo snap alias microk8s.kubectl kubectl
sudo iptables -P FORWARD ACCEPT
sudo sed -i '/\[plugins.cri.registry.mirrors\]/a \ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]\n\ \ \ \ \ \ \ \ \ \ endpoint = ["http:\/\/localhost:5000"]' /var/snap/microk8s/current/args/containerd-template.toml
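The final sed one-liner is easy to get wrong, so it is worth dry-running it against a throwaway copy before touching the real containerd-template.toml (the /tmp path below is just for illustration):

```shell
# Dry-run of the mirror-insertion sed against a scratch file, so a typo
# cannot corrupt the real containerd-template.toml. Path is illustrative.
cat > /tmp/containerd-check.toml <<'EOM'
[plugins.cri.registry.mirrors]
EOM

sed -i '/\[plugins.cri.registry.mirrors\]/a \ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]\n\ \ \ \ \ \ \ \ \ \ endpoint = ["http:\/\/localhost:5000"]' /tmp/containerd-check.toml

# The mirror section should now point local image pulls at the registry:
grep 'localhost:5000' /tmp/containerd-check.toml
```

If the grep shows both the mirror header and its endpoint line, the same command is safe to run (with sudo) against the real template.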

Deploying your pods on MicroK8s

  1. Build a Docker image with your code and upload it to the local Docker registry (running on the VM). To do that, use the Docker Remote API we configured in the previous section.
  2. Copy your YAMLs to the VM. Your deployment YAML should reference the container image you just uploaded in the previous step. I suggest templating it with Helm; this way, in production you can inject the real image name (pointing to the automation/production repository).
  3. Apply your YAMLs to Kubernetes using kubectl.
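To make step 2 concrete, a Helm-templated deployment could parameterize the image along these lines; the chart structure and value names are invented for the example:

```yaml
# Fragment of a hypothetical Helm chart's deployment template.
# values.yaml would define image.repository and image.tag.
spec:
  template:
    spec:
      containers:
        - name: mysvc
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Locally you would point it at the VM's registry, e.g. helm upgrade --install mysvc ./chart --set image.repository=localhost:5000/mysvc --set image.tag=dev, while the production pipeline injects the real repository and tag.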

Closing the loop




Evgeni Aizikovich