Kubernetes: revolutionizing the development industry

Aayushi Shah
6 min read · Dec 26, 2020


Kubernetes and Docker are two of the words you hear most in conversations about DevOps today. Docker is a tool that lets you package and run applications in containers, and Kubernetes provides a platform to orchestrate, or manage, those containers, since manually managing thousands of containers with the Docker CLI is a very costly task.

In 2013 Docker began to gain popularity by allowing developers to quickly create, run and scale their applications using containers. Part of its success is due to being open source and to the support of companies like IBM, Microsoft, Red Hat and Google. In just two years, Docker turned a niche technology into a fundamental tool within everyone's reach, thanks to its ease of use. But as the number of applications in a system grows, they become complicated to manage.

Docker on its own is not enough: coordination is needed for deployment, service monitoring, replacement, automatic scaling and, ultimately, the administration of the various services that make up a distributed architecture.

Kubernetes and the Need for Containers

Before we explain what Kubernetes does, we need to explain what containers are and why people use them.

A container is like a miniature virtual machine. It is small because it does not carry device drivers and all the other components of a regular virtual machine. Docker is by far the most popular container platform, and it was built for Linux. Microsoft has added containers to Windows as well, because they have become so popular.

The best way to illustrate why this is useful and important is to give an example.

Suppose you want to install the Nginx web server on a Linux server. You have several ways to do that. First, you could install it directly on the physical server’s OS. But most people use virtual machines now, so you would probably install it there.

But setting up a virtual machine requires some administrative effort and cost as well. And a machine will be underutilized if you dedicate it to just one task, which is how people typically use VMs. It would be better to load that one machine up with Nginx, messaging software, a DNS server, and so on.

The people who invented containers thought through these issues and reasoned that, since Nginx or any other application just needs some bare-minimum operating system to run, why not make a stripped-down version of an OS, put Nginx inside, and run that? Then you have a self-contained, machine-agnostic unit that can be installed anywhere.

Containers are now so popular that, as some people say, they threaten to make VMs obsolete.

Some of Kubernetes' features include:

  • The ability to automatically place containers according to their resource requirements, without affecting availability.
  • Service discovery and load balancing: there is no need for an external service-discovery mechanism, as Kubernetes assigns containers their own IP addresses, gives a set of containers a single DNS name, and can balance the load across them.
  • Scheduling: Kubernetes decides which node each container will run on according to the resources the container requires and other restrictions. It mixes critical and best-effort workloads to improve resource utilization and save costs.
  • Storage orchestration: automatically mount the storage system of your choice, whether from a public cloud provider or an on-premise networked storage system such as NFS, iSCSI, Gluster, Ceph or Cinder.
  • Batch execution: in addition to services, Kubernetes can manage batch and CI workloads, replacing failed containers.
  • Configuration and secret management: sensitive information such as passwords or SSH keys is stored in Kubernetes objects called Secrets. Both the application's configuration and its secrets can be deployed and updated without rebuilding the image or exposing sensitive information (see the sketch after this list).
  • Self-healing: Kubernetes restarts failed containers, replaces and reschedules them when nodes die, kills containers that do not respond to health checks, and does not advertise them to clients until they are ready.
  • Automated rollouts: changes to the application or its configuration are rolled out progressively while their health is monitored, ensuring that all your instances are never deleted at once. If something goes wrong, Kubernetes rolls the change back.
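As a minimal sketch of the secret-management point above (the secret name, key and value here are purely illustrative), a Secret can be declared once and injected into a container as an environment variable, so rotating the password only means updating the Secret object, not rebuilding the image:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # illustrative name
type: Opaque
stringData:
  password: s3cr3t          # illustrative value

And in the pod's container spec:

  env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password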

Use Cases

We have selected some common use cases to demonstrate Kubernetes’ capabilities. The use cases can be utilized together for different setups.

Self-Healing and Scaling Services

For simplicity, K8s process units can be described as pods and services. A pod is the smallest deployment unit available on Kubernetes. A pod can contain several containers that share resources such as network and storage. Services are the interface that provides access to a set of containers; these services can be for internal or public access and can load-balance across several container instances.
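As a minimal sketch (the service name and the app label are illustrative), a Service that load-balances across a set of Nginx pods can be declared like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP        # internal access; LoadBalancer would expose it publicly
  selector:
    app: nginx           # traffic is balanced across all pods with this label
  ports:
  - port: 80
    targetPort: 80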

Pods are mortal: once finished, they vanish from the cluster. Pod termination can be natural or caused by an error. A Deployment is the most modern Kubernetes object for creating and maintaining pods: using a single description file, a developer can specify everything necessary to deploy the pod, keep it running, scale it and upgrade it.

The deployment described below creates a pod of Nginx (version 1.7.9) with three replicas. In other words, Kubernetes will manage three Nginx instances; when an instance stops working, Kubernetes will create a new one.
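A sketch of such a manifest (the name matches the autoscale command below; the app label is illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                 # Kubernetes keeps three instances alive
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9    # the version described above
        ports:
        - containerPort: 80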

This Deployment can be configured to be auto-scalable with the following command line:

$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80

One of the advantages of K8s is that it's easy to understand what the platform is doing. In this case, the cluster will always run at least 10 Nginx instances, and up to 15 instances when CPU utilization exceeds 80 per cent of capacity.
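The same policy can also be expressed declaratively; a minimal sketch using the autoscaling/v1 API:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10             # never fewer than 10 instances
  maxReplicas: 15             # never more than 15
  targetCPUUtilizationPercentage: 80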

Serverless, with Server

Serverless architecture has taken the world by storm since AWS launched Lambda. The principle is simple: just develop the code and don't worry about anything else. The server and its scalability are handled by the cloud provider, and code just has to be written as functions that handle specific events, from HTTP requests to queue messages.

Vendor lock-in is the major disadvantage of this solution. It is almost impossible to change cloud providers without refactoring most of the code. There are solutions like the Serverless Framework that seek to standardize function code across clouds. Another option is to use a Kubernetes cluster to create a vendor-free serverless platform. As mentioned above, K8s abstracts away the differences between cloud servers. Two popular frameworks currently virtualize the cluster as a serverless platform: Kubeless and Fission.
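As a sketch of what this looks like with Kubeless (the function name, file and runtime version are illustrative), a plain Python function is deployed and invoked without touching any cloud-specific API:

# hello.py contains a single function: def handler(event, context): return "Hello"
$ kubeless function deploy hello --runtime python3.7 --from-file hello.py --handler hello.handler
$ kubeless function call hello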

Optimized Resource Usage with Namespaces

A K8s namespace is also known as a virtual cluster. Namespaces create virtually separated clusters inside the real cluster. Teams that don't use namespaces typically run separate test, staging and production clusters. These dedicated clusters usually waste resources, because the test cluster is not under continuous use and staging is only used from time to time to validate new features. By using virtual clusters, or namespaces, an operations team can share the same set of physical machines across environments, allocating resources according to each workload.

Namespaces are closely related to DNS, because services located within the same namespace are reachable through their names alone. Namespaces offer a good solution for creating similar environments that locate services through network names: instances in different namespaces will find their dependencies without having to take into account which namespace they live in.
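For example, assuming the default cluster.local cluster domain (the service and namespace names are illustrative):

# From a pod in any namespace, the fully qualified name works:
$ curl http://backend.staging.svc.cluster.local
# From a pod inside the staging namespace, the short name is enough:
$ curl http://backend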

Besides, namespaces can have resource quotas: each virtual cluster receives a defined allocation to avoid resource competition between namespaces. This is particularly useful to prevent lower-priority environments from starving a production environment of computing resources. Finally, different permissions can be defined with roles for each namespace, limiting the number of people with access to production environments.
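A minimal sketch of a namespace with a quota (the names and limits are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"         # pods in this namespace may request at most 4 cores
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi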

Hybrid and Multiclouds

A hybrid cloud combines computing resources from a local, conventional data centre and a cloud provider. It is normally used when a company has some servers in an on-premise data centre and wants to use the cloud's virtually unlimited computing resources to expand or replace company resources. A multi-cloud, on the other hand, uses multiple cloud providers to supply computing resources. Multi-cloud is generally used to avoid vendor lock-in and to reduce the risk of a cloud provider going down during mission-critical operations.

Both scenarios are addressed by Kubernetes Federation. Multiple clusters, one for each cloud or on-premise data centre, are created and managed by the Federation. The Federation synchronizes computing resources and even allows cross-cluster discovery: virtually any pod can communicate with a pod in another cluster without knowing anything about the underlying infrastructure.
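A sketch of joining clusters with the legacy kubefed command line (the federation, cluster and context names are illustrative, and the exact flags vary between versions):

$ kubefed init federation --host-cluster-context=host-cluster --dns-provider=google-clouddns --dns-zone-name=example.com.
$ kubefed join us-cluster --host-cluster-context=host-cluster
$ kubefed join eu-cluster --host-cluster-context=host-cluster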

The Federation setup is not simple, and there is a caveat: the solution doesn't work on managed services like Google Kubernetes Engine, Azure Container Service or AWS EKS, where the cluster's control plane is not under your control.
