Introduction to Kubernetes

Shehani Rathnayake
Dec 7, 2019

You may be wondering what Kubernetes is, how it became so popular in such a short period of time, and why so many people use it. Kubernetes (commonly stylized as K8s) is an open-source system designed to overcome the drawbacks of running containers on their own. Let's start from the beginning.

Why Kubernetes ?

In earlier decades, people installed and ran their applications directly on physical machines or servers. There was no way to draw resource boundaries between the applications running on a machine. For example, if multiple applications run on your machine and one of them uses most of the resources, the other applications won't have enough resources to perform at the expected level.

As a solution, people moved towards virtual machines (VMs), which allow a guest operating system to run on top of the host machine with allocated resources. Virtualization lets you run multiple VMs on a single physical server. But installing an application on a VM still requires extra administrative effort and incurs extra cost, and a VM dedicated to just one task is often underutilized.

On the other hand, when enterprises develop their applications in a dev environment and then ship them to different production environments, the software often doesn't run the way it is supposed to, due to the restrictions and complexities of the different software environments. Businesses and their customers were therefore looking for flexibility, faster time to market, and software that runs seamlessly across different environments.

Developers who thought through these problems came up with a concept called containers; this is where containerization begins. Containers wrap all of your application's code, libraries, and dependencies together into one isolated, immutable artifact, while sharing the host operating system (OS) among applications. Since a container separates your software from its underlying infrastructure, you can run your applications in any environment.

So why do we need Kubernetes when we have containers? Kubernetes comes as a solution to the solution: it addresses the problems that containers themselves introduce. Even though containers solve your deployment problems, they have some underlying problems of their own. Imagine you want to run multiple containers across multiple machines: how are you going to handle and maintain them? There is still a lot of work left to do, such as starting the right containers at the right time, figuring out how they can talk to each other, ensuring there is no downtime, handling storage considerations, and dealing with failed containers or hardware. Life becomes much easier if some system can handle these situations for us.

So Kubernetes comes to the rescue.

What is Kubernetes ?

Kubernetes is an open-source system, originally developed by Google, for deploying, managing, and automating containerized applications. It allows large numbers of containers to work together in harmony and reduces the operational burden.
It helps with things like running containers across many different machines without worrying about maintenance and downtime, keeping storage consistent across multiple instances of an application, distributing load between the containers, and launching new containers on different machines if something fails.

Kubernetes Architecture


A Kubernetes cluster mainly consists of master and worker nodes. The master node is responsible for scheduling pods across the worker nodes and ensures that the desired state of the cluster is maintained. A cluster can contain multiple master nodes to avoid a single point of failure. The master consists of the kube-controller-manager, kube-apiserver, kube-scheduler, and etcd.

kube-controller-manager is responsible for monitoring the current state of the cluster and making decisions to achieve the desired state. For example, suppose X pods should be in the running state (desired state) but only Y pods are running (current state); the kube-controller-manager then takes the necessary actions to bring the cluster back to X running pods. It listens on the kube-apiserver for the information it needs to take those actions.

kube-apiserver is the component that provides information about the state of the cluster to the kube-controller-manager. It exposes four APIs — the Kubernetes API, Extensions API, Autoscaling API, and Batch API — which are used to communicate with the Kubernetes cluster and execute operations within it.

kube-scheduler decides where workloads should be scheduled across the cluster, based on the availability of resources, policies set by operators, and so on. It takes the required information about the state of the cluster from the kube-apiserver.

etcd is the distributed primary data store of Kubernetes. It stores data as key-value pairs and saves the cluster state, configuration data, metadata, etc.

Worker nodes are the ones that actually run your applications. Each of them consists of a kubelet, kube-proxy, and a Docker runtime. The kubelet reports information about the health of the node to the master node and executes the instructions given to it by the master. kube-proxy is the network proxy that allows the multiple microservices of your application to communicate with each other within the cluster; if desired, you can also expose your application to the rest of the world through it. Pods talk to each other via this proxy. Apart from that, each node has a Docker engine which manages the containers.

So this is how a Kubernetes cluster operates. Next, we'll look at some basic terminology used in Kubernetes.

Terminology

Node


Also known as a minion. A node is a worker machine in Kubernetes. Regardless of the underlying hardware (a node can be a physical computer, a virtual machine, a cloud server, a laptop, etc.), the Kubernetes cluster treats every machine simply as a node with some amount of memory and CPU. Since Kubernetes adds this abstraction layer, any machine can easily be substituted for another.

Cluster


The nodes in a Kubernetes cluster pool their resources together to form what behaves like one more powerful machine. This is called the Kubernetes cluster. When you deploy applications to the cluster, it intelligently distributes the work to the individual nodes. Your application is therefore not guaranteed to run on any particular node; it runs on whichever node within the cluster the work is assigned to. If nodes are added or removed, the cluster shifts the workload as necessary.

Pod


A pod is the higher-level structure in which Kubernetes deploys an application. It wraps the application's containers (one or more), the necessary storage resources, a network IP, and the options that control the running containers together as one unit and deploys it in the cluster. All containers in the same pod share the same resources and local network. Pods are also the unit of replication in a Kubernetes cluster: even when not under heavy load, it is standard to have multiple copies of a pod running at any time in a production system to allow load balancing and failure resistance. Because pods are scaled up and down as a unit, all containers in a pod must scale together, regardless of their individual needs, which can lead to wasted resources. Hence pods should remain as small as possible, typically holding only a main process and its tightly-coupled helper containers.
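
As a sketch, a minimal pod manifest might look like the following (the names, labels, and the nginx image here are purely illustrative, not something this article prescribes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative pod name
  labels:
    app: hello           # label used later to select this pod
spec:
  containers:
    - name: hello        # the pod's single main container
      image: nginx:1.17  # illustrative container image
      ports:
        - containerPort: 80
```

Saving this to a file and running `kubectl apply -f` on it would ask the cluster to schedule one such pod on some node.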

Deployment


Although pods are the basic unit of computation in Kubernetes, they are not typically launched on a cluster directly. Instead, pods are managed by deployments. A deployment's primary purpose is to declare how many replicas of a pod should be running at a time and to maintain that desired state. When a deployment is added to the cluster, it automatically spins up the requested number of pods and then monitors them. If a pod dies, the deployment automatically re-creates it. With deployments, we don't have to deal with pods manually: we just declare the desired state of the system, and it is managed for us automatically.
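
As an illustration (again with made-up names and an illustrative image), a deployment that keeps three replicas of a pod running could be sketched like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment    # illustrative deployment name
spec:
  replicas: 3               # desired state: three copies of the pod
  selector:
    matchLabels:
      app: hello            # which pods this deployment manages
  template:                 # the pod template to create replicas from
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.17 # illustrative container image
```

If one of the three pods dies, the deployment notices the current state no longer matches `replicas: 3` and re-creates it.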

Service

Accessing application via a service

Services open up a channel for traffic to reach your application and, depending on their type, can expose it to the rest of the world. A service contains a selector, which selects — by label — the set of pods to which it should route incoming traffic. When a network request is made to the service, it looks at all pods in the cluster matching the service's selector, chooses one of them, and forwards the request to the application running on that pod. There are several service types, which determine how the service is exposed to the network, i.e. from where the service can be accessed.
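
A sketch of a basic service, assuming pods labelled `app: hello` as in the earlier illustrative examples (the names and ports are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service   # illustrative service name
spec:
  selector:
    app: hello          # route traffic to pods carrying this label
  ports:
    - port: 80          # port the service listens on inside the cluster
      targetPort: 80    # port on the selected pod to forward to
```

Because no `type` is given, this service defaults to ClusterIP, the first of the types described below.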

Service Types

  • ClusterIP: The default service type. The service is only accessible from within the Kubernetes cluster, although access from outside can be enabled via the Kubernetes proxy if desired.
  • NodePort: Exposes your application to external traffic via a static port on each node in the Kubernetes cluster.
  • LoadBalancer: The standard way of exposing your services to the external world. The service is accessed externally through a cloud provider's load balancer (e.g. GCP, AWS, Azure, OpenStack), which automatically routes requests to the particular Kubernetes service.
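
For instance, a NodePort service could be sketched as follows (the names and port numbers are illustrative; `nodePort` must fall in the cluster's NodePort range, 30000–32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport  # illustrative service name
spec:
  type: NodePort        # open a static port on every node
  selector:
    app: hello          # pods to route traffic to
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 80    # container port on the pod
      nodePort: 30080   # static external port on each node
```

With this in place, the application would be reachable at `<any-node-ip>:30080` from outside the cluster.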

How to deploy a Kubernetes Cluster ?

Well, so far we have talked about why we need Kubernetes, what it is, its architectural design, and some terminology. The next natural question is: how do we deploy a Kubernetes cluster? There are several providers that can build a Kubernetes cluster in your local environment or on a cloud platform, as listed below. You can use one of them to test and run your applications on a Kubernetes cluster.

  • Google Cloud: GKE (Google Kubernetes Engine)
  • Amazon EKS
  • Minikube
  • Docker for Mac with Kubernetes
  • Self-managed on premise or on any IaaS provider (e.g. OpenStack, AWS EC2)
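
For local experimentation, Minikube is usually the quickest start. Assuming `minikube` and `kubectl` are installed, a rough sketch of the workflow (the deployment name and image are illustrative):

```shell
# Start a local single-node cluster
minikube start

# Verify the node is up
kubectl get nodes

# Deploy a sample application and expose it via a NodePort service
kubectl create deployment hello --image=nginx:1.17
kubectl expose deployment hello --type=NodePort --port=80

# Print the URL where the exposed service can be reached
minikube service hello --url
```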

What’s Next

This brings us to the end of the introduction to Kubernetes. We have talked about the need for Kubernetes, its architecture, basic terminology, and how to deploy a Kubernetes cluster. In my next article, I will guide you through building and deploying a simple application on a Kubernetes cluster.
