Deploy a simple application in a Kubernetes cluster

Shehani Rathnayake
8 min read · Feb 3, 2020

In my previous article, I briefly described why we need Kubernetes, its architectural design, and its terminology. Now let's deploy a simple application in a Kubernetes cluster. You can download or clone the Git repo of the project I'm going to discuss here.

Kubernetes is a system for deploying, managing, and automating containerized applications. So first we need a containerized application. In order to containerize an application, we need to build an image from it and push that image to a centralized container registry. Then we can use Kubernetes to deploy container instances in the cluster and manage them. While Kubernetes supports several container runtimes, Docker is a very popular choice. So let's create a simple hello world server and then build a Docker image from it.

Creating a simple hello world server

As you can see in the code below, we can create a simple hello world server with the Go programming language.

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    port := "8080"
    if fromEnv := os.Getenv("PORT"); fromEnv != "" {
        port = fromEnv
    }

    // register hello function to handle all requests
    server := http.NewServeMux()
    server.HandleFunc("/", hello)

    // start the web server on port and accept requests
    log.Printf("server listening for requests")
    err := http.ListenAndServe(":"+port, server)
    log.Fatal(err)
}

// hello responds to the request with a plain-text "Hello, world" message.
func hello(w http.ResponseWriter, r *http.Request) {
    log.Printf("Serving request: %s", r.URL.Path)
    host, _ := os.Hostname()
    _, _ = fmt.Fprintf(w, "Hello, world!\n")
    _, _ = fmt.Fprintf(w, "Welcome to K8s training Session\n")
    _, _ = fmt.Fprintf(w, "Hostname: %s\n", host)
}
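Before containerizing the server, you can verify it locally if you have Go installed. Assuming the code above is saved as main.go (the file name is only an example), run it and send a test request from another terminal.

go run main.go
curl http://localhost:8080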

Building a Docker image and pushing it to a Docker registry

Docker images are created using a Dockerfile, which contains all the commands, in order of execution, needed to build a given image. Let's define a Dockerfile as follows to compile the binary executable of our Go program and run that executable when the container starts.

# We specify the base image we need for our
# go application
FROM golang:1.12.0-alpine3.9
# We create an /app directory within our
# image that will hold our application source
# files
RUN mkdir /app
# We copy everything in the root directory
# into our /app directory
ADD . /app
# We specify that we now wish to execute
# any further commands inside our /app
# directory
WORKDIR /app
# we run go build to compile the binary
# executable of our Go program
RUN go build -o main .
# Our start command which kicks off
# our newly created binary executable
CMD ["/app/main"]

Let’s build the docker image with following command.

docker build -t <imagename:tag> <path_to_Dockerfile>
Ex: docker build -t shehani123/test:v1 .

Output:

Building Docker image
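If you want to confirm that the image is now available locally, you can list your local Docker images.

docker images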

Then push the built image to your Docker registry.

docker push <docker-registry>/<imagename:tag>
Ex: docker push shehani123/test:v1

Output:

Push Docker image to Docker registry
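Note that pushing requires you to be authenticated against the registry. If the push is rejected, log in first (shown here for Docker Hub) and retry.

docker login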

OK, now our image is ready. Let's take a look at how to deploy the application in a Kubernetes cluster.

Deploying a Kubernetes cluster

Before we deploy an application, we need a Kubernetes cluster. For that you can use any of the providers we discussed in the previous article. Here I'm going to use Minikube. If you do not have Minikube installed, you can go through this article and install it.

First we need to start the local Kubernetes cluster with Minikube as follows. This will create and configure a virtual machine that runs a single-node Kubernetes cluster and configure the kubectl installation to communicate with this cluster.

minikube start

The output will be as follows.

Output result of minikube start
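You can verify that kubectl is now pointed at the Minikube cluster and that the single node is ready with the following commands.

kubectl cluster-info
kubectl get nodes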

Now we have the Kubernetes cluster. Let’s take a look at how to deploy the application in this cluster.

Creating a namespace

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. First, we'll create a namespace in the Kubernetes cluster for our deployment. Creating a namespace is not mandatory; if you do not specify a namespace, all deployments and services are created in the default namespace.

Let’s have a look at the ways of creating a namespace in Kubernetes cluster.

  • Creating the namespace with a kubectl command.
kubectl create namespace <namespace-name>
Ex: kubectl create namespace wso2
  • Defining the namespace with the Kubernetes Namespace kind in a yaml file.
kind: Namespace
apiVersion: v1
metadata:
  name: wso2
  labels:
    name: wso2

Create the namespace from that file with the following command.

kubectl create -f <namespace-yaml-file>
Ex: kubectl create -f namespace.yaml
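Either way, you can confirm that the namespace exists by listing the namespaces in the cluster.

kubectl get namespaces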

This is how you can create a virtual cluster in Kubernetes. Let's deploy the application in the created namespace.

Creating a Deployment

You can create a Kubernetes deployment with the kind Deployment, as shown in the following yaml definition.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
  namespace: wso2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - image: shehani123/test:v1
        name: helloworld
        ports:
        - containerPort: 8080

The metadata section defines the meta information of the deployment. There we give our deployment a name and, if applicable, a namespace. Labels defined under metadata are key/value pairs attached to objects such as pods in order to identify or group them. These labels are used as keys to find a collection of objects.

Under the specification (spec) section, replicas defines the number of pod replicas we need to run at any given time. Labels are not unique; many objects can have the same labels. Therefore Kubernetes uses label selectors as its core grouping primitive. With the label selector we tell the deployment which pods are part of it.

template defines the specification of the pods that get created by the deployment. Here we provide the image we built in the previous steps under the image field to deploy our hello world server in Kubernetes. This will create pods that run the helloworld server from the image shehani123/test:v1, with the configured labels.
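As a side note, the same label can be used as a selector from the command line: once the deployment below is created, the following command lists only the pods that carry the app=helloworld label.

kubectl get pod -l <label> -n <namespace>
Ex: kubectl get pod -l app=helloworld -n wso2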

Let’s add this deployment to the Kubernetes cluster.

kubectl create -f <deployment-yaml-file>
Ex: kubectl create -f deployment.yaml

Output:

Create deployment

Now we have deployed our containerized application in the Kubernetes cluster. Let's check our deployment in the wso2 namespace.

kubectl get deployment -n wso2

Output:

Deployment

Let’s check the running pods of our deployment in the cluster.

kubectl get pod -n wso2

Output:

Running Pods
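If you want to see what the container is doing, you can also check the logs of a pod, using the pod name shown in the previous output.

kubectl logs <pod name> -n wso2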

Now we need to expose our deployment in order to access it from outside the cluster. For that we need to create a service. Let's create a service to access our deployed application in the Kubernetes cluster.

Creating a service

As we discussed in the previous article, there are several service types for exposing a deployed application. Since we are using Minikube, here I'm going to create a service of type NodePort to expose our application. As you can see in the following yaml definition, we define the service with the kind Service.

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: wso2
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
  - nodePort: 30165
    protocol: TCP
    port: 80
    targetPort: 8080

Under metadata we define the name of the service and its namespace as the required details. In the spec section we specify the service type and the selector. When a network request is made to the service, it selects the Pods in the cluster matching the service's selector, chooses one of them, and forwards the network request to it.

With NodePort services, Kubernetes allocates a port from a range (default: 30000–32767) on each node of the cluster, and each node proxies that port (the same port number on every node) into your Service. You can declare this port under spec/ports/nodePort in the service definition. port defines the port on which the service is exposed internally in the cluster, so that other workloads running in the same cluster can reach it. targetPort defines the container port on which our application is running.
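To make the distinction concrete: once the service is created, a pod inside the cluster could reach it on port 80 through the service's cluster-internal DNS name, while clients outside the cluster use the node IP and the nodePort. For example, from inside another pod:

curl http://nodeport-service.wso2.svc.cluster.local:80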

Let’s create the NodePort service in the cluster with yaml definition.

kubectl create -f <nodePort-service-yaml-file>
Ex: kubectl create -f serviceNodePort.yaml

Output:

Creating NodePort service

Now we have created the service as well. Let's list the services and see.

kubectl get svc -n <namespace>
Ex: kubectl get svc -n wso2

Output:

Listing service created
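To confirm that the service has actually matched our helloworld pod, you can also inspect its endpoints; the pod's IP and target port should be listed there.

kubectl get endpoints <service name> -n <namespace>
Ex: kubectl get endpoints nodeport-service -n wso2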

Let’s describe the service we have created to get more details.

kubectl describe svc <service name> -n <namespace>
Ex: kubectl describe svc nodeport-service -n wso2

Output:

Detailed description of the service created

Accessing the deployed application

Now the NodePort service is ready. Let's access our deployed application via the service. To access it through the NodePort service, we need <NodeIP>:<NodePort>.

You can get the NodeIP of the Minikube node with the following command.

minikube ip

Output:

Get the minikube IP

Let's access the service with the curl command.

curl http://<NodeIP>:<NodePort>
Ex: curl http://192.168.99.128:30165

Output:

Output from the server

In this manner we can access our application deployed in the Kubernetes cluster.
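As a shortcut, Minikube can also build this URL for you; the following command should print the same <NodeIP>:<NodePort> address.

minikube service <service name> -n <namespace> --url
Ex: minikube service nodeport-service -n wso2 --url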

Clean Up

Now we have created and deployed a simple application in a Kubernetes cluster and accessed it via a service. Let's finish up by deleting the artifacts we have created.

  • Delete Deployment
kubectl delete deployment <deployment name> -n <namespace>
Ex: kubectl delete deployment helloworld-deployment -n wso2
  • Delete Service
kubectl delete svc <service name> -n <namespace>
Ex: kubectl delete svc nodeport-service -n wso2
  • Delete Namespace
kubectl delete namespace <namespace>
Ex: kubectl delete namespace wso2
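Alternatively, if you still have the yaml files used above, you can delete the same resources with kubectl delete -f. Note also that deleting the namespace removes every resource created inside it.

kubectl delete -f <yaml-file>
Ex: kubectl delete -f deployment.yaml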

What’s Next

In this article, I have guided you through the steps to create a simple hello world server with Go and build a Docker image from it. Then we deployed it in a Kubernetes cluster and accessed it via a NodePort service. In my Git repo I have included yaml definitions for the ClusterIP and LoadBalancer service types as well. The ways of accessing the application with those service types, along with some other useful Kubernetes commands, are described in the README document. You can try them as well.
