What exactly is Kubernetes?

Kubernetes (or K8s) is another tool for orchestrating containerized apps in a cluster. Its job is to find the right place for a container, fulfill its desired state (e.g. “running, 5 replicas”), provide a network, an internal IP, possibly access from outside, apply updates, etc. Originally developed by Google, Kubernetes is now open source.

It’s not a replacement for Docker, though. In fact, Kubernetes nodes (worker machines) need Docker (or rkt, another container engine) to actually run the containers. But it is an alternative to Docker Swarm. Although the two share many concepts, like services, replicas, desired state and its enforcement, Kubernetes feels more versatile, and you just know it will scale beyond imaginable. It’s Google’s child, after all. On the other hand, like any other Google API or tool I’ve used, it’s generally harder to understand and get into.

K8s can run either on virtual machines or real hosts: on Google Compute Engine (“natively”), or on AWS or Azure (with some level of pain). It can even deal with Windows Server Containers!

So, what’s the plan? Let’s create a simple local Kubernetes cluster and look around. Deploy something, poke a few URLs. Let’s explore. Nothing complex, just enough to get a feeling for what it is and how it works.

Prerequisites

The biggest requirement is that VT-x/AMD-V virtualization must be enabled in BIOS. In most cases it is. The rest is downloadable.

Installation

I’m on Mac, so the downloaded tools and installation commands will be specific to that platform. But the ones for Linux or Windows won’t be fundamentally different. Here’s what we’ll need:

  1. kubectl – the tool to run K8s commands in a cluster.
  2. minikube – the tool to actually create Kubernetes cluster locally.
  3. VirtualBox – never leave home without it. It’s going to host the cluster node(s).

Installing kubectl and minikube is quite trivial: download, make executable, copy.

Kubectl:
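Something along these lines should do it (the download URL is the one from the current official docs and assumes an Intel Mac; adjust the OS/arch parts for your platform):

```shell
# fetch the latest stable kubectl binary for macOS (Intel)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x kubectl                 # make it executable
sudo mv kubectl /usr/local/bin/  # copy it somewhere on the PATH
```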

Minikube:
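Same idea for minikube (again, the URL is the current official releases location):

```shell
# fetch the latest minikube binary for macOS (Intel)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
chmod +x minikube-darwin-amd64
sudo mv minikube-darwin-amd64 /usr/local/bin/minikube
```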

VirtualBox installation will involve some clicking, but it’s still easy.

Once the installation’s finished, we can finally create a local K8s cluster with minikube start:
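A sketch of the session (minikube downloads a VM image on the first run, so it takes a while):

```shell
minikube start        # creates a VirtualBox VM and boots a single-node cluster in it
kubectl cluster-info  # sanity check: kubectl should now point at that cluster
```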

Now we have something.

Nodes and pods

At the top level, a Kubernetes cluster consists of two kinds of entities: master services and worker nodes. While masters maintain cluster state, distribute work, and do other smart things, nodes are dumb hosts that do what they’re told. When you deploy a container, you actually deploy it to one of the nodes.

As we already have some sort of a cluster, we can ask its master about the nodes in it via kubectl get nodes:
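In a fresh minikube cluster the result will look roughly like this (exact columns vary between versions):

```shell
kubectl get nodes
# NAME       STATUS    AGE
# minikube   Ready     5m
```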

Not many, but for our purposes even one will do.

Even though I’ve been using the word container, in Kubernetes terminology the unit of work is called a pod. A pod is an abstraction over containers: it has its own IP address and contains one or more containers inside. When it has, for example, two containers, they share the same IP and can talk to each other via localhost.


We can use the same kubectl get command we previously used for querying nodes to get the list of currently running pods:

Not surprisingly, get pod returns absolutely nothing: after all, we haven’t deployed anything yet. However, getting pods from --all-namespaces returns three running pods from the kube-system namespace, which Kubernetes installed in order to make the party possible. One of them, kubernetes-dashboard, is particularly interesting.
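The two queries, for reference:

```shell
kubectl get pod                    # empty: we haven't deployed anything yet
kubectl get pod --all-namespaces   # system pods from the kube-system namespace
```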

Kubernetes dashboard

What’s nice is that K8s comes with a pretty powerful dashboard. Not only does it display what is running and how, it also allows deploying pods and services and editing everything it can get to. The simplest way to get to the dashboard is minikube dashboard:
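One command is all it takes:

```shell
minikube dashboard   # opens the dashboard URL in the default browser
```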


Deploying your first pod

Deploying a pod is in some ways similar to starting a container in Docker. It’s the same run command as docker run, and it also takes an image name as input and, optionally, a port number. However, kubectl run won’t let you create a pod directly. It creates a deployment, which then leads to pod creation. The deployment will also monitor the pod and, if it goes down, will create a new one.
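A minimal sketch (the generated pod name will differ on every run):

```shell
kubectl run nginx --image=nginx --port=80  # creates the "nginx" deployment
kubectl get deployments                    # ...which shows up here
kubectl get pods                           # ...along with the pod it created, e.g. nginx-158599303-hq212
```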

Similarly to Docker, we can use kubectl describe to get more details about a pod or deployment (or anything else), kubectl logs nginx-158599303-hq212 to view the pod’s logs, or even kubectl exec -ti nginx-158599303-hq212 bash to start an interactive bash session in it.
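For instance (the pod name is from my run; substitute your own):

```shell
kubectl describe deployment nginx            # deployment details, incl. replica status
kubectl describe pod nginx-158599303-hq212   # pod details, incl. its labels
kubectl logs nginx-158599303-hq212           # the container's stdout/stderr
kubectl exec -ti nginx-158599303-hq212 bash  # interactive shell inside the pod
```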

In case you haven’t noticed, the name of the nginx pod has slightly changed. The reason is that I killed the old one with kubectl delete just to make sure that the nginx deployment recreates it. It really does.
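The experiment itself, if you want to repeat it:

```shell
kubectl delete pod nginx-158599303-hq212  # kill the pod...
kubectl get pods                          # ...and watch the deployment create a new one
```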

Exposing the pod to the world

At the moment the pod we created is accessible from within the cluster only. That’s fine for other pods in the cluster, but no good for humans or services outside of it. And here’s another problem: even if we did know how to bind the pod’s port to the node’s external port, if something happened to the pod, its deployment object would recreate it anywhere in the cluster. What would happen to the binding then? And how would we deal with multiple pod replicas running side by side?

In order to abstract away the actual pod location and replica count, Kubernetes has the concept of services. A service is a proxy between a user (or another service) and a set of pods. What’s cool about a service is the way it decides which pods belong to it. If you take a look at the output of the last kubectl describe command, you’ll see that the pod has a few key-value labels attached to it. K8s added those automagically, but you can add your own.

A service can use those labels as filter criteria for finding its pods. The pods don’t even have to share the same image! If we use the kubectl expose command, K8s will configure such a service and its label filter for us:
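In our case that’s a one-liner:

```shell
kubectl expose deployment nginx --type=NodePort  # creates a service in front of the pods
kubectl describe service nginx                   # note the NodePort line in the output
```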

We used the “NodePort” type of service, which binds an internally exposed port to the node’s external port. However, we could also have used “ClusterIP” to keep the service inside the cluster, “LoadBalancer” to use a cloud provider’s load balancing capabilities (if supported), or even “ExternalName” to mess with DNS.

When we describe‘d the newly created service, one of the lines contained the external port number (32543) that became available to us, so using it along with the node’s IP address should, in theory, bring us to the nginx pod:
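minikube ip prints the node’s address, so the check fits in one line (32543 is the port from my run; use whatever describe printed for yours):

```shell
curl "$(minikube ip):32543"   # should return the nginx welcome page
```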


And the theory worked. We have a pod in the cluster that’s accessible from outside.

Cleaning up

There’s a whole bunch of things you could play with in a minikube cluster, but once you’re done, it’s time to clean up.

minikube delete should destroy the VMs and remove all cluster traces.

Conclusion

Hopefully, that gave you some feeling for what Kubernetes is. In production, though, you wouldn’t be starting services and deployments manually; there are YAML configurations for that. You’d also be more focused on running pods at scale and concerned with upgrade strategies, and we covered none of that. However, there has to be a first step into orchestrating containers with Kubernetes, and starting a single pod in a cluster of one machine seems like one.
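For a taste of what those YAML configurations look like, here’s a minimal deployment sketch (using the current apps/v1 API; the names are arbitrary), which you’d feed to kubectl apply -f:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                # desired state: two pod replicas
  selector:
    matchLabels:
      app: nginx             # which pods this deployment manages
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```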
