Kubernetes (or K8s) is another tool for orchestrating containerized apps in a cluster. Its job is to find the right place for a container, fulfill its desired state (e.g. “running, 5 replicas”), provide a network and an internal IP, possibly access from outside, apply updates, etc. Originally developed by Google, Kubernetes is now open source.
It’s not a replacement for Docker, though. In fact, Kubernetes nodes (worker machines) need Docker (or rkt, another container engine) to actually run the containers. But it is an alternative to Docker Swarm. Although the two share many concepts, like services, replicas, desired state and its enforcement, Kubernetes feels more versatile and you just know it will scale beyond anything imaginable. It’s Google’s child, after all. On the other hand, like any other Google API or tool I’ve used, it’s generally harder to understand and get into.
K8s can run either on virtual machines or real hosts, on Google Compute Engine (“natively”), or on AWS or Azure (with some level of pain). It can even deal with Windows Server Containers!
So, what’s the plan? Let’s create a simple local Kubernetes cluster and look around. Deploy something, poke a few URLs. Let’s explore. Nothing complex, just enough to get a feeling for what it is and how it works.
Prerequisites
The biggest requirement is that VT-x/AMD-v virtualization must be enabled in BIOS. In most cases it is. The rest is downloadable.
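If you’re not sure whether it’s enabled, you can check from the terminal. This is a macOS-specific sketch (Intel Macs report VT-x as the VMX CPU flag; on Linux you’d grep /proc/cpuinfo instead). If it prints VMX, you’re good to go:

$ sysctl machdep.cpu.features | grep -o VMX
#VMX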
Installation
I’m on a Mac, so the downloaded tools and installation commands will be specific to that platform. But the ones for Linux or Windows won’t be fundamentally different. Here’s what we’ll need:
kubectl – the tool to run K8s commands in a cluster.
minikube – the tool to actually create a Kubernetes cluster locally.
VirtualBox – never leave home without it. It’s going to host the cluster node(s).
Installing kubectl and minikube is quite trivial: download, make executable, copy.
Kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
VirtualBox installation will involve some clicking, but it’s still easy.
Once the installation’s finished, we can finally create a local K8s cluster with minikube start:
minikube start
#Starting local Kubernetes v1.6.4 cluster...
#Starting VM...
#Downloading Minikube ISO
#...
Now we have something.
Nodes and pods
At the top level, a Kubernetes cluster consists of two kinds of entities: master services and worker nodes. While masters maintain cluster state, distribute work and do other smart things, nodes are dumb hosts that do what they are told to do. When you deploy a container, you actually deploy it onto one of the nodes.
As we already have some sort of a cluster, we can ask its master about the nodes in it via kubectl get nodes:
$ kubectl get nodes
#NAME       STATUS    AGE       VERSION
#minikube   Ready     3m        v1.6.4
Not many, but for our purpose even one will do.
Even though I mentioned the word container, in Kubernetes terminology the unit of work is called a pod. A pod is an abstraction over containers: it has its own IP address and holds one or more containers inside. When a pod has, for example, two containers, they share the same IP and can talk to each other via localhost.
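To make the shared-IP claim tangible, here’s a minimal sketch of a two-container pod (the manifest, the names and the sleep trick are mine, not part of the original walkthrough). The busybox sidecar reaches nginx via localhost because both containers share the pod’s network namespace:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
#pod "two-containers" created

$ kubectl exec two-containers -c sidecar -- wget -qO- http://localhost:80
#<!DOCTYPE html>
#<title>Welcome to nginx!</title>
#...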
We can use the same kubectl get command we previously used for querying nodes to get the list of currently running pods:
$ kubectl get pod
#No resources found.
$ kubectl get pod --all-namespaces
#NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
#kube-system   kube-addon-manager-minikube   1/1       Running   0          58s
#kube-system   kube-dns-1301475494-p8gvb     3/3       Running   0          43s
#kube-system   kubernetes-dashboard-hmh8f    1/1       Running   0          44s
Not surprisingly, get pod returns absolutely nothing. After all, we haven’t deployed anything. However, getting pods from --all-namespaces returns three running pods from the kube-system namespace, which Kubernetes installed in order to make the party possible. One of them, kubernetes-dashboard, is particularly interesting.
Kubernetes dashboard
What’s nice is that K8s comes with a pretty powerful dashboard. Not only does it display what is running and how, it also allows deploying pods and services and editing everything it can get to. The simplest way to get to the dashboard is through minikube dashboard:
$ minikube dashboard
# Opening kubernetes dashboard in default browser...
Deploying your first pod
Deploying a pod is in some ways similar to starting a container in Docker. It’s the same run command as in docker run: it also takes an image name as input and, optionally, a port number. However, kubectl run won’t let you create a pod directly. It will create a deployment, which will then lead to pod creation. The deployment will also monitor the pod and, if it goes down, will create a new one.
$ kubectl run nginx --image=nginx
#deployment "nginx" created

$ kubectl get deployment
#NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
#nginx     1         1         1            1           1m

$ kubectl get pod
#NAME                    READY     STATUS    RESTARTS   AGE
#nginx-158599303-hq212   1/1       Running   0          1m
Similarly to Docker, we can use kubectl describe to get more details about a pod or deployment (or anything else), kubectl logs nginx-158599303-hq212 to view the pod’s logs, or even kubectl exec -ti nginx-158599303-hq212 bash to start an interactive bash session in it.
$ kubectl describe deployment/nginx
# Name:         nginx
# Labels:       run=nginx
# ...

$ kubectl get pod
# NAME                    READY     STATUS    RESTARTS   AGE
# nginx-158599303-ffscb   1/1       Running   0          16m

$ kubectl describe pod/nginx-158599303-ffscb
# Name:         nginx-158599303-ffscb
# Namespace:    default
# Labels:       pod-template-hash=158599303
#               run=nginx
# ..
In case you haven’t noticed, the name of the nginx pod has slightly changed. The reason is that I killed the old one with kubectl delete just to make sure that the nginx deployment recreates it. It really does.
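Reproducing that experiment is a one-liner. Here’s roughly how it looks (the pod names are the ones from this walkthrough; yours will differ):

$ kubectl delete pod nginx-158599303-hq212
#pod "nginx-158599303-hq212" deleted

$ kubectl get pod
#NAME                    READY     STATUS    RESTARTS   AGE
#nginx-158599303-ffscb   1/1       Running   0          5s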
Exposing the pod to the world
At the moment the pod we created is accessible only from within the cluster. That’s good for other pods in that cluster, but not for humans or services outside of it. And here’s another problem: even if we knew how to bind the pod’s port to the node’s external port, whenever something happens to the pod, its deployment object will recreate it anywhere in the cluster. What happens to the binding then? And how would we deal with multiple pod replicas running side by side?
In order to abstract away from the actual pod location and replica count, Kubernetes has the concept of a service. A service is a proxy between a user (or another service) and a set of pods. What’s cool about a service is the way it decides which pods belong to it. If you take a look at the output of the last kubectl describe command, you’ll see that the pod has a few key-value labels attached to it. K8s added those automagically, but you can add your own.
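For instance, the deployment above stamped its pod with run=nginx, and we can use that same label ourselves to filter the pod list (output illustrative):

$ kubectl get pods -l run=nginx
#NAME                    READY     STATUS    RESTARTS   AGE
#nginx-158599303-ffscb   1/1       Running   0          18m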
A service can use those labels as filter criteria for finding its pods. They don’t even have to share the same image! If we use the kubectl expose command, K8s will configure such a service and its label filter for us:
$ kubectl expose deployment/nginx --type="NodePort" --port 80
#service "nginx" exposed

$ kubectl describe service/nginx
#Name:       nginx
#Namespace:  default
#Labels:     run=nginx
#NodePort:   <unset> 32543/TCP
We used the “NodePort” type of service, which binds a port exposed inside the cluster to an external port on the node. However, we also could’ve used “ClusterIP” to keep the service inside the cluster, “LoadBalancer” to use the cloud provider’s load balancing capabilities (if supported), or even “ExternalName” to mess with DNS.
When we describe‘d the newly created service, one of the lines contained the external port number (32543) that became available to us, so using it along with the node’s IP address should, in theory, bring us to the nginx pod:
$ kubectl describe node/minikube | grep InternalIP
#  InternalIP:  192.168.99.100
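Putting the IP and the NodePort together should return nginx’s welcome page. A quick check (the port will differ on your cluster, and minikube ip prints the same address as the grep above):

$ curl -sI http://$(minikube ip):32543 | head -1
#HTTP/1.1 200 OK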
And the theory worked. We have a pod in the cluster that’s accessible from outside.
Cleaning up
There’s a whole bunch of things you could play with in a minikube cluster, but once you’re done, it’s time to clean up. minikube delete should destroy the VM and remove all cluster traces.
$ minikube delete
# Deleting local Kubernetes cluster...
# Machine deleted.
Conclusion
Hopefully, that gave you some feeling for what Kubernetes is. In production, though, you wouldn’t be starting services and deployments manually; there are YAML configurations for that. You’d also be more focused on running pods at scale and concerned with upgrade strategies, and we covered none of that. However, there has to be a first step into orchestrating containers with Kubernetes, and starting a single pod in a cluster of one machine seems like a good one.
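For a taste of what those YAML configurations look like, here’s a hypothetical manifest roughly equivalent to the kubectl run command we used. It’s written for the apps/v1 API of current clusters; the v1.6 cluster from this post predates that and used an older API group:

$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF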