Dissecting Kubernetes example

Much to my surprise, as of last week Kubernetes became part of my job description. It’s no longer just something interesting to try; I actually have to understand it now. And as you could probably tell from my older k8s post, I’m not quite there. That post sort of builds a logical example (a containerized web server), but something just doesn’t click.

I was trying to understand what’s missing, and it seems like the problem is in the tooling. You see, there are two and a half ways to run something in Kubernetes. One is through ad-hoc commands, like kubectl run or kubectl expose. They are simple, but they also skip a few important concepts happening in the background, so the whole picture stays unclear.

The other one and a half ways are to build an app’s components from configuration files: either one by one, or by passing the whole configuration directory to kubectl. This approach, even though slightly harder, somehow makes much more sense and leaves no logical gaps.

So today we’ll build something simple again – a replicated nginx server – but this time every single Kubernetes object will come from a configuration file, and we’ll see why each of them is necessary.


The only things we’ll need are VirtualBox for virtual machines, minikube for creating a Kubernetes cluster, and kubectl for working with it. Once they’re all installed, minikube start will create the cluster and we’re good to go.
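In case it helps, the bootstrap boils down to this (minikube usually detects VirtualBox as its VM driver on its own):

```shell
minikube start    # creates a single-node cluster inside a VirtualBox VM
kubectl get nodes # sanity check: should list one node, named "minikube"
```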


As you probably know by now, the smallest unit of work in k8s is a pod – an envelope around one or more containers, with its own internal IP address, unique identifier, name, etc. According to the official documentation, we can think of it as a unit of service.

Like before, we’ll make our first pod around the nginx Docker container. But this time it’s going to have its own configuration file:
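A minimal pod.yml along these lines should do; the names are my choice (the pod is called single-nginx-pod so it matches the name used later in the post), and the image is the stock nginx from Docker Hub:

```yaml
# pod.yml – the simplest possible pod: one nginx container
apiVersion: v1
kind: Pod
metadata:
  name: single-nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx
```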

YAML files are brilliant: easy to write, easy to read. Here we defined the kind of object this YAML file describes, its name, and the list of containers it’s made of.

Then we can send it to Kubernetes with the kubectl apply command, and the job is basically done.
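Assuming the file is called pod.yml:

```shell
kubectl apply -f pod.yml
```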

It takes a little while for the pod to get ready, mostly because it has to pull the nginx image, but eventually it’s there. We can even get inside that pod and look around:
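Roughly like this, using the pod name from its metadata:

```shell
kubectl get pods                               # wait until STATUS shows Running
kubectl exec -it single-nginx-pod -- /bin/bash # open a shell inside the container
```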

It was so empty inside that I had to install at least something bright and shiny, like htop, which also helped to confirm that there’s indeed an nginx process inside:


However, a single pod is vulnerable and often useless. First, we can’t make a call to it from outside. Then, if something happens to the pod or its underlying host, it’s gone forever. And if we needed to scale it to, let’s say, 5 pods, we’d have to repeat the apply command 4 more times.


On the other hand, there are Controllers that can solve most of the issues above.

Deployment controller

Controllers are Kubernetes objects that can manipulate pods. For instance, there’s the Cron Job controller, which launches pods on a schedule, or the Replica Set controller, which scales pods up and down. Probably the most versatile controller is the Deployment. It can make sure that pod(s) exist, apply or revert updates, use a Replica Set to scale up and down, and perform a few other tricks.

For what we care about, a Deployment can ensure that our nginx stays alive as long as possible and, at some point, scale it up to more instances. So, let’s add it.
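A deployment.yml sketch for that could look as follows; the apiVersion may vary with your cluster’s age (older ones used apps/v1beta1), and the deployment name matches the nginx-deployment referenced later:

```yaml
# deployment.yml – keeps one replica of the nginx pod alive
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver   # manage any pod carrying this label
  template:            # recipe for new pods, same shape as pod.yml
    metadata:
      labels:
        app: webserver # new pods get the label the selector looks for
    spec:
      containers:
        - name: nginx
          image: nginx
```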

This configuration is more complex than the previous one, but then, it describes a more complex object. Here’s a breakdown of what it actually does:

  1. replicas obviously defines how many copies of the pod we need to run.
  2. selector tells the deployment how to find the pods it’s supposed to be managing. Here the deployment object will look for pods that have the app: webserver label attached to them.
  3. The template section describes how to create a new pod for this deployment, and it’s very similar to the pod.yml configuration from the previous step. In addition to the container description, the template says that newly created pods should be labeled with the app: webserver label – the one the deployment will be looking for.

Like with the pod YAML file, kubectl create works on deployments too:
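Assuming the file is named deployment.yml:

```shell
kubectl create -f deployment.yml
```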

A few seconds later we can confirm that both the deployment object and the pod it’s supposed to create are there:
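Both are visible with the usual kubectl get:

```shell
kubectl get deployments # shows desired vs. ready replica counts
kubectl get pods        # the new pod's name is derived from the deployment's name
```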

Maintaining desired configuration

And now we can see why a Deployment is so useful. For instance, assume that our nginx pod suddenly passed away:

Sooner than you could say “Oh, SHI..” the deployment will create a new one:
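In terms of commands, the round trip is as simple as this (the generated suffix in the pod name is a placeholder – yours will differ):

```shell
# kill the deployment's pod; the exact generated suffix differs per cluster
kubectl delete pod nginx-deployment-65d8df7488-abcde
# list pods again – a replacement with a new suffix and a tiny AGE is already there
kubectl get pods
```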


Another use case is scaling up or down. If we realized that our Facebook-killer app is actually getting popular, and there’s no way it can handle the increasing load with only one service instance, we could add nine more in a single command (or through YAML):
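With kubectl, that’s a one-liner:

```shell
kubectl scale deployment nginx-deployment --replicas=10
```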

But even after being scaled, our nginx pods lack one crucial feature: nobody can get to them from the outside. We need some sort of entry point.



It’s really hard to create an entry point for pods when their quantity and location constantly change. A Service object, on the other hand, can use a label selector to describe a set of pods that provide a certain functionality – a service. Not only will it keep track of ever-moving pods across the cluster, it will also provide an entry point to them, either from the inside or from the outside.

A Service doesn’t even have to point to any pods at all. We can create one pointing to some other, external service, and in-cluster consumers would neither know nor care about that.

For now we’re going to create a ClusterIP kind of service, which provides a common internal IP for a set of pods:
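A service.yml sketch for that; the selector has to match the app: webserver label the deployment puts on its pods, and the service name matches the nginx-service used later:

```yaml
# service.yml – one stable internal IP in front of all webserver pods
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP  # the default, spelled out for clarity
  selector:
    app: webserver # route to any pod carrying this label
  ports:
    - port: 80       # port the service listens on
      targetPort: 80 # port the nginx containers listen on
```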

Free DNS and load balancing

Even though we still can’t access the nginx pods from outside (though we could’ve used another type of service to achieve that), we’ve got ourselves a point of entry from within the cluster which, in fact, is load balanced and registered as a local DNS entry.

We can prove this quite easily. For simplicity, I scaled nginx-deployment down to 2 replicas and replaced the word nginx in the title of each pod’s index.html file with the name of the host the file resides on. Now, as we still have the single-nginx-pod running, we can get inside it and try to curl nginx-service from there.
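Repeating the request a few times shows different host names in the responses:

```shell
# run curl from inside the standalone pod; repeat to watch the title change
# (the stock nginx image may need an apt-get install curl first, like htop did)
kubectl exec -it single-nginx-pod -- curl -s nginx-service
```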

Have you seen that? Every time nginx-service receives a new request, it dispatches it to a new pod. I checked their names and it all matches. Load balancing is a true thing.


The only problem is that we still can’t get to these pods from outside. Well, not for long.


Ingress is a magic box that can route requests from the outside world to services within the cluster. It actually can do much more than that, but this time we’re only interested in out-in routing. So, without further ado:
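A minimal ingress.yml could look like this; the object name is my choice, and on clusters of that era the apiVersion was extensions/v1beta1 with a slightly different backend schema:

```yaml
# ingress.yml – send everything under / to nginx-service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
```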

I think I had to enable ingress add-on with minikube addons enable ingress before it could work, but honestly I don’t remember.

After we create the object with the ubiquitous kubectl create and find out the cluster’s external IP address with minikube ip, the browser will show that it all not just works, it still load balances!


And after refresh:





Hopefully the whole picture makes much more sense now. Creating those objects manually, one by one, definitely helped me get a glimpse of understanding why Kubernetes has what it has.

There’s another thing I find interesting. If you take a look at the configuration files, you’ll see that you can apply them in any order. For instance, I can create the service first, then the ingress, and finally the deployment, and there won’t be any errors. That’s kind of obvious, given that the connections between the objects are selector based. But I wonder: is that something that Google’s architects initially intended to achieve, is it a natural side effect of good design, or something else? What other models did they anticipate? I’m really curious about their way of thinking.

But I digress. Configuration files are the “true” way, Kubernetes is cool now, let’s move on.

3 thoughts on “Dissecting Kubernetes example”

  1. Hi Pav, thank you for this very informative article.
    I was looking for something exactly like this.

    Like you, I am about to work very heavily with this technology, and I still can’t wrap my head around some of the concepts. What you’ve created in this POC is a “micro-service”, correct?
    Or are the pods holding multiple instances – is each instance a new pod?

    And second, do you have any experience deploying .NET Core to these containers? I would really like to pick your brain, if I could.

    1. Hi Bjarke,
      It could have been a micro-service example if I hadn’t modified each pod’s index.html file, making it unique per pod. If you revert that change, then it would indeed become a scaled micro-service with two identical pods, each pod having a stateless service process – in our case, nginx. Having one service process per container/pod is kind of a good practice and the ‘default’ design approach.
      I have some experience putting .NET Core apps into containerized environments and troubleshooting them, so if you have a question I know the answer to, I don’t mind helping.
