I suddenly realized that I haven’t blogged about Kubernetes for quite a while. But there’s so much happening in that area! For instance, even though creating Kubernetes objects from YAML configuration has always been the canonical way, it never felt particularly convenient. So here’s the solution – use helm, the package manager for Kubernetes.
What’s helm
Helm allows installation of Kubernetes apps in the same manner as we’d install TypeScript via npm or nginx via apt-get. It actually comes as two components: a command line client called helm and its companion service hosted inside the Kubernetes cluster called tiller. Together they can search for, install, remove, upgrade and create application packages called charts.
‘Chart’ is not the only potentially confusing choice of words. For example, the instance of a chart running in Kubernetes is called a release. Two instances of the same chart would become two releases, and so forth.
Fortunately, the source of packages has a more conventional name – a repository. Obviously, there can be more than one of them, including privately maintained ones.
With the terminology sorted out, let’s install helm and see it in action.
Install
Well, it’s easy. Any of the main OS package managers (brew, apt-get, chocolatey) can install helm. In my case brew install helm does the trick.
However, this only installs the command line client. To make the whole thing work, we also need to install the server component – tiller – into the Kubernetes cluster. I’ve started my cluster locally via minikube start, and helm init will take care of the rest.
As a side note, it’s actually possible to see what exactly gets installed: helm init --output yaml spits out the pretty trivial Deployment YAML that tiller is made of:
```yaml
#...
kind: Deployment
#...
spec:
  template:
    spec:
      containers:
        - image: gcr.io/kubernetes-helm/tiller:v2.9.1
#...
```
Basic operations
OK, it’s there, it’s working, let’s do something. For instance, let’s find some chart, install it, check its status and then remove it.
Search
First, let’s find something to install. helm search
is the command to show all known packages. In fact, I already know what I want to try, so helm search prometheus
narrows the search down to just a handful of packages:
```shell
helm search prometheus
#NAME                                  CHART VERSION  APP VERSION  DESCRIPTION
#stable/prometheus                     6.8.0          2.3.1        Prometheus is a monitoring system and time seri...
#stable/prometheus-blackbox-exporter   0.1.0          0.12.0       Prometheus Blackbox Exporter
#...
```
Install
stable/prometheus
seems to be the one I need, so without further ado:
```shell
helm install stable/prometheus
#NAME:   willing-grizzly
#...
#RESOURCES:
#==> v1/PersistentVolumeClaim
#...
#==> v1/Pod(related)
#...
#==> v1beta1/Deployment
#...
#NOTES:
#...
#The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
#willing-grizzly-prometheus-alertmanager.default.svc.cluster.local
#
#Get the Alertmanager URL by running these commands in the same shell:
#  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
#  kubectl --namespace default port-forward $POD_NAME 9093
#...
```
It’s actually very convenient. The NOTES section at the end shows some info about what was installed and where to go next. For instance, there are suggested export POD_NAME and port-forward commands near the end which will forward local port 9093 to the exposed port of the Alertmanager service, so we can point the browser at it and check what’s inside.
By the way, we can see the same notes again with the help of the helm status %release name% command.
List installed charts
It’s also quite easy to see that prometheus was indeed installed:
```shell
helm list
#NAME             REVISION  UPDATED                   STATUS    CHART             NAMESPACE
#willing-grizzly  1         Mon Jul  9 23:08:26 2018  DEPLOYED  prometheus-6.8.0  default
```
Pay attention to the REVISION column (the second one). That will come in handy once we get to upgrades and rollbacks.
Delete release
Well, that’s pretty obvious. helm delete %release name% will do the trick. I don’t really want to do that now, but adding the --dry-run argument helps to see how it would go:
```shell
helm delete willing-grizzly --dry-run
# release "willing-grizzly" deleted
```
Advanced operations
Customizing installation
Installing preconfigured charts is not nearly as useful as being able to configure how exactly they are going to be installed.
See customizable options
While it’s possible to change virtually any parameter of a chart, the question remains: which parameters are there? helm inspect knows the answer:
```yaml
#$ helm inspect values stable/prometheus
rbac:
  create: true

#...

alertmanager:
  ## If false, alertmanager will not be installed
  ##
  enabled: true

  ## alertmanager container name
  ##
  name: alertmanager
```
Apply options
As you can see (actually you can’t, as I truncated the output, but believe me on this one), it has lots and lots of customizable values. If I wanted to disable the Alertmanager during installation, I’d probably put alertmanager.enabled=false into a separate config file and pass it as an additional argument to the install command:
```shell
helm install -f config.yaml stable/prometheus
```
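For reference, such a config file is just regular YAML mirroring the structure that helm inspect values shows. A minimal sketch containing only the alertmanager.enabled key we saw above:

```yaml
# config.yaml – override file for `helm install -f` (illustrative sketch);
# keys mirror the chart's own values.yaml structure
alertmanager:
  # If false, alertmanager will not be installed
  enabled: false
```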
Alternatively, it’s also possible to pass this value directly, without the file at all:
```shell
helm install --set alertmanager.enabled=false stable/prometheus
```
However, as we’ve already installed prometheus, it would be way simpler to just upgrade it.
Upgrade and rollback
Upgrade
If we replace install with upgrade, we can pass new chart settings to an existing release:
```shell
helm upgrade --set alertmanager.enabled=false willing-grizzly stable/prometheus
#Release "willing-grizzly" has been upgraded. Happy Helming!
#LAST DEPLOYED: Tue Jul 10 00:02:19 2018
#NAMESPACE: default
#STATUS: DEPLOYED
#...
```
It looks like alertmanager
didn’t appear in the output this time, but let’s check to be sure:
```shell
kubectl get svc
#NAME                                            TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
#...
#willing-grizzly-prometheus-kube-state-metrics   ClusterIP  None            <none>       80/TCP    48m
#willing-grizzly-prometheus-node-exporter        ClusterIP  None            <none>       9100/TCP  48m
#willing-grizzly-prometheus-pushgateway          ClusterIP  10.99.116.205   <none>       9091/TCP  48m
#willing-grizzly-prometheus-server               ClusterIP  10.105.214.102  <none>       80/TCP    48m
```
Nope, it’s gone.
Rollback
In case we change our minds and want to revert to the previous version, there’s nothing easier. Simply pick the latest stable revision (remember that second column value in helm list, 1?) and call rollback:
```shell
helm rollback willing-grizzly 1
#Rollback was a success! Happy Helming!
```
We can also get back to the list of revisions anytime with the history command:
```shell
helm history willing-grizzly
#REVISION  UPDATED                   STATUS      CHART             DESCRIPTION
#1         Mon Jul  9 23:08:26 2018  SUPERSEDED  prometheus-6.8.0  Install complete
#2         Mon Jul  9 23:57:01 2018  SUPERSEDED  prometheus-6.8.0  Upgrade complete
#3         Tue Jul 10 00:16:41 2018  DEPLOYED    prometheus-6.8.0  Rollback to 1
```
Easy peasy.
Create own chart
Of course, we can and should create our own charts.
Create chart
There’s a helper command, helm create, which generates a boilerplate chart:
```shell
helm create demo-chart
# Creating demo-chart
```
```shell
tree demo-chart/
# demo-chart/
# ├── Chart.yaml
# ├── charts
# ├── templates
# │   ├── NOTES.txt
# │   ├── _helpers.tpl
# │   ├── deployment.yaml
# │   ├── ingress.yaml
# │   └── service.yaml
# └── values.yaml
```
So Chart.yaml obviously is the chart definition with the name and other metadata, whereas charts seems to be the folder for chart dependencies. values.yaml is quite interesting: all those parameters we saw in helm inspect values and could change in helm install/upgrade actually come from here.
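To make that wiring concrete, here’s a hedged sketch of how a value travels from values.yaml into a release – replicaCount is the key the default scaffold uses, but treat the exact contents as illustrative:

```yaml
# values.yaml (excerpt, illustrative) – the chart's default settings
replicaCount: 1

# A template can then reference it as {{ .Values.replicaCount }},
# and an installation can override the default:
#   helm install --set replicaCount=3 ./demo-chart
```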
Templates
Checking the contents of the templates folder reveals one important secret: all those deployments, services and other Kubernetes goodies that we’re going to ship in our package can actually be templates.
Here’s how that can be useful. For instance, if we don’t want to hardcode some Deployment’s name in the YAML and would rather derive it from the release name with a suffix, here’s how we could do that:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deployment
#...
```
.Release (as well as .Values) is one of helm’s built-in objects that templates can use. There are a few other sources of values, and simple value substitution is not the only thing templates are capable of. They also have control structures like if and range, functions and even pipelines, so {{ .Release.Name | upper }} is a perfectly valid template entry.
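To illustrate, here’s a hedged sketch of a templated Service that combines a condition, a loop and a pipeline – the service.* keys are made up for this example and would have to exist in values.yaml:

```yaml
# templates/service.yaml (sketch; the service.* values are assumptions)
{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-svc
  labels:
{{- range $key, $value := .Values.service.labels }}
    {{ $key }}: {{ $value | quote }}
{{- end }}
spec:
  ports:
    - port: {{ .Values.service.port }}
{{- end }}
```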
Packing and installing the chart
In fact, the folder with the chart is already a perfectly installable entity, so helm install ./demo-chart would actually work. However, if we’re going to distribute the chart or upload it to our own chart repository, it makes more sense to pack it first:
```shell
helm package demo-chart/
# Successfully packaged chart and saved it to: /Users/pav/Documents/helm/demo-chart-0.1.0.tgz
```
This creates a tarball which, by the way, is also installable.
Conclusion
So this is helm. It’s actually pretty neat: easy to use, quite easy to understand and definitely more powerful than I need for the foreseeable future. If only it could not exist as a separate tool and simply be part of some already existing package manager. After all, how can I possibly keep remembering all of them? Even JavaScript has three. Can we have, maybe, just one to rule them all? Please?