How do you usually configure an app? Over the decades our industry has come up with multiple approaches: command line arguments, various config files, registry settings and environment variables. Even hardcoding certain options into the app itself works sometimes. Whenever we need to reconfigure the app, we just go to its host, change a setting or two, and the job is done.
And now imagine a challenge: you have a cluster of services that come and go, they have different versions and locations, and you need to reconfigure all of them. How would you do that?
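As a small teaser of where this post is heading, Consul's key-value store lets every host read the same settings over HTTP. A minimal sketch, with a purely illustrative key name:
# write a setting once...
consul kv put services/web/max_connections 128
# ...and read it from any node running a Consul agent, via CLI or HTTP API
consul kv get services/web/max_connections
curl http://localhost:8500/v1/kv/services/web/max_connections?raw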
Continue reading “Using Consul Key-Value Store for Service Configuration”
As Docker containers are supposed to be small, single-process and easily replaceable instances, it’s not particularly clear how persistent data fits into that picture. Imagine you have a MySQL container which you decided to upgrade. What will you do with its database files? In the container world “upgrade” means “nuke the old one, start a new one”, and your data will turn into radioactive ashes along with the rest of the container’s file system.
However, along with the problem Docker also provides a solution: Docker volumes.
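As a minimal sketch (the volume name, image versions and password are just examples), a named volume survives the “nuke and restart” cycle:
# create a named volume and attach it to the container
docker volume create mysql-data
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:5.7
# "upgrade": remove the old container, start a new one against the same volume
docker rm -f mysql
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:8.0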
Continue reading “Persistent data in Docker volumes”
Docker has several types of networks, but one of them is particularly interesting. An overlay network can span host boundaries, so your web application container at HostA can easily talk to a database container at HostB by its name. It doesn’t even have to know where that container is.
Unfortunately, you can’t just create an overlay network and hope that it magically finds out about all participating hosts. There has to be one more component in order to make that happen.
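That component is a shared key-value store the Docker daemons can coordinate through. A rough sketch, assuming an older (pre Swarm-mode) Docker Engine and a Consul agent reachable at consul-host:8500:
# on every participating host, point the daemon at the shared store
dockerd --cluster-store=consul://consul-host:8500 --cluster-advertise=eth0:2376
# then create the overlay network once, on any of the hosts
docker network create -d overlay my-overlay
docker run -d --name web --network my-overlay nginx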
Of course, we could use Docker in Swarm mode and the problem would be solved. But we don’t have to: configuring a multi-host Docker network without Swarm is actually quite easy.
Continue reading “Multi-host Docker network without Swarm”
Today we’ll take a look at the last component of Elastic’s ELK stack – Kibana. Even though Logstash does a great job of processing logs and other data streams, and Elasticsearch is a powerful hybrid of a search index and a storage for them, these tools do not provide a graphical user interface for analyzing the data. For some tasks the otherwise convenient command line interface is just not enough. This is where Kibana steps in.
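Getting Kibana to talk to Elasticsearch is mostly a one-line affair. A hedged sketch of kibana.yml, assuming Kibana 7.x (older releases used elasticsearch.url instead) and a local Elasticsearch:
# kibana.yml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]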
Continue reading “Visualize Elasticsearch data with Kibana”
Last time we talked about Elasticsearch – a hybrid of a NoSQL database and a search engine. Today we’ll continue with Elastic’s ELK stack and take a look at a tool called Logstash.
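To set expectations, a Logstash pipeline is described by a config file with input, filter and output sections. The minimal, purely illustrative example below simply echoes stdin back as structured events:
# minimal.conf
input { stdin { } }
output { stdout { codec => rubydebug } }
Run it with bin/logstash -f minimal.conf, type a line, and Logstash prints it back as a timestamped event.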
Continue reading “Processing logs with Logstash”
So far we’ve been dealing with name-value monitoring data. However, what works well for numeric readings isn’t necessarily useful for textual data. In fact, Grafana, Graphite and Prometheus are useless for other kinds of monitoring records – logs and traces.
There’re many, many tools for dealing with those, but I decided to take a look at Elastic’s ELK stack: Elasticsearch, Logstash and Kibana – storage, data processor and visualization tool. And today we’ll naturally start with the first letter of the stack: “E”.
Elasticsearch is a fast, horizontally scalable open source search engine. It provides an HTTP API for storing and indexing JSON documents, and with the default configuration it behaves a little bit like a searchable NoSQL database.
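For instance (the index name and document below are made up, and the calls assume a local instance on port 9200 and a reasonably recent version), indexing and searching a document takes two curl calls:
curl -X PUT 'http://localhost:9200/logs/_doc/1' -H 'Content-Type: application/json' -d '{"message": "user logged in", "level": "info"}'
curl 'http://localhost:9200/logs/_search?q=level:info'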
Continue reading “Quick intro to Elasticsearch”
I don’t know if it’s a coincidence or not, but drastic changes in application metrics usually happen soon after a product upgrade. In fact, whenever I have to deal with a new issue on a production server, the first thing I do is check whether it was recently updated. So it makes sense to record such events along with other monitoring data.
But assuming our monitoring data is in Graphite, how would we do that?
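One option, sketched here with a made-up hostname and payload, is Graphite’s events endpoint, which accepts arbitrary tagged events over HTTP (depending on the Graphite version, tags may have to be a JSON list rather than a string):
curl -X POST 'http://graphite-host/events/' -d '{"what": "Deployed my-app v2.3.1", "tags": "deployment", "data": "build triggered by CI"}'
Such events can then be overlaid on graphs with render functions like drawAsInfinite(events("deployment")).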
Continue reading “Tracking application events in Graphite”
There’re two conceptually different approaches to collecting application metrics. There’s the PUSH approach, where the metrics storage sits somewhere and waits until a metrics source pushes some data into it. For instance, Graphite doesn’t do any collection on its own; it waits until somebody like collectd does the delivery.
There’s a second approach – PULL. Here metrics sources don’t try to be smart and just provide their readings on demand. Whoever needs those metrics can make a call, e.g. an HTTP request, in order to get some.
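For a concrete picture of the pull model, here’s a minimal sketch of a Prometheus scrape configuration; the job name and the localhost:8080 target are illustrative assumptions:
# prometheus.yml
scrape_configs:
  - job_name: 'my-app'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8080']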
Prometheus collects metrics using the second approach.
Continue reading “Scraping application metrics with Prometheus”
Even though Graphite does a very decent job of displaying individual metric graphs, its dashboard support is quite limited. Of course, we could take its powerful Render URL API and build anything we like in good old HTML, but on the other hand, there’s Grafana.
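For reference, a Render URL request is just an HTTP GET with the metric path and time range in the query string (the hostname and metric name here are made up):
http://graphite-host/render?target=servers.web01.loadavg.load_one&from=-24h&width=800&format=png
Grafana queries the very same data through its Graphite data source, but wraps it in proper dashboards.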
Continue reading “Building dashboards with Grafana”
Among the variety of collectd plugins there’s one ‘to rule them all’. If by some course of events all collectd plugins except Exec were taken from you, you’d still be able to restore all of collectd’s functionality with Exec.
As the name suggests, Exec starts an external program or script and interprets its output as a source of data. To be specific, it looks for lines that follow this scheme:
PUTVAL hostname/plugin-instance/type-instance [interval=seconds] timestamp:value[:value...]
To be even more specific, these lines would work:
PUTVAL myhost/cpu-0/cpu-system interval=10 N:51
PUTVAL hostname/vm_count/gauge 1484012951:U
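For completeness, here’s a hedged sketch of wiring such a script into collectd; the user, the script path and the way the value is computed are all illustrative assumptions:
# collectd.conf
LoadPlugin exec
<Plugin exec>
  Exec "nobody" "/usr/local/bin/vm_count.sh"
</Plugin>

#!/bin/bash
# /usr/local/bin/vm_count.sh – report the number of qemu processes as a gauge
HOSTNAME="${COLLECTD_HOSTNAME:-myhost}"
INTERVAL="${COLLECTD_INTERVAL:-10}"
while true; do
  echo "PUTVAL \"$HOSTNAME/vm_count/gauge\" interval=$INTERVAL N:$(pgrep -c qemu)"
  sleep "$INTERVAL"
done
Note that collectd refuses to run Exec scripts as root, hence the unprivileged user.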