Using Vagrant for Windows VMs provisioning

Using Vagrant to create a Consul cluster on Linux was probably fun. But what about Windows hosts? Believe it or not, more than half of developers actually use Windows, so for most folks, watching Vagrant create Linux VMs is pretty useless.

However, you can create and provision Windows VMs with Vagrant with little to no problem. In fact, Windows support has been around for years. There are some things to keep in mind, though. Continue reading “Using Vagrant for Windows VMs provisioning”
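
For a rough idea of what Windows support looks like, here’s a minimal sketch of a Vagrantfile for a Windows guest (the box name is an illustrative assumption; any WinRM-enabled Windows box will do):

    # Windows guests talk WinRM instead of SSH, and shell provisioning runs PowerShell.
    Vagrant.configure("2") do |config|
      config.vm.box = "gusztavvargadr/windows-server"  # hypothetical box choice
      config.vm.guest = :windows
      config.vm.communicator = "winrm"
      config.vm.provision "shell", inline: "Write-Host 'Provisioned!'"
    end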

How to use Vagrant to create Consul cluster

The last two articles about Consul service discovery involved one simple but extremely boring manual task: creating and configuring a cluster. In fact, I had to do it twice. I had to create three virtual machines, download and unpack Consul on them, find out their IP addresses, add configuration files and finally launch the binaries.

It’s dull. It’s boring. Humans shouldn’t do those kinds of things by hand. Seeing how easily we can automate the creation of Docker containers with a Dockerfile and docker-compose makes me wonder if we can do the same for hosts. Continue reading “How to use Vagrant to create Consul cluster”
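
As a preview of where this goes, here’s the kind of Vagrantfile that automates the whole chore – three VMs with known IPs and a provisioner that installs Consul (the box name, IP range and Consul version are illustrative assumptions):

    CLUSTER = {
      "consul1" => "192.168.50.11",
      "consul2" => "192.168.50.12",
      "consul3" => "192.168.50.13",
    }

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
      CLUSTER.each do |name, ip|
        config.vm.define name do |node|
          node.vm.hostname = name
          node.vm.network "private_network", ip: ip  # no more hunting for IP addresses
          node.vm.provision "shell", inline: <<-SHELL
            apt-get update -qq && apt-get install -y unzip
            curl -sLo /tmp/consul.zip https://releases.hashicorp.com/consul/1.4.0/consul_1.4.0_linux_amd64.zip
            unzip -o /tmp/consul.zip -d /usr/local/bin
          SHELL
        end
      end
    end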

Using Consul for Service Discovery

Imagine your distributed app has two kinds of services: web and db. Both of them are replicated for higher availability, live on different hosts, and go online and offline whenever they like. So, here’s a question: how do the web services find the db ones?

An obvious solution would be to come up with some sort of reliable key-value store, and whenever a service comes online, it would register its address there. But what happens when a service goes offline? It could notify the store just before that, but c’mon, it’s the internet: things go offline without any warning. OK, then we could implement some sort of service health checks to ensure that they’re still available… By the way, did you notice how quickly the simple idea of using an external store for service discovery started to turn into a reasonably large infrastructure project?

Service discovery is very hard to do well. But we don’t have to do it ourselves – there are tools for that, and Consul is one of them. Continue reading “Using Consul for Service Discovery”
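
As a teaser of what’s behind the link: in Consul, registering a service together with a health check boils down to a short definition like this (service name, port and check interval are made up for illustration):

    {
      "service": {
        "name": "db",
        "port": 5432,
        "check": {
          "tcp": "localhost:5432",
          "interval": "10s"
        }
      }
    }

Consul runs the check itself and stops advertising the service once it fails – exactly the machinery we were about to reinvent above.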

Visualize Elasticsearch data with Kibana

Today we’ll take a look at the last component of Elastic’s ELK stack – Kibana. Even though Logstash does a great job of processing logs and other data streams, and Elasticsearch is a powerful hybrid of a search index and a storage for them, these tools do not provide a graphical user interface for analyzing the data. For some tasks, the otherwise convenient command line interface is just not enough. This is where Kibana steps in.

Continue reading “Visualize Elasticsearch data with Kibana”

Quick intro to Elasticsearch

So far we’ve been dealing with name-value style monitoring data. However, what works well for numeric readings isn’t necessarily useful for textual data. In fact, Grafana, Graphite and Prometheus are useless for other kinds of monitoring records – logs and traces.

There are many, many tools for dealing with those, but I decided to take a look at Elastic’s ELK stack: Elasticsearch, Logstash and Kibana – a storage, a data processor and a visualization tool. And today we’ll naturally start with the first letter of the stack: “E”.

What’s Elasticsearch

Elasticsearch is a fast, horizontally scalable, open source search engine. It provides an HTTP API for storing and indexing JSON documents, and with the default configuration it behaves a little bit like a searchable NoSQL database.
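
As a minimal sketch of that API (the index name, the document, and a local node on port 9200 are assumptions; the /_doc endpoint is how recent versions look – older ones used an explicit document type instead):

    # Store a JSON document over HTTP, then find it with a full-text query.
    import requests

    # Index a document; refresh=true makes it searchable immediately.
    requests.post("http://localhost:9200/logs/_doc",
                  params={"refresh": "true"},
                  json={"level": "error", "message": "disk is full"})

    # Every field was indexed by default, so we can search right away.
    resp = requests.post("http://localhost:9200/logs/_search",
                         json={"query": {"match": {"message": "disk"}}})
    print(resp.json()["hits"]["hits"])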

Continue reading “Quick intro to Elasticsearch”

Scraping application metrics with Prometheus

There are two conceptually different approaches to collecting application metrics. There’s the PUSH approach, where the metrics storage sits somewhere and waits until a metrics source pushes some data into it. For instance, Graphite doesn’t do any collection on its own; it waits until somebody like collectd does the delivery.

And there’s the second approach – PULL. Here, metrics sources don’t try to be smart and simply provide their readings on demand. Whoever needs those metrics can make a call, e.g. an HTTP request, in order to get some.

Prometheus collects metrics using the second approach. Continue reading “Scraping application metrics with Prometheus”
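
For a taste of the pull model, here’s a tiny example built on the official prometheus_client Python package (the port and metric name are picked arbitrarily) – note that the app never sends anything anywhere:

    # Expose current readings over HTTP and let Prometheus come and scrape them.
    import random, time
    from prometheus_client import Counter, start_http_server

    requests_total = Counter("app_requests_total", "Total handled requests")

    start_http_server(8000)  # readings appear at http://localhost:8000/metrics
    while True:
        requests_total.inc()         # the app only updates its own counters;
        time.sleep(random.random())  # Prometheus decides when to collect them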

Building dashboards with Grafana

Even though Graphite does a very decent job of displaying individual metric graphs, its dashboard support is quite limited. Of course, we could take its powerful Render URL API and build anything we like in good old HTML, but on the other hand, there’s Grafana.
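
For reference, that Render URL API is plain parameterized HTTP – a request along these lines returns a PNG of a metric (the host and metric path are made up):

    http://graphite.example.com/render?target=servers.web01.cpu.load&from=-24h&format=png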

Continue reading “Building dashboards with Grafana”

Quick intro to rrdtool

I mentioned in the previous post that collectd uses rrdtool for saving its data by default. That results in an .rrd file for each metric, which can later be rendered using the very same rrdtool. RRD files are not something most people are familiar with, and the tool itself isn’t particularly easy to use, so why would such an easy-to-use tool as collectd choose it?

For a number of reasons. Continue reading “Quick intro to rrdtool”
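
To make the rendering part less abstract: turning one of collectd’s files into a chart is a single command along these lines (the path follows collectd’s usual layout, and value is the data source name it typically writes):

    rrdtool graph cpu.png --start -1h \
        DEF:v=/var/lib/collectd/rrd/localhost/cpu-0/cpu-user.rrd:value:AVERAGE \
        LINE1:v#ff0000:"user CPU"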

Quick intro to Apache Kafka

What is Apache Kafka

The official definition of Apache Kafka is a “distributed streaming platform”, which starts to make sense only after reading at least a few chapters of its documentation. The idea behind it, however, is relatively simple. In large distributed apps we have many services that produce messages: logs, monitoring events, audit entries – any type of record. On the other hand, there’s a similar number of services that consume that data. Kafka brings these parties together: it accepts data from producers, reliably stores it in topics, and allows consumers to subscribe to them. In other words, Kafka is a love child of a distributed storage and a messaging system.
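
Here’s a minimal sketch of that producer/consumer split using the kafka-python package (the topic name and broker address are assumptions):

    from kafka import KafkaProducer, KafkaConsumer

    # A producer just drops records into a topic...
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("audit", b"user 42 logged in")
    producer.flush()

    # ...and any number of consumers can subscribe to that topic.
    consumer = KafkaConsumer("audit",
                             bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest")
    for record in consumer:  # blocks, waiting for new records
        print(record.value)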

Continue reading “Quick intro to Apache Kafka”