Quick intro to rrdtool

I mentioned in a previous post that collectd uses rrdtool for saving its data by default. That results in an .rrd file for each metric, which can later be rendered using the very same rrdtool. RRD files are not something most people are familiar with, and the tool itself isn’t particularly easy to use, so why would such an easy-to-use tool as collectd choose it?

For a number of reasons. Continue reading “Quick intro to rrdtool”
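
To give a flavor of what that rendering step looks like, here’s a hypothetical rrdtool graph call. The file path and the data source name ("value") are just the usual collectd defaults, not something taken from the post; check yours with rrdtool info first.

```bash
# Hypothetical example: graph the last hour of a CPU metric that collectd
# wrote to disk. Path and data source name ("value") are the usual collectd
# defaults; run `rrdtool info <file>` to see what your .rrd actually contains.
rrdtool graph cpu-idle.png \
  --start end-1h \
  --title "CPU idle, last hour" \
  DEF:idle=/var/lib/collectd/rrd/myhost/cpu-0/cpu-idle.rrd:value:AVERAGE \
  LINE1:idle#0000FF:"idle"
```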

Host monitoring with collectd

collectd

Distributed apps introduce a challenge that we could usually avoid in monolithic ones: how do we tell that the app is performing well? I’m not talking about it being user-friendly or providing business value. How do you tell that the components of your distributed app are actually running? Which services are overutilized? Underutilized? Running out of disk space?

There are tools to answer those questions, and collectd is one of them.
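
Purely as an illustration (these are not commands from the post), on a Debian/Ubuntu host collectd answers those questions through plugins enabled in its config file:

```bash
# Illustrative only; the package name and config path are the stock
# Debian/Ubuntu ones.
apt-get update && apt-get install -y collectd

# The questions above map to plugins enabled in /etc/collectd/collectd.conf:
#   LoadPlugin cpu      # CPU utilization
#   LoadPlugin memory   # memory usage
#   LoadPlugin df       # free disk space
service collectd restart
```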

Continue reading “Host monitoring with collectd”

Highly available Kafka cluster in Docker

Apache Kafka cluster in Docker

Up until now we’ve been experimenting with Apache Kafka, a tool built with clustering and high availability in mind, while using exactly one host and availability settings that only a few very optimistic people would call high.

Not today.

Today we’re going to spin up a multi-host Kafka cluster and replicate a topic in it, so that if one host goes down, neither the data nor its availability will suffer.
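
As a sketch of the end goal, creating such a replicated topic with the scripts bundled in the Kafka tarball could look roughly like this; the ZooKeeper host name and the three-broker setup are assumptions, not the post’s actual values:

```bash
# Create a topic whose single partition is replicated to three brokers.
bin/kafka-topics.sh --create \
  --zookeeper zookeeper:2181 \
  --replication-factor 3 \
  --partitions 1 \
  --topic replicated-topic

# Shows which broker leads the partition and which ones hold the replicas.
bin/kafka-topics.sh --describe \
  --zookeeper zookeeper:2181 \
  --topic replicated-topic
```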

Continue reading “Highly available Kafka cluster in Docker”

“Hello world” with Apache Kafka

Single node cluster

So it’s time to send some data bits through Apache Kafka. But as usual, we need to install it first.

Installing Kafka is so trivial that I’ll break my rule and actually explain the process. Here’s the manual:

  1. Install Java Development Kit (you probably have it already)
  2. Download Kafka tarball
  3. Uncompress it (tar -xzf kafka_2.11-0.10.1.0.tgz on *nix systems)
  4. Done. You installed Kafka (starting it is sketched right below).
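
Once unpacked, starting a single-node setup is equally short. A minimal sketch using the bundled scripts and their default config files (everything listens on localhost):

```bash
# Start ZooKeeper and a single Kafka broker with the default configs.
cd kafka_2.11-0.10.1.0
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
```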

Continue reading ““Hello world” with Apache Kafka”

Quick intro to Apache Kafka

What is Apache Kafka

The official definition of Apache Kafka is “distributed streaming platform”, which starts to make sense only after reading at least a few chapters of its documentation. The idea behind it, however, is relatively simple. In large distributed apps we have many services that produce messages: logs, monitoring events, audit entries – any kind of records. On the other hand, there’s a similar number of services that consume that data. Kafka brings these parties together: it accepts data from producers, reliably stores it in topics, and allows consumers to subscribe to them. In other words, Kafka is a love child of a distributed storage and a messaging system.
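
As a rough illustration of that producer/topic/consumer flow, the console clients shipped with Kafka can play both parties. The broker on localhost:9092 and ZooKeeper on localhost:2181 are assumptions, not something from the post:

```bash
# Create a topic for the demo.
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic demo-events

# A producer writes a record into the topic...
echo '{"service":"auth","event":"user-login"}' | \
  bin/kafka-console-producer.sh --broker-list localhost:9092 --topic demo-events

# ...and any number of consumers can subscribe and read it back.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic demo-events --from-beginning
```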

Apache Kafka

Continue reading “Quick intro to Apache Kafka”

Building RabbitMQ Cluster

Cluster with RabbitMQ

As I promised last time, it’s time to check out a RabbitMQ feature we can consider advanced – clustering. A RabbitMQ cluster is a set of individual nodes that share the same users, queues, exchanges and runtime parameters. New nodes can come and go, even be located on different continents, yet to a connected client they will look like a single entity.

Clustering is not the same as replication or high availability. Yes, users and whatever else is usually necessary for a node to work will be copied across all nodes. Queues, however, will reside on the node where they were initially created, though they will be accessible from any node. If a node goes down, its queues go down with it.
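
For a taste of what that looks like in practice, here is a minimal sketch of joining a second node to an existing one. The node name rabbit@rabbit1 is an assumption, and both machines must share the same Erlang cookie:

```bash
# On the node that should join the cluster:
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app

# Lists all nodes that are now members of the cluster.
rabbitmqctl cluster_status
```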

Continue reading “Building RabbitMQ Cluster”

Quick intro to RabbitMQ


RabbitMQ is an example of a full-blown message queue that somehow remained simple to use. Unlike ZeroMQ, which is embedded into the services that use it, RabbitMQ is a broker. It’s an intermediary messaging service with its own users, permissions, encryption, configurable durability and delivery acknowledgements, clustering, high availability, and a bazillion other features you might never need. RabbitMQ is built on top of Erlang and inherits its well-known resilience and compatibility with virtually any OS.

In the following article we’ll try to get a sense of what messaging with RabbitMQ feels like. I’ve chosen Ubuntu (in a Docker container) as a platform, but it could’ve been anything else. Continue reading “Quick intro to RabbitMQ”
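
For reference, a rough sketch of that kind of setup (not the post’s exact commands) could be a throwaway Ubuntu container with RabbitMQ installed from the distribution packages:

```bash
# Start a disposable Ubuntu container, exposing the default AMQP port.
docker run -it --name rabbit-intro -p 5672:5672 ubuntu bash

# Inside the container:
apt-get update && apt-get install -y rabbitmq-server
rabbitmq-server -detached   # start the broker in the background
rabbitmqctl status          # confirm the node is up
```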

Quick intro to Windows containers

Windows containers

It finally happened. With the release of Windows Server 2016 you can run Docker containers with Windows inside. There’s no virtual machine hiding somewhere to make that happen, nor some sort of Windows emulation built on top of a Linux core. It’s true Windows in true Docker, which supports Dockerfiles, docker-compose and other Docker goodies. Continue reading “Quick intro to Windows containers”
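
As a quick, hypothetical taste, running a Windows container looks like any other docker run; the base image name below is simply what Microsoft published on Docker Hub around the Windows Server 2016 release:

```bash
# On a Windows Server 2016 host with the Docker engine installed.
docker pull microsoft/windowsservercore
docker run microsoft/windowsservercore cmd /c echo Hello from a Windows container
```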

Using ZeroMQ with Docker

ZeroMQ, Node.js and Docker

Last time we built three client-server Node.js apps that were talking to each other using ZeroMQ. However, running both client and server on localhost is a little bit lame. Let’s put them into containers! They’ll still be lame, but now with Docker around.

So, here’s the plan: let’s see what we need to do to last week’s fire-and-forget ZeroMQ app, so its client and server can work and communicate from within Docker containers. Continue reading “Using ZeroMQ with Docker”
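
A hypothetical outline of that plan could look like this; the image names, Dockerfiles and the network name are illustrative, not taken from the post:

```bash
# Build one image per app and put both containers on a shared network.
docker network create zmq-demo
docker build -t zmq-server -f Dockerfile.server .
docker build -t zmq-client -f Dockerfile.client .

# On a shared network the client reaches the server by container name, so the
# ZeroMQ connect string becomes something like tcp://server:5555 instead of localhost.
docker run -d --name server --network zmq-demo zmq-server
docker run -d --name client --network zmq-demo zmq-client
```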