Quick intro to Elasticsearch

So far we’ve been dealing with name–value monitoring data. However, what works well for numeric readings isn’t necessarily useful for textual data. In fact, Grafana, Graphite and Prometheus are useless for other kinds of monitoring records – logs and traces.

There’re many, many tools for dealing with those, but I decided to take a look at Elastic’s ELK stack: Elasticsearch, Logstash and Kibana – storage, data processor and visualization tool. And today we’ll naturally start with the first letter of the stack: “E”.

What’s Elasticsearch

Elasticsearch is a fast, horizontally scalable, open-source search engine. It provides an HTTP API for storing and indexing JSON documents, and with the default configuration it behaves a little bit like a searchable NoSQL database.


Elasticsearch is written in Java, so installation is very easy: download the archive and launch bin/elasticsearch from it. However, running it via the official Docker container is even simpler: docker run -d -p9200:9200 elasticsearch. Port 9200 is the front door, so let’s look at what’s inside.
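Side by side, the two options might look like this (the archive path is hypothetical and depends on the version you downloaded):

```shell
# Option 1: unpack the archive and launch the startup script
# (path assumes version 5.2.0 was downloaded)
./elasticsearch-5.2.0/bin/elasticsearch

# Option 2: the official Docker image, detached, with port 9200 exposed
docker run -d -p 9200:9200 elasticsearch
```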

Looking around

Official guides usually use Kibana for running demo queries, but c’mon, it’s just HTTP and JSON – we don’t need a separate tool for that when there’s a terminal and curl! Elasticsearch is supposed to be listening on port 9200, so let’s send a blank request to it and see what happens:
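Assuming a default local setup, the blank request is just curl against the root URL; the trimmed response below is an example of what it might return:

```shell
curl localhost:9200?pretty
# A trimmed example response (exact fields vary by version):
# {
#   "name" : "...",
#   "cluster_name" : "elasticsearch",
#   "version" : {
#     "number" : "5.2.0",
#     ...
#   },
#   "tagline" : "You Know, for Search"
# }
```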

Version 5.2.0. Seems to be the latest one.

There’re other queries we can run without adding any data. For instance, we can check the node’s health status:
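A sketch using the _cluster/health endpoint, again assuming the default localhost setup:

```shell
curl localhost:9200/_cluster/health?pretty
# The response includes the cluster status ("green", "yellow" or "red"),
# the number of nodes, active shards, and so on.
```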

Or get a list of current indices:
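The _cat/indices endpoint returns a human-readable table; the ?v parameter adds column headers:

```shell
curl 'localhost:9200/_cat/indices?v'
```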

Obviously, a brand new installation has none. But as the number of unfamiliar words starts to climb, let’s take a look at the elasticsearch glossary.

Elasticsearch glossary

So we’ve already mentioned the node, which is a single running instance of elasticsearch with its own storage and settings. Even one node counts as a cluster, but having several of them, in conjunction with index sharding (similar to Kafka topic partitioning) and replication, would both decrease response time and increase the index’s chances of survival.

The term index itself describes a collection of documents. Your cluster can have as many indices as you want. Within an index you can categorize your documents by types – arbitrary names describing documents of similar structure, e.g. customers or paychecks. Finally, a document is a good old JSON object.

Create, Read, Update, Delete

But enough of theory, it’s time to do something with the data.


Adding a new document to elasticsearch is as easy as an HTTP POST request:
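A minimal sketch, assuming elasticsearch runs on localhost; the message text is made up for the example:

```shell
curl -X POST localhost:9200/monitor/logs?pretty -d '
{
  "kind": "info",
  "message": "Application started"
}'
# The response contains the generated _id along with _index, _type and _version.
```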

We posted a new { "kind": "info", "message": "..."} document to an index called monitor and a type named logs. Neither of the two existed before that, but elasticsearch created them along with indexing the document. It also responded with a JSON containing the newly inserted document’s ID (_id) and some other details. It’s also possible to provide your own ID by using a PUT request instead of POST and appending the new ID to the URL, e.g. -X PUT monitor/logs/42. The query string parameter ?pretty is used only for formatting the response JSON.

As not many people would actually enjoy inserting documents one by one, there’s also a bulk insert option.

A bulk request requires two JSONs per document: the first one describes the bulk operation kind (in our case – “index”) and the second one is the document itself.
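A sketch of such a request (the log messages are made up; --data-binary is used because the _bulk endpoint relies on the newlines that plain -d might mangle):

```shell
curl -X POST localhost:9200/monitor/logs/_bulk?pretty --data-binary '{ "index": {} }
{ "kind": "info", "message": "Cache warmed up" }
{ "index": {} }
{ "kind": "critical", "message": "Epic fail has just happened" }
'
```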


Now that we have something in the index, we can perform a simple search to read the documents back. Default elasticsearch settings store the full copy of a document along with its index, so in that case a search with empty criteria behaves like a SELECT * statement:
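With no criteria, the _search endpoint returns everything (up to the default page size of 10 hits):

```shell
curl localhost:9200/monitor/logs/_search?pretty
```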

It’s also possible to get a single document by its ID:
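For example, for a document stored under ID 42:

```shell
curl localhost:9200/monitor/logs/42?pretty
```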


Similarly, knowing a document’s ID, we can update it. The “Epic fail has just happened” message for an OutOfMemoryException probably says less than it should, so it’s better to update it:
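A sketch of the replacement, assuming the offending document’s ID is 13 (the ID and the new message text are both made up for the example):

```shell
curl -X PUT localhost:9200/monitor/logs/13?pretty -d '
{
  "kind": "critical",
  "message": "Epic fail has just happened: OutOfMemoryException while generating a report"
}'
```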

However, under the hood elasticsearch doesn’t update the document but rather replaces it with a new one, keeping the same ID.


When you need to get rid of something, HTTP DELETE will do the trick, e.g. curl -X DELETE with the document’s URL.
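For instance, to delete the document stored under ID 42:

```shell
curl -X DELETE localhost:9200/monitor/logs/42?pretty
```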


But many NoSQL databases are capable of storing and retrieving JSON documents. The real power of elasticsearch is in search (duh). There’re two approaches to searching for data: the REST Request API for simple queries and the more sophisticated Query DSL.

The REST Request API simply means there’s an additional argument in the HTTP GET request:
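A sketch: the q= parameter holds the search term, optionally scoped to a field:

```shell
curl 'localhost:9200/monitor/logs/_search?q=message:memory&pretty'
```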

There’s not much you can put into a query string – a search term, maybe a sort= instruction, and that’s it. Query DSL, on the other hand, is a full-blown domain-specific language that has numerous search arguments, boolean expressions, result filters – all sorts of things to help in finding what we need.

A Query DSL search is also an HTTP GET request, but with slightly trickier syntax. If we wanted to find non-critical log messages that mention memory status, we could use something like this:
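A sketch of such a query against the monitor/logs data from before: a bool query that must match “memory” in the message and must not match critical entries:

```shell
curl localhost:9200/monitor/logs/_search?pretty -d '
{
  "query": {
    "bool": {
      "must":     { "match": { "message": "memory" } },
      "must_not": { "match": { "kind": "critical" } }
    }
  }
}'
```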


In addition to its searching capabilities, elasticsearch can aggregate stuff. Aggregation is a huge topic on its own, but to get a feeling for how it looks, here’s how we’d get statistics of logs by their kind:
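A sketch of a terms aggregation over the kind field (the .keyword sub-field is used because, with dynamic mapping, the text field itself isn’t aggregatable by default):

```shell
curl localhost:9200/monitor/logs/_search?pretty -d '
{
  "size": 0,
  "aggs": {
    "logs_by_kind": {
      "terms": { "field": "kind.keyword" }
    }
  }
}'
```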

Because the _search URL does both searching and aggregation, and we didn’t provide search criteria (so the query would return everything), we added the "size": 0 parameter to keep search results out of the response. The rest is quite obvious.


I would say we just scratched the surface of elasticsearch, but we did much less than that. We accidentally dropped a feather on it, sneezed, and the air flow blew away the feather and the few surface molecules that stuck to it. Updating documents by submitting a script, document schemas, filters, complex search and aggregation queries, clusters, document analysis – we covered none of that.

But we did cover enough to get a feeling for what the tool is: an easy-to-use search engine with a convenient API and a bazillion useful data-exploration features to google. Next time we’ll take a look at how to fill it with textual monitoring data with the help of Logstash.
