Keeping application secrets with Vault

I’ve been talking to one of our security guys recently about providing my piece of software with a secret certificate while keeping that certificate out of my hands. Apparently, managing application secrets is not an easy task. Later that day I checked out one of the tools that’s supposed to make such tasks simpler – HashiCorp Vault – and was quite impressed. I didn’t realize how big the problem domain is, and how many tools and tricks you have to consider in order to build a solution for it. Today I want to go through the basics of managing secrets with Vault and hopefully highlight a few things that impressed me the most.

What is Vault

Vault is a command line tool and a RESTful service that’s designed to safely keep application secrets such as logins, passwords, tokens or certificates. Obviously, along with keeping secrets it also provides them back on demand. What’s more, Vault can work with custom secret backends (e.g. AWS, databases, PKI) and generate temporary credentials on the fly. All requests for secrets are authenticated and audited, so security staff should always be able to tell who requested what.

Installation

Almost everything that HashiCorp produces is downloadable as a simple zip archive and Vault itself is no exception. I downloaded it on a Mac, so all examples will have a unixy flavor. However, running it on Windows wouldn’t be much different.
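Something along these lines should do it (I’m assuming Vault 0.7.x on macOS here; pick whatever version and platform match your setup from releases.hashicorp.com):

# Download the zip, unpack the single binary onto the PATH and check that it responds.
# The version number below is just an example.
$ curl -O https://releases.hashicorp.com/vault/0.7.0/vault_0.7.0_darwin_amd64.zip
$ unzip vault_0.7.0_darwin_amd64.zip -d /usr/local/bin
$ vault version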

“Hello world!”

Let’s try to run something that will give us a feeling of what we’re going to deal with. As Vault involves a command line tool and a service, we need to start that service first. For demo purposes, vault server -dev will do the trick:
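# Dev mode runs a single in-memory, auto-unsealed server over plain HTTP and prints
# the unseal key and root token to the console; never use it outside of experiments.
$ vault server -dev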

Another thing we need to do is to configure the command line vault to talk to the server via HTTP, not HTTPS, as it would try to do by default. We can do that by setting the VAULT_ADDR environment variable, and vault server -dev already wrote the whole command for that:
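# The dev server listens on localhost:8200 by default.
$ export VAULT_ADDR='http://127.0.0.1:8200'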

Now we can write and read some secrets. First, let’s store credentials for an imaginary staging database (“sa”, “1”) under the secret/db-staging key:
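# The field names are arbitrary; I’m going with "user" and "password" here.
$ vault write secret/db-staging user=sa password=1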

Quite simple. Getting them back is also easy:
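# Prints back the user and password fields we just stored.
$ vault read secret/db-staging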

Running a real server would require us to authenticate first, but the demo server takes care of that for now. However, talking to the vault server via HTTP requires a valid authentication token even now. The following call will fail:
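# No X-Vault-Token header, so Vault rejects the request with a "missing client token" error.
$ curl http://127.0.0.1:8200/v1/secret/db-staging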

Whereas this one, with the root token (taken from the vault server -dev output), will work:
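# Substitute the root token printed by the dev server for the placeholder below.
$ curl -H "X-Vault-Token: <root-token>" http://127.0.0.1:8200/v1/secret/db-staging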

In real life it would work similarly: somebody would’ve put the secrets into the store and then applications would request them back, each using its own authentication token.

But what we’ve done so far doesn’t look that impressive. After all, a simple Consul key-value store with authentication enabled could’ve done the same. In order to understand what makes the tool so cool we need to dive a little bit deeper.

A little bit more realistic example

Let’s try to do the same example, but in a more production-like fashion.

Starting the server

You can’t just start a vault server without configuring it first. At a bare minimum, the configuration should involve persistent storage for encrypted secrets and HTTP(S) endpoint details. There’s a whole number of possible storage backends for secrets (Consul, S3, PostgreSQL, Azure to name a few), but I’ll use the regular file system for storage and will keep the same HTTP settings as in -dev mode.
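Here’s a minimal sketch of such a config, which I saved as vault.hcl (the storage path is an arbitrary choice, and tls_disable keeps plain HTTP purely for the demo):

# vault.hcl
# Newer Vault versions call this stanza "storage" instead of "backend".
backend "file" {
  path = "./vault-data"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}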

By the way, HashiCorp came up with its own configuration format called HCL (you just saw it above), which is a more human-readable version of JSON. I’m quite skeptical when it comes to inventing new languages, but what do I know. On the bright side, we can indeed start the server now:
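# Point the server at the config file from above.
$ vault server -config=vault.hcl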

Unsealing the vault

That’s a really cool idea. The vault can be in two states: sealed (the default), where all secrets are in persistent storage and the vault has no idea how to decrypt them, and unsealed (the desired one), when the vault finally knows how to decrypt them. The process of unsealing the vault is brilliant.

When you initialize a vault for the first time, it creates a master key for decryption and splits it into several pieces (e.g. 3). A sealed vault never knows the key, so in order to unseal it you have to provide at least a few pieces (e.g. 2) of the master key to restore it. If different people own different pieces, a single person can’t unseal the vault; somebody else must participate in the process. But in case of emergency anyone can seal it back.

Let’s see that in action. I’m going to initialize the newly created vault and tell it to split the master key into three pieces, requiring at least two of them to unseal the vault:
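# Pre-1.0 CLI syntax; newer versions spell this "vault operator init".
# The output lists the three unseal keys and the initial root token; keep them somewhere safe.
$ vault init -key-shares=3 -key-threshold=2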

Having the keys, we need to call vault unseal as many times as the key-threshold option was set to, each time with a different key.
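# Two calls, because key-threshold was 2; the placeholders stand for any two of the three keys.
$ vault unseal <unseal-key-1>
$ vault unseal <unseal-key-2>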

Now the vault is ready to share its data.

Authentication

Of course, unauthorized reading from and writing to a vault is nonsense. A caller has to identify itself with a token, and the only token we have by default is the root token (returned by vault init). Let’s try it.
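Something like this (again the pre-1.0 syntax; newer versions use vault login, and the placeholder is the initial root token printed by vault init):

$ vault auth <initial-root-token>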

While we’re authenticated, let’s also add a few secrets to the vault. We’ll need them later.
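For instance, the same staging credentials as before, plus a production entry to have something the limited token shouldn’t see (the values are made up, of course):

$ vault write secret/db-staging user=sa password=1
$ vault write secret/db-production user=sa password=pr0dpwd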

Creating a policy and a new token

Using the ‘root’ account is not particularly safe. We could’ve created a new, less privileged token, but as we’re logged in as ‘root’, the new token would become a child token of ‘root’, and without a custom policy it would also inherit all ‘root’ permissions. We need to create a custom policy first.

A policy is basically another HCL file that says what paths a token will have access to. If I wanted to create some sort of development policy that provides read-only access to secret/db-staging secrets and zero access to everything else, I’d come up with something like this:
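# dev-policy.hcl
path "secret/db-staging" {
  capabilities = ["read"]
}
# Anything not explicitly granted is denied by default, which covers the
# "zero access to everything else" part.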

After saving the policy to, say, a dev-policy.hcl file and writing it to the vault with the vault policy-write development dev-policy.hcl command, we can finally create a new token with limited access:
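$ vault policy-write development dev-policy.hcl
# token-create is the pre-1.0 spelling of "vault token create".
$ vault token-create -policy=development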

It’s easy to test that the new token indeed has limited permissions:
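# Authenticate with the token returned by token-create (placeholder below), then poke around.
$ vault auth <development-token>
$ vault read secret/db-staging                        # allowed by the policy
$ vault read secret/db-production                     # permission denied
$ vault write secret/db-staging user=sa password=2    # also denied: the policy is read-only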

Using a custom secrets backend

So far we’ve been using the key-value secrets backend, which is enabled by default and mounted at the secret/ path. That’s why all our keys had to start with secret/. However, that’s not the only backend we could’ve used. Imagine how cool it would be if, instead of hardcoded logins and passwords, we provided temporary ones, generated on demand.

Custom secrets backends can do that. As we already started with the database credentials example, let’s use the built-in database backend to generate read-only MySQL credentials on the fly. Before we begin, let’s switch back to the root token:
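$ vault auth <initial-root-token>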

Then, mount the database backend:
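# Pre-1.0 syntax; newer versions use "vault secrets enable database".
$ vault mount database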

At this point we need a MySQL server. I have Docker installed on my machine, so we can get one in seconds:
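# The image tag and container name are arbitrary choices.
$ docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=rootpwd mysql:5.7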

This command will also open port 3306 and assign the rootpwd password to the MySQL root account, both of which we’ll provide to the database backend later.

At this point we can configure the database backend and define what exactly the read-only role (I called it viewer) means for us:
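# Roughly along these lines; the parameter names come from the database backend's
# documentation, and the connection details match the Docker container started above.
$ vault write database/config/mysql \
    plugin_name=mysql-database-plugin \
    connection_url="root:rootpwd@tcp(127.0.0.1:3306)/" \
    allowed_roles="viewer"

$ vault write database/roles/viewer \
    db_name=mysql \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';"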

It basically boils down to a CREATE USER SQL statement with SELECT permission for whoever tries to get credentials for the viewer role. I didn’t specify how long the newly created user should exist, so Vault will fall back to some default value on its own.

Let’s test if the whole thing works:
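# Each read generates a fresh, short-lived MySQL user; the output contains the generated
# username and password plus the lease details.
$ vault read database/creds/viewer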

And after getting into the MySQL container:
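# Assuming the container is named "mysql" as above; plug in the username and password
# returned by the previous command.
$ docker exec -it mysql mysql -u<generated-username> -p<generated-password>
mysql> SHOW DATABASES;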

It all works! Isn’t that cool? Every application, every process could have its own temporary login and password, which would be useless to steal, and with proper logging we could always track down what process ran what query.

Conclusion

Vault has other pluggable backends: other secret providers, authentication backends (e.g. AWS or GitHub) and ones for auditing. We’ve barely scratched the surface here.

I’m still impressed by how much one can learn just from playing with the tool: algorithms, concepts, smart separation of pluggable components, and a few other things. Even if I had no intention of ever getting back to Vault again, it would’ve been worthwhile to sit with the documentation and experiment with the tool just for those ideas.
