On the surface, Docker looks like just another virtual machine (VM). Pick an Ubuntu image with a hello-world app inside, type docker run ubuntu hello-world in your terminal of choice, and hello-world starts, thinking it owns a whole machine running Ubuntu.
But Docker is not a VM, a manager of VMs, or a hypervisor of any kind. Docker is a platform for creating, launching, and managing containers. Containers look like VMs and quack like VMs, but they are much closer to a fence with barbed wire on top: no app can enter, no app can leave. What is bad for humans works great for applications in a production environment.
Because Docker doesn’t have to deal with a guest operating system and hardware abstraction, it works at a speed VMs can only dream of, while doing a very similar job.
How can we use it?
1. It’s a great sandbox. If I ever decide to learn Erlang, I don’t have to pollute my laptop with installers and temp projects that I’ll forget about in a week. I can start a Docker container, play there, and remove it afterwards.
2. It’s great for deploying applications. I’m too old for chasing missing dependencies on every new server. Instead, I can take a blank container, install all the dependencies in it, put my app on top, and copy the whole container to the production machine. Or to many production machines, who cares.
3. It’s great for deploying applications with conflicting dependencies to the same machine. Just put them into separate containers and they’ll never know there’s another Skywalker out there.
4. It’s great for updating applications and surviving OS upgrades. Replacing the old container with a new one updates your app, and unless an OS upgrade breaks the Docker engine itself, your app will hardly notice anything.
5. It can literally save you some money. My blog is the only app on its server, and it rarely uses more than 5% of the CPU. Unfortunately, I have to pay for the whole 100%. Putting the blog into a container with a 5% CPU limit would let me fill the remaining 95% with other dockerized apps (19 more blogs!), and they won’t mess with each other. I’d buy a second host only when the first one is completely full.
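The CPU cap from point 5 can be sketched with Docker’s resource flags. This is a hedged example: the --cpus flag is real (Docker 1.13+), but the image name my-blog is hypothetical.

```shell
# Cap the container at 5% of one CPU core (0.05 cores).
# 'my-blog' is a placeholder image name for illustration.
docker run -d --name blog --cpus="0.05" my-blog

# Older Docker versions express the same limit via CFS quota flags:
# docker run -d --name blog --cpu-period=100000 --cpu-quota=5000 my-blog
```

Nineteen more containers with the same limit would fill the host without starving each other.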
All examples require Docker to be installed first. It works on Mac, Windows, and natively on Linux (more on that later), and the smart folks at Docker have already written an installation guide.
1. Simple Hello world
docker run hello-world
The command launches a demo container called hello-world with a /hello application inside, which prints out the sacred sentence “hello world”. Because I didn’t have this container locally, Docker automatically downloaded it for me from the official registry.
2. Advanced Hello world
docker run ubuntu echo "hello world!"
This time Docker starts the ubuntu container (with real Ubuntu inside!), executes echo "hello world!" in it, and then exits.
3. Launch bash in Debian and connect current terminal to it
docker run -ti debian /bin/bash
The -ti arguments stand for ‘TTY, interactive’. And it really is Debian inside:
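Once bash is attached, a couple of commands confirm what you’re running (a sketch assuming the standard debian image):

```shell
# Inside the container:
cat /etc/os-release   # identifies the distro as Debian
uname -a              # the kernel is shared with the host, so this shows the host's kernel
exit                  # leaving bash stops the container, since bash was its main process
```

That last point is the key difference from a VM: the container has Debian’s userland, but no kernel of its own.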
4. Start a container with Nginx inside, open its port 80, and leave the container running in the background
docker run -d -p80:80 nginx
-d makes the container a background process, and -p80:80 binds its port 80 to port 80 of the host machine. If you’re a happy Linux user, opening http://127.0.0.1 will greet you with nginx’s default page:
I’m not a happy Linux user, and Mac OS lacks native Docker support (as does Windows). Because of that, my Docker secretly runs a VM with Linux inside, so -p80:80 actually binds port 80 of the VM, not of my localhost. Find the VM’s IP ( docker-machine ip or boot2docker ip, depending on your installation) and you’ll be fine. In my case it’s 192.168.99.1.
Once you’re done, stop the container.
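Stopping a background container usually takes two commands (the <ID-or-name> placeholder stands for whatever docker ps prints for you):

```shell
docker ps                 # list running containers with their IDs and names
docker stop <ID-or-name>  # send SIGTERM, then SIGKILL after a grace period
docker rm <ID-or-name>    # optionally remove the stopped container entirely
```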
5. Create your own image/container.
Docker images have one nice feature: they are immutable. This means you can get into, let’s say, Ubuntu, type rm -rf --no-preserve-root / (am I the only one who always wanted to do that?), watch the command send the container into the void, then exit, re-enter, and observe that nothing actually happened.
In fact, an image consists of multiple read-only layers, and when Docker starts an image, it adds a read-write layer on top, which turns the image into a container. All the changes you make happen in that layer, leaving the underlying image intact.
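Both halves of this picture are visible from the CLI: docker history lists an image’s read-only layers, and docker diff shows what a container’s read-write layer has changed on top of them. A sketch (the container name demo and the touched file are examples):

```shell
docker history ubuntu                      # the read-only layers the image is built from

docker run -ti --name demo ubuntu /bin/bash
# ...inside the container: touch /hello.txt, then exit...

docker diff demo                           # lists changes living in the RW layer,
                                           # e.g. an 'A /hello.txt' line for the added file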
But sometimes we need to preserve changes. To do that, we find the container in which we made the changes ( docker ps -a ) and then save it as a new image ( docker commit <ID> name:tag ).
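Put together, a minimal save-your-changes session might look like this (the names mybox and myimage:v1, and curl as the installed package, are examples):

```shell
# Make a change inside a fresh Ubuntu container
docker run -ti --name mybox ubuntu /bin/bash
# ...inside the container: apt-get update && apt-get install -y curl, then exit...

# Find the stopped container and freeze it as a new image
docker ps -a                         # shows 'mybox' among stopped containers
docker commit mybox myimage:v1       # new image with the change baked in as a layer

docker run -ti myimage:v1 /bin/bash  # curl is already there
```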
Not a conclusion.
Docker is huge. I covered some parts, oversimplified others, and many more were not even mentioned. But despite its scale, Docker is simple: one evening is enough to get the basics. And you don’t have to develop super-scalable enterprise applications to use it. I use Docker all the time as a sandbox, for emulating apps talking over a network, or for pretending that my app works elsewhere.