Imagine you have a Node.js app that you’d like to run inside a Docker container. Maybe you want to check whether it still works on ‘another’ machine, or it’s a test run before adopting containers as your way of delivering software. Reasons may vary.
To have something tangible, let’s pick a hello.js app that prints out the ubiquitous ‘Hello World’:
var http = require('http');

http.createServer(function (_, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
})
.listen(8080);
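Before involving Docker at all, it’s worth a quick sanity check that the app itself works (assuming Node.js is installed locally):

node hello.js &
curl http://127.0.0.1:8080   # prints: Hello World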
How do we put it into a container?
The following three approaches assume you know Docker basics.
First approach: mount project files directly into the container’s file system.
Docker provides the -v command-line argument for mounting paths from the host’s file system into the container’s. All we need to do is take a container with Node.js preinstalled (why bother installing it ourselves?), mount the project folder to an arbitrary place in the container’s file system and start the app. Fortunately for us, the official Docker registry has a node image that will do the trick. As usual, we don’t have to download it; Docker will do that automatically. So, here it goes:
# run a container from the official node image: publish port 8080,
# mount the local helloapp folder at /helloapp and start hello.js
docker run \
  -p 8080:8080 \
  -v /Users/pav/helloapp:/helloapp \
  node \
  node /helloapp/hello.js
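Since the container only needs to read the project files here, a slightly safer variant of the same command (just a sketch, with the same paths) appends the :ro suffix to make the mount read-only:

docker run -p 8080:8080 -v /Users/pav/helloapp:/helloapp:ro node node /helloapp/hello.js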
As I’m running this example on a Mac, Docker resides in a virtual machine and I can’t use 127.0.0.1 to connect to the app anymore. I have to use the VM’s IP instead, and docker-machine ip can tell me which one it is (mine says it’s 192.168.99.100). Go back to Chrome, copy-paste the IP and voilà, it’s really working.
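If you’d rather verify it from the terminal, the same check looks like this (assuming your Docker machine has the default name, default):

docker-machine ip default         # prints the VM's IP, e.g. 192.168.99.100
curl http://192.168.99.100:8080   # prints: Hello World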
This is definitely not the approach you’d use in a production environment, but for quick and dirty proof-of-concept tasks it’s perfect. As a downside, you cannot move such a container to another host without also copying the mounted files, but the next approach fixes that.
Second approach: copy project files into the container.
Docker’s cp command can copy files between the host’s and the container’s file systems. This sounds ridiculously simple, so I decided to complicate the example a little bit:
- Start the node container in interactive mode:

  docker run -ti -p 8080:8080 node bash

  As usual, -ti stands for an interactive terminal.

- Press Ctrl+p followed by Ctrl+q to exit the container but keep it running in the background.

- Find the container ID by executing docker ps. In my case it’s db8ce50cfd72.

- Copy the project files to the container:

  # copy files from the local machine to the container whose ID starts with db8
  docker cp /Users/pav/helloapp db8:/helloapp

  By the way, you don’t have to type the whole container ID in any of the Docker commands. The first three characters are usually enough, assuming no other container ID starts with them.

- Go back to the container: docker attach db8

- Start the hello app: node /helloapp/hello.js
Behold! Hello world is working again.
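By the way, docker cp works in the other direction too; to pull files out of the container, just swap the arguments (the destination path below is an arbitrary one I picked):

docker cp db8:/helloapp /tmp/helloapp-copy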
Unlike with the first approach, this container is self-sufficient. Once you’ve committed it into a new image with the docker commit command, you can move it to different hosts. On the other hand, if your app changes its message from “Hello World” to “Goodbye cruel world”, you’ll have to repeat all these steps again. When the content can change, there’s a third way.
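A minimal sketch of that commit-and-reuse flow, with db8 being the container from above and helloapp:manual an image name I just made up:

# freeze the container's current file system into a new image
docker commit db8 helloapp:manual
# the committed image inherited bash as its default command,
# so we still have to spell out what to run
docker run -d -p 8080:8080 helloapp:manual node /helloapp/hello.js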
Third approach: use a Dockerfile to build a new image with project files baked in.
A Dockerfile makes it possible to describe an image’s structure in plain text and then ‘compile’ that description into a real image. If you think about it, everything we want to achieve can be described in just four steps:
- Take the existing node image,
- copy the project files into it,
- open port 8080,
- run the app.
Here’s how we’d describe that in Dockerfile:
FROM node:latest
COPY hello.js /helloapp/hello.js
EXPOSE 8080
ENTRYPOINT ["node", "/helloapp/hello.js"]
It’s an almost perfect match. We took the latest node image, copied the files, configured the container to allow connections to port 8080 and set up the entry point: whenever the container starts, launch hello.js. The FROM, COPY, EXPOSE and ENTRYPOINT instructions are just a few of the ones you can use.
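Just for illustration, here’s a sketch of how a few more of them could be used for an app that also has npm dependencies (the package.json and file layout are assumptions, not part of our hello app):

FROM node:latest
WORKDIR /helloapp                # subsequent relative paths resolve to /helloapp
COPY package.json .
RUN npm install                  # bake the dependencies into the image
COPY . .
EXPOSE 8080
ENTRYPOINT ["node", "hello.js"]

But back to our four-line version.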
Now, if I build this Dockerfile:
docker build -t helloapp:latest .
I’ll get a helloapp image tagged as latest (though I could tag it with an arbitrary version number instead, e.g. 0.1-beta) with the project files baked into it.
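A quick way to confirm the image is really there:

docker images helloapp   # lists the freshly built image and its tag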
I can start it (docker run -d -p 8080:8080 helloapp), move it between machines, delete it and then rebuild it again. The Dockerfile acts like source code for the image, which I can add to a version control system (VCS) along with the other project files.
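One way to move it between machines without a registry (the archive name is arbitrary) is the docker save / docker load pair:

docker save helloapp:latest > helloapp.tar   # on the first machine
docker load < helloapp.tar                   # on the second machine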
Of these three approaches, the first two are good for occasional quick tasks, but a Dockerfile is the way to go when you need to build images repeatedly. It’s VCS-friendly, it automates an otherwise tedious image-creation process and, when some of its dependencies change, you’re just a few keystrokes away from rebuilding the image.