Learn Docker with NodeJS
Nov 02, 2022
Developing or hosting a web server usually means installing a database, Node.js, and all the other tools your project requires. Managing and running all of these becomes time-consuming and a headache for developers.
That is where Docker helps us. It runs applications in an isolated environment called a container. Compared to a virtual machine, a Docker container is very lightweight and contains everything needed to run the application.
In this post, you will learn Docker by running a Node.js web server inside a container. Let's get started.
If you want to view the completed project files before starting, check out this GitHub repository.
Creating a basic NodeJS server
Let's first create a basic Express web server to run with Docker. Create a project directory named `docker-nodejs` and open your terminal or command prompt in that directory. Run `npm init -y` and then `npm install express`. This will generate a `package.json` with the default config and install the Express framework.
Next, update the `package.json` type and scripts like below.
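Here is a minimal sketch of the relevant fields, assuming the `dev` script simply runs the entry file with `node` (this matches the `npm run dev` command used later):

```json
{
  "type": "module",
  "scripts": {
    "dev": "node index.js"
  }
}
```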

Why change the module type? To enable ECMAScript modules, so we can use the `import` syntax.
Now, create the `index.js` file in the project directory and paste the code below.
import express from "express";

const app = express();

app.get("/", (req, res) => {
  res.status(200).send("HI 👋");
});

app.listen(8000, () => {
  console.log("Server is running on http://localhost:8000");
});
As you can see, we are sending a response on the `/` endpoint and listening on port `8000`.
Run `npm run dev` and visit http://localhost:8000 to check that the server is working.
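If you prefer checking from the terminal, a quick request with curl (assuming you have it installed) should return the greeting:

```sh
curl http://localhost:8000
# HI 👋
```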
Writing the Dockerfile
What is a Dockerfile? A `Dockerfile` is a text file that contains all the commands a user would otherwise run on the command line or terminal to assemble an image.
First, create a file named `Dockerfile` and paste the code below to get an idea of what it looks like.
FROM node:lts-alpine
WORKDIR /project
COPY ["package.json", "package-lock.json", "./"]
RUN npm install
COPY . .
EXPOSE 8000
CMD ["npm", "run", "dev"]
Before explaining the code above, we need to know what a Docker container and a Docker image are, because to run our web server in Docker we have to build an image from the `Dockerfile` above and then create and run a container from that image.
What is a Docker Container?
A Docker container is a software package that contains all the resources and dependencies needed to run your application: for example, your project source files, Node.js, and, if you want to connect your application to a database, the database as well.
What is a Docker Image?
To create a Docker container you need all the resources, configuration, and instructions for that container. You can think of an image as a template with instructions for creating a Docker container. We declare all the resources, configuration, commands, etc. in the `Dockerfile`, which is responsible for building the image.
The code above is the script we will use later to build the image and create a Docker container from it.
Explaining the Dockerfile
Now that you know what a Docker container and image are, it will be easier to understand how the `Dockerfile` works, because it's mostly about copying the source files and running npm commands.
Let's start with the `FROM` instruction on the first line.
When building an image, you can inherit from other images. Since we need Node.js to run our application, we are basing our image on the official Node.js image.
But what is `lts-alpine` after `node`? It's the image tag. It could be something like `node:16.18.0`, where `node` is the image name and `16.18.0` is the Node.js version. Here, `lts` means we are using the LTS version of Node.js, and `alpine` means we are using the Alpine Linux variant, which is more lightweight and smaller in size.
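For example, either of the following could serve as the first line of our Dockerfile (the exact tags available may change, so check Docker Hub before pinning one):

```dockerfile
FROM node:16.18.0     # a specific pinned Node.js version (Debian-based)
FROM node:lts-alpine  # the current LTS release on Alpine Linux
```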
Next, we have the `WORKDIR` instruction. When we created our basic Express server, we made a directory and then created files and ran commands inside it. `WORKDIR` does the same thing inside the image: it declares the working directory to use as the default path.
Then, with the `COPY` instruction, we copy the `package.json` and `package-lock.json` files into the working directory. The pattern works like `["<src1>", "<src2>", ..., "<dest>"]`: the source files come first in the array and the destination folder goes at the end. Here `./` means the current working directory, which is the `/project` folder, because we declared it as the default path with the `WORKDIR` instruction.
Next, we use the `RUN` instruction to execute `npm install`, which installs all the dependencies.
After that, we use the `COPY` instruction again, but this time we copy everything from our project folder into the image.
You might be wondering: if we are copying every file anyway, why copy `package.json` and `package-lock.json` separately?
The reason is to take advantage of cached Docker layers. When we build the image, each step is cached as a layer. For example, if we build the image again without changing or updating any files in our project, Docker will use the cached layers instead of copying the files again.
So, if there are no changes to `package.json` and `package-lock.json`, Docker will use the cached layer from previous builds. But while developing our application, we often change our code and other files, which invalidates the cache from that step onward and rebuilds the rest of the image, running `npm install` over and over again.
To stop this from happening, we run `npm install` separately and only afterwards copy all the files from the directory. This way, `npm install` only runs when there is a new change to `package.json` or `package-lock.json`.
Docker layer caching mainly works on the `RUN`, `COPY`, and `ADD` instructions.
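For contrast, here is a sketch of the naive ordering that defeats the cache: because `COPY . .` comes before `npm install`, any code change invalidates that layer and everything after it.

```dockerfile
# Anti-pattern: copying all files first
FROM node:lts-alpine
WORKDIR /project
COPY . .          # any source change invalidates this layer...
RUN npm install   # ...so dependencies are reinstalled on every build
EXPOSE 8000
CMD ["npm", "run", "dev"]
```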
Next, we use the `EXPOSE` instruction to tell Docker that our container listens on port `8000` at runtime. If you look at the code of our Express server, that is the port we use to access the server.
And finally, we use the `CMD` instruction to run our application. The command is written as a JSON array so it runs in exec form instead of shell form. For example, the exec form is `["node", "app.js"]`, while the shell form `node app.js` is executed as `/bin/sh -c "node app.js"`.
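To make the difference concrete, here is how the two forms look side by side (reusing the `node app.js` example from above):

```dockerfile
# Exec form: a JSON array, the process runs directly as PID 1
CMD ["node", "app.js"]

# Shell form: Docker wraps the command in a shell,
# effectively running /bin/sh -c "node app.js"
CMD node app.js
```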
Create a .dockerignore file
If you look at the line `COPY . .`, we are copying everything from our project directory, including the `node_modules` folder. Since we run `npm install` while building the image, we don't need this folder. Copying the entire `node_modules` is also a bad practice and makes the image build slower.
Just like `.gitignore`, Docker also supports a `.dockerignore` file where you can exclude files and directories from being copied to the working directory.
Let's create a `.dockerignore` file and add `node_modules` like below.
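For this project, a single line is enough:

```
node_modules
```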

Building the Image
So, after learning how Docker works, we are finally ready to build our image. As we know, we have to build the image first in order to create and run our Docker container.
To do this we will use the `docker build` command, which builds Docker images from a `Dockerfile`. First, let's run `docker build --help` in the terminal to see how the command works.

As you can see in the help output, the command expects a `PATH` argument, which is the location of the project files. Since the `Dockerfile` is in the same project directory, we can just use `.` as the `PATH` value.
Also, we are going to use the `--tag` flag to name our image so that we can recognize it in the Docker image list.
Let's build our image, naming it `nodejs-docker`.
docker build --tag nodejs-docker .

Now, if you run the command again, you will see Docker using the cached layers from the previous build, since nothing changed in the source files.
Notice that every step of the Dockerfile is cached separately.

But if you update your code and build again, you will see that Docker no longer uses the cache from that step onward. It will still use the cache of the previous steps, because `package.json` and `package-lock.json` weren't updated.

Now run the command below to see your list of Docker images.
docker image ls

Running the Image Inside of a Container
To run the image, we use the `docker run` command, which creates a new container from the image and starts it.
If you run the `docker run --help` command, you will see all the parameters this command supports. We are going to use three options: `--detach`, `--publish`, and `--name`.
- The `--detach` option runs the container in detached mode, which means the container is created and runs in the background. The reason we do this is that we are not going to use the `docker run` command every time we start our container; it would create a new container on every run. After the container is created, we can use the `docker container start/stop` commands to start or stop it.
- The `--publish` option exposes a container port on the host machine. Just because the container is running doesn't mean you can access http://localhost:8000 from your computer, because the container runs in an isolated environment. So, you have to declare both the container port and the host port. The format of the `--publish` option is `[host port]:[container port]`. If we want to access http://localhost:8000 from our browser, it will be `8000:8000`; if we want to access http://localhost:4000, it will be `4000:8000`.
- The `--name` option assigns a name to your container so that we can refer to the container by that name later.
And lastly, we add the image name to tell Docker which image to run. The final command will look like this.
docker run -d --name nodejs-server -p 8000:8000 nodejs-docker
`-d` is the short form of `--detach` and `-p` is the short form of `--publish`. Run this command and it will output the ID of the new container. Then visit http://localhost:8000 to check that your server is running.
If you run `docker container ls`, you will see the list of created containers.

Now try running the `docker container stop nodejs-server` command. It will stop your running web server. If you want to start it again, just use the `docker container start nodejs-server` command.
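As a quick reference, the container lifecycle commands used in this post look like this (the name comes from the `--name` flag we set earlier):

```sh
docker container stop nodejs-server    # stop the running server
docker container start nodejs-server   # start it again
docker container ls                    # list running containers
docker container ls --all              # include stopped containers too
```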
Conclusion
So, there you have it. That is how you run a Node.js web server with Docker. I can't even imagine running a backend project without Docker on my local machine these days, because installing and running databases, Redis, and all the other tools one by one is very time-consuming.
If you want to learn more about Docker, check out Docker Compose, another tool that lets you declare ports, names, images, etc. in a single file so you don't have to repeat them every time you create a container.
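As a taste of what that looks like, here is a minimal sketch of a `docker-compose.yml` for this project (a sketch only, reusing the image and port settings from above):

```yaml
services:
  web:
    build: .                # build from the Dockerfile in this directory
    container_name: nodejs-server
    ports:
      - "8000:8000"         # [host port]:[container port]
```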