Deploying a Node.js App to a DigitalOcean Droplet with Docker

Introduction

JavaScript has come a long way over the years, and we're now at a point where you can write and deploy a web application very easily. Frameworks like Express, Sails, and Meteor have only made this easier.

Following most tutorials on the internet means you'll be working on your local machine with a local project. But what if we'd like to share the project with the world, or our friends? Today we're going to be looking at how to deploy a Node.js app to a DigitalOcean Droplet, so that anybody on the internet can interact with it.

Prerequisites

Docker

We'll be using Docker to containerize our application into a small, easily deployable unit. This unit can be deployed anywhere where Docker is installed.

Create an account on Docker Hub, and install Docker itself - Docker Desktop on Windows and macOS, or Docker Engine on Linux. We'll use this account later!

DigitalOcean

DigitalOcean is a paid hosting service. We'll be using their $5 a month Droplet, which we can destroy as soon as we're done to minimize costs, but you'll need to give DigitalOcean a payment method before using it.

Node Application

For this, we're going to create a simple Express app that serves a status endpoint for us to hit and find out if our app is running. On the command line, let's create a directory:

$ mkdir digital-ocean

And then move into the directory and initialize our app:

$ cd digital-ocean
$ npm init

Feel free to hit ENTER to accept the defaults for all of the following questions, or fill in a package name/description if you feel like it.

For the purposes of this tutorial, we'll assume that "entry point" is left as index.js. You should end up with something that looks like this:

package name: (digital-ocean)
version: (1.0.0)
description:
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)

{
  "name": "digital-ocean",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}


Is this OK? (yes)

If you look in the directory now (ls on the command line), you'll see a lonely package.json file. This contains the configuration for our app. Let's hop in there and add a line to the "scripts" block:

{
  "name": "digital-ocean",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

This allows us to start our app by running npm start. Scripts are super useful for setting up tasks that you'll perform frequently with an application, such as testing or various build processes.
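
For example, npm start is just shorthand for npm run start, and any other script in the block can be run with npm run <name>:

$ npm start        # runs the "start" script, i.e. node index.js
$ npm run start    # identical to the above
$ npm test         # shorthand for npm run test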

Next, we want to install Express:

$ npm install express

And finally, let's write the code that'll serve our /status endpoint. In the same directory as our package.json file, create a file called index.js:

const express = require('express')
const app = express()
const port = 3000

app.get('/status', (req, res) => res.send({status: "I'm alive!"}))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

Now let's test our application by running:

$ npm start

Open a web browser and navigate to http://localhost:3000/status, and you should be greeted with something like this:

{"status":"I'm alive!"}

We now have a working Express application, which we can bundle up and deploy using Docker and a DigitalOcean Droplet.

Dockerizing a Node.js Application

We now have a working application, but we want to be able to deploy it. We could create a server and then set it up to have the exact same configuration as our current machine, but that can be fiddly. Instead, let's package it using Docker.

How Docker Works

Docker allows us to define a set of instructions that create what are called layers. If you want to imagine what a layer looks like, imagine your filesystem frozen at a moment in time. Each new layer is a modification or addition to that filesystem, which is then frozen again.

These compositions of layers on top of each other form what's known as an image, which is essentially a filesystem in a box, ready to go.

This image can be used to create containers, which are living versions of that filesystem, ready to run a task that we define for it.

Another useful aspect of this is that we can use pre-made images as the first layer in our own images, giving us a jumpstart by avoiding boilerplate configurations.
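
To see this in action, and assuming Docker is already installed and running, you can pull a pre-made image and start a throwaway container from it:

# Download the official Node.js image (a ready-made composition of layers)
$ docker pull node:13-alpine

# Start a container from that image, run a single command inside it, and clean up afterwards
$ docker run --rm node:13-alpine node --version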

Building a Docker Image

The first thing we'll want to do is create a Dockerfile. This file is a set of instructions for Docker to interpret in order to understand exactly how to package your application as an image.

In your project folder, create a file called Dockerfile with the following instructions:

FROM node:13-alpine

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000
CMD [ "npm", "start" ]

There are a few components here, so let's walk through it line by line:

  • FROM node:13-alpine: Tells Docker to use another image as the base layer of our own - in this case, an image with Node.js 13 installed on top of Alpine Linux, a lightweight distribution.

  • WORKDIR /usr/src/app: Tells Docker the folder that it should be performing the following commands in.

  • COPY package*.json ./: Tells Docker to copy only package.json & package-lock.json into the Docker image. We do this because Docker can cache compositions of layers - meaning that if nothing changes in our package.json, we can just pull a composition of layers that we've already built before.

  • RUN npm install: Does what it says on the tin, and runs the npm install command to create a new layer of the image with all of our modules installed. Again, if nothing has changed in package.json or package-lock.json, Docker will reuse the cached layer instead of reinstalling everything.

  • COPY . .: Copies the rest of the application into the filesystem. As the application is likely to change more frequently (i.e. every time you make a code change), it makes sense to make this one of the last layers for caching purposes. (To keep local files like node_modules out of the image, see the .dockerignore note just after this list.)

  • EXPOSE 3000: Documents that the container listens on port 3000 - the actual publishing of the port happens with the -p flag when we run the container.

  • CMD [ "npm", "start" ]: Runs npm start on instantiation of the container, and runs our app inside of it.

Running our Docker Build

Now that we've got our instructions written, let's actually build our image! In the same directory as your Dockerfile, run:

$ docker build . -t digital-ocean-app

This builds an image, and then gives it a specific name or 'tag' - in this instance, it's digital-ocean-app. To test that our app works, let's run it locally with:

$ docker run -p 3000:3000 digital-ocean-app

This will run our Docker image as a container, and execute the CMD part of our Dockerfile.

The -p 3000:3000 section does what's known as port mapping. The number before the colon is the port on our local machine that we want to map, and the number after is the port inside the container that it should route to.

This means that port 3000 on our machine will now connect to port 3000 inside the Docker container that our application is running in.
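
For example, if port 3000 were already taken on your machine, you could map a different local port to the container's port 3000 and browse to that instead:

$ docker run -p 8080:3000 digital-ocean-app
# The app is now reachable at http://localhost:8080/status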

To test this, open up your browser and navigate back to http://localhost:3000/status and you should see your status endpoint.

Publishing the Image to Docker Hub

Now that we have our packaged Docker image, we need to store it somewhere that we can pull it back down from. You'll need to log back into Docker Hub, and then click 'Create Repository'. Much like how Git repositories allow us to store our version-controlled Git projects, Docker repositories allow us to store our Docker images.

You'll need to fill out the name of the repository, as well as an optional description, and choose whether it's a public or private repository (i.e. whether you need to be logged in as an authorized user to pull the image).

For now, leave it at public, as it'll make our life easier when we try to deploy to DigitalOcean. Finally, scroll to the bottom and hit 'Create'.

Back on the command line, we need to tag our image before we push it:

$ docker tag digital-ocean-app <USER_NAME>/digital-ocean-app

We'll need to replace the <USER_NAME> section with our Docker Hub username. Optionally, if we want to specify that we're pushing a specific version of our image, we can do:

$ docker tag digital-ocean-app <USER_NAME>/digital-ocean-app:<VERSION>

The <VERSION> is called the 'image tag' - we could put a number there (1.0, 1.1, etc.) to represent releases, or even describe an environment (dev, staging, prod). I tend to use the Git commit hash so I know exactly what I'm running, and can compare against my commit history.
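
For instance, assuming your project lives in a Git repository, a tag based on the short commit hash could be generated like this (the exact scheme is entirely up to you):

$ docker tag digital-ocean-app <USER_NAME>/digital-ocean-app:$(git rev-parse --short HEAD)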

If you don't specify a tag, Docker applies :latest by default, so pushing without a version creates or updates the :latest image in your repository. Note that :latest simply reflects the last push that used that tag - it isn't automatically kept in sync with your newest versioned release.

To be able to push to our repository, we'll need to log in:

$ docker login

Enter your Docker Hub credentials.

Once you're successfully logged in you'll be able to push your image with:

$ docker push <USER_NAME>/digital-ocean-app:<OPTIONAL_VERSION>

Deploying to DigitalOcean

Finally, we can deploy our dockerized app onto DigitalOcean. First, let's create an account.

You'll have to give a few personal details, including payment details, as well as setting up an initial project. Feel free to just give it a name, but if you plan on doing anything extensive then select a few of the options to optimize your setup.

Once finished, you'll get redirected to the root page for your project. On the left-hand side is a toolbar with several options. Feel free to explore - DigitalOcean is good at letting you know if something you're about to do will cost you.

Creating an SSH Key

Before we do anything, we'll need to create an SSH key and upload the public part to DigitalOcean. SSH keys come in two parts, a public key and a private key.

A private key is used to authenticate a user to a system. The system holds the corresponding public key and uses it to verify that the user really possesses the matching private key, without the private key ever being transmitted. If the check passes, the two keys belong to the same pair, and so the user can be trusted.

DigitalOcean will want a public key that it can place on any Droplets we start, so that we can access them with a key that we know only we have.

Let's create an SSH keypair now:

$ ssh-keygen -t rsa -b 4096

This command should work on Windows, Linux, and macOS.

This will ask you for a file in which to save the key - you can call it something like digital-ocean-key.

It'll also ask for a passphrase - feel free to set one, or leave it empty. If you saved the key in the folder we've been working in, you'll see two new files - digital-ocean-key and digital-ocean-key.pub - these are your private and public keys, respectively.
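
If you'd rather skip the interactive prompts, ssh-keygen also accepts the output file and an optional comment as flags - for example:

# -f sets the output file, -C attaches a comment (often an email address) to the public key
$ ssh-keygen -t rsa -b 4096 -f digital-ocean-key -C "you@example.com"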

Adding the SSH Key to your DigitalOcean Account

In your DigitalOcean account, on the bottom left-hand side, there is a link for 'Security'. Follow this link, and the next page will have an option to add an SSH key.

Click 'Add an SSH key' and you'll be presented with a dialog to enter your key. Simply copy the contents of your digital-ocean-key.pub into the large text box (you can get the contents printed to the terminal with cat digital-ocean-key.pub).
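
Depending on your platform, and assuming the relevant tool is available, you can also copy the public key straight to the clipboard:

$ pbcopy < digital-ocean-key.pub                      # macOS
$ xclip -selection clipboard < digital-ocean-key.pub  # Linux (requires xclip)
$ clip < digital-ocean-key.pub                        # Windows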

In the smaller box below it, give that key a name.

Setting Up a DigitalOcean Droplet

Once you've added your SSH key, click on the 'Droplets' link on the left, and then on the next page click 'Create Droplet'.

In DigitalOcean, a Droplet is a private virtual server that can be easily configured and used to run your applications.

On this page, you'll be presented with a number of options for configuring your DigitalOcean Droplet, including the distribution, the plan, the size/cost per month, region, and authentication.

Instead of selecting a distribution and configuring it ourselves, we're going to get DigitalOcean to create a Droplet that already has Docker running on it for us.

Click on the 'Marketplace' tab above the various Linux distributions - this is where you can find existing configurations - Droplet images that, when started, come with the described software already installed.

This is a real time saver, and means that we can start up multiple instances with the exact same configuration if we wanted to, instead of having to individually configure them all.

There should be an option for a Docker Droplet. If not, click on 'See all Marketplace Apps', and you'll be able to find a suitable Docker configuration there.

Under 'Plan', we want to select 'Standard'. Let's select the $5 a month option, for demonstration purposes.

Feel free to choose whichever region is suitable for you - generally the closest one will be easiest to access, but it shouldn't have a massive impact.

Under Authentication, select 'SSH Key', and select which keys you would like to use (like the one you created in the last step). You can also name your Droplet if you wish. When you're finished, click 'Create Droplet' at the bottom.

Wait a minute for your Droplet to start up. It'll appear under the 'Droplets' panel with a green dot next to it when it is up and ready. At this point, we're ready to connect to it.

Running Docker Images on DigitalOcean Droplets

Click on the started Droplet, and you'll see details about it. At the moment, we're interested in the IP address - this is the address that the Droplet is at on the internet.

To access it, we'll need to connect to it using our previously created private key. From the same folder as that private key, run:

$ ssh -i digital-ocean-key root@<IP_ADDRESS>

The -i digital-ocean-key flag specifies the SSH key you're using and where it is located. The root@<IP_ADDRESS> part specifies the user and the address you're attempting to connect to - in this instance, the user is root, and the IP address is the address of your DigitalOcean Droplet.

Once you're connected to the instance, it's just a simple matter of running your Docker image. If you left it in a public repository then you can run it easily using the same name you used to push it:

$ docker run -p 3000:3000 <DOCKER_USER_NAME>/digital-ocean-app:<OPTIONAL_TAG>

If you put it in a private repository, you'll need to log in with docker login again before running your docker run command.
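
One practical note: started like this, the container runs in the foreground of your SSH session, and pressing CTRL+C will stop it. If you'd like the app to keep running in the background (and come back up if the Droplet reboots), one option is to run the container detached with a restart policy:

$ docker run -d --restart unless-stopped -p 3000:3000 <DOCKER_USER_NAME>/digital-ocean-app:<OPTIONAL_TAG>

# Check that it's running, and view its logs if needed
$ docker ps
$ docker logs <CONTAINER_ID>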

Once your Docker container is running, open up a tab in your browser and navigate to <IP_ADDRESS>:3000/status - where <IP_ADDRESS> is the IP address of the DigitalOcean Droplet that you're running the container on.

You should be presented with your previously created status endpoint - congratulations! You now have a Node.js app running on DigitalOcean!

Conclusion

There are a few directions you can go in from here. First, you'll probably want to build out your application so that it does more useful things.

You might want to look into buying a domain to host your app at, and pointing that at your DigitalOcean Droplet so that it's easier to access.

I'd also recommend exploring some of the rest of DigitalOcean's capabilities - you could use some of their networking tools to secure your Droplet by restricting access to the SSH port (22), explore some of the other images available on Docker Hub, or even look at spinning up databases for your application!
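
As a rough sketch of that last networking idea - assuming an Ubuntu-based Droplet with ufw available, and being careful to allow SSH from your own IP before enabling the firewall - restricting access could look something like this (DigitalOcean's Cloud Firewalls are an easier-to-adjust alternative, and note that ports published by Docker with -p bypass ufw):

# Allow SSH only from your own IP address, allow the app's port, then turn the firewall on
$ ufw allow from <YOUR_IP_ADDRESS> to any port 22 proto tcp
$ ufw allow 3000/tcp
$ ufw enable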