Create An Nginx Reverse Proxy With Docker

How To Run Multiple Docker Containers Under One URL

Manny
8 min read · Jul 21, 2019

What Is A Reverse Proxy?

“A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers.” — Wikipedia

A reverse proxy is a middleman: the user (client) makes a request to the proxy, and the proxy makes requests to other servers on the client’s behalf and returns their results.

A&W Reverse Proxy Metaphor

A metaphor for this is a fast food chain. Let’s use A&W, because I love their Beyond Meat Burgers. You, the user (client), approach the counter and place an order (request) with the sales clerk (proxy), who accepts your order, relays it to the cooks (other servers) in the back, and then hands the food back to you at the same counter where you ordered.

Why Would You Need This?

The main reason for doing this is so that everything is hosted under one domain name or IP address on port 80, and users don’t have to specify special port numbers when making requests to the frontend, backend, or other services.

Another reason is to avoid CORS issues: because requests from the frontend come from the same origin as the backend, no additional CORS configuration is needed on the backend.

Example: What We’re Avoiding

# Domain Name: http://yourdomain.com
# Frontend:    http://yourdomain.com:3001
# Backend:     http://yourdomain.com:5000

Example: What We’re Trying To Achieve

# Domain Name: http://yourdomain.com
# Frontend:    http://yourdomain.com
# Backend:     http://yourdomain.com/api

Requirements

  • Docker CE 18.09.2 or higher

That’s it! As long as you have Docker installed, you’re set. You might also want a text editor on your host, but everything here can be done on macOS or Linux as long as Docker is installed.

Structure

We’re going to structure this as three (3) Docker containers running on the same network, with only the reverse proxy exposed to the client.

# Container A: nginx-proxy
# Container B: frontend
# Container C: backend
# Request Frontend -> A <- B
# Request Backend  -> A <- C

Creating Our Backend Container

For this, we’re just going to create a simple NodeJS backend API that exposes nothing more than its version number.

We’re going to borrow the NodeJS alpine Docker image and run the following:

# Notice that I didn't specify a port with -p
docker run -it -d --name backend node:10.15.3-alpine;
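Before moving on, it’s worth confirming that the container actually came up. A quick check (the --filter flag just narrows the list to our container name):

# List running containers, filtered by name
docker ps --filter "name=backend";
# The STATUS column should read something like "Up X seconds"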

Creating NodeJS Application

Next we’ll enter the container, download the necessary dependencies, and start our NodeJS server.

docker exec -it backend /bin/sh;

We’ll need an editor, and I like nano so we’ll install it:

apk add nano;

Next we’ll go into the /home/node directory and create our project there:

cd /home/node;
npm init;
npm install express --save;
touch index.js;
nano index.js;
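Note: npm init will ask a handful of questions; the defaults are fine for this walkthrough. If you’d rather skip the prompts entirely, it also accepts a -y flag:

# accept all defaults without prompting
npm init -y;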

In our file we’ll add the following:

/home/node/index.js

const express = require('express');
const app = express();
const port = 5000;
const version = '1.0.0';
app.get('/', (req, res) => res.send({ version }));
app.listen(port, () => console.log(`Listening on port ${port}`));

To save it, press ctrl + x, then y, and then enter.

Let’s run it now:

node index.js;

If we open up http://localhost:5000 in a browser on the host, we’ll notice that nothing is showing.
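That’s expected: we started the backend without -p, so port 5000 is only reachable from inside Docker’s network, not from the host. For comparison only (don’t run this for our setup), direct access from the host would have required publishing the port:

# NOT part of this setup -- shown only to illustrate what -p would do
docker run -it -d -p 5000:5000 --name backend node:10.15.3-alpine;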

Testing Initial NodeJS Application

To test if our server is running, let’s open up a new Terminal window and enter our docker container again:

docker exec -it backend /bin/sh;

To test this, we’ll need to add curl to our container:

apk add curl;

Then to test it we should run:

curl localhost:5000;
# Expected Output
# {"version":"1.0.0"}

Hurray! It’s working.

Let’s detach from our containers by pressing ctrl + p and then ctrl + q in each terminal (detaching leaves the node server running).
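If detaching doesn’t work in your terminal, or the node process stops when you leave the shell, you can start the server again in the background without keeping a terminal attached (docker exec’s -d flag runs the command detached):

# start the server detached inside the backend container
docker exec -d backend node /home/node/index.js;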

Creating Our Frontend Container

Our next step is to create a static HTML frontend with JavaScript that makes a request to the backend to retrieve the version number.

Note: The HTTP request will NOT work until we’ve set up the reverse proxy.

Setting Up Container

For this we’ll borrow a simple nginx docker image and run the following:

docker run -it -d --name frontend nginx:stable-alpine;

Next we’re going to enter the docker container, and add a simple HTML page with some JavaScript which makes a request to our backend.

docker exec -it frontend /bin/sh;

Let’s first check that nginx is already serving its default page by using curl:

apk add curl;
curl localhost;
# Expected Output
# <!DOCTYPE html>
# <html>
# <head>
# <title>Welcome to nginx!</title>
# <style>
# body {
# width: 35em;
# margin: 0 auto;
# font-family: Tahoma, Verdana, Arial, sans-serif;
# }
# </style>
# </head>
# <body>
# <h1>Welcome to nginx!</h1>
# <p>If you see this page, the nginx web server is successfully installed and
# working. Further configuration is required.</p>
# <p>For online documentation and support please refer to
# <a href="http://nginx.org/">nginx.org</a>.<br/>
# Commercial support is available at
# <a href="http://nginx.com/">nginx.com</a>.</p>
# <p><em>Thank you for using nginx.</em></p>
# </body>
# </html>

Creating Vanilla JavaScript Frontend

Let’s create our own application by removing this index.html, adding nano, and creating our own code:

# add nano
apk add nano;
# change directories
cd /usr/share/nginx/html;
# remove index.html
rm index.html;
# create new index.html
touch index.html;
# edit file
nano index.html;

Our file should look something like this:

index.html

<!DOCTYPE html>
<html>
  <head>
    <title>Frontend</title>
    <script>
      window.onload = function () {
        fetch('/api', { method: 'get' }).then((response) => {
          const json = response.json();
          if (response.ok) {
            return json;
          }
          return Promise.reject(new Error('Something went wrong.'));
        })
        .then((response) => {
          document.getElementById('version').innerHTML = JSON.stringify(response);
        }).catch((error) => {
          document.getElementById('error').innerHTML = error && error.message || 'Something else went wrong.';
        });
      };
    </script>
  </head>
  <body>
    <h1>My Application Version</h1>
    <p id="version"></p>
    <p id="error"></p>
  </body>
</html>

Save it with ctrl + x, then y, and then enter.
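As an aside, if editing files inside the container feels clunky, you could write index.html on your host machine instead and copy it in with docker cp (assuming you run this from the directory containing the file):

# copy a local index.html into the frontend container
docker cp index.html frontend:/usr/share/nginx/html/index.html;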

Testing Frontend Application

curl should now return the page we just wrote:

curl localhost;
# Expected output should be the HTML above

Communicating Between Containers

Now we have two containers running with no ports exposed to the client, but we need them to communicate with each other. To do that, we’ll put them all on the same network. This isn’t really so the frontend can talk to the backend behind the scenes; it’s so the reverse proxy can identify the containers by name and point the right URLs at them.

Adding Containers To Same Network

First let’s create our network:

docker network create mynetwork;

Next we’ll add the containers by their names:

# Connect backend
docker network connect mynetwork backend;
# Connect frontend
docker network connect mynetwork frontend;
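As a side note, connecting containers after the fact isn’t the only option. If we were creating the frontend from scratch, we could have attached it to the network right away, which is exactly what we’ll do for the proxy later:

# equivalent shortcut when creating a container
docker run -it -d --network=mynetwork --name frontend nginx:stable-alpine;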

Let’s see if they have been added by running:

docker network inspect mynetwork;
# Expected output should have the container names listed under "Containers"
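If you only want the names, docker can also print just those using its Go-template formatting (a small convenience; the template ranges over the "Containers" entry of the inspect output):

# print only the names of the containers attached to the network
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' mynetwork;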

Testing Our Network

To see if cross-container communication is happening, let’s enter one of the containers and make a request to the other container by its name:

# Enter container
docker exec -it frontend /bin/sh;
# Make a request to the backend
curl http://backend:5000;
# Expected Output
# {"version":"1.0.0"}

Cross communication is working!

Configuring Nginx Container (Reverse Proxy)

This next part involves using the same nginx image but making some minor changes to its default.conf configuration file.

Start by creating the container but exposing port 80 this time and adding it right away to the network:

docker run -it -d -p 80:80 --network=mynetwork --name proxy nginx:stable-alpine;

Configuring Nginx Settings

Next we’ll enter the container and start configuring the settings to work with the frontend and backend.

docker exec -it proxy /bin/sh;
# go to the main configuration directory
cd /etc/nginx/conf.d;

Let’s see what we’re dealing with by running:

cat default.conf;
# ...
# location / {
#     root   /usr/share/nginx/html;
#     index  index.html index.htm;
# }
# ...

We’re going to modify this, but first we need nano:

apk add nano;
nano default.conf;

We’ll modify the file so that two location routes are defined:

default.conf

...
location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
    proxy_pass http://frontend;
}

location /api {
    proxy_pass http://backend:5000/;
}
...

Take note that it’s http://backend:5000/ and NOT http://backend:5000. The trailing slash matters: with it, nginx replaces the matched /api prefix, so a request to /api reaches the backend as /. Without it, the backend would receive /api, which our Express app doesn’t handle.

Save the file and restart nginx by doing:

nginx -s reload;
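As a side note, nginx can also validate the configuration without applying it, which is handy for catching typos before a reload:

# check configuration syntax
nginx -t;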

Testing Out Connections

First we’ll install curl and then make requests to the containers on the same network again.

apk add curl;
# Original frontend
curl http://frontend;
# Should be the same
curl http://localhost;
# Original backend
curl http://backend:5000;
# Should be the same backend
curl http://localhost/api;

Seeing that this works, we can now test it in the browser, because the proxy is published on port 80.

Backend

Proxied Backend Exposed On http://localhost/api

Frontend

Proxied Frontend Exposed On http://localhost

Our frontend is communicating with the backend through one URL, YAY!

Creating A Dockerfile

To automate this process a little, we can also create a Dockerfile that takes our configuration and builds the container without the need to configure things manually. For this we’ll create two files: a default.conf file that can simply be copied in, and a Dockerfile that takes that file and builds the image.

default.conf

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        proxy_pass http://frontend;
    }

    location /api {
        proxy_pass http://backend:5000/;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Dockerfile

FROM nginx:stable-alpine

COPY default.conf /etc/nginx/conf.d

EXPOSE 80/tcp
EXPOSE 443/tcp

WORKDIR /usr/share/nginx/html

CMD ["/bin/sh", "-c", "exec nginx -g 'daemon off;';"]
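With those two files sitting in the same directory, building and running the proxy would look roughly like this (the image name nginx-proxy is just an example, and you’d need to remove the earlier proxy container first or pick a different container name):

# build the image from the Dockerfile in the current directory
docker build -t nginx-proxy .;
# run it, publishing port 80 and joining the existing network
docker run -it -d -p 80:80 --network=mynetwork --name proxy nginx-proxy;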

Taking It Further

This is a base for how you can accomplish reverse proxies with Docker, but there are other things you can do to take it further.

Adding SSL Support

You could use Let’s Encrypt to create an SSL certificate on the proxy, covering all of the connections it proxies.

React Frontend

You could build a frontend with Docker which communicates with your backend. I highly recommend checking out my article Deploying ReactJS With Docker.

NodeJS Backend

You could build an entire REST API with NodeJS and Docker.

Docker Compose

We could automate the entire process with Docker Compose, so that all three containers are set up and running from a single configuration.
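Assuming you had written a docker-compose.yml describing the three services (proxy, frontend, and backend) on a shared network, bringing everything up and tearing it down becomes a single command each (a sketch of the workflow, not a full configuration):

# build (if needed) and start all services in the background
docker-compose up -d --build;
# stop and remove everything again
docker-compose down;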

Kubernetes Orchestration

Taking this even further, you could add orchestration with Kubernetes, which may not require a hand-rolled proxy like this at all.

Final Thoughts

If you got value from this, please share this, comment, and give feedback. Programming is always an ongoing process and I’ll admit that even I’m still learning.

Thanks again for reading!

Please share it on Twitter 🐦 or other social media platforms. 🙏

Please also follow me on Twitter: @codingwithmanny and Instagram at @codingwithmanny.
