Deploying and Scaling HTTP APIs with Docker and NGINX
You’re about to learn the easiest way to deploy and scale HTTP APIs in Docker. This post is for people who already know what Docker is and are familiar with basic Docker commands and concepts. It is meant to help you deploy and scale HTTP APIs, with emphasis on the scaling part, and I will explain everything that makes the scaling possible.
All the code snippets given here can be found in this repository. I recommend cloning the repository to easily follow along with the post.
How it Will Work
This diagram shows how the scaling will work:
- NGINX will act as the reverse proxy and load balancer for the API.
- All API containers and NGINX will be in the same user-defined bridge network.
- Docker DNS will be used to automatically resolve the IPs of the containers.
The Implementation Details That Actually Matter
In this section, we’ll cover the implementation details that make this setup work. Anything that is not mentioned here can be replaced and doesn’t matter to the deployment.
The API
err := http.ListenAndServe("simpleapi:23480", nil)
While the above snippet is in Go, it can be any language or framework; the only thing that matters is the host it is listening on. In this case, it is listening on the host simpleapi and the port 23480 (the port NGINX and the compose file below expect). The hostname should correspond to the service name in your docker-compose.yml file.
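For reference, here is a minimal runnable version of the server. The root handler and its response body are placeholders; only the ListenAndServe line comes from the snippet above:

package main

import (
    "log"
    "net/http"
)

func main() {
    // Placeholder handler; replace with your real routes.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello from simpleapi\n"))
    })

    // Listen on the compose service name so Docker's DNS can route traffic here.
    err := http.ListenAndServe("simpleapi:23480", nil)
    if err != nil {
        log.Fatal(err)
    }
}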
When you use the service name, Docker will automatically assign an IP to this container in the specified network and register it with Docker’s DNS server. When there are multiple containers, Docker registers them all under the same DNS name and returns the IPs of the containers in round-robin fashion. This lets you easily distribute load across multiple containers without worrying about any network configuration.
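You can observe this yourself from inside any container on the network. Here is a small sketch using Go’s standard resolver; run it inside the network and it should print one address per simpleapi replica:

package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    // Inside the Docker network, this query goes to Docker's embedded
    // DNS server, which knows every replica registered as "simpleapi".
    addrs, err := net.LookupHost("simpleapi")
    if err != nil {
        log.Fatal(err)
    }
    for _, addr := range addrs {
        fmt.Println(addr)
    }
}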
NGINX Config File
The proxy_pass directive in the NGINX config is what connects us to all the simpleapi containers running in the network. As mentioned earlier, Docker’s DNS takes care of the address resolution for us.
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 11011;

        location / {
            proxy_pass http://simpleapi:23480;
        }
    }
}
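If you want to sanity-check the config before deploying, you can run NGINX’s built-in syntax test against it, using the same image and mount as the compose file below:

docker run --rm -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" nginx:latest nginx -t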
Docker Compose File
The compose file itself just puts things together. We make sure that both the simpleapi service and nginx are running in the same network.
services:
  simpleapi:
    image: simpleapi
    restart: always
    deploy:
      mode: replicated
      replicas: 5
    networks:
      - simpleapi-network
    ports:
      - "23480"

  simpleapi-nginx:
    image: nginx:latest
    restart: always
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - simpleapi
    ports:
      - "11011:11011"
    networks:
      - simpleapi-network

networks:
  simpleapi-network:
The entire setup can be deployed with docker-compose up --build. You might see some service containers exiting with status code 0 at first. This happens when a container resolves the service name to an address that is not its own and cannot bind it; thanks to restart: always, the containers keep retrying until each one binds its own address and stays up.
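A typical session then looks something like this (the scale count is just an example; it overrides the replicas value from the compose file):

# build the image and bring everything up in the background
docker-compose up --build -d

# send a request through NGINX
curl http://localhost:11011/

# scale the API to more replicas
docker-compose up -d --scale simpleapi=10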
gRPC API
For deploying a gRPC API, the only thing you’ll need to change is the NGINX config:
server {
    listen <listen_port> http2;

    location / {
        grpc_pass grpc://<service_name>:<service_port>;
    }
}
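On the application side, the server changes in the usual way for gRPC. Here is a minimal sketch using google.golang.org/grpc; the service registration is a placeholder for your generated code, and the host and port follow the same service-name convention as the HTTP example:

package main

import (
    "log"
    "net"

    "google.golang.org/grpc"
)

func main() {
    // As with the HTTP server, listen on the compose service name.
    lis, err := net.Listen("tcp", "simpleapi:23480")
    if err != nil {
        log.Fatal(err)
    }

    s := grpc.NewServer()
    // Register your generated service implementation here, e.g.:
    // pb.RegisterSimpleAPIServer(s, &simpleAPIServer{})

    if err := s.Serve(lis); err != nil {
        log.Fatal(err)
    }
}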