Thursday 20 April 2017

Get Started, Part 3: Services

Prerequisites

  • Install Docker.
  • Read the orientation in Part 1.
  • Learn how to create containers in Part 2.
  • Make sure you have pushed the container you created to a registry, as instructed; we’ll be using it here. (A reminder of the push commands appears just after this list.)
  • Ensure your image is working by running this and visiting http://localhost/ (slotting in your info for username, repo, and tag):
    docker run -p 80:80 username/repo:tag
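If you have not pushed the image yet, the commands from Part 2 look like this. This is only a reminder sketch: it assumes the local image was built with the name friendlyhello, as in that walkthrough, so substitute your own Docker ID, repository name, and tag:
    docker login                                  # log in to your registry (Docker Hub by default)
    docker tag friendlyhello username/repo:tag    # tag the local image for the registry
    docker push username/repo:tag                 # upload it so it can be pulled from anywhere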
    

Introduction

In part 3, we scale our application and enable load-balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.
  • Stack
  • Services (you are here)
  • Container (covered in part 2)

Understanding services

In a distributed application, different pieces of the app are called “services.” For example, if you imagine a video sharing site, there will probably be a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on.
A service really just means, “containers in production.” A service only runs one image, but it codifies the way that image runs – what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.
Luckily it’s very easy to define, run, and scale services with the Docker platform – just write a docker-compose.yml file.

Your first docker-compose.yml file

A docker-compose.yml file is a YAML file that defines how Docker containers should behave in production.

docker-compose.yml

Save this file as docker-compose.yml wherever you want. Be sure you have pushed the image you created in Part 2 to a registry, and use that info to replace username/repo:tag:
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
This docker-compose.yml file tells Docker to do the following:
  • Run five instances of the image we uploaded in Part 2 as a service called web, limiting each one to use, at most, 10% of the CPU (across all cores) and 50MB of RAM.
  • Immediately restart containers if one fails.
  • Map port 80 on the host to web’s port 80.
  • Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves will publish to web’s port 80 at an ephemeral port.)
  • Define the webnet network with the default settings (which is a load-balanced overlay network).
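Before deploying, you can optionally sanity-check the file. One way to do this (assuming the docker-compose command-line tool is installed alongside Docker) is to have it parse and print the resolved configuration; YAML and syntax errors then show up here rather than at deploy time:
docker-compose -f docker-compose.yml config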

Run your new load-balanced app

Now let’s run it. You have to give your app a name – here it is set to getstartedlab:
docker stack deploy -c docker-compose.yml getstartedlab
Note: If you get an error that “this node is not a swarm manager,” go ahead and run docker swarm init and then retry. We’ll get into the meaning of that command in part 4.
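In that case, the recovery is just the two commands mentioned above, run in sequence:
docker swarm init
docker stack deploy -c docker-compose.yml getstartedlab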
See a list of the five containers you just launched:
docker stack ps getstartedlab
You can run curl http://localhost several times in a row, or go to that URL in your browser and hit refresh a few times. Either way, you’ll see the container ID change, demonstrating the load balancing; with each request, one of the five replicas is chosen, in round-robin fashion, to respond.
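If you prefer to stay in the terminal, a small shell loop makes the rotation easy to see. This is just a sketch: it assumes the app from Part 2, whose response includes the container’s hostname, so adjust the grep pattern if your app prints something different:
for i in 1 2 3 4 5; do curl -s http://localhost | grep -i hostname; done
You can also confirm that the service is running at its desired scale with docker service ls, which shows a replica count (such as 5/5) for every service in the swarm.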

Scale the app

You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:
docker stack deploy -c docker-compose.yml getstartedlab
Docker performs an in-place update; there is no need to tear the stack down first or kill any containers.
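For example (the new replica count here is only an illustration), you might raise replicas from 5 to 7 in docker-compose.yml:
    deploy:
      replicas: 7
Then redeploy and verify:
docker stack deploy -c docker-compose.yml getstartedlab
docker stack ps getstartedlab
The second command should now list seven tasks for the web service, and http://localhost is load-balanced across all of them.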

Take down the app

Take the app down with docker stack rm:
docker stack rm getstartedlab
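This removes the app, but the one-node swarm you created with docker swarm init is still running (docker node ls will show it). If you also want to dismantle the swarm itself, you can do so with:
docker swarm leave --force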
It’s as easy as that to stand up and scale your app with Docker. You’ve taken a huge step towards learning how to run containers in production. Up next, you will learn how to run this app on a cluster of machines.
Note: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using Docker Cloud, or on any hardware or cloud provider you choose with Docker Enterprise Edition.
