Monday, 24 April 2017

Creating the Dockerfile to Automatically Build the Image

Dockerfile Basics

Dockerfiles are scripts containing successive commands which Docker executes, in order, to build a new Docker image automatically. They help greatly with deployments.
These files always begin by defining a base image with the FROM command. From there on, the build process starts, and each following action forms the final image, which is committed on the host.
Usage:
# Build an image using the Dockerfile at current location
# Tag the final image with [name] (e.g. nginx_img)
# Example: sudo docker build -t [name] .
sudo docker build -t nginx_img . 
Note: To learn more about Dockerfiles, check out our article: Docker Explained: Using Dockerfiles to Automate Building of Images.

Dockerfile Commands Overview

Add

Copy a file from the host into the image

CMD

Set default commands to be executed, or passed to the ENTRYPOINT

ENTRYPOINT

Set the default entrypoint application inside the container

ENV

Set an environment variable (e.g. ENV key value)

EXPOSE

Expose a port to the outside world (published at run time with -p)

FROM

Set the base image to use

MAINTAINER

Set the author / owner data of the Dockerfile

RUN

Run a command and commit the resulting (container) state as a new image layer

USER

Set the user that subsequent instructions and containers from the image run as

VOLUME

Create a mount point for a volume (e.g. a directory to be mounted from the host)

WORKDIR

Set the working directory in which the directives of CMD (and RUN) are executed
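
Taken together, a short hypothetical Dockerfile combining several of these commands might look like the following sketch (the file names and commands are placeholders):
# Hypothetical example combining several of the commands above
FROM ubuntu
MAINTAINER Maintainer Name

# RUN executes at build time and commits the result as a new layer
RUN apt-get update

# ADD copies a file from the host into the image
ADD app.py /srv/app.py

# EXPOSE declares the port the container will listen on
EXPOSE 80

# WORKDIR sets the directory in which CMD will run
WORKDIR /srv

# CMD sets the default command for containers built from this image
CMD python app.py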

Creating the Dockerfile

To create a Dockerfile at the current location using the nano text editor, execute the following command:
sudo nano Dockerfile
Note: Append all the following lines one after the other to form the Dockerfile.

Defining the Fundamentals

Let's begin our Dockerfile by defining the fundamentals, such as the base (FROM) image (i.e. Ubuntu) and the MAINTAINER.
Append the following:
############################################################
# Dockerfile to build Python WSGI Application Containers
# Based on Ubuntu
############################################################

# Set the base image to Ubuntu
FROM ubuntu

# File Author / Maintainer
MAINTAINER Maintainer Name

Updating the Default Application Repository for Installations

Let's make additional applications available by updating the default apt sources, just as we did in the previous section.
Append the following:
# Add the application resources URL
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list

# Update the sources list
RUN apt-get update

Installing the Basic Tools

After updating the default application repository sources list, we can begin our deployment process by getting the basic applications we will need.
Append the following:
# Install basic applications
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
Note: Although you are unlikely to ever need some of the tools above, we are getting them nonetheless - just in case.

Base Installation Instructions for Python and Basic Python Tools

For deploying Python WSGI applications, you are extremely likely to need some of the tools which we worked with before (e.g. pip). Let's install them now before proceeding with setting up the framework (i.e. your WAF) and your web application server (WAS) of choice.
Append the following:
# Install Python and Basic Python Tools
RUN apt-get install -y python python-dev python-distribute python-pip

Application Deployment

Given that we are building Docker images to deploy Python web applications, we can take full advantage of Docker's ADD command to copy the application repository, preferably together with a requirements.txt file, and get up and running in one single step.
Note: To package everything together and avoid repeating ourselves, an application folder structured similarly to the one below might be a good way to go.
Example application folder structure:
/my_application
    |
    |- requirements.txt  # File containing list of dependencies
    |- /app              # Application module
    |- app.py            # WSGI file containing the "app" callable
    |- server.py         # Optional: To run the app servers (CherryPy)
Note: To see about creating this structure, please roll back up and refer to the section Installing The Web Application and Its Dependencies.
Append the following:
# Copy the application folder inside the container
ADD /my_application /my_application
Note: If you want to deploy from an online git repository (e.g. GitHub), you can use the following command to clone:
RUN git clone [application repository URL]
Please do not forget to replace the URL placeholder with your actual one.

Bootstrapping Everything

After adding the instructions for copying the application, let's finish off with final configurations such as pulling the dependencies from the requirements.txt.
# Get pip to download and install requirements:
RUN pip install -r /my_application/requirements.txt

# Expose ports
EXPOSE 80

# Set the default directory where CMD will execute
WORKDIR /my_application

# Set the default command to execute
# when creating a new container
# i.e. using CherryPy to serve the application
CMD python server.py

Final Dockerfile

In the end, this is what the Dockerfile should look like:
############################################################
# Dockerfile to build Python WSGI Application Containers
# Based on Ubuntu
############################################################

# Set the base image to Ubuntu
FROM ubuntu

# File Author / Maintainer
MAINTAINER Maintainer Name

# Add the application resources URL
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list

# Update the sources list
RUN apt-get update

# Install basic applications
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential

# Install Python and Basic Python Tools
RUN apt-get install -y python python-dev python-distribute python-pip

# Copy the application folder inside the container
ADD /my_application /my_application

# Get pip to download and install requirements:
RUN pip install -r /my_application/requirements.txt

# Expose ports
EXPOSE 80

# Set the default directory where CMD will execute
WORKDIR /my_application

# Set the default command to execute    
# when creating a new container
# i.e. using CherryPy to serve the application
CMD python server.py
Again save and exit the file by pressing CTRL+X and confirming with Y.

Using the Dockerfile to Automatically Build Containers

As we first went over in the "basics" section, using a Dockerfile consists of calling it with the docker build command.
Since we are instructing docker to copy an application folder (i.e. /my_application) from the current directory, we need to make sure to have it alongside this Dockerfile before starting the build process.
This docker image will allow us to quickly create containers running Python WSGI applications with a single command.
To start using it, build a new container image with the following:
sudo docker build -t my_application_img . 
And using that image - which we tagged my_application_img - we can run a new container running the application with:
sudo docker run --name my_application_instance -p 80:80 -i -t my_application_img
Now you can visit the IP address of your droplet, and your application will be running via a docker container.
Example:
# Usage: Visit http://[my droplet's ip]
http://95.85.10.236/
Sample Response:
Hello World!

Building a Docker Container To Sandbox Python WSGI Apps

Let's begin!

Creating a Base Docker Container From Ubuntu

Using the docker run command, we will begin by creating a new container based on the Ubuntu image. We are going to attach a terminal to it using the -t flag and will have bash as the running process.
We are going to expose port 80 so that our application will be accessible from the outside. In the future, you might want to load-balance multiple instances and "link" containers to each other to access them using a reverse-proxy running container, for example.
sudo docker run -i -t -p 80:80 ubuntu /bin/bash
Note: After executing this command, docker might need to pull the Ubuntu image before creating a new container for you.
Remember: You will be attached to the container you create. In order to detach yourself and go back to your main terminal access point, run the escape sequence: CTRL+P followed by CTRL+Q. Being attached to a docker container is like being connected to a new droplet from inside another.
To attach yourself back to this container:
  1. List all running containers using "sudo docker ps"
  2. Find its ID
  3. Use "sudo docker attach [id]" to attach back to its terminal
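For example, with a hypothetical container ID, that would look like:
# List all running containers and note the ID
sudo docker ps

# Attach back to the container's terminal (replace the ID with your own)
sudo docker attach 8dbd9e392a96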
Important: Please do not forget that since we are in a container, all the following commands will be executed there, without affecting the host on which it resides.

Preparing the Base Container for the Installation

In order to deploy Python WSGI web applications inside a container - and the tools we need for the process - the relevant application repositories must be available for downloads. Unfortunately (and intentionally, to keep things simple) this is not the case with the default Ubuntu image that comes with docker.
Let's append Ubuntu's universe repository to the base image's default list of application sources.
echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
Update the list with the newly added source.
apt-get update
Before we proceed with setting up Python WSGI applications, there are some tools we should have such as nano, tar, curl, etc. - just in case.
Let's download some useful tools inside our container.
apt-get install -y tar \
                   git \
                   curl \
                   nano \
                   wget \
                   dialog \
                   net-tools \
                   build-essential

Installing Common Python Tools for Deployment

For our tutorial (as an example), we are going to create a very basic Flask application. After following this article, you can use and deploy your favorite framework instead, the same way you would deploy it on a virtual server.
Remember: All the commands and instructions below still take place inside a container, which acts almost as if it is a brand new droplet of its own.
Let's begin our deployment process with installing Python and pip the Python package manager:
# Install Python, python-dev, distribute (setuptools) and pip:
apt-get install -y python python-dev python-distribute python-pip

Installing The Web Application and Its Dependencies

Before we begin creating a sample application, we had better make sure that everything - i.e. all the dependencies - is in place. First and foremost, you are likely to have your Web Application Framework (WAF) as your application's dependency (i.e. Flask).
As we have pip installed and ready to work, we can use it to pull all the dependencies and have them set up inside our container:
# Download and install Flask framework:
pip install flask
With Flask installed, let's create a basic, sample Flask application inside a "my_application" folder, which is to contain everything.
# Make a my_application folder
mkdir my_application

# Enter the folder
cd my_application 
Note: If you are interested in deploying your application instead of this simple-sample example, see the "Quick Tip" mid-section below.
Let's create a single-page Flask "Hello World!" application using nano.
# Create a sample (app.py) with nano:
nano app.py
And copy-and-paste the contents below for this small application we have just mentioned:
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
Press CTRL+X and approve with Y to save and close.
Alternatively, you can use a "requirements.txt" file to list your application's dependencies, such as Flask.
To create a requirements.txt using nano text editor:
nano requirements.txt
And enter the following inside, alongside all your dependencies:
flask
cherrypy
Press CTRL+X and approve with Y to save and close.
Note: You can create a list of your actual application's dependencies using pip. To see how, check out our tutorial Common Python Tools: Using virtualenv, Installing with Pip, and Managing Packages.
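For example, pip can print the packages installed in your environment in exactly this requirements format:
# Save the active environment's packages as a requirements file
pip freeze > requirements.txt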
Our final application folder structure:
/my_application
    |
    |- requirements.txt  # File containing list of dependencies
    |- /app              # Application module (which should have your app)
    |- app.py            # WSGI file containing the "app" callable
    |- server.py         # Optional: To run the app servers (CherryPy)
Note: Please see the following section regarding the "server.py" - Configuring your Python WSGI Application .
Remember: This application folder will be created inside the container. When you are automatically building images (see the following section on Dockerfiles), you will need to make sure to have this structure on the host, alongside the Dockerfile.
Quick tip for actual deployments

How to get your application repository and its requirements inside a container

In the above example, we created the application directory inside the container. However, you will not be doing that to deploy your application. You are rather likely to pull its source from a repository.
There are several ways to copy your repository inside a container; two of them are explained below:
# Example [1]
# Download the application using git:
# Usage: git clone [application repository URL]
# Example:
# (the sample "flaskr" app lives in the repository's examples folder)
git clone https://github.com/mitsuhiko/flask.git

# Example [2]
# Download the application tarball:
# Usage: wget [application repository tarball URL]
# Example: (make sure to use an actual, working URL)
wget http://www.github.com/example_usr/application/tarball/v.v.x

# Expand the tarball and extract its contents:
# Usage: tar vxzf [tarball filename .tar (.gz)]
# Example: (make sure to use an actual, working URL)
tar vxzf application.tar.gz

# Download and install your application dependencies with pip.
# Download the requirements.txt (pip freeze output) and use pip to install them all:
# Usage: curl [URL for requirements.txt] | pip install -r -
# Example: (make sure to use an actual, working URL)
curl http://www.github.com/example_usr/application/requirements.txt | pip install -r -

Configuring your Python WSGI Application

To serve this application, you will need a web server. The web server, which powers the WSGI app, needs to be installed in the same container as the application's other resources. In fact, it will be the process that docker runs.
Note: In this example, we will use CherryPy's built-in production-ready HTTP web server due to its simplicity. You can instead use Gunicorn or uWSGI (and set them up behind Nginx) by following our tutorials on the subject.
Download and install CherryPy with pip:
pip install cherrypy
Create a "server.py" to serve the web application from "app.py":
nano server.py
Copy and paste the contents from below for the server to import your application and start serving it:
# Import your application as:
# from app import application
# Example:

from app import app

# Import CherryPy
import cherrypy

if __name__ == '__main__':

    # Mount the application
    cherrypy.tree.graft(app, "/")

    # Unsubscribe the default server
    cherrypy.server.unsubscribe()

    # Instantiate a new server object
    server = cherrypy._cpserver.Server()

    # Configure the server object
    server.socket_host = "0.0.0.0"
    server.socket_port = 80
    server.thread_pool = 30

    # For SSL Support
    # server.ssl_module            = 'pyopenssl'
    # server.ssl_certificate       = 'ssl/certificate.crt'
    # server.ssl_private_key       = 'ssl/private.key'
    # server.ssl_certificate_chain = 'ssl/bundle.crt'

    # Subscribe this server
    server.subscribe()

    # Start the server engine

    cherrypy.engine.start()
    cherrypy.engine.block()
And that's it! Now you can have a "dockerized" Python web application securely kept in its sandbox, ready to serve thousands and thousands of client requests by simply running:
python server.py
This will run the server in the foreground. If you would like to stop it, press CTRL+C.
To run the server in the background, run the following:
python server.py &
When you run an application in the background, you will need to find its process and stop it explicitly (e.g. with a process viewer such as htop, or the kill command).
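For example, one way to find and stop the backgrounded server:
# Find the process ID of the running server
ps aux | grep server.py

# Stop it (replace [PID] with the process ID found above)
kill [PID]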
Note: To learn more about configuring Python WSGI applications for deployment with CherryPy, check out our tutorial: How to deploy Python WSGI apps Using CherryPy Web Server
To test that everything is running smoothly - which it should be, given that all the port allocations are already taken care of - you can visit http://[your droplet's IP] with your browser to see the "Hello World!" message.
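You can also test from the command line with curl, using the same example IP as before:
# Usage: curl http://[my droplet's ip]
curl http://95.85.10.236/
Sample Response:
Hello World!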

Thursday, 20 April 2017

Why Docker for AWS?

Native to Docker

Docker for AWS provides a Docker-native solution that avoids operational complexity and adding unneeded additional APIs to the Docker stack.
Docker for AWS allows you to interact with Docker directly (including native Docker orchestration), instead of distracting you with the need to navigate extra layers on top of Docker. You can focus instead on the thing that matters most: running your workloads. This will help you and your team to deliver more value to the business faster, to speak one common “language”, and to have fewer details to keep in your head at once.
The skills that you and your team have already learned, and will continue to learn, using Docker on the desktop or elsewhere will automatically carry over to using Docker on AWS. The added consistency across clouds also helps to ensure that a migration or multi-cloud strategy is easier to accomplish in the future if desired.

Skip the boilerplate and maintenance work

Docker for AWS bootstraps all of the recommended infrastructure to start using Docker on AWS automatically. You don’t need to worry about rolling your own instances, security groups, or load balancers when using Docker for AWS.
Likewise, setting up and using Docker swarm mode functionality for container orchestration is managed across the cluster’s lifecycle when you use Docker for AWS. Docker has already coordinated the various bits of automation you would otherwise be gluing together on your own to bootstrap Docker swarm mode on these platforms. When the cluster is finished booting, you can jump right in and start running docker service commands.
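For example, a first service on a freshly bootstrapped swarm could be started with something like the following (the service name and image are just examples):
# Run three replicas of an nginx service, published on port 80
docker service create --name web --replicas 3 -p 80:80 nginx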
We also provide a prescriptive upgrade path that helps users upgrade between various versions of Docker in a smooth and automatic way. Instead of experiencing “maintenance dread” as you ponder your future responsibilities upgrading the software you are using, you can easily upgrade to new versions when they are released.

Minimal, Docker-focused base

The custom Linux distribution used by Docker for AWS is carefully developed and configured to run Docker well. Everything from the kernel configuration to the networking stack is customized to make it a favorable place to run Docker. For instance, we make sure that the kernel versions are compatible with the latest and greatest in Docker functionality, such as the overlay2 storage driver.
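For example (assuming a reasonably recent Docker CLI), you can check which storage driver a host is using:
# Print the storage driver of the current Docker host
docker info --format '{{.Driver}}'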
Instead of facing the trade-offs of a general purpose operating system, Docker’s custom Linux distribution focuses on only one thing: providing the best Docker experience for you and your team.

Self-cleaning and self-healing

Even the most conscientious admin can be caught off guard by issues such as unexpectedly aggressive logging or the Linux kernel killing memory-hungry processes. In Docker for AWS, your cluster is resilient to a variety of such issues by default.
Log rotation native to the host is configured for you automatically, so chatty logs won’t use up all of your disk space. Likewise, the “system prune” option allows you to ensure unused Docker resources such as old images are cleaned up automatically. The lifecycle of nodes is managed using auto-scaling groups or similar constructs, so that if a node enters an unhealthy state for unforeseen reasons, the node will be taken out of load balancer rotation and/or replaced automatically and all of its container tasks will be rescheduled.
These self-cleaning and self-healing properties are enabled by default and don’t need configuration, so you can breathe easier as the risk of downtime is reduced.
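For reference, the cleanup that the "system prune" option automates corresponds to what you would otherwise run by hand on a Docker host:
# Remove stopped containers, unused networks, and dangling images
docker system prune -f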

Logging native to the platforms

Centralized logging is a critical component of many modern infrastructure stacks. To have these logs indexed and searchable proves invaluable for debugging application and system issues as they come up. Out of the box, Docker for AWS forwards logs from containers to a native cloud provider abstraction (CloudWatch).

Next-generation Docker bug reporting tools

One common pain point in open source issue reporting is effectively communicating the current state of your infrastructure and the issues you are seeing to the upstream. In Docker for AWS, you receive new tools to communicate any issues you experience quickly and securely to Docker employees. The Docker for AWS shell includes a docker-diagnose script which, at your request, will transmit detailed diagnostic information to Docker support staff to reduce the traditional “please-post-the-output-of-this-command” back and forth frequently encountered in bug reports.

Dockerfile: ENTRYPOINT vs CMD

ENTRYPOINT or CMD

Ultimately, both ENTRYPOINT and CMD give you a way to identify which executable should be run when a container is started from your image. In fact, if you want your image to be runnable (without additional docker run command line arguments) you must specify an ENTRYPOINT or CMD.
Trying to run an image which doesn't have an ENTRYPOINT or CMD declared will result in an error:

$ docker run alpine
FATA[0000] Error response from daemon: No command specified
Many of the Linux distro base images that you find on the Docker Hub will use a shell like /bin/sh or /bin/bash as the CMD executable. This means that anyone who runs those images will get dropped into an interactive shell by default (assuming, of course, that they used the -i and -t flags with the docker run command).
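You can verify this by inspecting a base image's default command, e.g. with the ubuntu:trusty image used later in this article:

$ docker inspect --format '{{.Config.Cmd}}' ubuntu:trusty
[/bin/bash]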
This makes sense for a general-purpose base image, but you will probably want to pick a more specific CMD or ENTRYPOINT for your own images.

Overrides

The ENTRYPOINT or CMD that you specify in your Dockerfile identifies the default executable for your image. However, the user has the option to override either of these values at run time.
For example, let's say that we have the following Dockerfile:
FROM ubuntu:trusty
CMD ping localhost
If we build this image (with tag "demo") and run it we would see the following output:

$ docker run -t demo
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.038 ms
^C
--- localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.032/0.039/0.008 ms
You can see that the ping executable was run automatically when the container was started. However, we can override the default CMD by specifying an argument after the image name when starting the container:

$ docker run demo hostname
6c1573c0d4c0
In this case, hostname was run in place of ping.
The default ENTRYPOINT can be similarly overridden, but it requires the use of the --entrypoint flag:

$ docker run --entrypoint hostname demo
075a2fa95ab7
Given how much easier it is to override the CMD, the recommendation is to use CMD in your Dockerfile when you want the user of your image to have the flexibility to run whichever executable they choose when starting the container. For example, maybe you have a general Ruby image that will start up an interactive irb session by default (CMD irb) but you also want to give the user the option to run an arbitrary Ruby script (docker run ruby ruby -e 'puts "Hello"').
In contrast, ENTRYPOINT should be used in scenarios where you want the container to behave exclusively as if it were the executable it's wrapping. That is, when you don't want or expect the user to override the executable you've specified.
There are many situations where it may be convenient to use Docker as portable packaging for a specific executable. Imagine you have a utility implemented as a Python script you need to distribute but don't want to burden the end-user with installation of the correct interpreter version and dependencies. You could package everything in a Docker image with an ENTRYPOINT referencing your script. Now the user can simply docker run your image and it will behave as if they are running your script directly.
Of course you can achieve this same thing with CMD, but the use of ENTRYPOINT sends a strong message that this container is only intended to run this one command.
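As a sketch of that idea (the script name and its dependency are hypothetical):

FROM python:2.7

# Bundle the utility script into the image
ADD my_utility.py /usr/local/bin/my_utility.py

# Install the script's (hypothetical) dependencies
RUN pip install requests

# The container now behaves like the script itself; any arguments
# passed to docker run after the image name are forwarded to the script
ENTRYPOINT ["python", "/usr/local/bin/my_utility.py"]

With an image built from this Dockerfile, docker run my-utility-image --help behaves just like running the script with --help directly.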
The utility of ENTRYPOINT will become clearer when we show how you can combine ENTRYPOINT and CMD together, but we'll get to that later.

Shell vs. Exec

Both the ENTRYPOINT and CMD instructions support two different forms: the shell form and the exec form. In the example above, we used the shell form which looks like this:
CMD executable param1 param2
When using the shell form, the specified binary is executed with an invocation of the shell using /bin/sh -c. You can see this clearly if you run a container and then look at the docker ps output:

$ docker run -d demo
15bfcddb11b5cde0e230246f45ba6eeb1e6f56edb38a91626ab9c478408cb615

$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED
15bfcddb11b5 demo:latest "/bin/sh -c 'ping localhost'" 2 seconds ago
Here we've run the "demo" image again and you can see that the command which was executed was /bin/sh -c 'ping localhost'.
This appears to work just fine, but there are some subtle issues that can occur when using the shell form of either the ENTRYPOINT or CMD instruction. If we peek inside our running container and look at the running processes we will see something like this:

$ docker exec 15bfcddb ps -f
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 20:14 ? 00:00:00 /bin/sh -c ping localhost
root 9 1 0 20:14 ? 00:00:00 ping localhost
root 49 0 0 20:15 ? 00:00:00 ps -f
Note how the process running as PID 1 is not our ping command, but is the /bin/sh executable. This can be problematic if we need to send any sort of POSIX signals to the container since /bin/sh won't forward signals to child processes (for a detailed write-up, see Gracefully Stopping Docker Containers).
Beyond the PID 1 issue, you may also run into problems with the shell form if you're building a minimal image which doesn't even include a shell binary. When Docker is constructing the command to be run it doesn't check to see if the shell is available inside the container -- if you don't have /bin/sh in your image, the container will simply fail to start.
A better option is to use the exec form of the ENTRYPOINT/CMD instructions which looks like this:
CMD ["executable","param1","param2"]
Note that the content appearing after the CMD instruction in this case is formatted as a JSON array.
When the exec form of the CMD instruction is used the command will be executed without a shell.
Let's change our Dockerfile from the example above to see this in action:

FROM ubuntu:trusty
CMD ["/bin/ping","localhost"]
Rebuild the image and look at the command that is generated for the running container:

$ docker build -t demo .
[truncated]

$ docker run -d demo
90cd472887807467d699b55efaf2ee5c4c79eb74ed7849fc4d2dbfea31dce441

$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED
90cd47288780 demo:latest "/bin/ping localhost" 4 seconds ago
Now /bin/ping is being run directly without the intervening shell process (and, as a result, will end up as PID 1 inside the container).
Whether you're using ENTRYPOINT or CMD (or both), the recommendation is to always use the exec form so that it's obvious which command is running as PID 1 inside your container.

ENTRYPOINT and CMD

Up to this point, we've discussed how to use ENTRYPOINT or CMD to specify your image's default executable. However, there are some cases where it makes sense to use ENTRYPOINT and CMD together.
Combining ENTRYPOINT and CMD allows you to specify the default executable for your image while also providing default arguments to that executable which may be overridden by the user. Let's look at an example:

FROM ubuntu:trusty
ENTRYPOINT ["/bin/ping","-c","3"]
CMD ["localhost"]
Let's build and run this image without any additional docker run arguments:

$ docker build -t ping .
[truncated]

$ docker run ping
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.051 ms

--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.025/0.038/0.051/0.010 ms

$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED
82df66a2a9f1 ping:latest "/bin/ping -c 3 localhost" 6 seconds ago
Note that the command which was executed is a combination of the ENTRYPOINT and CMD values that were specified in the Dockerfile. When both an ENTRYPOINT and CMD are specified, the CMD string(s) will be appended to the ENTRYPOINT in order to generate the container's command string. Remember that the CMD value can be easily overridden by supplying one or more arguments to `docker run` after the name of the image. In this case we could direct our ping to a different host by doing something like this:

$ docker run ping docker.io
PING docker.io (162.242.195.84) 56(84) bytes of data.
64 bytes from 162.242.195.84: icmp_seq=1 ttl=61 time=76.7 ms
64 bytes from 162.242.195.84: icmp_seq=2 ttl=61 time=81.5 ms
64 bytes from 162.242.195.84: icmp_seq=3 ttl=61 time=77.8 ms

--- docker.io ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 76.722/78.695/81.533/2.057 ms

$ docker ps -l --no-trunc
CONTAINER ID IMAGE COMMAND CREATED
0d739d5ea4e5 ping:latest "/bin/ping -c 3 docker.io" 51 seconds ago
Running the image starts to feel like running any other executable -- you specify the name of the command you want to run followed by the arguments you want to pass to that command.
Note how the -c 3 argument that was included as part of the ENTRYPOINT essentially becomes a "hard-coded" argument for the ping command (the -c flag is used to limit the ping count to the specified number). It's included in each invocation of the image and can't be overridden in the same way as the CMD parameter.
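If you really do need to change a hard-coded ENTRYPOINT argument, you have to replace the whole ENTRYPOINT, which discards both its built-in arguments and the CMD; for example:

$ docker run --entrypoint /bin/ping ping -c 5 docker.io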

Always Exec

When using ENTRYPOINT and CMD together it's important that you always use the exec form of both instructions. Trying to use the shell form, or mixing-and-matching the shell and exec forms will almost never give you the result you want.
The table below shows the command string that results from combining the various forms of the ENTRYPOINT and CMD instructions.

Dockerfile instructions                  Resulting command string
ENTRYPOINT /bin/ping -c 3                /bin/sh -c '/bin/ping -c 3' /bin/sh -c localhost
CMD localhost

ENTRYPOINT ["/bin/ping","-c","3"]        /bin/ping -c 3 /bin/sh -c localhost
CMD localhost

ENTRYPOINT /bin/ping -c 3                /bin/sh -c '/bin/ping -c 3' localhost
CMD ["localhost"]

ENTRYPOINT ["/bin/ping","-c","3"]        /bin/ping -c 3 localhost
CMD ["localhost"]
The only one of these that results in a valid command string is when the ENTRYPOINT and CMD are both specified using the exec form.

Conclusion

To summarize: always prefer the exec form of both instructions; use CMD when you want users of your image to be able to easily override the default command, use ENTRYPOINT when the container should behave like a dedicated executable, and combine the two when you want a fixed executable with overridable default arguments.