1. Running & Inspecting Containers
By the end of this exercise, you should be able to:
- Start a container
- List containers in multiple ways
- Query the docker command line help
- Remove containers
Interactive Containers
Create and start a new CentOS 7 container running ping to 8.8.8.8. Docker will download the CentOS 7 image, since you do not have it available locally.
[user@node ~]$ docker container run centos:7 ping 8.8.8.8
Unable to find image 'centos:7' locally
7: Pulling from library/centos
a02a4930cb5d: Pull complete
Digest: sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426
Status: Downloaded newer image for centos:7
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=108 time=7.07 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=108 time=7.11 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=108 time=7.03 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=108 time=7.09 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=108 time=7.01 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=108 time=7.00 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5006ms
rtt min/avg/max/mdev = 7.008/7.056/7.110/0.039 ms
Press CTRL+C after a few pings. This stops and exits the container.
Detached Containers
This first container sent its STDOUT to your terminal. Create a second container, this time in detached mode:
[user@node ~]$ docker container run --detach centos:7 ping 8.8.4.4
8aef3d0d411c7b02532292ec3267a54f9258eaafb71d3d73a8ad41e702bd35a2
Instead of seeing the executed command (ping 8.8.4.4), the Docker engine displays a long hexadecimal number, which is the full container ID of your new container. The container is running detached, which means the container is running as a background process, rather than printing its STDOUT to your terminal.
Listing running Containers
List the running Docker containers using the docker container ls command. You will see only one container running.
[user@node ~]$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8aef3d0d411c centos:7 "ping 8.8.4.4" 3 minutes ago Up 3 minutes zen_jang
Listing all Containers
Now you know that the docker container ls command only shows running containers. You can show all containers that exist (running or stopped) by using docker container ls --all. Your container ID and name will vary. Note that you will see two containers: a stopped container and a running container.
[user@node ~]$ docker container ls --all
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
8aef3d0d411c centos:7 "ping 8.8.4.4" 2 minutes ago Up 2 minutes zen_jang
00f763b9308d centos:7 "ping 8.8.8.8" 4 minutes ago Exited (0)... inspiring_cheb
Where did those names come from? All containers have names, which in most Docker CLI commands can be substituted for the container ID, as we'll see in later exercises. By default, containers get a randomly generated name of the form <adjective>_<scientist / technologist>, but you can choose a name explicitly with the --name flag in docker container run.
Naming Containers
Start up another detached container, this time giving it a name "opendnsping".
[user@node ~]$ docker container run --detach --name opendnsping \
centos:7 ping 208.67.222.222
3bdc61a95e76fdfe2597ef18aa00321a53dcdc9c36b2db97fbe738f8a623ecad
List all your containers again. You can see all of the containers, including your new one with your customized name.
[user@node ~]$ docker container ls --all
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
3bdc61a95e76 centos:7 "ping 208.67.222.222" 23 seconds ago Up 22 seconds opendnsping
8aef3d0d411c centos:7 "ping 8.8.4.4" 4 minutes ago Up 14 minutes zen_jang
00f763b9308d centos:7 "ping 8.8.8.8" 9 minutes ago Exited (0)... inspirin...
Remove exited Containers
Next, remove the exited container. To do this, use docker container rm <container ID>. In the example above, the Docker container ID is 00f763b9308d. You only need as many leading characters as it takes to uniquely identify this container from all the others.
[user@node ~]$ docker container rm <container ID>
00f763b9308d
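Any unambiguous prefix of the ID works; assuming the IDs listed above, something like this (hypothetical prefix) would be equivalent:
[user@node ~]$ docker container rm 00f
00f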
Remove running Containers
Now try to remove one of the other Docker containers using the same command. It does not work. Why?
[user@node ~]$ docker container rm <container ID>
Error response from daemon: You cannot remove a running container
3bdc61a95e76fdfe2597ef18aa00321a53dcdc9c36b2db97fbe738f8a623ecad.
Stop the container before attempting removal or force remove
You can see that running containers are not removed. You'll have to look for an option to remove a running container. To find the option you need for a force remove, check the command line help. To do this with the docker container rm command, use the --help option:
[user@node ~]$ docker container rm --help
Usage: docker container rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
Options:
-f, --force Force the removal of a running container (uses SIGKILL)
-l, --link Remove the specified link
-v, --volumes Remove the volumes associated with the container
Note
|
Help works with all Docker commands. Not only can you use --help with docker container rm, but it works on all levels of docker commands. For example, docker --help provides you with all available docker commands, and docker container --help provides you with all available container commands.
|
Now, run a force remove on the running container you tried to remove in the two previous steps. This time it works.
[user@node ~]$ docker container rm --force <container ID>
8aef3d0d411c
Start another detached container pinging 8.8.8.8, with the name pinggoogledns.
[user@node ~]$ docker container run --detach --name pinggoogledns \
centos:7 ping 8.8.8.8
38e121e629611daa0726a21d634bc5189400377d82882cc6fd8a3870dc9943a0
Now that you’ve finished your testing, you need to remove your containers. In order to remove all of them at once, you want to get only the container IDs. Look at docker container ls --help to get the information you need:
[user@node ~]$ docker container ls --help
Usage: docker container ls [OPTIONS]
List containers
Aliases:
ls, ps, list
Options:
-a, --all Show all containers (default shows just running)
-f, --filter filter Filter output based on conditions provided
--format string Pretty-print containers using a Go template
-n, --last int Show n last created containers (includes all states)
-l, --latest Show the latest created container (includes all states)
--no-trunc Don't truncate output
-q, --quiet Only display numeric IDs
-s, --size Display total file sizes
To get only the container IDs, use the --quiet option. If you want to perform an action on the container IDs of all existing containers, combine --quiet with the --all option.
[user@node ~]$ docker container ls --all --quiet
3bdc61a95e76
38e121e62961
Since we are done running pings on the public DNS servers, kill the containers. To do this, use the syntax docker container rm --force <container ID>. However, this only kills one container at a time. We want to kill all the containers, no matter what state they are in. To get that list, use the output from docker container ls --quiet --all. To capture this output within the command, use $(...) to nest the listing command inside the docker container rm command.
[user@node ~]$ docker container rm --force \
$(docker container ls --quiet --all)
3bdc61a95e76
38e121e62961
Conclusion
This exercise taught you how to start, list, and kill containers. You ran your first containers using docker container run, and saw how they run a command as their main process. You also learned how to list your containers, and how to kill them using docker container rm. If you run into trouble, the --help option can provide information that could help you get answers.
2. Interactive Containers
By the end of this exercise, you should be able to:
- Launch an interactive shell in a new or existing container
- Run a child process inside a running container
- List containers using more options and filters
Writing to Containers
Create a container using the centos:7 image. Connect to its bash shell in interactive mode using the -i flag, and request a TTY connection using the -t flag:
[user@node ~]$ docker container run -it centos:7 bash
Explore your container's filesystem with ls, and then create a new file. Use ls again to confirm you have successfully created your file. Use the -l option with ls to list the files and directories in long list format.
[root@<container ID> /]# ls -l
[root@<container ID> /]# echo 'Hello there...' > test.txt
[root@<container ID> /]# ls -l
Exit the connection to the container:
[root@<container ID> /]# exit
Run the same command as before to start a container using the centos:7 image:
[user@node ~]$ docker container run -it centos:7 bash
Use ls to explore your container. You will see that your previously created test.txt is nowhere to be found in your new container. Exit this container in the same way you did above.
Reconnecting to Containers
We wish to recover the test.txt written to our container in the first example, but starting a new container didn't get us there. We need to restart and reconnect to our original container. List all your stopped containers:
[user@node ~]$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
cc19f7e9aa91 centos:7 "bash" About a minute ago Exited (0) About a minute ago
2b8de2ffdf85 centos:7 "bash" 2 minutes ago Exited (0) About a minute ago
...
We can restart a container via the container ID listed in the first column. Use the container ID for the first centos:7 container you created with bash as its command (see the CREATED column above to make sure you’re choosing the first bash container you ran):
[user@node ~]$ docker container start <container ID>
[user@node ~]$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS ...
2b8de2ffdf85 centos:7 "bash" 5 minutes ago Up 21 seconds ...
Your container status has changed from Exited to Up, via docker container start.
Run ps -ef inside the container you just restarted using Docker's exec command (exec runs the specified process as a child of the PID 1 process inside the container):
[user@node ~]$ docker container exec <container ID> ps -ef
What process is PID 1 inside the container? Find the PID of that process on the host machine by using:
[user@node ~]$ docker container top <container ID>
Launch a bash shell in your running container with docker container exec:
[user@node ~]$ docker container exec -it <container ID> bash
List the contents of the container’s filesystem with ls -l; your test.txt should be where you left it. Exit the container again by typing exit.
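As an aside, exec isn't limited to shells; assuming test.txt was created in / (the default working directory of this image), you could read it without an interactive session:
[user@node ~]$ docker container exec <container ID> cat /test.txt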
Using Container Listing Options
In the last step, we saw how to get the short container ID of all our containers using docker container ls -a. Try adding the --no-trunc flag to see the entire container ID:
[user@node ~]$ docker container ls -a --no-trunc
This long ID is the same as the string that is returned after starting a container with docker container run.
List only the container ID using the -q flag:
[user@node ~]$ docker container ls -a -q
List the last container to have been created using the -l flag:
[user@node ~]$ docker container ls -l
Finally, you can also filter results with the --filter flag; for example, try filtering by exit code:
[user@node ~]$ docker container ls -a --filter "exited=0"
The output of this command will list the containers that have exited successfully.
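Several other filters are supported as well; for example, filtering by status or by ancestor image (both shown here purely as illustrations):
[user@node ~]$ docker container ls --filter "status=running"
[user@node ~]$ docker container ls -a --filter "ancestor=centos:7"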
Clean up with:
[user@node ~]$ docker container rm -f $(docker container ls -aq)
Conclusion
In this demo, you saw that files added to a container's filesystem do not get added to all containers created from the same image. Changes to a container's filesystem are local to itself, and exist only in that particular container. You also learned how to restart a stopped Docker container using docker container start, how to run a command in a running container using docker container exec, and saw some more options for listing containers via docker container ls.
3. Detached Containers and Logging
By the end of this exercise, you should be able to:
- Run a container detached from the terminal
- Fetch the logs of a container
- Attach a terminal to the STDOUT of a running container
Running a Container in the Background
First try running a container as usual; the STDOUT and STDERR streams from whatever is PID 1 inside the container are directed to the terminal:
[user@node ~]$ docker container run centos:7 ping 127.0.0.1 -c 2
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.029 ms
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1019ms
rtt min/avg/max/mdev = 0.021/0.025/0.029/0.004 ms
The same process can be run in the background with the -d flag:
[user@node ~]$ docker container run -d centos:7 ping 127.0.0.1
d5ef517cc113f36738005295066b271ae604e9552ce4070caffbacdc3893ae04
This time, we only see the container's ID; its STDOUT isn't being sent to the terminal.
Use this second container’s ID to inspect the logs it generated:
[user@node ~]$ docker container logs <container ID>
These logs correspond to STDOUT and STDERR from the container's PID 1.
Also note when using container IDs: you don’t need to specify the entire ID. Just enough characters from the start of the ID to uniquely identify it, often just 2 or 3, is sufficient.
Attaching to Container Output
We can attach a terminal to a container's PID 1 output with the attach command; try it with the last container you made in the previous step:
[user@node ~]$ docker container attach <container ID>
We can leave attached mode by then pressing CTRL+C.
After doing so, list your running containers; you should see that the container you attached to has been killed, since the CTRL+C you issued killed PID 1 in the container, and therefore the container itself.
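If you want to attach without risking this, docker container attach has a --sig-proxy flag (true by default); setting it to false stops your signals from being proxied to the containerized process, so breaking out of the attach should not kill PID 1:
[user@node ~]$ docker container attach --sig-proxy=false <container ID>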
Try running the same thing in detached interactive mode:
[user@node ~]$ docker container run -d -it centos:7 ping 127.0.0.1
Attach to this container like you did the first one, but this time detach with CTRL+P CTRL+Q (pressed sequentially, not simultaneously), and list your running containers.
In this case, the container should still be happily running in the background after detaching from it.
Using Logging Options
We saw previously how to read the entire log of a container's PID 1; we can also use a couple of flags to control what logs are displayed. --tail n limits the display to the last n lines; try it with the container that should be running from the last step:
[user@node ~]$ docker container logs --tail 5 <container ID>
You should see the last 5 pings from this container.
We can also follow the logs as they are generated with -f:
[user@node ~]$ docker container logs -f <container ID>
The container's logs get piped in real time to the terminal (press CTRL+C to break out of following mode - note this doesn't kill the process like when we attached to it, since now we're tailing the logs, not attaching to the process).
Finally, try combining the tail and follow flags to begin following the logs from 10 lines back in history.
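One possible invocation combining the two flags:
[user@node ~]$ docker container logs --tail 10 --follow <container ID>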
Conclusion
In this exercise, we saw our first detached containers. Almost all containers you ever run will be running in detached mode; you can use container attach to interact with their PID 1 processes, as well as container logs to fetch their logs. Note that both attach and logs interact with the PID 1 process only - if you launch child processes inside a container, it's up to you to manage their STDOUT and STDERR streams. Also, be careful when killing processes after attaching to a container; as we saw, it's easy to attach to a container and then kill it by issuing a CTRL+C to the PID 1 process you've attached to.
4. Starting, Stopping, Inspecting and Deleting Containers
By the end of this exercise, you should be able to:
- Restart containers which have exited
- Distinguish between stopping and killing a container
- Fetch container metadata using docker container inspect
- Delete containers
Starting and Restarting Containers
Start by running a container in the background, and check that it’s really running:
[user@node ~]$ docker container run -d centos:7 ping 8.8.8.8
[user@node ~]$ docker container ls
Stop the container using docker container stop, and check that the container is indeed stopped:
[user@node ~]$ docker container stop <container ID>
[user@node ~]$ docker container ls -a
Note
|
Note that the stop command takes a few seconds to complete. docker container stop first sends a SIGTERM to the PID 1 process inside a container, asking it to shut down nicely; it then waits 10 seconds before sending a SIGKILL to kill it off, ready or not. The exit code you see (137 in this case) is the exit code returned by the PID 1 process (ping ) upon being killed by one of these signals.
|
Start the container again with docker container start, and attach to it at the same time with the -a flag:
[user@node ~]$ docker container start -a <container ID>
As you saw previously, this brings the container from the Exited to the Up state; in this case, we're also attaching to the PID 1 process.
Detach and stop the container with CTRL+C, then restart the container without attaching and follow the logs starting from 10 lines previous.
Finally, stop the container with docker container kill:
[user@node ~]$ docker container kill <container ID>
Unlike docker container stop, container kill just sends the SIGKILL right away - no grace period.
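If the default 10 second grace period isn't right for your application, docker container stop accepts a -t / --time flag to adjust it; for example, to allow 30 seconds before the SIGKILL is sent:
[user@node ~]$ docker container stop -t 30 <container ID>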
Inspecting a Container
Start your ping container again, then inspect the container details using docker container inspect:
[user@node ~]$ docker container start <container ID>
[user@node ~]$ docker container inspect <container ID>
You get a JSON object describing the container’s config, metadata and state.
Find the container’s IP and long ID in the JSON output of inspect. If you know the key name of the property you’re looking for, try piping to grep:
[user@node ~]$ docker container inspect <container ID> | grep IPAddress
The output should look similar to this:
"SecondaryIPAddresses": null,
"IPAddress": "<Your IP Address>"
Now try grepping for Cmd, the PID 1 command being run by this container. grep's simple text search doesn't always return helpful results:
[user@node ~]$ docker container inspect <container ID> | grep Cmd
"Cmd": \[
A more powerful way to filter this JSON is with the --format flag. Syntax follows Go's text/template package: link::http://golang.org/pkg/text/template/[golang text template].
For example, to find the Cmd value we tried to grep for above, instead try:
[user@node ~]$ docker container inspect --format='{{.Config.Cmd}}' <container ID>
[ping 8.8.8.8]
This time, we get the value of the Config.Cmd key from the inspect JSON.
Keys nested in the JSON returned by docker container inspect can be chained together in this fashion. Try modifying this example to return the IP address you grepped for previously.
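One way to do it, since the address lives under the NetworkSettings key (your IP will differ):
[user@node ~]$ docker container inspect --format='{{.NetworkSettings.IPAddress}}' <container ID>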
Finally, we can extract all the key/value pairs for a given object using the json function:
[user@node ~]$ docker container inspect --format='{{json .Config}}' <container ID>
Try adding | jq to this command to get the same output a little bit easier to read.
Deleting Containers
Start three containers in background mode, then stop the first one.
List only exited containers using the --filter flag we learned earlier, with the option status=exited.
Delete the container you stopped above with docker container rm, and do the same listing operation as above to confirm that it has been removed:
[user@node ~]$ docker container rm <container ID>
[user@node ~]$ docker container ls ...
Now do the same to one of the containers that's still running; notice docker container rm won't delete a container that's still running, unless we pass it the force flag -f. Delete the second container you started above:
[user@node ~]$ docker container rm -f <container ID>
Try using the docker container ls flags we learned previously to remove the last container that was run, or all stopped containers. Recall that you can pass the output of one shell command cmd-A to another command cmd-B with syntax like cmd-B $(cmd-A).
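One possible approach, combining flags from earlier (the first line assumes the last-run container has already stopped):
[user@node ~]$ docker container rm $(docker container ls -lq)
[user@node ~]$ docker container rm $(docker container ls -aq --filter status=exited)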
When done, clean up any containers you may still have:
[user@node ~]$ docker container rm -f $(docker container ls -aq)
Conclusion
In this exercise, you explored the lifecycle of a container, particularly in terms of stopping and restarting containers.
Keep in mind the behavior of docker container stop, which sends a SIGTERM, waits a grace period, and then sends a SIGKILL to force the container to stop; this two-step process is designed to give your containers a chance to shut down 'nicely': dump their state to a log, finish a database transaction, or do whatever your application needs them to do in order to exit without causing additional problems. Make sure you bear this in mind when designing containerized software.
Also keep in mind the docker container inspect command we saw, for examining container metadata, state and config; this is often the first place to look when trying to troubleshoot a failed container.
5. Interactive Image Creation
By the end of this exercise, you should be able to:
- Capture a container's filesystem state as a new docker image
- Read and understand the output of docker container diff
Modifying a Container
Start a bash terminal in a CentOS container:
[centos@node-0 ~]$ docker container run -it centos:7 bash
Install a couple pieces of software in this container - there’s nothing special about wget, any changes to the filesystem will do. Afterwards, exit the container:
[root@dfe86ed42be9 /]# yum install -y which wget
[root@dfe86ed42be9 /]# exit
Finally, try docker container diff to see what's changed about a container relative to its image; you'll need to get the container ID via docker container ls -a first:
[centos@node-0 ~]$ docker container ls -a
[centos@node-0 ~]$ docker container diff <container ID>
C /root
A /root/.bash_history
C /usr
C /usr/bin
A /usr/bin/gsoelim
...
Those C's at the beginning of each line stand for files Changed, and A for Added; lines that start with D indicate Deletions.
Capturing Container State as an Image
Installing which and wget in the last step wrote information to the container's read/write layer; now let's save that read/write layer as a new read-only image layer in order to create a new image that reflects our additions, via the docker container commit command:
[centos@node-0 ~]$ docker container commit <container ID> myapp:1.0
Check that you can see your new image by listing all your images:
[centos@node-0 ~]$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
myapp 1.0 34f97e0b087b 8 seconds ago 300MB
centos 7 5182e96772bf 44 hours ago 200MB
Create a container running bash using your new image, and check that which and wget are installed:
[centos@node-0 ~]$ docker container run -it myapp:1.0 bash
[root@2ecb80c76853 /]# which wget
The which command should show the path to the specified executable, indicating it has been installed in the image. Exit your container when done by typing exit.
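Optionally, docker container commit can also record a message and author in the image metadata via its -m and -a flags; the tag and message below are just illustrative:
[centos@node-0 ~]$ docker container commit -m "install which and wget" -a "student" <container ID> myapp:1.1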
Conclusion
In this exercise, you saw how to inspect the contents of a container's read/write layer with docker container diff, and commit those changes to a new image layer with docker container commit.
Committing a container as an image in this fashion can be useful when developing an environment inside a container, when you want to capture that environment for reproduction elsewhere.
6. Creating Images with Dockerfiles (1/2)
By the end of this exercise, you should be able to:
- Write a Dockerfile using the FROM and RUN commands
- Build an image from a Dockerfile
- Anticipate which image layers will be fetched from the cache at build time
- Fetch build history for an image
Writing and Building a Dockerfile
Create a folder called myimage, and a text file called Dockerfile within that folder. In Dockerfile, include the following instructions:
FROM centos:7
RUN yum update -y
RUN yum install -y wget
This serves as a recipe for an image based on centos:7, that has all its default packages updated and wget installed on top.
Build your image with the build command. Don't miss the . at the end; that's the path to your Dockerfile. Since we're currently in the directory myimage which contains it, the path is just . (here).
[user@node myimage]$ docker image build -t myimage .
You'll see a long build output - we'll go through the meaning of this output in a demo later. For now, your image creation was successful if the output ends with Successfully tagged myimage:latest.
Verify that your new image exists with docker image ls, then use your new image to run a container and wget something from within that container, just to confirm that everything worked as expected:
[user@node myimage]$ docker container run -it myimage bash
[root@1d86d4093cce /]# wget example.com
[root@1d86d4093cce /]# cat index.html
[root@1d86d4093cce /]# exit
You should see the HTML from example.com, downloaded by wget from within your container.
It's also possible to pipe a Dockerfile in from STDIN; try rebuilding your image with the following:
[user@node myimage]$ cat Dockerfile | docker image build -t myimage -f - .
Note
|
This is useful when reading a Dockerfile from a remote location with curl, for example. |
Using the Build Cache
In the previous step, the second time you built your image should have completed immediately, with each step except the first reporting using cache. Cached build steps will be used until a change in the Dockerfile
is found by the builder.
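If you ever want to bypass the cache entirely (for example, to force yum update to fetch the latest packages), pass the --no-cache flag at build time:
[user@node myimage]$ docker image build --no-cache -t myimage .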
Open your Dockerfile and add another RUN step at the end to install vim:
FROM centos:7
RUN yum update -y
RUN yum install -y wget
RUN yum install -y vim
Build the image again as before; which steps is the cache used for?
Build the image again; which steps use the cache this time?
Swap the order of the two RUN commands for installing wget and vim in the Dockerfile:
FROM centos:7
RUN yum update -y
RUN yum install -y vim
RUN yum install -y wget
Build one last time. Which steps are cached this time?
Using the history Command
The docker image history command allows us to inspect the build cache history of an image. Try it with your new image:
[user@node myimage]$ docker image history myimage:latest
IMAGE CREATED CREATED BY SIZE
f2e85c162453 8 seconds ago /bin/sh -c yum install -y wget 87.2MB
93385ea67464 12 seconds ago /bin/sh -c yum install -y vim 142MB
27ad488e6b79 3 minutes ago /bin/sh -c yum update -y 86.5MB
5182e96772bf 44 hours ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 44 hours ago /bin/sh -c #(nop) LABEL org.label-schema.... 0B
<missing> 44 hours ago /bin/sh -c #(nop) ADD file:6340c690b08865d... 200MB
Note
|
Note the image id of the layer built for the yum update command.
|
Replace the two RUN commands that installed wget and vim with a single command:
FROM centos:7
RUN yum update -y
RUN yum install -y wget vim
Build the image again, and run docker image history on this new image. How has the history changed?
Conclusion
In this exercise, we've seen how to write a basic Dockerfile using FROM and RUN commands, some basics of how image caching works, and the docker image history command. Using the build cache effectively is crucial for images that involve lengthy compile or download steps. In general, moving commands that change frequently as late as possible in the Dockerfile will minimize build times. We'll see some more specific advice on this later in this lesson.
7. Creating Images with Dockerfiles (2/2)
By the end of this exercise, you should be able to:
- Define a default process for an image to containerize by using the ENTRYPOINT or CMD Dockerfile commands
- Understand the differences and interactions between ENTRYPOINT and CMD
Setting Default Commands
Add the following line to the bottom of your Dockerfile from the last exercise:
CMD ["ping", "127.0.0.1", "-c", "5"]
This sets ping as the default command to run in a container created from this image, and also sets some parameters for that command.
Rebuild your image:
[user@node myimage]$ docker image build -t myimage .
Run a container from your new image with no command provided:
[user@node myimage]$ docker container run myimage
You should see the command provided by the CMD parameter in the Dockerfile running.
Try explicitly providing a command when running a container:
[user@node myimage]$ docker container run myimage echo "hello world"
Providing a command in docker container run overrides the command defined by CMD.
Replace the CMD instruction in your Dockerfile with an ENTRYPOINT:
ENTRYPOINT ["ping"]
Build the image and use it to run a container with no process arguments:
[user@node myimage]$ docker image build -t myimage .
[user@node myimage]$ docker container run myimage
You’ll get an error. What went wrong?
Try running with an argument after the image name:
[user@node myimage]$ docker container run myimage 127.0.0.1
You should see a successful ping output. Tokens provided after an image name are sent as arguments to the command specified by ENTRYPOINT.
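Incidentally, the ENTRYPOINT itself can also be overridden at runtime with the --entrypoint flag; for example, to get a shell in this image instead of ping:
[user@node myimage]$ docker container run -it --entrypoint bash myimage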
Combining Default Commands and Options
Open your Dockerfile and modify the ENTRYPOINT instruction to include 2 arguments for the ping command:
ENTRYPOINT ["ping", "-c", "3"]
If CMD and ENTRYPOINT are both specified in a Dockerfile, tokens listed in CMD are used as default parameters for the ENTRYPOINT command. Add a CMD with a default IP to ping:
CMD ["127.0.0.1"]
Build the image and run a container with the defaults:
[user@node myimage]$ docker image build -t myimage .
[user@node myimage]$ docker container run myimage
You should see it pinging the default IP, 127.0.0.1.
Run another container with a custom IP argument:
[user@node myimage]$ docker container run myimage 8.8.8.8
This time, you should see a ping to 8.8.8.8. Explain the difference in behavior between these two last containers.
Conclusion
In this exercise, we encountered the Dockerfile commands CMD and ENTRYPOINT. These are useful for defining the default process to run as PID 1 inside the container right in the Dockerfile, making our containers more like executables and adding clarity to exactly what process was meant to run in a given image's containers.
8. Multi-Stage Builds
By the end of this exercise, you should be able to:
- Write a Dockerfile that describes multiple images, which can copy files from one image to the next
- Enable BuildKit for faster build times
Defining a multi-stage build
Make a new folder named 'multi' to do this exercise in, and cd into it.
Add a file hello.c to the multi folder containing Hello World in C:
#include <stdio.h>
int main (void)
{
printf ("Hello, world!\n");
return 0;
}
Try compiling and running this right on the host OS:
[user@node multi]$ gcc -Wall hello.c -o hello
[user@node multi]$ ./hello
Now let's Dockerize our hello world application. Add a Dockerfile to the multi folder with this content:
FROM alpine:3.5
RUN apk update && \
apk add --update alpine-sdk
RUN mkdir /app
WORKDIR /app
COPY hello.c /app
RUN mkdir bin
RUN gcc -Wall hello.c -o bin/hello
CMD /app/bin/hello
Build the image and note its size:
[user@node multi]$ docker image build -t my-app-large .
[user@node multi]$ docker image ls | grep my-app-large
REPOSITORY TAG IMAGE ID CREATED SIZE
my-app-large latest a7d0c6fe0849 3 seconds ago 189MB
Test the image to confirm it was built successfully:
[user@node multi]$ docker container run my-app-large
It should print "Hello, world!" in the console.
Update your Dockerfile to use an AS clause on the first line, and add a second stanza describing a second build stage:
FROM alpine:3.5 AS build
RUN apk update && \
apk add --update alpine-sdk
RUN mkdir /app
WORKDIR /app
COPY hello.c /app
RUN mkdir bin
RUN gcc -Wall hello.c -o bin/hello
FROM alpine:3.5
COPY --from=build /app/bin/hello /app/hello
CMD /app/hello
Build the image again and compare the size with the previous version:
[user@node multi]$ docker image build -t my-app-small .
[user@node multi]$ docker image ls | grep 'my-app-'
REPOSITORY TAG IMAGE ID CREATED SIZE
my-app-small latest f49ec3971aa6 6 seconds ago 4.01MB
my-app-large latest a7d0c6fe0849 About a minute ago 189MB
As expected, the size of the multi-stage build is much smaller than the large one since it does not contain the Alpine SDK.
Finally, make sure the app works:
[user@node multi]$ docker container run --rm my-app-small
You should get the expected Hello, world! output from the container with just the required executable.
Building Intermediate Images
In the previous step, we took our compiled executable from the first build stage, but that image wasn't tagged as a regular image we can use to start containers with; only the final FROM statement generated a tagged image. In this step, we'll see how to persist whichever build stage we like.
Build an image from the build stage in your Dockerfile using the --target flag:
[user@node multi]$ docker image build -t my-build-stage --target build .
Notice all its layers are pulled from the cache; even though the build stage wasn’t tagged originally, its layers are nevertheless persisted in the cache.
Run a container from this image and make sure it yields the expected result:
[user@node multi]$ docker container run -it --rm my-build-stage /app/bin/hello
List your images again to see the size of my-build-stage compared to the small version of the app.
Optional: Building from Scratch
So far, every image we've built has been based on a pre-existing image, referenced in the FROM command. But what if we want to start from nothing, and build a completely original image? For this, we can build FROM scratch.
In a new directory ~/scratch, create a file named sleep.c that just launches a sleeping process for an hour:
#include <stdio.h>
#include <unistd.h>
int main()
{
int delay = 3600; //sleep for 1 hour
printf ("Sleeping for %d second(s)...\n", delay);
sleep(delay);
return 0;
}
Create a file named Dockerfile to build this sleep program in a build stage, and then copy it to a scratch-based image:
FROM alpine:3.8 AS build
RUN ["apk", "update"]
RUN ["apk", "add", "--update", "alpine-sdk"]
COPY sleep.c /
RUN ["gcc", "-static", "sleep.c", "-o", "sleep"]
FROM scratch
COPY --from=build /sleep /sleep
CMD ["/sleep"]
This image will contain nothing but our executable and the bare minimum file structure Docker needs to stand up a container filesystem.
Note
|
Note we’re statically linking the sleep.c binary, so it will have everything it needs bundled along with it, not relying on the rest of the container’s filesystem for anything. |
Build your image:
[user@node scratch]$ docker image build -t sleep:scratch .
List your images, and search for the one you just built:
[user@node scratch]$ docker image ls | grep scratch
REPOSITORY TAG IMAGE ID CREATED SIZE
sleep scratch 1b68b20a85a8 9 minutes ago 128kB
This image is only 128 kB, as tiny as possible.
Run your image, and check out its filesystem; we can’t list directly inside the container, since ls isn’t installed in this ultra-minimal image, so we have to find where this container’s filesystem is mounted on the host. Start by finding the PID of your sleep process after its running:
[user@node scratch]$ docker container run --name sleeper -d sleep:scratch
[user@node scratch]$ docker container top sleeper
UID PID PPID C STIME TTY TIME CMD
root 1190 1174 0 15:21 ? 00:00:00 /sleep
In this example, the PID for sleep is 1190.
List your container’s filesystem from the host using this PID:
[user@node scratch]$ sudo ls /proc/<PID>/root
dev etc proc sleep sys
We see not only our binary sleep but a bunch of other folders and files. Where do these come from? runC, the tool for spawning and running containers, requires a JSON config of the container and a root filesystem. At runtime, Docker Engine adds these minimum requirements to form the most minimal container filesystem possible.
Clean up by deleting your container:
[user@node scratch]$ docker container rm -f sleeper
Optional: Enabling BuildKit
In addition to the default builder, BuildKit can be enabled to take advantage of some optimizations of the build process.
Back in the ~/multi directory, turn on BuildKit:
[user@node multi]$ export DOCKER_BUILDKIT=1
Add an AS label to the final stage of your Dockerfile (this is not strictly necessary, but will make the output in the next step easier to understand):
...
FROM alpine:3.5 AS prod
RUN apk update
COPY --from=build /app/bin/hello /app/hello
CMD /app/hello
Re-build my-app-small, without the cache:
[user@node multi]$ docker image build --no-cache -t my-app-small-bk .
[+] Building 15.5s (14/14) FINISHED
=> [internal] load Dockerfile
=> => transferring dockerfile: 97B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> [internal] load metadata for docker.io/library/alpine:3.5
=> CACHED [prod 1/3] FROM docker.io/library/alpine:3.5
=> [internal] load build context
=> => transferring context: 87B
=> CACHED [internal] helper image for file operations
=> [build 2/6] RUN apk update && apk add --update alpine-sdk
=> [prod 2/3] RUN apk update
=> [build 3/6] RUN mkdir /app
=> [build 4/6] COPY hello.c /app
=> [build 5/6] RUN mkdir bin
=> [build 6/6] RUN gcc -Wall hello.c -o bin/hello
=> [prod 3/3] COPY --from=build /app/bin/hello /app/hello
=> exporting to image
=> => exporting layers
=> => writing image sha256:22de288...
=> => naming to docker.io/library/my-app-small-bk
Notice the lines marked like [prod 2/3] and [build 4/6]: prod and build in this context are the AS labels you applied to the FROM lines in each stage of your build in the Dockerfile; from the above output, you can see that the build stages were built in parallel. Every step of the final image was completed while the build environment image was being created; the prod environment image creation was only blocked at the COPY instruction, since it required a file from the completed build image.
Comment out the COPY instruction in the prod image definition in your Dockerfile, and rebuild; the build stage is skipped. BuildKit recognized that the build stage was not necessary for the image being built, and skipped it.
Turn off BuildKit:
[user@node multi]$ export DOCKER_BUILDKIT=0
Conclusion
In this exercise, you created a Dockerfile defining multiple build stages. Being able to take artifacts like compiled binaries from one image and insert them into another allows you to create very lightweight images that do not include developer tools or other unnecessary components in your production-ready images, much as you probably already keep separate build and run environments for your software. This will result in containers that start faster, and are less vulnerable to attack.
9. Managing Images
By the end of this exercise, you should be able to:
- Rename and retag an image
- Push and pull images from the public registry
- Delete image tags and image layers, and understand the difference between the two operations
Making an Account on Docker’s Hosted Registry
If you don't have one already, head over to link::https://hub.docker.com[https://hub.docker.com] and make an account. For the rest of this workshop, <Docker ID> refers to the username you chose for this account.
Tagging and Listing Images
Download the centos:7 image from Docker Hub:
[user@node ~]$ docker image pull centos:7
Make a new tag of this image:
[user@node ~]$ docker image tag centos:7 my-centos:dev
Note
|
Note no new image has been created; my-centos:dev is just a pointer pointing to the same image as centos:7.
|
List your images:
[user@node ~]$ docker image ls
You should have centos:7 and my-centos:dev both listed, but they ought to have the same hash under image ID, since they're actually the same image.
Sharing Images on Docker Hub
Push your image to Docker Hub:
[user@node ~]$ docker image push my-centos:dev
You should get a denied: requested access to the resource is denied error.
Log in with docker login, and try pushing again. The push fails again because we haven't namespaced our image correctly for distribution on Docker Hub; all images you want to share on Docker Hub must be named like <Docker ID>/<repo name>[:<optional tag>].
Retag your image to be namespaced properly, and push again:
[user@node ~]$ docker image tag my-centos:dev <Docker ID>/my-centos:dev
[user@node ~]$ docker image push <Docker ID>/my-centos:dev
Search Docker Hub for your new <Docker ID>/my-centos repo, and confirm that you can see the :dev tag therein.
Next, write a Dockerfile that uses <Docker ID>/my-centos:dev as its base image, and installs any application you like on top of that. Build the image, and simultaneously tag it as :1.0:
[user@node ~]$ docker image build -t <Docker ID>/my-centos:1.0 .
Push your :1.0 tag to Docker Hub, and confirm you can see it in the appropriate repository.
Finally, list the images currently on your node with docker image ls. You should still have the version of your image that wasn't namespaced with your Docker Hub user name; delete this using docker image rm:
[user@node ~]$ docker image rm my-centos:dev
Only the tag gets deleted, not the actual image. The image layers are still referenced by another tag.
Conclusion
In this exercise, we practiced tagging images and exchanging them on the public registry. The namespacing rules for images on registries are mandatory: user-generated images to be exchanged on the public registry must be named like <Docker ID>/<repo name>[:<optional tag>]; official images in the Docker registry just have the repo name and tag.
Note
|
Also note that as we saw when building images, image names and tags are just pointers; deleting an image with docker image rm just deletes that pointer if the corresponding image layers are still being referenced by another such pointer. Only when the last pointer is deleted are the image layers actually destroyed by docker image rm.
|
10. Database Volumes
By the end of this exercise, you should be able to:
- Provide a docker volume as a database backing to Postgres
- Recover a Postgres database from volume contents after destroying the original Postgres container
Launching Postgres
Download a postgres image, and inspect it to determine its default volume usage:
[user@node ~]$ docker image pull postgres:9-alpine
[user@node ~]$ docker image inspect postgres:9-alpine
...
"Volumes": {
"/var/lib/postgresql/data": {}
},
...
You should see a Volumes block like the above, indicating that those paths in the container filesystem will get volumes automatically mounted to them when a container is started based on this image.
Set up a running instance of this postgres container:
[user@node ~]$ docker container run --name some-postgres \
-v db_backing:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=password \
-d postgres:9-alpine
Notice the explicit volume mount, -v db_backing:/var/lib/postgresql/data; if we hadn't done this, a randomly named volume would have been mounted to the container's /var/lib/postgresql/data. Naming the volume explicitly is a best practice that will become useful when we start mounting this volume in multiple containers.
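If you're curious where this data actually lives on the host, the named volume can be listed and inspected directly; the Mountpoint reported (typically somewhere under /var/lib/docker/volumes) will vary by host:
[user@node ~]$ docker volume ls
[user@node ~]$ docker volume inspect db_backing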
Writing to the Database
The psql command line interface to postgres comes packaged with the postgres image; spawn it as a child process in your postgres container interactively, to create a postgres terminal:
[user@node ~]$ docker container exec \
-it some-postgres psql -U postgres
Create an arbitrary table in the database:
postgres=# CREATE TABLE PRODUCTS(PRICE FLOAT, NAME TEXT);
postgres=# INSERT INTO PRODUCTS VALUES('18.95', 'widget');
postgres=# INSERT INTO PRODUCTS VALUES('1.45', 'sprocket');
Double check you created the table you expected, and then quit this container:
postgres=# SELECT * FROM PRODUCTS;
price | name
---------+-----------
18.95 | widget
1.45 | sprocket
(2 rows)
postgres=# \q
Delete the postgres container:
[user@node ~]$ docker container rm -f some-postgres
Create a new postgres container, mounting the db_backing volume just like last time:
[user@node ~]$ docker container run \
--name some-postgres \
-v db_backing:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=password \
-d postgres:9-alpine
Reconnect a psql interface to your database, also like before:
[user@node ~]$ docker container exec \
-it some-postgres psql -U postgres
List the contents of the PRODUCTS table:
postgres=# SELECT * FROM PRODUCTS;
The contents of the database have survived the deletion and recreation of the database container; this would not have been true if the database was keeping its data in the writable container layer. As above, use \q to quit from the postgres prompt.
Conclusion
Whenever data needs to live longer than the lifecycle of a container, it should be pushed out to a volume outside the container’s filesystem; numerous popular databases are containerized using this pattern.
11. Introduction to Container Networking
By the end of this exercise, you should be able to:
- Create docker bridge networks and attach containers to them
- Design networks of containers that can successfully resolve each other via DNS and reach each other across a Docker software defined network
Inspecting the Default Bridge
In the dropdown menu at the top of the Strigo webpage, click into node-1. See what networks are present on your host:
[centos@node-1 ~]$ docker network ls
You should have entries for host, none, and bridge.
Find some metadata about the default bridge network:
[centos@node-1 ~]$ docker network inspect bridge
Note
|
Note especially the private subnet assigned by Docker’s IPAM driver to this network. The first IP in this range is used as the network’s gateway, and the rest will be assigned to containers as they join the network. |
See similar info from common networking tools:
[centos@node-1 ~]$ ip addr
Note
|
Note the bridge network’s gateway corresponds to the IP of the docker0 device in this list. docker0 is the linux bridge itself, while bridge is the name of the default Docker network that uses that bridge.
|
Use brctl to see connections to the docker0 bridge:
[centos@node-1 ~]$ brctl show docker0
bridge name bridge id STP enabled interfaces
docker0 8000.02427f12c30b no
At the moment, there are no connections to docker0.
Connecting Containers to docker0
Start a container and reexamine the network; the container is listed as connected to the network, with an IP assigned to it from the bridge network’s subnet:
[centos@node-1 ~]$ docker container run --name u1 -dt centos:7
[centos@node-1 ~]$ docker network inspect bridge
...
"Containers": {
"11da9b7db065f971f78aebf14b706b0b85f07ec10dbf6f0773b1603f48697961": {
"Name": "u1",
"EndpointID": "670c495...",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
...
Inspect the network interfaces with ip and brctl again, now that you have a container running:
[centos@node-1 ~]$ ip addr
...
5: veth6f244c3@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
master docker0 state UP
link/ether aa:71:82:6c:f3:88 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::a871:82ff:fe6c:f388/64 scope link
valid_lft forever preferred_lft forever
[centos@node-1 ~]$ brctl show docker0
bridge name bridge id STP enabled interfaces
docker0 8000.02427f12c30b no veth6f244c3
ip addr indicates a veth endpoint has been created and plugged into the docker0 bridge, as indicated by master docker0, and that it is connected to device index 4 in this case (indicated by the @if4 suffix to the veth device name above). Similarly, brctl now shows this veth connection on docker0 (notice that the ID for the veth connection matches in both utilities).
Launch a bash shell in your container, and look for the eth0 device therein:
[centos@node-1 ~]$ docker container exec -it u1 bash
[root@11da9b7db065 /]# yum install -y iproute
[root@11da9b7db065 /]# ip addr
...
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
We see that the eth0 device in this namespace is in fact the device that the veth connection in the host namespace indicated it was attached to, and vice versa - eth0@if5 indicates it is plugged into networking interface number 5, which we saw above was the other end of the veth connection. Docker has created a veth connection with one end in the host's docker0 bridge, and the other providing the eth0 device in the container.
Defining Additional Bridge Networks
In the last step, we investigated the default bridge network; now let’s try making our own. User defined bridge networks work exactly the same as the default one, but provide DNS lookup by container name, and are firewalled from other networks by default.
Create a bridge network by using the bridge driver with docker network create:
[centos@node-1 ~]$ docker network create --driver bridge my_bridge
Launch a container connected to your new network via the --network flag:
[centos@node-1 ~]$ docker container run --name=u2 --network=my_bridge -dt centos:7
Use the inspect command to investigate the network settings of this container:
[centos@node-1 ~]$ docker container inspect u2
my_bridge should be listed under the Networks key.
Launch another container, this time interactively:
[centos@node-1 ~]$ docker container run --name=u3 --network=my_bridge -it centos:7
From inside container u3, ping u2 by name: ping u2. The ping succeeds, since Docker is able to resolve container names when they are attached to a custom network.
Try starting a container on the default network, and pinging u1 by name:
[centos@node-1 ~]$ docker container run centos:7 ping u1
ping: u1: Name or service not known
The ping fails; even though the containers are both attached to the bridge network, Docker does not provide name lookup on this default network. Try the same command again, but using u1's IP instead of its name, and you should be successful.
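If you'd rather not scan the full inspect output for that IP, one possible shortcut (assuming u1 is attached to the default bridge network) is the --format flag we saw earlier:
[centos@node-1 ~]$ docker container inspect \
--format '{{.NetworkSettings.Networks.bridge.IPAddress}}' u1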
Finally, try pinging u1 by IP, this time from container u2:
[centos@node-1 ~]$ docker container exec u2 ping <u1 IP>
The ping fails, since the containers reside on different networks; all Docker networks are firewalled from each other by default.
Clean up your containers and networks:
[centos@node-1 ~]$ docker container rm -f $(docker container ls -aq)
[centos@node-1 ~]$ docker network rm my_bridge
Conclusion
In this exercise, you explored the fundamentals of container networking. The key take away is that containers on separate networks are firewalled from each other by default. This should be leveraged as much as possible to harden your applications; if two containers don’t need to talk to each other, put them on separate networks.
You also explored a number of API objects:
- docker network ls lists all networks on the host
- docker network inspect <network name> gives more detailed info about the named network
- docker network create --driver <driver> <network name> creates a new network using the specified driver; so far, we've only seen the bridge driver, for creating a linux bridge based network
- docker network connect <network name> <container name or id> connects the specified container to the specified network after the container is running; the --network flag in docker container run achieves the same result at container launch (see the sketch after this list)
- docker container inspect <container name or id> yields, among other things, information about the networks the specified container is connected to
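As a brief sketch of that connect command (hypothetical, since we just removed u1 and my_bridge): connecting an already-running container to a user-defined network makes it resolvable by name to the other containers on that network:
[centos@node-1 ~]$ docker network connect my_bridge u1
[centos@node-1 ~]$ docker container exec u3 ping -c 3 u1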
12. Container Port Mapping
By the end of this exercise, you should be able to:
- Forward traffic from a port on the docker host to a port inside a container's network namespace
- Define ports to automatically expose in a Dockerfile
Port Mapping at Runtime
Run an nginx container with no special port mappings:
[centos@node-1 ~]$ docker container run -d nginx
nginx stands up a landing page at <ip>:80. If you try to visit this at your host or container's IP, it won't be visible; no external traffic can make it past the linux bridge's firewall to the nginx container.
Now run an nginx container and map port 80 on the container to port 5000 on your host using the -p flag:
[centos@node-1 ~]$ docker container run -d -p 5000:80 nginx
Note
|
Note that the syntax is: -p [host-port]:[container-port] .
|
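The host side of the mapping can optionally be pinned to a single interface, and a protocol can be appended to the container port; for example (port numbers illustrative):
[centos@node-1 ~]$ docker container run -d -p 127.0.0.1:5001:80 nginx
[centos@node-1 ~]$ docker container run -d -p 5002:80/tcp nginx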
Verify the port mappings with the docker container port command:
[centos@node-1 ~]$ docker container port <container id>
80/tcp -> 0.0.0.0:5000
Visit your nginx landing page at <host ip>:5000, e.g. using curl -4 localhost:5000, just to confirm it's working as expected.
Exposing Ports from the Dockerfile
In addition to manual port mapping, we can expose some ports for automatic port mapping on container startup using a Dockerfile. In a new directory ~/port, create a Dockerfile:
FROM nginx
EXPOSE 80
Build your image as my_nginx:
[centos@node-1 port]$ docker image build -t my_nginx .
Use the -P flag when running to map all ports mentioned in the EXPOSE directive:
[centos@node-1 port]$ docker container run -d -P my_nginx
Use docker container ls or docker container port to find out which host ports were used, then visit your nginx landing page in a browser at <node-1 public IP>:<port>.
.
Clean up your containers:
[centos@node-1 port]$ docker container rm -f $(docker container ls -aq)
Conclusion
In this exercise, we saw how to explicitly map ports from our container's network stack onto ports of our host at runtime with the -p option to docker container run, or more flexibly in our Dockerfile with EXPOSE, which results in the listed ports inside our container being mapped to random available ports on our host when the -P flag is used. In both cases, Docker is writing iptables rules to forward traffic from the host to the appropriate port in the container's network namespace.
13. Starting a Compose App
By the end of this exercise, you should be able to:
- Read a basic docker compose yaml file and understand what components it is declaring
- Start, stop, and inspect the logs of an application defined by a docker compose file
Inspecting a Compose App
Download the Dockercoins app from github:
[user@node ~]$ git clone -b ee3.0 \
https://github.com/docker-training/orchestration-workshop.git
[user@node ~]$ cd orchestration-workshop/dockercoins
This app consists of 5 services: a random number generator rng, a hasher, a backend worker, a redis queue, and a web frontend; the code you just downloaded has the source code for each process and a Dockerfile to containerize each of them.
Have a brief look at the source for each component of your application. Each folder under ~/orchestration-workshop/dockercoins contains the application logic for the component, and a Dockerfile for building that logic into a Docker image. We've pre-built these images as training/dockercoins-rng:1.0, training/dockercoins-worker:1.0, et cetera, so there is no need to build them yourself.
Have a look in docker-compose.yml; especially notice the services section. Each block here defines a different Docker service. They each have exactly one image which containers for this service will be started from, as well as other configuration details like network connections and port exposures. Full syntax for Docker Compose files can be found here: link::https://dockr.ly/2iHUpeX[Docker Compose Specification].
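If you'd like to see the single fully-resolved configuration Compose will act on (with the file validated and any variable substitution applied), Compose can print it for you:
[user@node dockercoins]$ docker-compose config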
Starting the App
Stand up the app:
[user@node dockercoins]$ docker-compose up
After a moment, your app should be running; visit <node 0 public IP>:8000 to see the web frontend visualizing your rate of Dockercoin mining.
Logs from all the running services are sent to STDOUT. Let's send this to the background instead; kill the app with CTRL+C, sending a SIGTERM to all running processes; some exit immediately, while others wait for a 10 second timeout before being killed by a subsequent SIGKILL. Start the app again in the background:
[user@node dockercoins]$ docker-compose up -d
Check out which containers are running thanks to Compose:
[user@node dockercoins]$ docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------
dockercoins_hasher_1 ruby hasher.rb Up 0.0.0.0:8002->80/tcp
dockercoins_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
dockercoins_rng_1 python rng.py Up 0.0.0.0:8001->80/tcp
dockercoins_webui_1 node webui.js Up 0.0.0.0:8000->80/tcp
dockercoins_worker_1 python worker.py Up
Compare this to the usual docker container ls; do you notice any differences? If not, start a couple of extra containers using docker container run, and check again.
Viewing Logs
See logs from a Compose-managed app via:
[user@node dockercoins]$ docker-compose logs
The logging API in Compose follows the main Docker logging API closely. For example, try following the tail of the logs just like you would for regular container logs:
[user@node dockercoins]$ docker-compose logs --tail 10 --follow
Note that when following a log, CTRL+S and CTRL+Q pause and resume live following; CTRL+C exits follow mode as usual.
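You can also scope the log output to a single service by naming it after the command, which is handy once many containers are writing interleaved output; for example:
[user@node dockercoins]$ docker-compose logs --tail 10 --follow worker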
Conclusion
In this exercise, you saw how to start a pre-defined Compose app, and how to inspect its logs. Application logic was defined in each of the five images we used to create containers for the app, but the manner in which those containers were created was defined in the docker-compose.yml file; all runtime configuration for each container is captured in this manifest. Finally, the different elements of Dockercoins communicated with each other via service name; the Docker daemon’s internal DNS was able to resolve a service name into the IP address of a corresponding container.
14. Scaling a Compose App
By the end of this exercise, you should be able to:
-
Scale a service from Docker Compose up or down
Scaling a Service
Any service defined in our docker-compose.yml can be scaled up from the Compose API; in this context, 'scaling' means launching multiple containers for the same service, which Docker Compose can route requests to and from.
Scale up the worker service in our Dockercoins app to have two workers generating coin candidates by redeploying the app with the --scale flag, while checking the list of running containers before and after:
[user@node dockercoins]$ docker-compose ps
[user@node dockercoins]$ docker-compose up -d --scale worker=2
[user@node dockercoins]$ docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------
dockercoins_hasher_1 ruby hasher.rb Up 0.0.0.0:8002->80/tcp
dockercoins_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
dockercoins_rng_1 python rng.py Up 0.0.0.0:8001->80/tcp
dockercoins_webui_1 node webui.js Up 0.0.0.0:8000->80/tcp
dockercoins_worker_1 python worker.py Up
dockercoins_worker_2 python worker.py Up
A new worker container has appeared in your list of containers.
Look at the performance graph provided by the web frontend; the coin mining rate should have doubled. Also check the logs using the logging API we learned in the last exercise; you should see a second worker instance reporting.
Investigating Bottlenecks
Try running top to inspect the system resource usage; it should still be fairly negligible. So, keep scaling up your workers:
[user@node dockercoins]$ docker-compose up -d --scale worker=10
[user@node dockercoins]$ docker-compose ps
Check your web frontend again; has going from 2 to 10 workers provided a 5x performance increase? It seems that something else is bottlenecking our application; any distributed application such as Dockercoins needs tooling to understand where the bottlenecks are, so that the application can be scaled intelligently.
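One quick, Docker-native way to get per-container resource numbers (rather than whole-host top) is docker stats; a one-shot snapshot looks like this:
[user@node dockercoins]$ docker stats --no-stream
If none of the containers are anywhere near saturating a CPU, the bottleneck is likely to be elsewhere, which is what we investigate next.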
Look in docker-compose.yml at the rng and hasher services; they’re exposed on host ports 8001 and 8002, so we can use httping to probe their latency.
[user@node dockercoins]$ httping -c 5 localhost:8001
[user@node dockercoins]$ httping -c 5 localhost:8002
rng on port 8001 has the much higher latency, suggesting that it might be our bottleneck. A random number generator based on entropy won’t get any better by starting more instances on the same machine; we’ll need a way to bring more nodes into our application to scale past this, which we’ll explore in the next unit on Docker Swarm.
For now, shut your app down:
[user@node dockercoins]$ docker-compose down
Conclusion
In this exercise, we saw how to scale up a service defined in our Compose app using the --scale flag. Also, we saw how crucial it is to have detailed monitoring and tooling in a microservices-oriented application, in order to correctly identify bottlenecks and take advantage of the simplicity of scaling with Docker.
15. Cleaning up Docker Resources
By the end of this exercise, you should be able to:
-
Assess how much disk space docker objects are consuming
-
Use docker prune commands to clear out unneeded docker objects
-
Apply label-based filters to prune commands to control what gets deleted in a cleanup operation
Find out how much disk space Docker is using by executing:
[centos@node-3 ~]$ docker system df
The output will show us how much space images, containers and local volumes are occupying and how much of this space can be reclaimed.
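If you want a per-object breakdown rather than just the totals, the same command has a verbose mode:
[centos@node-3 ~]$ docker system df -v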
Reclaim all reclaimable space by using the following command:
[centos@node-3 ~]$ docker system prune
Answer with y when asked if we really want to remove all unused networks, containers, images and volumes.
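Note that on recent Docker versions, docker system prune does not touch volumes unless you explicitly include them, and only dangling images are removed unless you add -a; the more aggressive variants look like this (use with care):
[centos@node-3 ~]$ docker system prune --volumes
[centos@node-3 ~]$ docker system prune -a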
Create a couple of containers with labels (these will exit immediately; why?):
[centos@node-3 ~]$ docker container run --label apple --name fuji -d alpine
[centos@node-3 ~]$ docker container run --label orange --name clementine -d alpine
Delete only those stopped containers bearing the apple label:
[centos@node-3 ~]$ docker container ls -a
[centos@node-3 ~]$ docker container prune --filter 'label=apple'
[centos@node-3 ~]$ docker container ls -a
Only the container named clementine should remain after the targeted prune.
Finally, prune containers launched before a given timestamp using the until filter; start by getting the current RFC 3339 time (link::https://tools.ietf.org/html/rfc3339[RFC 3339] - note Docker requires the otherwise optional T separating date and time), then creating a new container:
[centos@node-3 ~]$ TIMESTAMP=$(date --rfc-3339=seconds | sed 's/ /T/')
[centos@node-3 ~]$ docker container run --label tomato --name beefsteak -d alpine
And use the timestamp returned in a prune:
[centos@node-3 ~]$ docker container prune -f --filter "until=$TIMESTAMP"
[centos@node-3 ~]$ docker container ls -a
Note the -f flag, which suppresses the confirmation step. The label and until filters for pruning are also available for networks and images, while data volumes can only be selectively pruned by label; finally, images can also be pruned by the boolean dangling key, which indicates whether the image is untagged.
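As an example of the until filter applied to images, the following would remove dangling (untagged) images created more than 24 hours ago:
[centos@node-3 ~]$ docker image prune --filter "until=24h"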
Conclusion
In this exercise, we saw some very basic docker prune usage - most of the top-level docker objects have a prune command (docker container prune, docker volume prune etc). Most docker objects leave something on disk even after being shut down; consider using these cleanup commands as part of your cluster maintenance and garbage collection plan, to avoid accidentally running out of disk on your Docker hosts.
16. Inspection Commands
By the end of this exercise, you should be able to:
-
Gather system level info from the docker engine
-
Consume and format the docker engine’s event stream for monitoring purposes
Inspecting System Information
We can find the info command under system. Execute:
[centos@node-3 ~]$ docker system info
This provides some high-level information about the docker deployment on the current node, and the node itself. From this output, identify:
-
how many images are cached on your machine?
-
how many containers are running or stopped?
-
what version of containerd are you running?
-
whether Docker is running in swarm mode
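If you only need one of these values, info also accepts a Go template via --format; for example (the field names here come from the engine’s info structure, so treat them as illustrative):
[centos@node-3 ~]$ docker system info --format '{{.ServerVersion}}'
[centos@node-3 ~]$ docker system info --format '{{.Swarm.LocalNodeState}}'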
Monitoring System Events
There is another powerful system command that allows us to monitor what’s happening on the Docker host. Execute the following command:
[centos@node-3 ~]$ docker system events
Note that although the command appears to hang, it is not stuck; it is simply waiting for events to happen.
Open a second connection to node-3 and execute the following command:
[centos@node-3 ~]$ docker container run --rm alpine echo 'Hello World!'
and observe the generated output in the first terminal. It should look similar to this:
2017-01-25T16:57:48.553596179-06:00 container create 30eb63 ...
2017-01-25T16:57:48.556718161-06:00 container attach 30eb63 ...
2017-01-25T16:57:48.698190608-06:00 network connect de1b2b ...
2017-01-25T16:57:49.062631155-06:00 container start 30eb63 ...
2017-01-25T16:57:49.065552570-06:00 container resize 30eb63 ...
2017-01-25T16:57:49.164526268-06:00 container die 30eb63 ...
2017-01-25T16:57:49.613422740-06:00 network disconnect de1b2b ...
2017-01-25T16:57:49.815845051-06:00 container destroy 30eb63 ...
Granular information about every action taken by the Docker engine is presented in the events stream.
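The stream can also be narrowed down at the source; for example, to watch only container events starting from ten minutes ago:
[centos@node-3 ~]$ docker system events --filter type=container --since 10m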
If you don’t like the format of the output, you can use the --format parameter to define your own format in the form of a link::https://golang.org/pkg/text/template/[Go template]. Stop the events watch on your first terminal with CTRL+C, and try this:
[centos@node-3 ~]$ docker system events --format '--> {{.Type}}-{{.Action}}'
Now the output looks a little bit less cluttered when we run our alpine container on the second terminal as above.
Finally we can find out what the event structure looks like by outputting the events in json format (once again after killing the events watcher on the first terminal and restarting it with):
[centos@node-3 ~]$ docker system events --format '{{json .}}' | jq
which should give us, for the first event in the series after re-running our alpine container on the other connection to node-3, something like this (note: the output has been prettified for readability):
{
"status":"create",
"id":"95ddb6ed4c87d67fa98c3e63397e573a23786046e00c2c68a5bcb9df4c17635c",
"from":"alpine",
"Type":"container",
"Action":"create",
"Actor":{
"ID":"95ddb6ed4c87d67fa98c3e63397e573a23786046e00c2c68a5bcb9df4c17635c",
"Attributes":{
"image":"alpine",
"name":"sleepy_roentgen"
}
},
"time":1485385702,
"timeNano":1485385702748011034
}
Conclusion
In this exercise we have learned how to inspect system-wide properties of our Docker host by using the docker system info command; this is one of the first places to look for general config information to include in a bug report. We also saw a simple example of docker system events; the events stream is one of the primary sources of information that should be logged and monitored when running Docker in production. Many commercial as well as open source products (such as Elastic Stack) exist to facilitate aggregating and mining these streams at scale.
17. Appendix
A Brief Introduction to Bash
If you’re not familiar with common Linux commands (cd, ls, ps, sudo, etc.), this exercise will lead you through the common Linux commands that we use in the Docker training.
By the end of this exercise, you should be able to:
-
Manage files and folders using Linux commands
-
Inspect processes running on your machine and use some discovery tools
Pre-requisites:
-
A Linux machine.
Managing Files and Folders
In this first part we will walk through common commands for creating and manipulating files and folders from the command line.
Open a terminal.
Terminals work by accepting text-based commands and returning text-based responses. Ask the terminal what directory you are currently in; in all our instructions, $ represents the terminal prompt, and you should enter what you see after the prompt. Yours might look a bit different; in any case, type pwd:
$ pwd
pwd stands for Print Working Directory. You should see something like:
/Users/ckaserer
This means your terminal is currently in the ckaserer folder, which itself is in the /Users folder (yours will have different folder names, but the logic is the same).
List the files and folders in your current directory:
$ ls
Applications Documents Library Music
Projects Desktop
Here we show some typical output below the command; again, you only need to enter what comes after the command prompt, ls in this case. ls lists all the contents of your current directory.
Change directory with cd:
$ cd Desktop
Again, your directories might be named differently - that’s ok. Try to navigate to a directory whose name you recognize; Desktop or Downloads are common.
Use ls again to see what’s in this directory. Compare what you see in your Desktop directory to what you actually see on your machine’s desktop; the contents are identical. The terminal is just another way to browse the contents of your machine, based on programmatic text rather than visual analogies.
Make a new folder on your desktop (or wherever you currently are):
$ mkdir demo-dir
If you do ls again, you’ll see a new directory demo-dir has been made.
Change directory to demo-dir, and open a new plain text file called demo.txt using the simple text editor nano:
$ cd demo-dir
$ nano demo.txt
You’re taken to nano’s text-based interface for creating a plain-text file. There are many other plain-text editors available, the most popular being vim, but nano is probably the simplest if this is your first time using a text editor from the command line.
Add any text you want to your file, just by typing something.
Save your file by pressing CTRL+O, and pressing return when asked for a filename. Exit by typing CTRL+X.
Dump the contents of your file to the screen:
$ cat demo.txt
Make a copy of your file with cp:
$ cp demo.txt demo.copy.txt
Move and rename your file with mv:
$ mv demo.txt demo.moved.txt
Delete a file:
$ rm demo.moved.txt
Check what directory you’re in, then back up one level with the special .. path, and check again:
$ pwd
/Users/ckaserer/Desktop/demo-dir
$ cd ..
$ pwd
/Users/ckaserer/Desktop
cd .. took us one level up in our directory tree, from demo-dir back out to the directory that contains it, Desktop in my example.
Delete your demo-dir, again with rm, but this time using the -r flag, which stands for recursive, in order to delete the folder and all its contents:
$ rm -r demo-dir
Some Common Tools
In this section we will see the basic usage of different command line tools we’ll see again when working with Docker.
Inspecting Processes
In the next steps we will use ps and top. First, let’s see how to check whether a given tool exists on your system:
$ which pwd
which is a quick way to identify the location of executables and should return something similar to the following:
/bin/pwd
The pwd command we used above is actually a tiny program, and it lives in our filesystem at the path indicated by which. If which returns nothing, that means the requested program probably isn’t installed on your machine.
List all processes running in your system:
$ ps -aux
A long list of processes is returned.
We will see later how to filter this output to extract only the information we need.
There are various other tools for inspecting processes. For example, try showing the list of processes with live usage details (CPU, memory, etc.):
$ top
Exit with CTRL+C.
The Superuser
In some cases you want to run commands that require privileges that you don’t have.
Try creating a new user:
$ adduser myName
This should return Permission denied.
Run the same command with sudo:
$ sudo adduser myName
sudo runs the command with the privileges of the superuser.
List the existing users:
$ cat /etc/passwd
You should see the user myName at the bottom of the list.
Pinging an Address
ping is a tool used to test the reachability of a network address. Send a ping to your localhost:
$ ping localhost
The ping should be successful. Interrupt it with CTRL+C.
Try pinging an unreachable address:
$ ping -c 3 192.168.1.1
The -c flag sets the count of packets to send. If no device on your network has this address, the request should time out after 3 packets with 100% packet loss.
Making HTTP Requests
Use the curl command to issue HTTP requests across the network. curl an example webpage:
$ curl example.com
You’ll get some HTML corresponding to a dummy webpage, downloaded directly to your terminal.
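If you only want the response headers rather than the page body, curl can send a HEAD request instead:
$ curl -I example.com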
Working with Commands
Command piping
So far, every command we’ve used has accepted text as input and returned text as output. We can send the text output from one command into the text input of another command using a pipe, |.
Earlier we saw the ps command, which writes a large table of all the processes running on our machine. We can send that table to grep, a text search tool that will pick out lines containing something we’re interested in:
$ ps -aux | grep 'ps'
Rather than getting every process on the system, we can just pick out the ps process by text-searching for it using grep.
Another common grep usage is with cat, to find a string in a file. Search your /etc/passwd file for the root user:
$ cat /etc/passwd | grep root
Instead of getting every user on the system, only lines containing the string root are printed out, making it easier to find what you’re looking for.
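As a side note, grep can also read files directly, so the same search works without the pipe; which form you use is mostly a matter of taste:
$ grep root /etc/passwd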
Successive Commands
We can run several commands in a one-liner using the semicolon ; or the double-ampersand &&.
Create a new directory:
$ mkdir newDir
We know that to remove a directory we need to use rm -r. Let’s simulate an error by forgetting the -r flag, and immediately after the remove we’ll create a new directory with the same name:
$ rm newDir ; mkdir newDir
This should return:
rm: cannot remove ‘newDir’: Is a directory
mkdir: cannot create directory ‘newDir’: File exists
The semicolon ; runs the second command even when the first command wasn’t successful.
Combining commands with the double-ampersand && ensures that the second command will run only if the first command was successful. Try the following:
$ rm newDir && mkdir newDir
This should return the error for the first command only:
rm: cannot remove ‘newDir’: Is a directory
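To see the success path, put the -r flag back in; because the first command now succeeds, the second one runs as well and the directory is recreated:
$ rm -r newDir && mkdir newDir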
Breaking long commands
Some commands get too long, for example because they include a long file path or combine several commands into a one-liner. For better readability, you can break a long command across several lines using a backslash:
$ mkdir aDirectoryWithAVeryLongName ; \
cd aDirectoryWithAVeryLongName ; \
echo "this is a test file" > myTestFile ; \
cat myTestFile ; cd ..
This one-liner will create a directory, cd into it, create a file, cat the contents of the file, and finally change directory one level up.
Conclusion
We saw most of the Linux commands that we will use in the actual Docker training. Feel free to discover more commands in the following cheat sheet: link::https://www.git-tower.com/blog/command-line-cheat-sheet/[Command Line Cheat Sheet]