Get up to speed with Docker

Other posts in this series: Docker for developers

A container is to a VM today what a VM was to a physical server a while ago. The workload seems to be shifting towards containers, and fast! In case you haven’t started ramping up on it yet, you may find it a bit overwhelming to begin with.

What is a Virtual Machine, and why is it used?

Not too long ago in an enterprise setup, there were plenty of servers in the datacenter idling around, since they hosted applications that were not fully utilizing the server resources. This wastage of computing resources is why solutions like virtual machines (or VMs) cropped up.

For simplicity, just assume that every major application you have needs ONE full-blown server to provide decent isolation.

With VMs, instead of one application per physical server, you can have multiple applications on the same server using virtual machines. All the VMs work and feel exactly like the physical server, but with fewer resources! For instance, if an application on a server was using 20% of its resources, you can host up to 5 similar applications on a single physical server.

VMs were a smart idea at the time, but they still led to a lot of wasted resources like disk, memory and CPU. This was because every VM needed to have hardware allocated explicitly to it. For example, if the physical server had 1 TB of disk and 16 CPU cores, you could spin up 4 VMs with 250 GB and 4 CPU cores each.

What is a Container?

Containers take this to the next level, so you can consider them an evolutionary step. Whereas VMs helped reduce the physical footprint of a server in terms of space, installation, CAPEX, OPEX, etc., the fundamental bloat still existed... namely, the Operating System!

Every VM still had the OS installed, and you still had to manage and allocate resources for the entire OS! In simple terms, it didn’t help with the licensing cost involved with the operating systems at all.

You can consider a container an Application System (in contrast to an Operating System). Practically speaking, a container is where you put all your application code along with its dependencies, packed in such a way that it will execute on any Linux box. From a 30,000-foot architectural view, there is just one Linux OS instance running, with multiple containers on top.

A container, hence, is very lightweight, and spinning it up from scratch takes seconds instead of minutes! In fact, it ends up consuming far fewer hardware resources than VMs and is a lot more portable. Although not a panacea, it is a boon for today’s software solutions.

Containers provide you with an isolated view of the file system, and you can play around with your view of the container. Other containers on the same server won’t get affected at all. This helps developers a lot, since in today’s open source world, developers tend to work with different versions of different libraries and frameworks.

This also makes your deployment a breeze, since all you need to do now is ensure that your software works properly in your container. If it works locally, it simply continues to work on any other Linux box, and this is a big, Big, BIG relief! The “works on my box” syndrome is no longer an issue with a container backing you 😉

This isolation applies to process trees too. This way, a process in one container cannot kill a process in another container. How cool is that!?!

Oh, and before I forget, the same isolation applies to the networking stack, which means you can have different routing tables, IPs, etc. on the same server in different containers. Containers are able to do this magic using namespaces.
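
If you want to see this isolation for yourself once Docker is installed (covered below), a tiny sketch using the stock ubuntu image from Docker Hub could look like this. Each run gets its own namespaces, so each container reports a different hostname:

# run the same command twice; each container prints its own auto-generated hostname
$docker run --rm ubuntu hostname
$docker run --rm ubuntu hostname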

What is Docker?

Docker is one of the best container technologies. However, it is not just a container! It is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.

As mentioned before, once you Dockerize an application, you can move it to AWS, Azure, Digital Ocean or any host provider of your choice. In fact, if you like, you can keep it on-premises in your own Data Center.

Where can you run Docker?

Docker runs natively on Linux, but you can install it on OS X and Windows too, and that’s great. Keep in mind, though, that installing Docker on Windows or OS X doesn’t mean that you can run Linux apps directly on Windows. All it means is that you will spin up a Linux VM on OS X or Windows, which in turn will host the Docker containers for you.

Installation

You can install Docker on Mac, Windows, Ubuntu, CentOS, and other supported platforms by following the respective installation guides. As mentioned earlier, because the Docker daemon uses Linux-specific kernel features, you can’t run Docker natively in Windows or OS X. Instead, you must use docker-machine to create and attach to a virtual machine (VM). This machine is a Linux VM that hosts Docker for you on your Mac or Windows.

For a Docker installation on Linux, your physical machine is both the localhost and the Docker host. In networking, localhost means your computer. The Docker host is the computer on which the containers run.

In an OS X installation, the docker daemon is running inside a Linux VM called default.


Installation of Docker Toolbox gives you:

  • Docker Client – the docker binary
  • Docker Machine – the docker-machine binary
  • Docker Compose – the docker-compose binary
  • Kitematic – a desktop GUI for Docker
  • Docker Quickstart Terminal app

Once you start the setup and reach the Quick Start stage, click on either of the icons. I chose Docker Quickstart Terminal.

Immediately, a terminal spawned and started downloading some files and setting things up. Notice that the boot2docker.iso file got downloaded and installed in my local profile, in a hidden folder called .docker

This download is a one-time process. If you encounter any error, it will most likely be because of an existing VirtualBox installation. I have found that the best way is to simply remove VirtualBox and retry the installation of Docker Toolbox. It has a built-in copy of VirtualBox and does the rest for you.

How to verify if the installation is successful?

When you start VirtualBox, you should see a VM created for you named default, and its status should be Running. You can also issue the following command and check the output.

$docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Image is up to date for hello-world:latest

Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

 https://hub.docker.com

For more examples and ideas, visit:

 https://docs.docker.com/userguide/

It is quite amazing that steps 1–4 took just a few seconds to execute.

Playing Around

Let’s play around with Docker a bit.

Display Docker Information

The Docker information command is useful, and you should use its output while discussing your issues with someone, or maybe when asking questions on StackOverflow and the like.

$docker info

Output looks similar to this:

Containers: 1
Images: 2
Server Version: 1.9.1
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 4
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: BY7J:ERQH:OT7K:CHW5:LXOF:WOK5:763U:ONPI:7J3A:27SF:7GB7:JQ7G
Debug mode (server): true
 File Descriptors: 13
 Goroutines: 22
 System Time: 2015-12-13T08:03:59.387979964Z
 EventsListeners: 1
 Init SHA1: 
 Init Path: /usr/local/bin/docker
 Docker Root Dir: /mnt/sda1/var/lib/docker
Labels:
 provider=virtualbox

Display version of Client and Server

If you want the version of Docker, you can use the following command:

$docker -v

If you would like to get the version of the Docker client as well as the server, use this:

$docker version

On my Mac, I got:

Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      linux/amd64

Show all containers

If you run the following you will see just the running containers.

$docker ps

However, for most practical purposes, I prefer using:

$docker ps -als
  • a – Shows all containers
  • l – Shows latest containers
  • s – Shows the information with Size

Output is similar to:

CONTAINER ID IMAGE       COMMAND   CREATED     STATUS              NAMES        SIZE
0ce16565063d hello-world "/hello"  5 mins ago  Exited(0)5 mins ago tender_boyd  0 B (virtual 972 B)

Show all images

Just like containers, you can list all images using:

$docker images -a

This will show you all images, including intermediate images (without the -a switch they remain hidden).

Output is similar to the following:

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
hello-world         latest              0a6ba66e537a        8 weeks ago         960 B
<none>              <none>              b901d36b6f2f        8 weeks ago         960 B

Containers and Images? What’s the difference??

I have been talking about containers all through the series so far. An image is what makes containers possible. To use an analogy (from a developer’s perspective), if an image is a class, then a container is an instance of that class!
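
To make that analogy concrete, here is a tiny sketch using the hello-world image you already pulled (the container names are just illustrative). One image can back any number of containers:

$docker run --name instance1 hello-world
$docker run --name instance2 hello-world
$docker ps -a

The last command should list two separate containers, both created from the same hello-world image.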

The Docker Toolbox

So far, we have been playing around with Docker and learnt some random but important commands. Let’s dive a bit deeper and learn some basic concepts and start from what you have already installed... The Docker Toolbox! The toolbox contains the following.

VirtualBox

VirtualBox is used to host your Linux VM. The default VM is created automatically for you, and the Docker daemon is initialised and set up inside the VM. Your host’s client talks to the daemon on the VM and returns the output to your terminal.

Docker Machine

So the story starts with a VM. Use the following command to create a brand new Docker Machine or VM.

$docker-machine create --driver virtualbox newdefault

Here, you are asking Docker to provision a new VM called newdefault using the VirtualBox driver. In a few moments (typically less than 5 minutes), you will have your VM ready.

Running pre-create checks...
Creating machine...
(newdefault) Creating VirtualBox VM...
(newdefault) Creating SSH key...
(newdefault) Starting VM...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect Docker to this machine, run: docker-machine env newdefault

Notice the last line of the output above. It asks you to run the following command to get details about the VM’s environment.

$docker-machine env newdefault

The output is similar to:

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/<profile_name>/.docker/machine/machines/newdefault"
export DOCKER_MACHINE_NAME="newdefault"
# Run this command to configure your shell: 
# eval "$(docker-machine env newdefault)"

Notice that this certificate path is on your host, inside the hidden .docker folder. To list all the Docker machines, you can use:

$docker-machine ls

NAME         ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS
default      *        virtualbox   Running   tcp://192.168.99.100:2376           
newdefault   -        virtualbox   Running   tcp://192.168.99.101:2376

Just like you added, you can remove the VM as easily:

$docker-machine rm newdefault
(newdefault) Stopping VM...
Successfully removed newdefault

Docker Client

The Docker client, in the form of the Docker binary, is the primary user interface to Docker. It accepts commands from the user and communicates back and forth with a Docker daemon. You have already used it earlier when you said:

  • docker info
  • docker version
  • docker --help
  • etc.

Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration. You will learn about it later.
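
Just to give you a feel for it, a minimal docker-compose.yml could look something like this (the service name and port mapping here are only an example, reusing the nginx image):

web:
  image: nginx
  ports:
    - "8080:80"

$docker-compose up -d
$docker-compose ps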

Docker Kitematic

Kitematic is a GUI which is still in beta. You can think of it as a wrapper around Docker Machine that helps in provisioning a VirtualBox VM and installing the Docker Engine locally on your machine. It also allows you to search for images on Docker Hub. It has decent features, but I am not yet a big fan of it. Suit yourself and use it if you like.

Docker Engine

Hosted on GitHub, the Docker Engine is the core of all this goodness! The Docker Engine (or daemon) is standardised so that it looks exactly the same regardless of which host you are running it on. This means that if your code works well on your Docker Engine, it will continue to work across the board as long as the Docker Engine is the same!

Docker Hub

Docker Hub is hosted in the cloud and allows you to search for publicly available content along with official content. If you would like to build your own image and keep it private, you can opt for a private repo as well (check the pricing). It also provides features like automated builds from any GitHub repository that contains a Dockerfile. For now, I will be focusing more on the basics.
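
For instance, once you have a free account, searching and pushing your own image is a sketch along these lines (<your_hub_username> and <your_image> are placeholders):

$docker search nginx
$docker login
$docker push <your_hub_username>/<your_image>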

Docker Images

You can use the following command to pull images that you use regularly. Basically, you are downloading the images from Docker Hub and storing them locally.

$docker pull -a nginx

1.7.1: Pulling from library/nginx
d634beec75db: Pull complete 
622c3cca0921: Downloading [===================================>               ]  34.4 MB/48.05 MB
93bb7ce11f7b: Download complete 
37428040ef03: Downloading [======>                                            ] 15.64 MB/117.3 MB
962ee5fce90a: Downloading [==========================================>        ] 13.26 MB/15.49 MB
62894f1269b7: Download complete 
89b1b503e116: Download complete 
5c4203a8cb67: Download complete 
39090aa53822: Download complete 
139948f637a7: Download complete 
2b6bc89114fb: Download complete 
3dff0815150f: Download complete 
1ed3089c7eba: Download complete 

If you omit the -a switch, it will download ONLY the image that is tagged latest. Every image can have one or more tags. Once the images are downloaded, you can run the following command to view them. Be a little careful when you use the command above; you may end up downloading way more than you anticipated.

$docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nginx               latest              b4c1ea3cfac2        5 days ago          133.9 MB
hello-world         latest              0a6ba66e537a        8 weeks ago         960 B
nginx               1.9.0               3ccb933aeb88        6 months ago        132.9 MB
nginx               1.7.12              5109569c0fed        7 months ago        93.45 MB
nginx               1.7                 5109569c0fed        7 months ago        93.45 MB
nginx               1.7.11              520f1dbba9d6        8 months ago        93.44 MB
nginx               1.7.10              08d86913635a        8 months ago        93.43 MB
nginx               1.7.9               137c9943560d        10 months ago       91.66 MB
nginx               1.7.8               de5bce5363f2        12 months ago       91.74 MB
nginx               1.7.7               8a49b89c0906        12 months ago       100 MB
nginx               1.7.6               6c49a122ac2d        13 months ago       100 MB
nginx               1.7.5               56278ae8ea94        14 months ago       100.2 MB
nginx               1.7.1               1ed3089c7eba        17 months ago       499.1 MB

If you would like to view all tags, search for the image on Docker Hub and, in the result, click the Tags view.

Deleting images that contain a specific string

Let’s say you downloaded way too many images and want to delete the ones that are not required. You can delete them using a command similar to the following. This command uses docker rmi and takes all images that match 1.7 as input. Be careful; when in doubt, remove the docker rmi -f portion from the command below to ONLY view the matching images. Once satisfied with the output, trigger the full command!

$ docker rmi -f $(docker images | grep "1.7" | awk '{print $3}')
Untagged: nginx:1.7
Untagged: nginx:1.7.12
Deleted: 5109569c0fed01112dba5583ff7d3e3a7ba3c029e91fc170f8f06a73274bce59
Deleted: 1b03d3f2a77ed79bf1702c032fb860f12e71cd6661319574a6fa7eb87c11f799
Deleted: fea4feb4d44bcfb316a56655fe53562f383f9ab1dd56b613b270ad2e57c2488a
Deleted: 7abe60a22c0dbdadcbb5c9fb7203656224918d96da4b669f9911207a74c9dfad
Deleted: 23299622ed2916b7b62992a6aa57d2cde72298305ac3dc1d017d29d85aae990e
Deleted: 249130fac1e49eeec59de82c0837d8a167626e11395028234075a693a7f04392
Deleted: 57557712fa860bf8bcb726d1daaaa97e29a3d8dfff53508f2423f3d747a7916a
Deleted: b643082c07545b8454956a09c630b4448fa8c6daa3785a70884362104aae571b
Deleted: 16b4b2a3620a4a7f1652c7adad5df9c67c636a268872966ca4481d3a1f220fef
Deleted: e6c41bf19dd468ed931d3c6c6d866fa2fdf9c2a6f5d90e5f597abfef9f386983
Deleted: 1d57bb32d3b45ddbb4e5b61895d6934966f56b6b429cb4706164589d090fba4d
Deleted: 91408b29417e19428beaed3bc2cc99cd6a08d88452e7206913eafaa538aa04b2

Deleting images that do NOT contain a specific string

What if there are too many images with different versions and you want to keep ONLY the latest images? In that case, you can use grep’s -v (invert match) switch. The command below will delete all the images except the ones tagged latest.

docker rmi -f $(docker images | grep -v "latest" | awk '{print $3}')

Pulling only a specific tag

To download a specific version, use tags. The following command will download nginx version 1.9.0:

$docker pull nginx:1.9.0

Things to keep in mind about Containers

  • You don’t boot a container. You start it.
  • The containers run on a Docker host, which in turn runs on Linux.
  • Docker containers are running instances of an Image.
  • Learn the docker run command well in order to work with containers.
  • By default, once the command finishes executing, the container exits! (a quick sketch of this follows the list)
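
A quick sketch of that last point, using the nginx image you pulled earlier: the container runs the given command and exits as soon as it finishes.

$docker run nginx echo "I exit as soon as echo finishes"
$docker ps -a

The docker ps -a output should show this container with an Exited (0) status.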

Containers can be started without pulling an image explicitly

You don’t necessarily need to pull an image and then create a container. You can run a command like the following, and if Docker is unable to find the image locally, it will automatically start downloading it. By default, it downloads the image that is tagged latest.

$docker run -it ubuntu:latest /bin/bash

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
0bf056161913: Downloading 9.191 MB/65.67 MB
1796d1c62d0c: Download complete 
e24428725dd6: Download complete 
89d5d8e8bafb: Download complete 
  • -i switch implies interactive.
  • -t switch allocates a pseudo-tty.
  • ubuntu:latest is the name of the image from which I want the container to be created.
  • /bin/bash is the command that I want to execute inside the container.

Containers can be started attached

The previous command implies that the shell is interactive and running in the foreground. This means you are now logged in to the container and can do pretty much what you want, like installing patches, applications, etc. However, logout will not work. Instead, you have to type exit. The moment you exit, your container’s job is over and it will be stopped.

How to avoid exiting the container?

Type CTRL + P + Q to leave the container running and get out of it. If you run docker ps, you can see two running containers, since I executed the previous command twice and exited each time by typing CTRL + P + Q. This demonstrates that running the command twice creates 2 different containers instead of attaching to the existing one.

$docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              
PORTS               NAMES
75a4aa4fc39a        ubuntu:latest       "/bin/bash"         4 minutes ago       Up 4 minutes
                            furious_rosalind
e755f663d19d        ubuntu:latest       "/bin/bash"         12 minutes ago      Up 12 minutes
                           ecstatic_thompson

Stop containers from outside

Use docker stop with either the <containerID> or the <name>:

$docker stop 75a4aa4fc39a
75a4aa4fc39a

$docker stop ecstatic_thompson
ecstatic_thompson

Attach to a container

Since ecstatic_thompson was stopped above, start it again and then attach to it:

$docker start ecstatic_thompson
ecstatic_thompson

$docker attach ecstatic_thompson
root@e755f663d19d:/#

Notice the root prompt! You are inside the container now. You can get out of it by typing exit.

Start a container detached

The docker start command that you saw in the previous section starts the container detached. If you like, you can run some commands from outside the container, so that you don’t have to log in to the container > do some stuff > and exit (with or without stopping the container). This is what you need to do:

$docker exec -it e755f663d19d touch /tmp/{1..10}.txt

The command above uses exec to connect to a detached but running container, touches 10 files (1.txt, 2.txt, and so on), and returns control to the host terminal. How cool is that! Whenever needed, you can run bash in interactive mode.

$docker attach ecstatic_thompson

root@e755f663d19d:/# ls /tmp
1.txt  10.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt  9.txt

As you can see, the files are right where we created them. Try adding some text to 1.txt from inside the session, then do CTRL + P + Q, and sure enough, you can cat the file from outside as expected.

$docker exec -it e755f663d19d cat /tmp/1.txt

First file inside docker!

Restart a Container

$docker restart <container_id>

Remove a Container

If you would like to remove containers, ensure that they are stopped before you try to remove them.

$docker stop <container_id>
$docker rm <container_id>

If you have been playing around with Docker, you may end up with a lot of containers and want to get rid of them all at once. To do that, use the following commands. Needless to say, be careful with such commands 😉

docker stop `docker ps -aq`
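
The command above only stops the containers. To actually delete them once they are stopped, a similar one-liner should do the trick (again, double-check before running it):

docker rm `docker ps -aq`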

What is a Dockerfile?

You should be pretty comfortable using a Docker Image and creating Containers from it by now. But what if you want to create a custom image explicitly for your requirements?

Answer: Dockerfile!

How to create a Dockerfile?

  • Create a folder anywhere on your system. Think of this folder as your Context. I am using the following path: /Users/<username>/Desktop/nginx_base
  • Create a new file called Dockerfile. (mind the case!)
  • Inside the Dockerfile, write the following text (read the comments to understand what each line is doing):
# Every line that has a # prefix acts as a comment line! Use them.

# The structure of every line is INSTRUCTION <arguments>
# It is a well known and highly used convention in the Docker world.
# However, the INSTRUCTION is not case sensitive.

# To begin with, it is a good idea to use one of the existing images
# First instruction is always FROM and it tells docker to find a base image
# you are trying to use. 
FROM nginx:1.9

# MAINTAINER instruction adds metadata about the maintainer
MAINTAINER Rahul Soni <rahul@xxxx.com>

# RUN instruction creates an image layer every time it is executed.
# It is recommended to concatenate the RUN command.

# This RUN command will update, install nano and curl... and finally create a directory
RUN apt-get -y update && apt-get install -y nano curl && mkdir /home/myapps 

# ENV TERM will ensure that nano opens properly when you attach to the container
ENV TERM xterm

# COPY will copy the app and default configuration
# Check the default.conf for more details
COPY app /home/myapps/app
COPY default.conf /etc/nginx/conf.d/default.conf

# EXPOSE will open port 80 & 443
EXPOSE 80 443
  • Before you build the image, you will now need to create the default.conf file and the app directory. The former overrides your Nginx settings. The latter is a folder containing a sample application called app, which has a file called index.html with some text in it.
  • The content of nginx_base/default.conf is as follows. As you can see below, the location is mapped to /app, and try_files ensures that when you hit http://IP/app it searches for the file first, and if the file is not present, it falls back to the directory http://IP/app/.
server {
    listen       80;
    server_name  localhost;

    index index.html;

    location = /app {
        try_files $uri $uri/;
    }

    location /app/ {
        root /home/myapps;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
  • Also, notice how the default path of the application and the root is changed in the configuration above. The core idea is to keep the content in the nginx_base directory and copy it to the respective places to build a template from which multiple containers can be created.
  • To complete this little scenario, simply create an index.html file in your nginx_base/app directory and proceed towards building the image.
  • Great, you are all set! The command below will build your image. Remember to run this command from the nginx_base directory. It will create an image called imrahulsoni/nginx-base.
$ docker build -t imrahulsoni/nginx-base .

  • If you issue the following command, you should be able to view your new image:
$ docker images

REPOSITORY               TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
imrahulsoni/nginx-base   latest              6507bc42ad5a        5 minutes ago       155.7 MB
nginx                    1.9                 5328fdfe9b8e        3 days ago          133.9 MB
ubuntu                   latest              89d5d8e8bafb        11 days ago         187.9 MB
  • To create a container, use the following command. It will create a container called wfe1:
docker run -d -P --name wfe1 imrahulsoni/nginx-base
  • You can view all containers using docker ps -a
  • Nice, now you can connect to the container using:
docker exec -it wfe1 /bin/bash
  • This will take you inside the container and you can verify your files located at /home/myapps/app
root@3464572a7d2b:/# cat /home/myapps/app/index.html
Hello world from an Nginx container!
  • Get out of the container using CTRL + P + Q
  • Use the following command to find out exposed ports.
$ docker port wfe1

443/tcp -> 0.0.0.0:32812
80/tcp -> 0.0.0.0:32813
  • To test if your website is up and running, start by finding the IP of the VM (default) that hosts all your containers.
$ docker-machine ip default

192.168.99.100
  • Final step: combine the IP and the port to view your website!
http://192.168.99.100:32813/app

You should see the content of your index.html served from inside the container!!! Pretty cool, right?
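
If you prefer the terminal over a browser, a quick curl against the same IP and port (a sketch, reusing the values from above) should return the contents of your index.html:

$ curl http://192.168.99.100:32813/app/
Hello world from an Nginx container!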

Tagging an Image

As you already know, you can view all images using:

$ docker images

REPOSITORY               TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
imrahulsoni/nginx-base   latest              6507bc42ad5a        20 hours ago        155.7 MB

Notice the Tag column. An easy way to create a tag is to use the following command (notice that the build command now has :0.1 appended to it as the tag).

$ docker build -t imrahulsoni/nginx-base:0.1 .

Run docker images again, and notice that another tag has been added with the same Image ID

$ docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
imrahulsoni/nginx-base   0.1                 6507bc42ad5a        21 hours ago        155.7 MB
imrahulsoni/nginx-base   latest              6507bc42ad5a        21 hours ago        155.7 MB

Untagging an Image

You can use the following command to untag a particular image.

$ docker rmi imrahulsoni/nginx-base:latest

Untagged: imrahulsoni/nginx-base:latest

rmi is used to remove an image, and since we had more than one tag, removing the image:tag didn’t remove the image. Instead, it simply untagged it. Neat!
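
If you run docker images once more, you should see the image still sitting there under its remaining tag, something similar to:

$ docker images

REPOSITORY               TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
imrahulsoni/nginx-base   0.1                 6507bc42ad5a        21 hours ago        155.7 MB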

.dockerignore

The .dockerignore file is pretty similar to .gitignore. The idea is to simply add a .dockerignore file and list the path patterns that you don’t want to include in your build context (a small sample follows the list below). In effect:

  • */temp* will exclude every file and folder whose name starts with temp in any immediate subdirectory of the root.
  • */*/temp* will do the same for files and folders two levels below the root.
  • temp? will exclude files and folders like tempa, tempb, temp1, etc.
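
Putting those patterns together, an nginx_base/.dockerignore could look like this:

# exclude temp files and folders from the build context
*/temp*
*/*/temp*
temp?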

CMD instruction

There can be only one CMD instruction in a Dockerfile. In case you have multiple, only the last one takes effect. This instruction is useful for providing defaults for an executing container. If needed, you can provide arguments to the docker run command, and they will simply override the CMD instruction. To see this in action, add the ENV and CMD instructions as follows to the Dockerfile and run the following commands one by one:

ENV p=ping s=google.com
CMD $p $s

In the instructions above, ENV sets environment variables and CMD tells the container to execute ping google.com at runtime. Test it out by building a new image and running a container from it:

$ docker build -t imrahulsoni/nginx-base:0.2 .

$ docker run -d -P --name cmddemo imrahulsoni/nginx-base:0.2

$ docker ps
CONTAINER ID        IMAGE                        COMMAND          \
CREATED             STATUS              PORTS \
NAMES
0428cd9e35b2        imrahulsoni/nginx-base:0.2   "/bin/sh -c 'ping goo"   \
5 minutes ago       Up 5 minutes        0.0.0.0:32855->80/tcp, 0.0.0.0:32854->443/tcp   \
cmddemo
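
As mentioned above, anything you pass after the image name replaces the CMD entirely. A quick sketch (the echo here is just an example command):

$ docker run --rm imrahulsoni/nginx-base:0.2 echo "CMD overridden"
CMD overridden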

View Logs

The command in the previous section should be printing out the result of ping google.com, but the container is running detached. So how will you be able to see what’s happening inside the container?

$ docker logs -f cmddemo

Wait a few seconds and you should be able to see the logs from outside the container.

ENTRYPOINT + CMD together!

To ensure that the main executable cannot be casually overridden (only its default arguments can be), you can use ENTRYPOINT and CMD together. Modify the Dockerfile to have an ENTRYPOINT instruction and a default parameter via the CMD instruction. Notice how ENTRYPOINT mentions /bin/ping below. By default, this will call /bin/ping -c 100 localhost.

ENTRYPOINT ["/bin/ping","-c","100"]
CMD ["localhost"]

Now, run the following commands to build the image and create a new container called entrypoint-demo:

$ docker build -t imrahulsoni/nginx-base:0.3 .

$ docker run -d -P --name entrypoint-demo imrahulsoni/nginx-base:0.3

$ docker logs -f entrypoint-demo
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.048 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.070 ms

Finally, run the following commands to create a new container called entrypoint-demo-override from the same image. The run command below is exactly the same, except for the container name and the argument at the end, which overrides the default value of CMD (localhost). Instead, google.com is pinged.

$ docker run -d -P --name entrypoint-demo-override imrahulsoni/nginx-base:0.3 google.com

$ docker logs -f entrypoint-demo-override
PING google.com (216.58.197.78): 56 data bytes
64 bytes from 216.58.197.78: icmp_seq=0 ttl=61 time=87.618 ms
64 bytes from 216.58.197.78: icmp_seq=1 ttl=61 time=87.781 ms
64 bytes from 216.58.197.78: icmp_seq=2 ttl=61 time=87.469 ms
64 bytes from 216.58.197.78: icmp_seq=3 ttl=61 time=88.194 ms
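
And in case you ever need to override the ENTRYPOINT itself rather than just the CMD arguments, docker run provides an --entrypoint switch. A quick sketch:

$ docker run --rm --entrypoint /bin/echo imrahulsoni/nginx-base:0.3 "ENTRYPOINT overridden"
ENTRYPOINT overridden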

List all VMs that have Docker running

In case you have multiple VMs running, use the following command to find out which one is active.

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376

Connect your shell to different VMs

If you run docker commands directly in a new Terminal instance, you might get this error message...

Cannot connect to the Docker daemon. Is the docker daemon running on this host?

This is because your shell is not connected to the default machine. Once you run this command you will be back in business.

$ eval "$(docker-machine env default)"

It is not a very friendly command to remember, right? In case you forget it, run a simpler command to get all the details about a specific VM. You will find the eval command above as the last line of the output below.

$ docker-machine env default

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/<profile name>/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell: 
# eval "$(docker-machine env default)"

With that last example, let's call it a day! This has already been a very long post, and I hope you found it worth spending your time on. Keep it handy for some of the common commands and scenarios you might hit, and do share your own tips and tricks!

Happy Docking!!!
