Monday, April 8, 2019

Docker - Networking


When we talked about Docker, we said that containers are isolated. Then how do we communicate with our containers? Say we are running a MySQL database. It is not useful if we can't access it.

Docker has a networking concept, with several network drivers to work with. Depending on how we want our container to behave, we can select a network. This lets a container communicate with another container or with the host.

Network commands summary

  • docker network ls - list available networks
  • docker network create - create a network
  • docker network rm - remove a network
  • docker network inspect - inspect a network
  • docker network connect - connect container to a network
  • docker network disconnect - disconnect container from a network

Docker network drivers

  • bridge - This is the default network. When the Docker daemon starts, it configures a virtual bridge named docker0. When we don't specify a network, this is the one Docker uses. Docker creates a private network inside the host which allows containers to communicate with each other.
  • host - This tells Docker to use the host computer's network directly.
  • none - Disables networking for the container.

Network commands

Network commands follow the same pattern as the other Docker commands.
docker network

Let's list the available network commands.
docker network --help

Inspecting a network

Use the inspect command to inspect a Docker network.
docker network inspect bridge
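The inspect output is verbose JSON. As a sketch, the --format option takes a Go template to pull out a single field, such as the bridge's subnet (the field path below assumes the usual IPAM layout of the output):

```shell
# Print only the subnet of the default bridge network,
# instead of reading through the full JSON output.
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```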

Create a network

We can create our own network using the create command.
docker network create mynetwork

Docker prints the ID of the created network. Use the inspect command to see its properties. You will see that it used bridge as the driver, since we didn't specify one. We can specify a driver using the -d option.
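As a sketch, the -d option can be combined with other create options such as --subnet; the network name my-bridge and the address range below are arbitrary examples:

```shell
# Create a user-defined network with an explicit driver and subnet
# (both values here are illustrative).
docker network create -d bridge --subnet 172.25.0.0/16 my-bridge

# Verify the driver and subnet we asked for.
docker network inspect my-bridge
```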

Remove a network

We can use the rm command to remove a network.
docker network rm mynetwork

Connect to a network

By default our containers connect to the bridge network. To use another network, we can use the --net option when creating the container.
docker container run -it --net=mynetwork nginx
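The connect and disconnect subcommands from the summary above let us attach an already-running container to a network without recreating it. Here mynetwork is the network created earlier, and the container name my-nginx is an arbitrary example:

```shell
# Start a container on the default bridge network...
docker container run -d --name my-nginx nginx

# ...then attach it to mynetwork as a second network.
docker network connect mynetwork my-nginx

# Detach it again when it is no longer needed.
docker network disconnect mynetwork my-nginx
```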

Connect with the world

Now we need to use our containers from the host. There is no point in isolating a container if we can't access it at all.

We can get the exposed port of an image by inspecting it. Issue the inspect command and look for the ExposedPorts entry.

$ docker image inspect nginx
            "ExposedPorts": {
                "80/tcp": {}

We can use the -p or --publish option to bind this port to a host port when running an image.
$ docker container run -it -p 81:80 nginx
$ docker container run -it --net=mynetwork -p 81:80 nginx

Now we can access this at localhost:81 in our browser.

We can get the container's port mappings using the port command.
docker port <container_name/id>

Now when we inspect the container, we can see that it is attached to the host's port.
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "81"
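A quick way to confirm the binding is to request the page through the published port (assuming the container from above is still running with -p 81:80):

```shell
# Fetch the nginx welcome page through the published host port.
curl http://localhost:81
```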

Docker - Volumes


Sharing is caring. Whether the container is running, stopped, or removed, we need to access the data within it. Be it a database or web application logs, a container needs to share some form of data with the host or with other containers. Docker provides volumes to achieve this.

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.

Volume commands

  • docker volume create - create a volume
  • docker volume ls - list available volumes
  • docker volume rm - remove a volume
  • docker volume prune - remove all unused volumes
  • docker volume inspect - inspect a volume

Create a volume

docker volume create my-volume

List volumes

docker volume ls

Inspect a volume

docker volume inspect my-volume
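As with networks, the --format option can extract a single field from the inspect output, for example the host directory where the volume's data lives:

```shell
# Print only the host path backing the volume.
docker volume inspect my-volume --format '{{.Mountpoint}}'
```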

Remove a volume

docker volume rm my-volume

Remove all unused volumes

docker volume prune

Start a container with a volume

We can start a container with a volume using the --mount or -v flag. As the docs note, new users should try the --mount syntax, which is simpler than the --volume syntax.
If the volume does not exist, Docker creates it for us.

docker container run -d \
--name my-nginx \
--mount source=my-volume,target=/app \
nginx
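The same mount can also be written with the shorter -v flag, as a rough equivalent of the --mount example above (my-volume and /app are the names used there; my-nginx-v is just an example container name):

```shell
# -v syntax: VOLUME_NAME:CONTAINER_PATH
docker container run -d --name my-nginx-v -v my-volume:/app nginx
```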


Docker - Images and Containers


Docker image to container(s)


An image is a read-only template with instructions for creating a Docker container. It is a combination of file system and parameters. Often, an image is based on another image with some additional customization.
We can use existing images or create our own images.


A container is a runnable instance of an image. We can create as many containers as we want from an image. A container is isolated from the host by default; we can modify its behavior using networks, volumes, etc.
Once a container is created, we can stop, restart, or remove it.

Download an Image

We can download a Docker image using two methods.
  • pull - we can use pull command to get an image
$ docker image pull nginx
  • create a container - when we create a container from an image, Docker downloads the image from the registry if it is not available on the host.
$ docker container run nginx
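Both methods accept a tag to pick a specific version; without one, Docker assumes the latest tag. The tag below is only an example; check Docker Hub for the tags an image actually provides:

```shell
# Pull a specific version instead of the default latest tag.
docker image pull nginx:1.15
```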

Docker command structure

There are many commands, so Docker groups them into a common format: docker <object> <command>. For example:
docker container ls

Useful command summary

  • docker image ls - list available images
  • docker image rm - remove an image
  • docker image inspect - inspect an image
  • docker container run - run a container
  • docker container ls - list containers
  • docker container stop - stop a running container

Dry run

Let's test these commands with nginx.
$ docker container run --name my-nginx -p 80:80 nginx
$ docker image ls
$ docker container ls
Visit localhost in your browser. You should see that nginx is running.
$ docker container inspect my-nginx
$ docker image inspect nginx

Now let's stop our container.
$ docker container stop my-nginx
$ docker container ls
$ docker container ls -a

Let's start it again.
$ docker container start my-nginx

Let's remove our container completely.
$ docker container stop my-nginx
$ docker container remove my-nginx

Let's remove image as well.
$ docker image remove nginx
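Note that Docker refuses to remove an image while a container (even a stopped one) still references it, which is why we removed the container first. A -f flag exists to force removal:

```shell
# Force-remove the image even if containers still reference it.
# Use with care; the normal path is to remove the containers first.
docker image remove -f nginx
```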

Docker - Introduction


Let's talk about Docker container ;)

The Problem

Software packaging, distribution, and installation are not that easy. It is true that there are easy-to-use software packages. But software normally depends on other libraries. To install a piece of software, we need to install those dependencies first. What if those libraries have their own dependencies? What if there are version conflicts?

Let's see a picture of a software installation.

It is a web of libraries. Now imagine we need to uninstall our software. Will it remove its dependencies properly? Will that have an impact on other software? How do we install another version? What if we need to maintain multiple computers with this same setup?

There are many questions, though we can somehow solve them. Imagine the time and energy we spend on these. Is it worth it?

Let's say we want to install MySQL as the database. Why do we need to spend a lot of time on that when our main task is something else? These are the reasons we need to find another way to distribute software:

  • Difficult to install.
  • Hard to maintain.
  • Difficult to uninstall.
  • Difficult to test other versions.
  • Difficult to distribute.

What are the solutions

  • Virtualization: People use virtual machines to ship their software. While this solves most of the above problems, it has its own issues. It's a machine inside a machine, which wastes resources.
  • Containers: While containers look like virtual machines, they are not. Containers are isolated from the host system like virtual machines, but they share the host's resources, reducing duplication and improving performance.

Software installation with Docker container


Docker is a command line program backed by a background daemon. Docker simplifies container creation: we only need to give it a few instructions, and the Docker daemon handles all the heavy work for us.

Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. The use of Linux containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.


  • Flexible - Even the most complex applications can be containerized.
  • Lightweight - Containers leverage and share the host kernel.
  • Interchangeable - You can deploy updates and upgrades on-the-fly.
  • Portable - You can build locally, deploy to the cloud, and run anywhere.
  • Scalable - You can increase and automatically distribute container replicas.
  • Stackable - You can stack services vertically and on-the-fly.

Hello World

You can refer to the official guide for Docker installation on your platform (for example, Ubuntu).
Make sure Docker is running.

$ docker version
$ docker info
Docker officially has a hello-world image. Let's run it.
$ docker container run hello-world