Introduction to Containers and Docker

In order to understand what Docker is and why it is used, we must first know what containers are and what problems they solve. Containers are isolated environments set up on top of an existing operating system, providing a virtual separation between the application running inside and the outside world. Docker is software that helps us control the lifecycle of these containers.

Don't worry if this doesn't fully make sense yet. Let's take a simple everyday example to help you understand the concept of containers.

Consider a tenant renting part of a house shared with the landlord's family or, say, another tenant. The tenant has a separate room, bathroom, and kitchen for their basic needs. Now, if we consider the house to be an operating system and the tenant's area to be a container, then the tenant lives alone in that area, manages it as per their own requirements, and has no interference from others, yet shares some common resources of the house, like electricity and water.

Similarly, a container is an isolated environment (user space) set up on top of an operating system. It utilizes the same OS kernel as the host but has its own processes, services, networking, and storage mounts, just like a virtual machine. However, containers are not virtual machines (we will learn the difference between VMs and containers soon).

Docker helps us manage the lifecycle of these containers, which includes setting up containers, monitoring them, destroying them, attaching them to networks, and so on.
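For a first taste, the whole lifecycle described above maps onto a handful of docker commands (sketched here with the official nginx image and an illustrative container name; any image works the same way):

```shell
# Setup: create and start a container named "web" from the nginx image,
# detached (-d) so it runs in the background
docker run -d --name web nginx

# Monitoring: list running containers and view this container's logs
docker ps
docker logs web

# Teardown: stop the container, then destroy it entirely
docker stop web
docker rm web
```

We will cover each of these commands properly in later tutorials; for now, just notice that one tool drives the container from creation to destruction.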

Containerisation is not a new concept; technologies that enable container setup have been in use for more than 10 years. Some other ways to create containers are:

  1. LXC

  2. LXD

  3. LXCFS

Docker was initially built on top of LXC but later moved to its own way of creating containers. In the LXC approach, containers are created as isolated environments on top of a Linux kernel, using cgroups (control groups) for resource management (CPU, memory, network, etc.) and isolated namespaces to create a separate user space for each container's applications.

Yes, containers are an old concept, and yes, we can only create containers using a Linux kernel, because only Linux provides support for cgroups and namespaces.

cgroups limit how much of a resource a container can use, whereas namespaces limit what a container can see.
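On any Linux machine you can see both mechanisms directly in the filesystem (a sketch; the cgroup file path assumes a cgroup v2 layout, which varies by distribution):

```shell
# Namespaces: every process has a set of namespace handles under /proc.
# Processes in the same container share these handles; processes in a
# different container (or on the host) get different ones.
ls -l /proc/self/ns

# cgroups: the kernel exposes resource limits as plain files. On a
# cgroup v2 system, this file holds the memory limit for the current
# group ("max" means unlimited). Fall back gracefully on cgroup v1.
cat /sys/fs/cgroup/memory.max 2>/dev/null || echo "cgroup v1 layout on this host"
```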

Now that you understand containers, let's talk about Docker.

What is Docker or Docker Engine?

Docker, or Docker Engine, is software that helps us manage the lifecycle of containers: defining how they will be set up, what applications, software, or services will run inside them, their networking requirements, their storage requirements, and, if required, how to easily destroy a container and start afresh.

Docker uses docker images to run processes inside containers. We will learn about docker images in detail later on; for now, consider them the files required to install a service inside a docker container.
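For example, with the official redis image (any image from Docker Hub behaves the same way), pulling the image downloads those files, and running it starts a container from them:

```shell
# Download the image (the "install files") from Docker Hub
docker pull redis

# List the images stored locally
docker images

# Start a container from that image; the image supplies everything
# the redis server process needs to run
docker run -d --name my-redis redis
```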

NOTE: As containerisation relies on Linux kernel features, Docker Engine can be installed natively only on a Linux operating system like Ubuntu, Fedora, Red Hat, etc. If you want to use Docker on a Windows operating system, you will have to set up a Linux virtual machine on which to install Docker. Docker Desktop for Windows sets up such a virtual machine automatically and runs the docker engine on top of it.

Docker, or Docker Engine, is made up of 3 components in a client-server architecture:

  1. A server with a daemon process (dockerd)

  2. A REST API, which programs use to interact with the docker daemon process.

  3. A command-line interface (CLI), docker, with which we run commands that perform different operations via the docker daemon process.

Docker Engine client-server architecture components

The command-line interface uses the API to interact with the docker daemon process. The docker daemon is responsible for creating and managing the docker objects like containers, images, networks, and volumes. We will learn about all these docker objects in upcoming tutorials.
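You can see this separation for yourself: `docker version` reports the client (CLI) and the server (daemon) as two distinct programs, and the daemon's API can be called directly over its Unix socket without the CLI at all (a sketch, assuming the default socket path and a curl build with Unix-socket support):

```shell
# The CLI and the daemon are separate programs with separate versions
docker version

# Talk to the daemon's REST API directly, bypassing the docker CLI.
# This is what the CLI itself does under the hood.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```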

Why use Docker?

Due to the increasing demand for microservices and DevOps, Docker has become very popular in the software industry, as it helps developers and system admins build and run applications in containers. Here are some of the things that make Docker so popular:

1. It's Flexible:

You can run a simple hello-world program within a container, you can run a web server like Apache HTTP Server or Nginx, you can run any heavyweight application, and you can even run an operating system within a container using Docker.

Although Docker doesn't recommend running an operating system inside a container, you can do so.
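All three cases from the paragraph above are one-liners (the image names are the official ones on Docker Hub):

```shell
# A hello-world program: the container prints a message and exits
docker run hello-world

# A web server: nginx, reachable on the host's port 8080
docker run -d -p 8080:80 nginx

# An operating system userland: an interactive Ubuntu shell
docker run -it ubuntu bash
```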

2. Easy Setup for different Services

If you have to install multiple small services on a server, like a web server, a database, and a programming language runtime, along with other required services, traditionally you would install them all on a single server, where they would share all the server's resources and compete for them.

But with Docker, we can start a separate container for each service, assign each one resources as per its requirements, set up communication between them, and we are done.

Even if one service comes under heavy load, it will not directly affect the services running in other containers.
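As a sketch of that setup, assuming a web server that talks to a Postgres database (the container names, network name, and memory limits here are illustrative), each service gets its own container with its own resource cap and a shared network for communication:

```shell
# A private network the two containers will share
docker network create app-net

# Database container, capped at 512 MB of memory
docker run -d --name db --network app-net \
  --memory 512m \
  -e POSTGRES_PASSWORD=secret postgres

# Web server container with its own, separate cap; it can reach
# the database by the container name "db" on the shared network
docker run -d --name web --network app-net \
  --memory 256m \
  -p 8080:80 nginx
```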

3. It's Portable - No worries about Environment setup again and again

When we finish developing an application on our local machine and have to deploy it to production, we face a lot of issues with environment setup. But if you are using Docker, you define the steps for container setup once, and Docker makes sure the environment is set up the same way every time, wherever you run it.
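Those container setup steps live in a file called a Dockerfile. A minimal sketch for a Python web app (the file names and start command are illustrative):

```dockerfile
# Same base image, same dependencies, same start command,
# whether built on a laptop or on a production server
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building this with `docker build` produces the same environment on every machine that runs it.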

4. Docker containers are loosely coupled

A container is a self-sufficient unit with its own resource quota, its own networking setup, etc., which makes it encapsulated and thereby makes it easier for system admins to replace or upgrade one container without affecting the others.

5. Highly Secure

Docker makes sure that containers are isolated from processes outside the container by default.

6. Docker containers are lightweight

Unlike virtual machines, Docker containers are lightweight, as they just create a separate user space, utilising the underlying OS kernel of the machine on which they run.


In this tutorial, we covered the concept of containers, containerisation, Docker (or Docker Engine), Docker's client-server architecture, and why Docker is so popular in the software industry these days. In the next tutorial, we will dig in further to understand how containers differ from virtual machines.