Microservices-Introduction to Docker and Container Based Development — 1

Upulie Handalage
4 min read · Jul 15, 2022


Taxonomy of Docker terms and concepts


Back in the day, applications were deployed on separate physical servers (application, database and web server), each in its own physical location. This was too much work, so deployment technology moved towards the hypervisor: a layer that runs on top of a single high-powered hardware box and hosts virtual machines (VMs) on which the servers run. This was advantageous because the additional processing power could be used to run more VMs, such as main, proxy and load-balancing servers. Each of these VMs ran its own operating system independently, with a separate server on top of it.


However, this too was tedious, as each OS had to be maintained separately and booting a VM was time-consuming. To overcome these issues, the hypervisor was replaced with a single OS (on top of the hardware box), which runs a Docker Engine. The Docker Engine hosts separate containers, each of which contains a server.

A container, compared with a hypervisor-based VM, is extremely small and manageable; the two are therefore quite different in architecture.
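As a minimal sketch of the container model described above (the image name and port mapping are illustrative assumptions, and the docker commands require Docker to be installed):

```shell
# Containers share the host OS kernel, so starting one takes seconds,
# not the minutes a VM needs to boot its own OS. With Docker installed:
#   docker run -d --name web -p 8080:80 nginx   # a web server container
#   docker ps                                   # list running containers
# The container name and port mapping below are illustrative:
NAME=web
PORTS="8080:80"
echo "$NAME $PORTS"
```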


Docker was originally created at dotCloud, a cloud platform company later renamed Docker Inc. It is an open-source project under the Apache 2.0 license, written in the Go language. Using Docker (or any containerized environment, for that matter) saves us from maintaining multiple operating systems, because every container shares a single operating system: the host OS.

Docker Engine

The Docker Engine is a separate entity from the rest of the Docker project. It occupies very little space and handles orchestration, registry access, security and services.
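As a quick sketch (assuming Docker is installed), you can inspect the engine with its own CLI; the socket path below is the Linux default:

```shell
# With Docker installed, these show what the engine is running:
#   docker version   # client and server (engine) versions
#   docker info      # storage driver, registry, and security details
# The engine listens on a local socket; on Linux the default path is:
DOCKER_SOCK=/var/run/docker.sock
echo "$DOCKER_SOCK"
```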


Docker Registry

A registry is a place where you can store your Docker images (e.g. a customized MongoDB image, a customized MySQL image). Storing is done by pushing an image to the registry. There are various registries for Docker (e.g. from Amazon and Microsoft); however, Docker Hub is the largest registry (also referred to as a repository or cloud). It has around 250K repositories and over a billion downloads. You can use Docker Hub to download any repository and customize it to your needs. If your project environment has multiple hosts, you can also point particular hosts to a particular Docker registry. Your repositories can be either public or private. If the company you work for doesn't allow access to its private Docker registry, you can either install a similar registry in your local environment or ask for licenses (to have your own Docker registry backed by a professional vendor).
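As a minimal sketch of pushing an image to a registry (the registry host, image name and tag below are illustrative assumptions; the docker commands require Docker to be installed):

```shell
# Compose the full image reference: <registry>/<name>:<tag>
REGISTRY=registry.example.com   # assumption: your private registry host
IMAGE=myapp                     # assumption: your image name
TAG=1.0.2                       # assumption: your version tag
FULL="$REGISTRY/$IMAGE:$TAG"
echo "$FULL"
# With Docker installed, you would then push with:
#   docker tag "$IMAGE:latest" "$FULL"
#   docker push "$FULL"
#   docker pull "$FULL"   # any host pointed at this registry can pull it
```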


Orchestration

Orchestration is the process of bringing all the instances (each with separate processes inside them) together, with the common goal of managing them in the most efficient way. The tasks of the orchestration process include deciding where containers should go, what their dependencies are, what starts first, and so on. The orchestration process is managed using a separate framework.


Kubernetes

Initiated by Google, Kubernetes is a platform used for orchestration. It is now an open-source project. There are other similar tools you can use instead of Kubernetes, each with its own pros and cons.
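As a hedged sketch of the orchestration tasks above, expressed with Kubernetes (the deployment name, image and replica count are illustrative assumptions; the kubectl commands require a cluster):

```shell
# With a cluster and kubectl available, orchestration looks like:
#   kubectl create deployment web --image=nginx --replicas=3
#   kubectl get pods          # the scheduler decides where containers go
#   kubectl scale deployment web --replicas=5
# The deployment name and desired replica count below are illustrative:
DEPLOYMENT=web
REPLICAS=3
echo "$DEPLOYMENT runs $REPLICAS replicas"
```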

Myths about Docker

  1. Although the opposite is commonly heard, Docker is persistent by nature: data written to a container's writable layer survives restarts and remains until the container is removed. Even so, if the Docker container you are using is a database, it is always advisable to use it alongside an external storage environment.
  2. Docker is not for legacy applications. A classic multi-box environment (a stateful application) does not naturally fit into Docker. Although it is possible, it is better to follow a microservices architecture with Docker: to obtain the full-fledged benefits of dockerization, including the independent nature of containers, break the application into small, independent services.
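The external-storage advice in myth 1 can be sketched with a named volume (the volume name and mount path are illustrative assumptions; the docker commands require Docker to be installed):

```shell
# A named volume keeps database files outside the container's own
# writable layer, so they survive even if the container is removed:
#   docker volume create dbdata
#   docker run -d --name db -v dbdata:/data/db mongo
# The volume name and mount path below are illustrative:
VOLUME=dbdata
MOUNT=/data/db
echo "$VOLUME:$MOUNT"
```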

OCI (Open Container Initiative)

CoreOS (not Docker Inc.) implemented a similar framework called rkt (Rocket), which had the same end goal. The OCI provides governance over these competing projects by defining common specifications, so any container runtime now more or less complies with the OCI standards. This decoupled the runtime from the engine, moving all the vendor-specific code into the engine while the runtime follows the shared standard. The whole purpose of this initiative was to stop users from being locked into one platform.

Almost all cloud platforms (Amazon, Google Cloud, etc.) support Docker.

In a typical CI/CD pipeline, you modify your Docker application and test it by pushing it to the cloud. If all tests pass, CI/CD then automatically pushes the resulting image to a Docker registry.
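A hedged sketch of such a pipeline step (the registry host, image name and commit id are illustrative assumptions; the docker commands require Docker to be installed):

```shell
# Tag the image with the commit id so every build is traceable:
COMMIT=abc1234                              # e.g. git rev-parse --short HEAD
IMAGE="registry.example.com/myapp:$COMMIT"  # assumption: your registry/name
echo "$IMAGE"
# With Docker installed, the pipeline would then run:
#   docker build -t "$IMAGE" .
#   docker run --rm "$IMAGE" ./run-tests.sh   # gate on the test result
#   docker push "$IMAGE"                      # only if the tests passed
```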

Thanks for reading. Until next time! 👋🏽


References
  1. https://www.youtube.com/watch?v=cUiU8yDdQmw&list=PLD-mYtebG3X9HaZ1T39-aF4ghEtWy9-v3&index=1
  2. https://docs.microsoft.com/en-us/windows/images/taxonomy-of-docker-terms-and-concepts.png


