
Date: Mar 2, 2017

Why Docker?


Docker has been the most widely used container software for the past two years, but why do developers reach for it?

Docker solves two major problems for developers: configuring the local environment, and keeping the local and production environments identical.

When a house is built, the bricks, doors and windows are bought ready-made so that construction is possible, and software is no different. Instead of "reinventing the wheel", a solution can be assembled from software components that have already been designed or are available elsewhere.

The problem developers face in the software world (where technologies evolve quickly and depend on one another) is that packages or shared libraries can often only be installed in a single version. The developer then has to work around a conflict by pinning older versions of dependent packages, which in turn can break other dependencies and push the problem onto yet another set of packages. This problem is known as dependency hell.

Fortunately, technologies like Docker let you pull the components of your application architecture (databases, job queues, runtimes) out into their own containers. By isolating each process with very little overhead, Docker allows you to navigate past dependency hell.
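
A small illustrative sketch of this: the image tags below are just examples from the official php repository on Docker Hub, but they show how two applications that would conflict on a single host can each get exactly the runtime version they need.

    # Two containers run conflicting PHP versions side by side on
    # the same host, with no shared packages to fight over.
    docker run --rm php:5.6-cli php --version
    docker run --rm php:7.1-cli php --version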

In an environment with a steady stream of new employees, preparing a working environment for each of them without Docker would be a nightmare. Imagine the time drain if you had to configure the development environment for every new developer by hand.

Configuring PHP, MySQL, Nginx, and Node with Homebrew was not so bad, but when it comes to configuring virtual hosts, creating databases and importing dumps, or setting up project-specific technologies, that is where Docker shines.
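
A hedged sketch of what that can look like: the stack is described once, in files checked into the repository, and every new developer builds the same environment from them. The base image and extension below are assumptions, not a prescription.

    # Dockerfile (illustrative): one file describes the PHP runtime
    # that every developer on the team will get.
    FROM php:7.1-fpm
    # Install the extensions the application needs, e.g. the MySQL driver.
    RUN docker-php-ext-install pdo_mysql
    # Put the application code into the image.
    COPY . /var/www/html
    WORKDIR /var/www/html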

Everything now runs in containers. Containers and virtual machines have similar resource isolation and allocation benefits, but containers take a different architectural approach that makes them more portable and efficient.

A virtual machine includes the application, the necessary binaries and libraries, and an entire guest operating system, which can add up to tens of gigabytes.

A container includes the application and all of its dependencies, but shares the kernel with other containers, running as an isolated process in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.
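
The kernel sharing is easy to verify on a Linux host (a small sketch using the official alpine image): the container reports the host's kernel release, because it has no kernel of its own.

    # Both commands print the same kernel release: the container
    # runs on the host's kernel rather than booting its own OS.
    uname -r
    docker run --rm alpine uname -r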

All containers are based on Docker images. An image can be thought of as a template, much like an ISO or VDI for a virtual machine, and it is read-only. An image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically consists of a stack of layered filesystems built on top of each other. An image has no state and never changes.
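
The layering is visible in any Dockerfile: each instruction produces one read-only layer. A minimal sketch (the package and file names are illustrative):

    # Each instruction below produces one immutable layer.
    FROM alpine
    # Layer: the filesystem changes made by this command.
    RUN apk add --no-cache curl
    # Layer: the copied file.
    COPY app.sh /app.sh

After building the image (docker build -t layer-demo .), docker history layer-demo lists these layers together with the instruction that created each one.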

Docker also provides Docker Compose, a tool for defining and running complex applications. With Compose, you define a multi-container application in a single file called docker-compose.yml, then spin the application up with a single command (docker-compose up) that does everything needed to get it running.
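
A minimal sketch of such a file, assuming a web application backed by a database (the service names, images, and ports are illustrative):

    # docker-compose.yml (illustrative)
    version: '2'
    services:
      web:
        build: .               # build the app image from the local Dockerfile
        ports:
          - "8080:80"          # expose the app on the host
        depends_on:
          - db
      db:
        image: mysql:5.7       # official MySQL image
        environment:
          MYSQL_ROOT_PASSWORD: example

With this in place, docker-compose up builds the web image, starts both containers, and wires them together on a shared network.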

Until a few years ago, the common way of building applications was the monolithic approach: seen from a functional perspective, a single deployment unit that does everything. Monolithic applications are fine for small teams and projects, but when you need something at a larger scale, involving many teams, things start to become troublesome. Changes become much harder to make because the codebase is larger and more people are updating it.

Docker is ideal for microservices, since it isolates each process or service in its own container. This deliberate isolation of individual services and processes makes it very simple to manage and update each one. It is therefore not surprising that the next wave after Docker is the emergence of frameworks whose sole purpose is managing more complex scenarios: running a single service across a cluster, running multiple instances of a service across hosts, or coordinating multiple services at the deployment and management level.
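
As one hedged example of such a framework, Docker's built-in swarm mode (available since Docker 1.12) can schedule replicas of a service across a cluster of hosts. The service name and image below are illustrative:

    # Turn this host into a swarm manager.
    docker swarm init
    # Run three replicas of a service across the cluster,
    # published on port 8080 of every node.
    docker service create --name web --replicas 3 --publish 8080:80 nginx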