Two weeks ago, I went to a Docker meetup, because at Apiumtech we really see its potential. The meetup introduced the new version and also gave a great overview of Docker in general. (For anyone who is interested in the talk, you can find it here.)
Related post: Dockerize A Multi-Module Scala Project With SBT
As you might have noticed, Docker is one of the hottest trends these days. Why is it so hot? In my opinion, because it makes our lives a lot easier: it lets you set up an environment that runs anywhere, from your local machine to your production server, painlessly. You will almost never be able to make the excuse of “But it works on my machine” again. In this post, I’ll give you a brief introduction and talk about the latest version, 1.9.
WHAT IS DOCKER
Build, ship and run. If you have heard anything about Docker, you have probably seen or heard that catchphrase somewhere. Docker claims to provide an open platform to package your application in a container that can be moved and executed anywhere, and it does exactly what it promises. Currently it only supports applications that run on Linux, though Docker for Windows applications is already “on its way”, according to thenewstack.io.
BUILD
First, let’s see how we can package an application with Docker. In order to run it, you need to install it; installation instructions can be found here. One thing to keep in mind is that Docker currently supports 64-bit systems only.
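On most Linux distributions, the quickest route (assuming you are comfortable piping a remote script into your shell) is Docker's convenience script, after which you can check that the installation worked:
# download and run Docker's install script (Linux only)
curl -sSL https://get.docker.com/ | sh
# confirm the client and the daemon are both reachable
docker version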
Once you have it installed on your machine, you’ll need a Dockerfile. A Dockerfile is a configuration file in which we describe all the frameworks, languages and dependencies needed by our application. Normally, a Dockerfile might look similar to this:
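The following is only a minimal sketch; the base image, the package installed and the port are placeholders, not taken from any particular project:
# base the image on an existing image from a registry
FROM ubuntu:14.04
# install whatever the application needs
RUN apt-get update && apt-get install -y nginx
# the port the container listens on at runtime
EXPOSE 80
# the command executed when a container starts
CMD ["nginx", "-g", "daemon off;"]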
The FROM instruction allows you to base your image on another existing image. The RUN instruction executes commands inside your new image while it is being built. The EXPOSE instruction tells Docker which port the container should listen on at runtime.
Once your Dockerfile has been created, the docker build command will create an image from it.
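For instance, assuming the Dockerfile above sits in the current directory and my-app is just a placeholder tag:
# build an image from the Dockerfile in the current directory and tag it
docker build -t my-app .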
SHIP
Docker Hub offers both public and private repositories, to which you can push your Docker images so that they can be pulled and run anywhere. For further information about Docker Hub, please go here.
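A typical push looks something like this; my-username and my-app are placeholders for your own Docker Hub account and image tag:
# log in to Docker Hub
docker login
# tag the local image with your Docker Hub namespace
docker tag my-app my-username/my-app
# push it to the registry
docker push my-username/my-app
# later, on any other machine
docker pull my-username/my-app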
RUN
With Docker, you can run an image by typing docker run image-name in the command line, without any further configuration. Docker will first try to find image-name on your local machine; if that fails, it will look for the image on Docker Hub. For example: docker run hello-world. If your environment has been set up correctly, you’ll see a short welcome message from Docker confirming that everything works.
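To run the placeholder image built in the BUILD section and reach it from the host, something like this would do (the port mapping is an assumption about the app inside the container):
# start a container in the background and map host port 8080 to container port 80
docker run -d -p 8080:80 --name web my-app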
*Some useful commands (a short clean-up example using them follows this list):
- docker ps: list the containers that are running on your machine (docker ps -a shows all the containers on your machine, including stopped ones)
- docker images: list all the images on your machine
- docker stop / docker rm: stop / remove a container
- docker rmi: remove an image from your machine
- docker build: create an image from a Dockerfile
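For example, cleaning up after experimenting might look like this; the container name and the image tag are the placeholders used earlier:
# see everything, including stopped containers
docker ps -a
# stop and remove a container by name or ID
docker stop web
docker rm web
# remove the image itself
docker rmi my-app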
THE ARCHITECTURE
Docker uses a client-server architecture. A daemon does the heavy lifting: building, running and distributing images. The Docker client talks to that daemon via sockets or a RESTful API. Both can be installed on the same system, or they can communicate remotely.
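You can see this split directly: docker version prints a Client and a Server section, and the -H flag points the client at a remote daemon (assuming that daemon has been configured to listen on TCP). The host name and port below are placeholders; 2375 is just the conventional unencrypted port:
# inspect both sides of the client-server pair
docker version
# talk to a daemon running on another machine
docker -H tcp://some-remote-host:2375 ps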
In this article, you see the words ‘image’ and ‘container’ a lot. These are important terms when talking about Docker, so I would like to go into them a bit further here.
IMAGES
An image is created by the docker build command. Images can be built on top of each other and can be stored in registries such as Docker Hub. Since an image can be quite large, it is designed as a stack of layers. This way, only a minimal amount of data has to be transferred over the network when an image is updated.
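You can inspect those layers with docker history; using the placeholder image from before:
# show the layers an image is made of, and which instruction produced each one
docker history my-app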
CONTAINERS
A container is an instance, a runtime object, of an image. Whenever an image is run, a container is created for it. One image can have many running containers.
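For instance, the same placeholder image can back several containers at once:
# two independent containers from the same image
docker run -d --name web1 -p 8081:80 my-app
docker run -d --name web2 -p 8082:80 my-app
# both show up here, each with its own ID
docker ps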
DOCKER VERSION 1.9
A few days ago the new version, 1.9, was released, and it was a big one. It delivered some notable features such as multi-host networking and a production-ready Swarm.
Docker Swarm
Docker Swarm is native clustering for the Docker Engine. It allows you to define a cluster, join hosts to it and manage the hosts throughout that cluster.
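At the time of this release Swarm itself shipped as an image, so setting up a token-based cluster looked roughly like the sketch below; the addresses and ports are placeholders, and the exact flags may differ in your setup:
# generate a cluster token on any machine with Docker installed
docker run --rm swarm create
# on each node, join the cluster using that token
docker run -d swarm join --addr=<node-ip>:2375 token://<cluster-token>
# on the manager, expose the whole cluster behind one endpoint
docker run -d -p 4000:2375 swarm manage token://<cluster-token>
# the regular client can now talk to the cluster as if it were one daemon
docker -H tcp://<manager-ip>:4000 ps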
Multi-host Networking
Docker networking allows you to connect containers regardless of which host they run on. With this feature, you can create virtual networks, attach containers to them and let those containers communicate with each other.
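In 1.9 this is driven by the docker network subcommand; the network name below is a placeholder, and the overlay driver is the one meant for spanning multiple hosts (it needs a key-value store configured behind the scenes):
# create a network; -d overlay makes it a multi-host network
docker network create -d overlay my-net
# attach containers to it at run time
docker run -d --name web --net=my-net my-app
# list the networks Docker knows about
docker network ls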
Besides those two remarkable features, other improvements were introduced in the new version; for example, it added a --build-arg flag for docker build and improved stop handling with a STOPSIGNAL Dockerfile instruction. To learn more about the new features, please see the change log. You can also go here to learn about Docker Swarm, and here to learn about Docker Networking.
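As a small sketch of both additions (the argument name and the signal are arbitrary examples, not from any particular project):
# In the Dockerfile:
ARG APP_VERSION=latest
RUN echo "building version $APP_VERSION"
STOPSIGNAL SIGTERM

# On the command line, override the argument at build time:
docker build --build-arg APP_VERSION=1.9 -t my-app .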