Over the past decade, container technology has become a popular way to package applications efficiently. Many developers believe it improves on what virtual machines and other technologies offer.
Container technology has been embraced by the big cloud computing providers, including Microsoft Azure, Amazon Web Services, and Google Cloud Platform.
Examples of container software include Docker and rkt (pronounced "rocket"), along with orchestration platforms such as Apache Mesos and Kubernetes.
But what is container technology?
As the name suggests, the idea comes from shipping. Shipping containers standardize how goods are moved around: goods are packed into steel containers of standard sizes, which cranes can lift and slot into ships.
Because the process is standardized and the items stay together, a container can be moved as a single unit, and moving goods this way costs less.
In computing, a container is a way to package an application, together with its dependencies, so that it can run isolated from other processes.
Container technology reduces the problems that crop up when developers move a program from server to server during development, before it is ready to ship.
When you use container technology to build an application, you can develop everything against a single operating system and database. Because containers share resources such as memory and the central processing unit (CPU), the application is easy to replicate, which makes containers well suited to scaling and to running in the cloud.
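To make this concrete, here is a minimal sketch of running a packaged application as an isolated container. It assumes a local Docker Engine and the docker SDK for Python are installed; the alpine image and the echo command are purely illustrative.

```python
# Minimal sketch: run a packaged application image as an isolated container.
# Assumes a local Docker Engine and the `docker` Python SDK (pip install docker).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image bundles the program with its libraries, binaries, and configuration,
# so the same unit behaves the same way on a laptop, a data center server, or a cloud VM.
output = client.containers.run(
    "alpine:3.19",                                  # illustrative base image
    ["echo", "hello from an isolated container"],   # illustrative command
    remove=True,                                    # clean up the container once it exits
)
print(output.decode())
```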
Out with the old…
If you don’t use container technology, a program that runs well on one machine can run into problems on your server. This commonly happens when a program is moved, for example, from a data center server to a cloud server.
Many issues arise from variations in machine environments: differences in the operating system, Secure Sockets Layer (SSL) libraries, storage, and network topology.
Container technology bundles your software together with everything it depends on: libraries, binaries, and configuration files. The whole bundle is migrated as a single unit, sidestepping the differences between machines, including differences in operating systems and underlying hardware, that lead to incompatibilities and crashes.
And, importantly, containers also make it easier to deploy your software to your server. Advocates of container technology say it is a much better approach than the technology that preceded it: virtual machines.
With virtual machines, one physical server hosts multiple applications through virtualization technology. Each virtual machine contains an entire operating system as well as the application it runs.
The physical server then runs several virtual machines, each with its own operating system, on top of a single hypervisor layer. Running several operating systems simultaneously incurs a lot of overhead on the server as resources are consumed.
…and in with the new
Container technology allows your server to run a single operating system, which every container shares.
The shared parts of the operating system are read-only, so one container cannot interfere with another. As a result, containers require far fewer server resources than virtual machines and are much more efficient.
You can pack many more containers onto a single server: each virtual machine may require gigabytes of storage, while a container running a similar program may need only megabytes.
How do containers operate?
Containers are organized into an architecture known as a container cluster. A container cluster has a single cluster master, and the remaining containers run on nodes, your multiple worker machines. The cluster master schedules the workloads for the nodes and also manages their lifecycle and upgrades.
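As a rough illustration of that structure, the sketch below asks a cluster master for its worker nodes and the workloads scheduled onto them. It assumes a running Kubernetes cluster and the official kubernetes Python client, and is just one example of how a cluster master can be queried.

```python
# Rough sketch: ask the cluster master which nodes it manages and where workloads run.
# Assumes a Kubernetes cluster and the `kubernetes` Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from the local kubeconfig file
v1 = client.CoreV1Api()     # client for the cluster master's core API

# The worker machines (nodes) that the master schedules workloads onto.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

# The workloads (pods, i.e. groups of one or more containers) and the node each landed on.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print("pod:", pod.metadata.namespace + "/" + pod.metadata.name, "->", pod.spec.node_name)
```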
Containers allow programs to be broken down into smaller pieces, which are known as microservices.
A major advantage of building a program as component microservices is that different teams can work on each container separately, as long as the interactions between the containers are maintained. This facilitates faster software development.
Containers are also flexible and can be orchestrated. Since the operating system is already running on the server, a container can be started and stopped in just a few seconds.
Some containers within the architecture can be started up during peak demand and shut down when they are not needed.
Orchestration software controls this process, distributing the tasks across the container cluster.
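The sketch below shows what this kind of orchestration can look like when driven from code. It assumes a Kubernetes cluster and the kubernetes Python client; the Deployment name "web" and the namespace "default" are hypothetical placeholders, not anything prescribed here.

```python
# Rough sketch: scale a containerized workload up for peak demand and back down again.
# Assumes a Kubernetes cluster and the `kubernetes` Python client; the Deployment name
# "web" and the namespace "default" are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def set_replicas(count: int) -> None:
    # Ask the cluster master to run `count` copies of the workload; the orchestrator
    # starts or stops containers on the nodes to match, typically within seconds.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": count}},
    )

set_replicas(10)  # turn containers on during peak demand
set_replicas(2)   # shut them down again when they are not needed
```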
The way forward with the tech
But is container technology overrated? Some people have concerns about its security.
Because multiple containers share the same operating system, there are growing concerns that container technology is less secure than virtual machines: a security flaw in the host kernel affects all of the containers running on it.
Additional software is being developed to make container technology more secure, and the isolation between containers is constantly being improved.