Technology contained

Over the past decade, container technology has become a popular way to package applications efficiently. Many developers consider it a better approach than the one offered by virtual machines and other technologies.

Container technology has been embraced by the big cloud computing providers, including Microsoft Azure, Amazon Web Services, and Google Cloud Platform.
Examples of container software include Apache Mesos, Docker, rkt (pronounced "rocket"), and Kubernetes.

But what is container technology?

Logically, it gets its name from shipping. Shipping containers standardize how goods are moved around: goods are packed into steel containers of standard sizes, which can be picked up by cranes and loaded onto ships.

By standardizing the process and keeping the items together, goods can be moved as a single unit, which costs less.
In computing, the equivalent is simply called a container: a way to package an application, together with its dependencies, so that it runs in isolation from other processes.
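To make that concrete, here is a minimal sketch using the Docker SDK for Python (the third-party `docker` package). It assumes a local Docker daemon is running; the image and command are placeholders for illustration, not anything specific to a real application.

```python
# Minimal sketch using the Docker SDK for Python ("pip install docker").
# Assumes a local Docker daemon; the image and command are placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a command inside an isolated container built from the "alpine" image.
# Everything the process needs (libraries, binaries, configuration) comes
# from the image rather than from the host machine.
output = client.containers.run(
    "alpine:3.19",
    "echo 'hello from a container'",
    remove=True,  # clean up the container once it has finished
)
print(output.decode().strip())
```

In practice the image itself is built from a recipe (for Docker, a Dockerfile) that lists the application and its dependencies, which is what makes the unit portable from machine to machine.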

Container technology reduces the potential for problems when developers move programs from server to server before the software is ready for release.

When you use container technology to build an application, you can develop everything against a single operating system and database. This makes the application easy to replicate, because resources such as memory and the central processing unit (CPU) are shared. It also makes containers well suited to scaling and to running in the cloud.

Out with the old…

If you don’t use container technology, you can end up with a program that runs well on one machine but has problems on another server. This commonly happens when a program is moved, for example from an on-premises data server to a cloud server.

Many of these issues stem from variations between machine environments: differences in the operating system, secure sockets layer (SSL) libraries, storage, and network topology.

So, computer container technology bundles the software together with all of its related parts, the dependencies such as libraries, binaries, and configuration files. The whole bundle is migrated as a unit, avoiding the differences between machines, including operating system versions and the underlying hardware, that lead to incompatibilities and crashes.


And, importantly, containers also make it easier to deploy software to a server. Advocates of container technology say it is a much better approach than the one that preceded it: virtual machines.

With virtual machines, one physical server is used for multiple applications through virtualization technology. Each virtual machine contains an entire operating system as well as the application it runs.

The physical server then runs several virtual machines, each with its own operating system, on top of a single hypervisor layer. Running several operating systems simultaneously places a lot of overhead on the server as resources are consumed.

…and in with the new

Container technology allows your server to run a single operating system because each container can share that system.

The parts of the operating system that are shared are read-only, so one container cannot interfere with another. Compared with virtual machines, containers therefore demand fewer server resources, carry lower overheads, and are much more efficient.
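One way to see this sharing in practice: containers built from different base images still report the host's kernel version, because only the userspace layers differ. A small sketch, again assuming the Docker SDK for Python and a local Docker daemon:

```python
# Sketch: two containers from different base images report the same kernel,
# because they share the host operating system's kernel rather than booting
# their own. Assumes a local Docker daemon and the Docker SDK for Python.
import docker

client = docker.from_env()
for image in ("alpine:3.19", "debian:stable-slim"):
    kernel = client.containers.run(image, "uname -r", remove=True)
    print(image, "->", kernel.decode().strip())

# Both lines print the host kernel version; a virtual machine would instead
# boot and report its own guest kernel.
```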

You can pack many more containers onto a single server. Each virtual machine may require gigabytes of storage but each container running a similar program may only need megabytes.

How do containers operate?

Containers are set up in an architecture known as a container cluster. A container cluster has a single cluster master, with the related containers running on nodes, the multiple worker machines. The cluster master schedules the workloads for the nodes and also manages their lifecycle and their upgrades.
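As a purely illustrative sketch of that idea (not how Kubernetes, Mesos, or any real orchestrator implements scheduling), a cluster master can be pictured as logic that assigns pending workloads to whichever worker node has the most spare capacity. Every name below is hypothetical.

```python
# Toy sketch of cluster-master scheduling: pending workloads are assigned to
# the worker node with the most free capacity. An illustration of the idea
# only, not the algorithm used by any real orchestrator.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: int                      # how many containers this worker can run
    assigned: list = field(default_factory=list)

    def free_slots(self) -> int:
        return self.capacity - len(self.assigned)

def schedule(workloads, nodes):
    """Assign each workload to the node with the most free slots."""
    for workload in workloads:
        node = max(nodes, key=Node.free_slots)
        if node.free_slots() == 0:
            raise RuntimeError(f"no capacity left for {workload}")
        node.assigned.append(workload)

nodes = [Node("worker-1", capacity=2), Node("worker-2", capacity=3)]
schedule(["web", "api", "worker-queue", "cache"], nodes)
for node in nodes:
    print(node.name, node.assigned)
```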

Containers also enable programs to be broken down into smaller pieces, which are known as microservices.

A major advantage of splitting a program into component microservices is that different teams can work on each container separately, as long as the interactions between the different containers are maintained. This makes it faster to develop software.
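For illustration, a single microservice might be nothing more than a small service exposing one agreed-upon endpoint, as in the hypothetical sketch below; the endpoint, data, and port are invented for the example. As long as that interface stays stable, the team that owns it can change the internals, and the container they ship, independently.

```python
# Hypothetical sketch of one microservice: a tiny HTTP service exposing a
# single stable endpoint, built with the Python standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/price":                 # the agreed-upon interface
            body = json.dumps({"sku": "demo", "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Serve the endpoint until interrupted; other services call it over HTTP.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```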


Containers are also flexible and can be orchestrated. Since the operating system is already running on the server, a container can be started or stopped in just a few seconds. Some containers within an architecture can be spun up during peak demand and shut down when they are no longer needed.
Orchestration software can control this, distributing tasks across the container cluster.
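As a toy illustration of that kind of scaling decision (not a production autoscaler), the sketch below starts extra containers when a hypothetical demand figure rises and stops them again when it falls, using the Docker SDK for Python. The demand readings, image, and sizing rule are all made up for the example.

```python
# Toy scale-up/scale-down sketch using the Docker SDK for Python. The demand
# readings, image, and thresholds are hypothetical; a real orchestrator makes
# these decisions for you across a whole cluster.
import docker

client = docker.from_env()
running = []                                  # containers we have started

def scale_to(count):
    """Start or stop containers until `count` copies are running."""
    while len(running) < count:               # scale up during peak demand
        running.append(client.containers.run("nginx:alpine", detach=True))
    while len(running) > count:               # scale down when demand drops
        container = running.pop()
        container.stop()
        container.remove()

for demand in (10, 250, 900, 120):            # hypothetical requests per second
    desired = max(1, demand // 300 + 1)       # crude sizing rule for the sketch
    scale_to(desired)
    print(f"demand={demand} -> {len(running)} container(s) running")

scale_to(0)                                   # clean up everything at the end
```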

The way forward with the tech

But is container technology overrated? Some people have concerns about its security.

Because multiple containers share the same operating system, there is a growing concern that container technology is less secure than virtual machines: if there is a security flaw in the host kernel, it will affect multiple containers.

Additional software is being developed to make container technology more secure, and the isolation between containers is being improved all the time.

We are currently working closely with a developer of a variant of the technology, isolated containers, which aims to address these shortcomings. Read more about it here.
