In the beginning, we couldn’t imagine computers as anything other than building-sized machines performing complex ballistic calculations for the military. But computers have come a long way since then, and now it’s as if we have several of those complex machines inside a single device that fits in our hand.
From those advances in computing, microelectronics and other enabling technologies, we arrived at something far more interesting and optimized when it comes to squeezing every possible penny and capability out of those machines: containerized applications.
Let’s go a little slower
I imagine the old client-server architecture isn’t new to anyone here: in a computer network, we have machines that serve network services through applications running on them, called servers, and other machines that access and consume those services, called clients. In a traditional network scenario, you had one large, powerful computer (in the old days, a mainframe) capable of processing many requests for its services simultaneously, while dumb terminals with very little processing power were used by the users to access this larger computer remotely. In this scenario, every user had the impression of being alone on the server, while in reality they were all using it at the same time in separate environments. This was the first idea of virtualization we would have in computing.
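Just to make the two roles concrete, here is a minimal Python sketch of a server offering a service and waiting for a client to consume it (the address and port are arbitrary, and this is obviously nothing like a production server):

```python
import socket

# Minimal sketch of the client-server idea: one process "serves", another consumes.
HOST, PORT = "127.0.0.1", 9000   # arbitrary local address and port for the example

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"serving on {HOST}:{PORT}")
    conn, addr = srv.accept()            # a client connects...
    with conn:
        request = conn.recv(1024)        # ...sends a request...
        conn.sendall(b"hello, " + request)  # ...and consumes the service's reply
```

A client would simply open a connection to the same host and port (for example with `socket.create_connection((HOST, PORT))`), send its request and read the answer.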

Some time later, a new premise arose in technology: what if we could virtualize not only the environment where the user works, but also the whole hardware of a computer inside our system? That’s when the concept we still use today was born: the Virtual Machine. The basic idea was simple: virtualize the storage (usually through a file simulating a block device like an HDD), the memory (through a region of a process’s memory) and a processing unit that bridges the processes running in the virtual machine and the system’s actual CPU. With those pieces together, it was possible to at least emulate the behavior of a real computer using only logical components.
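To make that composition a bit more concrete, here is a toy Python sketch of those three pieces. The file name and sizes are made up, and of course this is nowhere near a real hypervisor, just an illustration of “a file as a disk, a memory region as RAM, the host CPU doing the work”:

```python
import mmap

DISK_IMAGE = "guest-disk.img"      # a plain file standing in for a block device
DISK_SIZE = 64 * 1024 * 1024       # 64 MiB of "virtual disk"
RAM_SIZE = 16 * 1024 * 1024        # 16 MiB of "guest memory"

# 1. Virtual storage: a file pretending to be an HDD, addressed in 512-byte sectors.
with open(DISK_IMAGE, "wb") as f:
    f.truncate(DISK_SIZE)

def write_sector(path, lba, data):
    with open(path, "r+b") as f:
        f.seek(lba * 512)
        f.write(data[:512])

# 2. Virtual memory: a region of the host process's memory handed to the "guest".
guest_ram = mmap.mmap(-1, RAM_SIZE)

# 3. A "CPU": in a real VM the host CPU runs the guest's instructions (with the
#    hypervisor trapping the privileged ones); here it is just a stub.
def run_guest():
    write_sector(DISK_IMAGE, 0, b"bootloader would live here".ljust(512, b"\x00"))
    guest_ram[0:4] = b"\xde\xad\xbe\xef"

run_guest()
```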
Later, many more pieces would come together to make this scenario even more complete and complex: a virtual network for packet switching inside our system, virtual consoles (terminal, serial, VGA and others) for creating a virtual display, even virtual chipset architectures for interfacing with the actual expansion boards installed inside our computers! With all those pieces, we are even able to install a whole new operating system inside our virtual machine, making it a complete answer to something the big providers were struggling with at the time: service scalability and portability across their computing infrastructure.

Think bigger! Orchestration!
Obviously, when you think about this kind of solution in a small scenario, it’s as simple as it seems: a process running on a single computer (we’re not getting into VirtIO, CPU virtualization technologies, SR-IOV, CoW disks and other performance-enhancing technologies for now!) that simulates a real system. But what happens when we start thinking about hundreds of thousands of those running on thousands of different computers, with thousands of different users and administrators using them? Well, it starts getting REALLY hard to manage all of that in the best way possible, right?
That’s where an important concept comes in when we think about hyperscale: the orchestration solution. An orchestration solution is responsible for giving the operators an abstraction layer (by the way, I love the word abstraction, it’s basically the foundation of IT, so get used to it =P) for management, giving them control over what to bring up or down, where, how and with whatever configuration they want or need, making the whole solution look like an orchestra playing under the instructions of the maestro. In the context of Virtual Machines (VMs), this orchestration layer is usually called a Hypervisor.
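As a tiny taste of what that management layer looks like, this is roughly how you talk to a hypervisor through libvirt’s Python bindings. It’s a sketch that assumes the libvirt-python package is installed, a local QEMU/KVM host is available, and a domain named “my-vm” has already been defined (that name is made up for the example):

```python
import libvirt  # assumes the libvirt-python bindings are installed

# Connect to the local QEMU/KVM hypervisor (the URI changes for other hypervisors).
conn = libvirt.open("qemu:///system")

# The orchestration layer's job: know what is running, where, and in what state.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(dom.name(), "running" if running else "not running")

# Bring a VM up or down by name, without touching the real hardware directly.
vm = conn.lookupByName("my-vm")   # "my-vm" is a hypothetical, pre-defined domain
if not vm.isActive():
    vm.create()                   # start it
# vm.shutdown()                   # ...or ask the guest OS to power off

conn.close()
```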
Ok, but what about “containers?”
Containerization is actually an older technology than it seems. In a world where our server applications used to be developed as all-in-one solutions, with everything you need in one package doing every part of what the system has to do, it seemed natural that putting it all together in a virtual machine and deploying it through a hypervisor would be enough.
But think about it for a moment: when I use a virtual machine, I need to install what we call a Guest OS, the operating system that will run on our emulated hardware, and this guest system will have its own kernel to manage the processes, memory and resources that our virtual hardware provides. So I have a resource management layer on top of our system’s (the Host OS’s) management layer for the real hardware. Seems like a waste of resources and time, right? Why not let our Host OS manage the whole solution’s resources?
That’s when the old idea of mainframes and dumb terminals came back to life: well, we had actually done something like that already! But it wasn’t yet exactly what we needed. In the mainframe scenario, one user could reach the same resources other users were using, leading to several performance, management and security problems. So we needed a better way to isolate them. That’s when developers came up with ideas for isolating resources, more precisely processes, memory and storage: namespaces, cgroups and chroot, respectively. This idea of isolating applications on top of the same kernel is what we call Containerization.
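Just to give a feeling for how close those primitives are to the operating system itself, here is a rough Python sketch of them being used directly. It’s Linux-only, needs root, relies on `os.unshare` (Python 3.12+), and the root filesystem path is made up for the example:

```python
import os

NEW_ROOT = "/srv/tiny-rootfs"   # hypothetical directory containing a minimal userland

# Namespaces: give this process its own view of hostname and mounts.
os.unshare(os.CLONE_NEWUTS | os.CLONE_NEWNS)   # Python 3.12+, Linux only
os.sethostname("toy-container")                # only visible inside the new namespace

# Chroot: confine the process's view of the filesystem to NEW_ROOT.
os.chroot(NEW_ROOT)
os.chdir("/")

# cgroups would cap resources (CPU, memory, ...) by writing into the cgroup
# filesystem, e.g. a memory limit into /sys/fs/cgroup/<group>/memory.max,
# before launching the contained process.

os.execv("/bin/sh", ["/bin/sh"])   # "inside" the container: same kernel, isolated view
```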

Obviously, a toy scenario with those would be easy to deploy if you had enough knowledge of these topics. But always remember: a toy scenario is never enough. You need tools that let you do those things as efficiently as possible, giving you full control of your resources and processes, and that’s where technologies like LXC, LXD, Docker and others came in to make this administration more efficient.
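As a small example of what those tools buy you, this is what running an isolated process looks like with Docker’s Python SDK. The sketch assumes the `docker` package is installed and a Docker daemon is running locally:

```python
import docker  # the "docker" Python SDK, talking to the local Docker daemon

client = docker.from_env()

# One call sets up the namespaces, cgroups and isolated filesystem for us,
# runs the command inside that environment, and tears everything down after.
output = client.containers.run(
    "alpine:latest",                               # image providing the container's filesystem
    ["echo", "hello from an isolated process"],    # command to run inside it
    remove=True,                                   # clean the container up when it exits
)
print(output.decode())
```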
That’s all for today, folks!
This was just an introductory article on the concepts of virtualization and containers. In the next articles I’ll post more about those tools, how they are used in practice, and maybe a few things about the cloud computing infrastructure that enables the business models in use today, built on top of these virtualization technologies.
Hope you enjoyed reading it up to here!
Cya!