Containers: No applications, no business!

Updated: May 21, 2020


I trust all of you would agree that sooner or later anyone who aims to "productionize" their work, whether it is an application or a deep learning model, will have to face containers. The benefits of this emerging technology are so apparent that there is almost no way to ignore it. On the one hand, it helps reduce infrastructure costs and the time needed for maintenance; on the other, it increases deployment efficiency. I will elaborate on these further in the article.


You have just met the analytics folks, who are undoubtedly interested in this technology. Let's follow the same learning path they did and see how we can benefit from introducing containers into our daily operations.


Introduction


If you're starting your journey with virtualization technologies, or already have a pretty good handle on virtual machines (VMs) and want to broaden your horizons by getting your hands dirty with containers, I am sure this material will give you a great starting point. I will briefly go through the differences between VMs and containers, highlighting the major advantages of the latter.


A lot of people tend to equate containers with Docker or Kubernetes these days, but the story actually started a bit earlier: back in 2008, with the introduction of control groups (cgroups) in the Linux kernel. That is what made container technology such as Docker as successful as we know it today.



Container vs. VM


Virtual machines are focused on infrastructure and were originally adopted for efficiency. But can we really decide how big or fast the underlying server should be? The most common answers to that question are probably 'Nobody knows' or my favorite, 'It depends'. Very often, for lack of a clear answer, the decision is to buy a bigger and faster one so that we can feel safe and nothing will go wrong. Which is, of course, understandable, because the last thing we want is to have a service down when it is critical to keep it up and running. However, it is now clear that this approach is nothing but a waste of company resources.


Every time the business requires a new application, someone from IT needs to go and get dedicated resources. That obviously generates CapEx and OpEx costs for the organization. CPU and RAM are not free, right?! And processing power is not the only factor: someone has to take care of OS licensing for each VM, updates, security patching, antivirus management, etc. Hypervisors solve a lot of issues, but there is still a lot of room for improvement. That leads us to CONTAINERS.


Source: www.redhat.com


Advantages of containerization


Imagine a software developer who wants to push a Node.js application to production. Will can take this over; let's see how the right set of tools can make his life easier. Let's first take a look at the VM scenario and then compare it to container-driven development.


Keep your focus on the left side of the diagram above for a minute. Obviously we've got hardware beneath a host operating system, with a hypervisor on top of it. In simple terms, the hypervisor allows us to spin up VMs. And at this point, part of our resources is already consumed by at least the host OS. Expected, right?! Now we'd like to push the app in. This would require a Linux VM, and inside this VM we need another operating system (the guest OS) to be installed, as well as some libraries and binaries. Here is where the inefficiency comes from. Just think about it: to run even a simple, lightweight application you need to set up a separate VM with a guest OS and add-ins... That is already more than 350 MB, whereas the app itself is probably around 20 MB.

 

Having the app deployed, the developer might want to scale it up. The business stakeholders will require the same, believe me! At this point, it is necessary to create a number of VMs to keep performance at an acceptable level. We can assume that the resources of our hardware are exhausted once we are done. I'd love to report success, but you know what, our hero just noticed that there are some issues in the app. It happens, that's life... But remind me please, why are we still using VMs?!


CI/CD (continuous integration & delivery) comes in handy in this kind of situation. And of course, containers do as well! There are basically three steps you need to take to do anything related to containers and pushing apps.


  1. Write a container description/manifest - this is what describes the container and all the requirements for setting it up. In Docker terms, this is called a Dockerfile.

  2. Build a Docker image from that description.

  3. Spin up a Docker container that contains and runs your app.
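To make the three steps concrete, here is a minimal sketch of step 1 for a Node.js app like Will's. The base image tag, port, and entry point (`server.js`) are illustrative assumptions, not taken from the article; adjust them to your own project.

```dockerfile
# Step 1: the container description (Dockerfile).
# Assumes a Node.js app with a package.json and a server.js entry point.
FROM node:14-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production

# Copy the rest of the application source
COPY . .

# The port the app listens on (assumed here to be 3000)
EXPOSE 3000

CMD ["node", "server.js"]
```

Steps 2 and 3 then map to two commands: `docker build -t my-app .` builds the image, and `docker run -p 3000:3000 my-app` spins up a container running the app.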


There are similarities between VM and container architecture - both require hardware and a host OS. The differences, and finally the advantages, of containers start with a runtime engine that replaces the hypervisor from the former set-up. This can be any containerization technology; because of personal preference, as you might have already noticed, I'll keep going with the Docker engine.

(I feel like I owe you a small side note here).

A container does not equal Docker! I use Docker because, in my opinion, it is convenient and relatively easy to start with. Essentially, Docker is an open-source technology whose main sponsor is a company called Docker Inc. What a coincidence, right?!

Because containers are so much lighter than VMs, scaling is going to be much faster and less time-consuming. With the application up and running, let's assume we have a new requirement: use a third-party API to, say, perform a text classification task. To do this in the previously described VM set-up, you would need to create, for example, a Python application and put it inside the same VM as our Node.js app. But then you would not be able to scale the Node.js app without also scaling the Python application. The workaround could be to create a separate VM for the Python application, but that's not really cloud-native. To make it truly cloud-native, we could put the Python app into its own container along with the relevant libraries. Another big plus is that containers in the same environment share the host's hardware resources.
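The two-container set-up described above can be sketched as a Compose file. Everything here is a hypothetical illustration: the service names, directories, and ports are assumptions, not details from the article.

```yaml
# docker-compose.yml - two independent services sharing the same host.
version: "3"
services:
  web:
    # The Node.js application (assumed to live in ./node-app)
    build: ./node-app
    ports:
      - "3000:3000"
  classifier:
    # The Python text-classification service (assumed in ./python-classifier)
    build: ./python-classifier
    ports:
      - "5000:5000"
```

The point of splitting the services this way is independent scaling: something like `docker-compose up --scale web=3` adds more Node.js containers without touching the Python classifier.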



To wrap it up


Applications run business - this is the world we are living in, let's face it. The old way was too expensive, too slow, and too inefficient to let it continue. There are a lot of initiatives across different organizations to move legacy apps into containers. But I would not hold my breath waiting for containers to replace VMs. They will rather complement each other, with the number of virtual machines gradually decreasing. Some call containers "virtualization 2.0", and that terminology appeals to me as well. The next posts will show, in a more detailed and hands-on way, how to get started with Docker. See you out there.

