Archive May 2020

CONTAINERIZATION AND ORCHESTRATION

At present, the topic of containers and orchestration has become widespread because of its growth in the world of information technology, its advantages, and the promises it holds. Different vendors and brands (Oracle, IBM, Microsoft, Red Hat, AWS) are introducing new concepts around these areas, and new products are emerging as a result. But what is the usefulness of container technology? What do we gain from containerization? Why is it a solution to some of the current problems in technology?

Most people have heard of Docker and see it as the only containerization technology, but it is not the only solution on the market, although it is true that to date it is the most successful and the one that has popularized the concept. Containerization existed before Docker: it is a technical concept rooted in the Linux kernel and the levels of isolation it can provide.

In the world of container technology we have:

  • Docker: the flagship product of Docker, Inc.
  • CRI-O: the container runtime that Red Hat chose to support containers in OpenShift, its orchestration platform
  • LXC: native Linux container technology

Among the orchestration platforms for these containers we have:

  • Docker Swarm
  • Kubernetes, originally developed by Google
  • Red Hat OpenShift

CONTAINERIZATION

We all know virtualization technology, where virtual machines require hardware, a hypervisor, a base operating system for the applications, and a dedicated share of resources for their operation. It is a very useful technology that is still in use and will continue to be used for a long time, especially in the cloud world.

Unlike virtualization, containers are a different way of encapsulating an instance: everything the application needs to operate, the operating system layer, the platform, and the application itself, is placed in a single artifact, the container. It is really an abstraction, since it does not require, for example, the entire operating system. The container is not a virtual machine, so it does not need 100% of those items, only what the application actually needs, and this makes it a very lightweight way to encapsulate.

Fig. 1. Disi, R. (2020). Comparison of Virtualization vs. Containerization, 3HTP design.

Unlike the traditional approach, where the developer hands over everything the application needs (packages, patches, libraries) to be installed in the QA and production environments, at which point the environment, configuration, and version problems start, with containers these problems are avoided: the image comes with everything the application needs, so there is no difference between environments, because the container the developer delivers is exactly the same one that is sent to QA and production.
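As a minimal sketch of this idea (the application name, image tag, and base image below are hypothetical), the container image is described once in a Dockerfile and built into a single artifact that already contains the runtime and the application:

    # Hypothetical Dockerfile placed next to the application artifact app.jar,
    # declaring only what the application needs:
    #   FROM eclipse-temurin:11-jre
    #   COPY app.jar /opt/app/app.jar
    #   CMD ["java", "-jar", "/opt/app/app.jar"]
    # Build it once into an image; this exact image is what Development, QA,
    # and Production will run.
    docker build -t myapp:1.0 .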

The following graphic shows a Docker platform, which will be used as an example to explain how the container concept works, although all the products are similar and even use similar names.

Fig. 2. Cuervo, V. (2019). Docker Architecture. Retrieved from http://www.arquitectoit.com/docker/arquitectura-docker/

Docker is easy to understand. It has a client-server architecture: the Docker host is where the Docker software is installed and where it runs, from an image of an application or an operating system, an instance called a container. This allows multiple containers to be started from the same image; you could have an image containing a Java application and run several instances with different names on the same Docker host, in the same environment.
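For example (the image and container names are hypothetical), several containers can be started from the same image on one Docker host, each with its own name:

    # Start three independent containers from the same image.
    docker run -d --name myapp-a myapp:1.0
    docker run -d --name myapp-b myapp:1.0
    docker run -d --name myapp-c myapp:1.0

    # List the running containers created from that image.
    docker ps --filter ancestor=myapp:1.0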

On the other hand, there is the Registry, a kind of repository where images are published centrally. The idea is that the same Registry is shared, with one Docker host for development, one for QA, one for production, and so on. In this way the image is always obtained from the same repository, which ensures that an exact copy of the operating system, platform, and environment is installed.
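A sketch of that flow, assuming a hypothetical private registry at registry.example.com: the image is tagged and pushed once, and each environment's Docker host pulls that same copy.

    # Publish the image to the central Registry (hypothetical address).
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0

    # On the Development, QA, or Production Docker host, obtain the same image.
    docker pull registry.example.com/myapp:1.0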

In addition, there is a client layer that allows commands to be executed locally and makes it quick and simple to run different types of applications and technologies, whether I build the image myself or obtain it from public repositories such as Docker Hub (https://hub.docker.com/), which is one of the best known. Currently, the major vendors have already published certified Docker images, which lets developers obtain them and speed up the development process.
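For instance, a developer can obtain a well-known public image from Docker Hub and run it locally with two commands (nginx is used here only as an example):

    # Pull an official image from Docker Hub and run it on the local Docker host.
    docker pull nginx:stable
    docker run -d --name web -p 8080:80 nginx:stable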

The approach then changes: container technology allows us to move the image of the application through each environment, creating instances of the image that physically correspond to the same object. This transforms the work cycle and the DevOps and automation world, because now the only concern is to obtain the image with the correct version from the Registry. The cycle becomes much faster, since there are no longer long installation or parameterization steps; the application is already ready, together with its environment.
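One way to confirm that the promoted image really is the same physical object in every environment (continuing the hypothetical registry example) is to compare its content digest on each Docker host:

    # The digest uniquely identifies the image content; it is identical on the
    # Development, QA, and Production hosts because they pulled the same image.
    docker pull registry.example.com/myapp:1.0
    docker inspect --format '{{index .RepoDigests 0}}' registry.example.com/myapp:1.0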

Fig. 3. Pedraza, I. (2020). Containerization Pipeline, Innovation in Technology article, http://www.3htp.com/devops-y-contenedores-actores-importdamientos-de-la-agILIDAD/

However, there are still the tasks associated with connecting to databases and integrating with external applications or services; that is, what is external to the container is not solved by this technology. Even so, it is a great advance in the world of development and automation for the deployment of applications, reducing the risks associated with differences between environments.

This technology makes the image agnostic: it ignores the environment completely, because everything the application requires is inside the container. For that reason, practices such as configuring artifacts conditioned to each environment are not recommended, since they make no sense in this new scenario.

ORCHESTRATION

We have already briefly described what the concept of containers consists of, the benefits it brings, and to some extent its limitations with respect to external connections and services. However, suppose I want to use containers in complex environments where, for example, the application is containerized and the application server is separated from the data layer and the web layer, each in its own container instead of three VMs.

In this scenario new problems appear: what happens if I want to scale? What if I need load balancing and high availability? How do I connect the containers that need to communicate with each other? How can I monitor the containers and see which one requires more or fewer resources at a given moment?

Fig. 4. Paunin, D. (2018). The best architecture with Docker and Kubernetes – myth or reality? https://medium.com/@dpaunin/the-best-architecture-with-docker-and-kubernetes-myth-or-reality-77b4f8f3804d

So how do we answer these questions? We need a tool that solves these problems. To respond to these needs, the orchestration platforms emerge: Kubernetes, OpenShift, and Docker Swarm, similar solutions that allow us to work with containers in a more elaborate way.

Fig. 5. Kubernetes Architecture

The image shown takes the Kubernetes architecture as an example and presents elements and concepts that are similar across orchestration systems. These platforms have enterprise support and a base infrastructure layer that saves time in installing and configuring what is needed to run a cluster, along with elements such as the master and the workers, the latter being the nodes that run the applications. The cluster is formed by one or more instances of containers grouped into pods, and an etcd repository, a database where all the configuration is stored, among other components.
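A brief sketch of these concepts with the Kubernetes command line (the application name and image are hypothetical): the master records the desired state in etcd and schedules the pods onto the workers.

    # Ask the master for a deployment of three pods built from the same image;
    # the desired state is stored in etcd and the workers run the pods.
    kubectl create deployment myapp --image=registry.example.com/myapp:1.0 --replicas=3

    # See the worker nodes of the cluster and the pods scheduled onto them.
    kubectl get nodes
    kubectl get pods -o wide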

It also has a component, in this case kubectl, that allows interacting with the master through the command line; the master in turn sends instructions to each of the workers. In this way the master lets me scale horizontally or vertically, redistributing the load when a worker is added, which really gives us what the literature defines as a dynamic cluster.
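For example, scaling the hypothetical deployment above is a single instruction to the master through kubectl, and the master redistributes the work across the workers:

    # Scale the application horizontally; the master schedules the extra pods
    # onto whichever workers have capacity.
    kubectl scale deployment myapp --replicas=5
    kubectl get pods -o wide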

“The concepts are very similar to the architecture of a Java/J2EE application server, with its concepts of domain, node agents, cluster, and so on. Unlike J2EE, where only business applications written to the standard are allowed to run, container orchestration platforms can run any type of application, independent of the technology.”

Nowadays the different cloud providers offer monitoring solutions that automate the scalability of resources on the orchestration platforms, scaling the number of resources up or down at any given moment depending on demand.
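On Kubernetes, for example, this demand-driven behavior can be sketched with a Horizontal Pod Autoscaler over the hypothetical deployment above; the managed cloud platforms expose equivalent mechanisms.

    # Keep between 2 and 10 replicas, adding or removing pods automatically
    # when average CPU utilization crosses 80%.
    kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80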

In this sense we have two great strengths: with containers, you gain the power to encapsulate applications independently of the technology in which they are developed, and with the orchestration layer you gain high availability, scalability, elasticity, and shared environments, which allows resources to be used when they are required by applications whose demand rises at certain times. All of the above is conceptually and technically possible. However, these concepts require a process of understanding and maturing. Both the container technology and the orchestration layers require a large number of resources; for example, a highly available master requires cluster solutions of 3, 5, or 9 nodes, which means having machines of a certain capacity.

IN WHICH SCENARIOS CAN WE USE CONTAINERIZATION TECHNOLOGY?

Today we have two scenarios or focuses in which we turn to containerization and orchestration technology. The first focus is the modernization of existing applications: specifically, business applications that run in a Unix environment or in an environment that lends itself to being portable (Java, J2EE, etc.). There are companies that are leaving the proprietary Unix world and opting for other variants. This modernization is driven by a few reasons:

  • Accelerate the life cycle.
  • Improve application availability and scalability.
  • Prepare for migration to a cloud platform.

Many vendors already support their platforms in containers, and many off-the-shelf application servers are already portable to containers; the database world, however, still has a long way to go. One of the modernization techniques used in this approach is Reshape, a cloud adoption technique that allows a container to be created with little modification to the application.

Modernization faces some challenges. One is the homologation, configuration, and dependency management of the applications, for which there are several techniques. Another is that some vendors do not support old versions of their own products in containers. Another aspect is the structure of the applications to be migrated, which were not designed or developed to allow elasticity and are not prepared to work with more than one instance. In addition, in this new environment developers must know not only about containerization but also about the development of traditional applications.

The other focus is design and development from the ground up for containers and orchestration. This technology gives me very innovative design elements for working in a modular way, where components live in different containers and their sum makes up a more complex application, with more elasticity, more traceability, and tolerance to different scenarios.

Native development in containers also offers possibilities to advance: there are vendors that clearly support their products on this technology, and there is a great variety of components for the different problems that arise. On the other hand, some challenges remain, such as the knowledge that must still be acquired about this technology and breaking the inertia and paradigms of application design and development, leaving traditional architectures behind.

Containerization and orchestration form one of the verticals and technology types within the Cloud Native Landscape. They allow applications to be developed in a more portable way and help correct or reduce deficiencies that exist in the development and deployment processes of solutions. They still have great lines of development and challenges to overcome, but they are one of the options to take into account when making decisions in the world of information technology.

Fig. 6. Cloud Native Landscape, 2019.

Containerization is one of the technologies present in the cloud world; it greatly reduces many of the problems that exist today in the development, automation, and deployment processes of an application.

We can say that containers are a portable way of defining software, and Orchestration is a portable definition of how that software should be executed, deployed and monitored.

Renzo Disi | Director 3HTP | Santiago de Chile | Talk on Containerization and Orchestration | 2020

Pajama Party | Containerization and Orchestration | Speaker: Renzo Disi, Director 3HTP | 05-12-2020.

LECTURE – AWS – Storage Lecture

3HTP Pajama Party Series | AWS Storage Gateway solutions.

Today, traditional storage solutions are not scalable or cost-effective. It’s time for your company’s IT team to dedicate its efforts to more strategic and innovative endeavors.

USE CASES

  • Extend the File Server | Extend your file storage without worrying about scalability or too many administrative tasks.
  • Volume Backup | Provide block storage for your applications or generate backup copies by managing their retention.
  • Tape Backup | Provide your backup application with an interface to the virtual tape library for all your critical information.


Video Lecture