
Service Mesh is the New Black

Those of us who live immersed in the world of technology are used to the continuous wave of new terms and technologies, which in many cases are just noise and marketing. Fortunately, every once in a while the surf washes up some treasure. In this case: the service mesh.

Monolithic architectures have long been recognized as an obstacle to improving the maintainability, scalability, and stability of applications, and to making life easier for development teams. For twenty years, application design has evolved toward greater decoupling and autonomy of an application's components: from SOA architectures, through message buses, to microservices.

In recent years, the emergence of container technologies has accelerated this evolution, providing the ecosystem for developing and operating the infrastructure and tools that facilitate, and to some extent push, the adoption of microservices as the building blocks of applications.

What was once a single block has now become dozens of small autonomous components (the microservices), each with a very specific function. This design pattern is analogous to the one the UNIX operating system has used since its inception: instead of providing complex, heavy programs with dozens of features, the environment offers small, highly specialized utilities, each performing one very concrete and simple task, plus a set of interconnection mechanisms that allow combining them to solve more complex problems.

This operating system design pattern has been a success for the past 50 years, so it does not seem a bad example to guide application architecture. The proliferation of container platforms, with Kubernetes at the helm, has seeded data centers with thousands of containers.

Applications now consist of dozens or hundreds of containers, whose communication patterns constitute their nervous system, just as in a living organism. This communication is carried out over the existing network, in most cases the network of a Kubernetes cluster. And this is where the first difficulties appear.

If, in the traditional operation of a simple classic application, with its typical frontend and backend layers, the traffic flows are well defined and easily traceable, imagine what happens in an application built from dozens of microservices, spread over several physical systems, or even over a hybrid infrastructure where part runs in the organization's own data center and the rest in a public cloud.

The number of communication flows grows roughly quadratically with the size of the application: with n microservices there can be up to n(n-1) directed flows, so 10 services already mean up to 90 potential flows, and 50 services up to 2,450. Monitoring these flows, or diagnosing a functional or performance problem among them, is a daunting task.

On top of this challenge, it soon became clear that the model lacked a number of functionalities that would greatly ease the application life cycle, such as:

  • Load balancing: the ability to spread traffic across multiple instances of the same microservice.
  • Smart routing: making routing decisions based on policies, taking into account, for example, time windows, the status of other services, or the type of traffic and its content. This functionality is essential for A/B, blue/green, or canary deployment models (see the sketch after this list).
  • Service discovery: given the complexity a microservices-based application can reach, it is very convenient to have a service discovery mechanism, so that a microservice that has to communicate with another knows where to find it.
  • Resilience: the ability to re-route traffic to a backup service when the main one fails.
  • Observability: in the world of monolithic applications, the interactions between components can be traced with debugging and profiling tools. In the world of microservices, these interactions are highly complex, dynamic, network-level communication flows. It is desirable to be able to monitor and analyze them in order to, for example, diagnose problems, optimize performance, or forecast capacity.
  • Security: data traveling between microservices should be encrypted, and both ends should authenticate each other through digital certificates, since the application layer has no control over the networks the data crosses. It would also be convenient to manage permissions so that any unauthorized communication flow is blocked, considerably improving the security of the application.
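To make the smart-routing item concrete, here is a minimal sketch in Go of the weighted decision a mesh proxy makes during a canary rollout. The service names and weights ("reviews-v1", 90/10) are invented for illustration; real meshes express this as declarative configuration handed to the proxies, not as application code.

```go
package main

import (
	"fmt"
	"math/rand"
)

// backend is one set of instances for a service version.
type backend struct {
	name   string // e.g. "reviews-v1" (hypothetical service name)
	weight int    // share of traffic, out of the total of all weights
}

// pick chooses a backend at random, proportionally to its weight.
// This is the core of a canary rollout: most traffic stays on the
// stable version while a small share exercises the new one.
func pick(backends []backend) backend {
	total := 0
	for _, b := range backends {
		total += b.weight
	}
	n := rand.Intn(total)
	for _, b := range backends {
		if n < b.weight {
			return b
		}
		n -= b.weight
	}
	return backends[len(backends)-1] // not reached; satisfies the compiler
}

func main() {
	// 90% of requests to the stable version, 10% to the canary.
	backends := []backend{
		{name: "reviews-v1", weight: 90},
		{name: "reviews-v2", weight: 10},
	}
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pick(backends).name]++
	}
	fmt.Println(counts) // roughly map[reviews-v1:9000 reviews-v2:1000]
}
```

Shifting the canary's share from 10 to 100 then becomes a configuration change, with no rebuild or redeployment of any microservice.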

It does not seem reasonable to ask development teams to implement these functionalities in the microservices themselves, mainly because of the considerable increase in time and cost. What seems to make more sense is to create libraries that implement these functionalities, so that they can be incorporated into applications. This was the first approach (Stubby from Google, Hystrix from Netflix, Finagle from Twitter), although it soon became clear that maintaining these libraries was very complex and expensive.

For example, one motivation for using microservices is that each of them can be written in the language the team in charge considers most appropriate, independently of the rest. This diversity of development environments carries over to these libraries, forcing their maintainers to port the same functionality to dozens of languages. Moreover, whenever a vulnerability or a bug is fixed in a library, all the microservices have to be rebuilt, possibly versioned again, and the application redeployed.
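To illustrate what those libraries actually embed in each application, here is a minimal sketch of a Hystrix-style circuit breaker in Go; the names and thresholds are assumptions for illustration, not Hystrix's real API. The point is that every team would have to carry, and keep patched, an equivalent of this in every language it uses.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Breaker is a minimal circuit breaker: after maxFails consecutive
// failures it "opens" and rejects calls until cooldown has elapsed.
type Breaker struct {
	mu       sync.Mutex
	fails    int
	maxFails int
	cooldown time.Duration
	openedAt time.Time
}

var ErrOpen = errors.New("circuit open: failing fast")

// Call runs fn through the breaker, failing fast while it is open.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.fails >= b.maxFails && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrOpen
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.fails++
		if b.fails >= b.maxFails {
			b.openedAt = time.Now() // (re)open the circuit
		}
		return err
	}
	b.fails = 0 // a success closes the circuit again
	return nil
}

func main() {
	b := &Breaker{maxFails: 3, cooldown: 5 * time.Second}
	flaky := func() error { return errors.New("downstream timeout") }
	for i := 0; i < 5; i++ {
		fmt.Println(b.Call(flaky)) // fails 3 times, then fails fast
	}
}
```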

It was reasonable, therefore, to separate these functionalities from the microservices themselves, which should remain agnostic to the details of their implementation. This is achieved by placing a proxy local to each microservice, which manages its incoming and outgoing communications.

From a microservice's point of view, this proxy is its only interface to the world, whether it has to accept connections or needs to communicate with another component of the application. It is the proxy that takes care of balancing, traffic management, security, and so on, transparently to the application. Thanks to container technology, the implementation of these proxies is independent of the technology used in the associated microservice.
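As a minimal sketch of the idea, assuming hypothetical ports (the application on 8080, the proxy on 15001), Go's standard library is enough to show where such a sidecar proxy sits: every request passes through one local process that can add balancing, encryption, or telemetry without the application noticing. Production sidecars such as Envoy do the same thing, far more capably.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The local microservice the sidecar fronts (assumed port).
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(app)

	// Every inbound request passes through here, so this is where a
	// real sidecar terminates mTLS, enforces policy, and records metrics.
	handler := func(w http.ResponseWriter, r *http.Request) {
		log.Printf("inbound %s %s from %s", r.Method, r.URL.Path, r.RemoteAddr)
		proxy.ServeHTTP(w, r)
	}

	// The sidecar listens on the port the outside world actually reaches.
	log.Fatal(http.ListenAndServe(":15001", http.HandlerFunc(handler)))
}
```

The outgoing direction is symmetrical: the microservice sends all its outbound traffic to the local proxy, which resolves, secures, and routes it.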

This network of proxies is, de facto, the data plane of the application, managing the communication between all its components. The data plane is configured and monitored by a corresponding control plane. Together, the data and control planes establish a communications mesh, which we call a service mesh. Examples of implementations are Linkerd, Istio, and Consul Connect.

Conceptually, what you get is an overlay network on top of the existing network infrastructure. This type of network is born to provide functionality that the network it relies on (the underlay) lacks. Some examples of such networks are:

  • The Tor network, created to guarantee the anonymity of its users, something the Internet cannot do natively.
  • VPNs, developed to provide security in the form of encrypted communication and peer authentication.
  • Kubernetes CNI networks, which provide a flat network between containers, independent of the physical servers that make up a cluster; examples include Weave, Flannel, and Calico.

The appearance of an overlay is usually quite worrying to those responsible for an organization's networking and security, since it escapes their control. For example, an overlay network could interconnect services that in the underlay would be isolated by security policies. It is also common that, over time, part of the functionality that motivated the creation of the overlay ends up being implemented, much more efficiently, in the underlay. This is, for example, what has happened with Kubernetes overlays and SDNs such as Cisco ACI.

The question many organizations ask themselves in this scenario is: should I incorporate a service mesh into my environment and adapt my developments to make use of it? The answer is not easy. The benefits are obvious, but some disadvantages must not be forgotten when making the decision:

  • Immaturity: service mesh technology is relatively recent, and some implementations still have few flight hours.
  • Team readiness: the learning curve, for both development and operations profiles, is quite steep.

In most cases, the best approach will be a hybrid environment, in which applications that can take advantage of the service mesh coexist with more traditional applications that are not worth migrating to the new scheme. Over time, the proportion of applications running on the service mesh will gradually increase.

In the next few years we will see these disadvantages left behind, and the service mesh become an essential element of application architecture.

Signed: José Sánchez Seco, Senior DevOps Architect. Systems / Cloud Area Manager at SATEC
