As adoption of serverless computing, microservices, and functions as a service (FaaS) increases, service mesh technology is being discussed more and more. A service mesh is an important part of the design pattern behind cloud-native applications, and understanding the role it plays is important for anyone developing and deploying microservices and functions as part of their infrastructure.
A service mesh is a configurable infrastructure layer for a microservices application. It is responsible for the reliable delivery of requests through the complex topology of services that make up a cloud-native application, and it makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, and other capabilities.
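Two of those capabilities, service discovery and load balancing, can be sketched in a few lines. This is a toy illustration, not any real mesh's API: the registry contents and service name are invented for the example, and the strategy shown is simple round-robin.

```python
import itertools

# Hypothetical service registry: maps a logical service name to the
# addresses of its known instances (values are illustrative only).
REGISTRY = {
    "accounts": ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"],
}

def round_robin(service_name):
    """Return an iterator over a service's instances in round-robin
    order -- the simplest load-balancing policy a mesh might apply."""
    return itertools.cycle(REGISTRY[service_name])

balancer = round_robin("accounts")
picks = [next(balancer) for _ in range(4)]
print(picks)  # the fourth request wraps back to the first instance
```

In a real mesh the registry is populated dynamically as instances come and go, and the caller never sees it: the proxy resolves the logical name on every request.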
First, let’s step back and look at how cloud-native architectures have evolved. In the olden days (well, maybe a decade or so ago) applications were monoliths, largely written in Java or .NET. There was also quite a bit of buzz around service-oriented architecture (SOA). Now the abundance of free software, combined with readily available network and cloud resources, makes this type of pattern much more tenable than when it was first presented in the 1990s.
Today’s cloud-native applications are an amalgamation of individual services that are triggered by events. These services collectively do complicated things at large scale; Netflix and Uber are prominent examples. As you break them down, you see that the architecture is made up of single-purpose microservices, each accomplishing one task, such as looking up an address or providing account information for a user of a service.
The complexity of these applications grows with the number of microservices. It then becomes necessary for them to communicate in a consistent fashion, using a standard method to cobble them together into what appears to the end user as a single application. This communication layer that makes the application experience possible is a service mesh. To accomplish this, a service mesh acts as an array of network proxies. Each service communicates with the mesh through its own proxy; these proxies are called sidecars because they run alongside the services rather than within them.
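The sidecar idea can be sketched as a simple in-process toy: the service's own code stays free of cross-cutting concerns, while the wrapper running alongside it handles something like retries. This is an illustration under invented names, not a real network proxy or any particular mesh's behavior.

```python
class Sidecar:
    """Toy stand-in for a sidecar proxy: wraps calls to a service and
    adds retry logic, so the service code itself stays free of it."""

    def __init__(self, service, retries=3):
        self.service = service
        self.retries = retries

    def call(self, request):
        last_error = None
        for _ in range(self.retries):
            try:
                return self.service(request)
            except ConnectionError as exc:  # treat as transient
                last_error = exc
        raise last_error

# A hypothetical flaky service that fails once before succeeding.
calls = {"count": 0}
def address_lookup(user_id):
    calls["count"] += 1
    if calls["count"] < 2:
        raise ConnectionError("transient network error")
    return {"user": user_id, "address": "123 Main St"}

proxy = Sidecar(address_lookup)
print(proxy.call("alice"))  # succeeds on the second attempt
```

A real sidecar does this over the network, transparently to the application, and adds much more: mutual TLS, metrics, tracing, and routing policy.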
It is possible to build cloud-native applications without a service mesh. However, it would require you to build that communication logic into each and every microservice, redundantly doing the same thing over and over again. Implementing a service mesh makes all of this far more manageable.
The novelty around serverless, or at least the advantage we feel is driving it, is that it is on-demand and usage-based. Something happens, the function executes, and when it’s done there is no further usage, as compared to a dedicated VM running in the cloud. However, something needs to make the function run: a message or an event that triggers the serverless function. A service mesh gives some level of standardization to the way these events are shared. Functions share their messages via a sidecar proxy, and software like a service mesh creates a common source of messages.
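The trigger model above can be sketched as a tiny dispatcher: an event arrives, a lookup finds the matching function, the function runs, and nothing is consumed between events. The event names and handler are hypothetical, invented for the example.

```python
# Minimal sketch of event-triggered function execution.
HANDLERS = {}

def on_event(event_type):
    """Register a function to run when an event of this type arrives."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on_event("image.uploaded")  # hypothetical event type
def make_thumbnail(event):
    return f"thumbnail for {event['object']}"

def dispatch(event):
    handler = HANDLERS.get(event["type"])
    if handler is None:
        raise KeyError(f"no handler for {event['type']}")
    return handler(event)

print(dispatch({"type": "image.uploaded", "object": "cat.png"}))
```

A FaaS platform adds the hard parts around this loop: provisioning compute only while the handler runs, and delivering events reliably from many sources.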
TriggerMesh sees these messages and allows you to route them from one cloud to another; it can execute the function in the TriggerMesh cloud or share the results of TriggerMesh-hosted functions with other clouds.
Thanks to a thriving open source community, there is an abundance of software available for deploying a service mesh in your environment.
Istio is a project that provides an open service mesh platform. It was launched by Google, IBM, and Lyft in 2017 and has steadily become part of the cloud-native toolbox.
Consul, developed by HashiCorp, is a distributed service mesh. Consul is open source and can be used to connect services across a distributed infrastructure.
Envoy is an open source service proxy created at Lyft and designed for cloud-native applications. Envoy is a high-performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” for large microservice “service mesh” architectures. It is a self-contained server that runs alongside any application language or framework.
One of the things often mentioned in the same breath as service mesh is streaming platforms, specifically Apache Kafka. Kafka is a way to aggregate and stream data from applications.
Linkerd is an ultralight service mesh for Kubernetes and a Cloud Native Computing Foundation project. Linkerd provides a consistent, uniform layer of instrumentation and control across services, which enables microservice owners to choose the language most appropriate for each service. This decouples the mechanics of communication from the software itself.
Here is an extensive list of software that might be considered when talking about service meshes.
A service mesh enables the services that make up a cloud-native application to work together effectively. It controls how the different parts of an application share data with one another, acting as a dedicated infrastructure layer built right into the app. As cloud-native applications become more pervasive, the need for a service mesh will grow; today there is particular interest in Istio, which is packaged with Knative and Kubernetes.