What is Service Mesh in Microservices?
The microservice architecture involves breaking the application into small interconnected services, each performing a specific task. This breakdown enables developers to work on individual services without affecting the rest of the application, leading to more agility and easier scaling.
These services communicate through APIs, and as the number of services within an application grows, developers may introduce a service mesh to manage all the service-to-service communication.
This article discusses what a service mesh is, how it works and why it’s a standard component of the cloud-native stack.
Key takeaways:
- A service mesh is deployed to secure service-to-service communication in a microservice architecture.
- The service mesh uses local sidecar proxies rather than plugging directly into the service instances. Newer versions of service meshes are experimenting with sidecar-less architectures using eBPF.
- Deploying a service mesh has advantages such as traffic management, centralized security and faster application development and deployment.
Service mesh: Microservices security
A service mesh is a dedicated infrastructure layer that is deployed to control communication between services without having to make any changes to the application code. By deploying at the platform layer instead of the application layer, it increases interservice communication’s observability, security and reliability.
As enterprises scale their applications, a service mesh becomes a much-needed component. The global service mesh market value is expected to show a compound annual growth rate (CAGR) of 41.3% to reach a $1.44 billion valuation in 2027, according to Business Research Insights.
How a service mesh works
The service mesh architecture comprises local proxy sidecars deployed alongside each service. In a containerized application, these sidecars attach to a container orchestration unit such as a Kubernetes pod. Sidecars are used to decouple functionality, such as monitoring and security, from the service. The communication channels between these proxy sidecars are secured with mutual transport layer security (mTLS). Newer service meshes are experimenting with sidecar-less architectures based on eBPF, which move mesh functionality into the kernel for native mesh support.
Regardless of architecture, the service mesh consists of two network planes:
- Data plane, which handles all the request traffic between service instances in an application. It contains all the proxies and handles functions such as load balancing, health checks, authentication and authorization.
- Control plane, which manages the data plane and handles tasks such as instance creation, network policy management and monitoring. The control plane can also be connected to a graphical user interface (GUI) for easier application management.
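To make the sidecar model concrete, here is a minimal sketch of how a mesh typically attaches proxies to services. It assumes Istio as the mesh implementation and a hypothetical `payments` namespace; other meshes use similar mechanisms.

```yaml
# Hypothetical example, assuming Istio: labeling a namespace enables
# automatic sidecar injection, so the control plane adds an Envoy proxy
# container to every pod scheduled in that namespace — no changes to
# the application code or its deployment manifests are required.
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # assumed namespace name for illustration
  labels:
    istio-injection: enabled
```

Once the label is in place, all data-plane traffic for pods in that namespace flows through the injected proxies, which is what lets the control plane apply policy centrally.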
Service mesh vs API Gateway: What’s the difference?
Service meshes and API gateways are complementary components. While a service mesh can automatically apply to newly deployed services, an API gateway needs to be updated every time a service is added or removed from the application.
While the service mesh provides a better solution for securing interservice communication, the API gateway is more commonly used to secure communication between external clients and the application's APIs. Deploying these two components together increases security and scalability.
Why do you need a service mesh in microservices?
A service mesh provides the following benefits:
- Speed of development
The microservices market value is expected to reach $6.62 billion by 2030, according to Verified Market Research. In a microservice architecture, hard coding service-to-service communication logic into each component is challenging and becomes impractical as the number of components increases. A service mesh lifts this burden from development teams, allowing them to focus on the business functions of the services they are responsible for instead.
Using a proxy sidecar to decouple functionality from the service means the same building blocks can be reused regardless of the programming language used within the service itself. These practices allow for faster application development and a quicker time-to-market.
- Observability
A service mesh provides observability in the application, allowing developers to troubleshoot issues quickly. It becomes easier to diagnose communication errors because they occur inside a dedicated infrastructure layer. The mesh also handles communication logging, distributed tracing and performance metrics. Teams can use these metrics and logs to optimize the application.
- Traffic control
The service mesh manages and controls all request traffic between services. By sending all data plane traffic through proxies, a service mesh enables granular control over request traffic without needing to make any changes to the rest of the application.
- Security
A service mesh improves security by providing encryption, authentication and authorization functions. Mutual TLS is commonly used to handle authentication: two services must verify each other's identity certificates before a request is allowed, and the session keys negotiated during the TLS handshake encrypt the traffic between them.
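As a sketch of how this is enforced in practice, the following assumes Istio as the mesh; a single policy resource turns on strict mutual TLS for every workload, with no application changes.

```yaml
# Hypothetical example, assuming Istio: with STRICT mode, the sidecar
# proxies reject any plaintext service-to-service traffic, and workloads
# authenticate each other using mesh-issued certificates.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT
```

Scoping the same resource to a single namespace instead would enforce mTLS for just that namespace, which is a common way to roll the policy out incrementally.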
Service-to-service authorization can be offloaded from the service mesh to Open Policy Agent (OPA), a policy engine designed to handle authorization policies in cloud-native environments. OPA can be used to simplify policy lifecycle management, removing another layer of complexity from microservices. An O’Reilly survey found that the complexity of the microservice architecture was the biggest development challenge for 56% of respondents.
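As an illustration of what such a policy looks like, here is a minimal Rego sketch that OPA could evaluate alongside each sidecar. The service names and SPIFFE identity are assumptions for illustration, not a prescribed schema.

```rego
# Hypothetical OPA policy: allow a request only when the calling
# service's identity (taken from its mTLS certificate) is permitted
# to reach the destination service.
package service.authz

import rego.v1

default allow := false

# The "checkout" service may call the "payments" service.
allow if {
    input.source.principal == "spiffe://cluster.local/ns/shop/sa/checkout"
    input.destination.service == "payments"
}
```

Because the policy lives outside the services, it can be versioned, audited and updated independently of any application code.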
Read our solution brief to learn more about using OPA with a service mesh.
- Service-level configuration
Platform engineers can easily configure service-level properties, such as circuit breakers and retries, for the entire mesh from a central control plane. The mesh also handles service discovery and detects new services as they are added.
The control plane allows you to apply fine-grained control over your traffic. For example, you can easily apply a load-balancing policy to traffic from a particular group of services. Control plane features, such as load balancing and fault injection, simplify traffic management processes and make the inter-service network more resilient.
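The service-level properties described above are typically expressed as declarative resources applied through the control plane. The following sketch assumes Istio and a hypothetical `payments` service; field values are illustrative, not recommendations.

```yaml
# Hypothetical example, assuming Istio: a DestinationRule sets the
# load-balancing policy and a simple circuit breaker for one service,
# and a VirtualService adds retries — all without touching app code.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: payments
spec:
  host: payments.shop.svc.cluster.local   # assumed service host
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST      # route to the least-loaded instance
    outlierDetection:            # circuit breaking:
      consecutive5xxErrors: 5    # eject an instance after 5 consecutive 5xx
      interval: 30s
      baseEjectionTime: 60s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments
spec:
  hosts:
    - payments.shop.svc.cluster.local
  http:
    - retries:
        attempts: 3              # retry failed requests up to 3 times
        perTryTimeout: 2s
      route:
        - destination:
            host: payments.shop.svc.cluster.local
```

Because these resources live in the control plane, platform engineers can tune resiliency behavior per service, or mesh-wide, from a single place.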
Service mesh authorization with Styra DAS for OPA
Enterprises can install the OPA policy engine alongside every service mesh proxy sidecar to decouple policy decisions from the service instances and mesh. Using Styra Declarative Authorization Service (DAS) as a control plane for OPA grants you granular access control over the service mesh and microservice application. You can set authorization policies to evaluate various dynamic attributes and real-world contextual information before allowing or denying access within the service network.
Deploying Styra DAS with a service mesh implementation enables enterprises to strictly control microservice application traffic from external-facing APIs (north-south traffic) and internal service-to-service requests (east-west traffic). Businesses can also monitor multiple service meshes for policy violations, making it easier to meet compliance and security requirements.
Decoupled policy as code for the entire application, including service mesh and individual services, allows for updates from a single point, removing the need for any business code changes that may cause the application to stop functioning or crash.
Schedule a demo to see Styra DAS Enterprise in action.
FAQs
What is a sidecar in microservices?
A sidecar in microservices is a separate service that runs alongside the “main” service. It can be used to separate functionality, such as authorization, authentication and logging, from the service. Sidecars reduce the overall complexity of the microservice application and enable scaling and centralized management.
What is load balancing in microservices?
Load balancing in microservices handles how traffic is distributed across the application and ensures that a single service isn’t overwhelmed by high traffic. Load balancing improves application performance, responsiveness and availability.