Originally published at getambassador.io

Combining Edge Stack & Istio to Unlock the Full Potential of Your K8s Microservices

There is a key challenge with microservices architecture: communication.

Your services need to communicate with the outside world, and they must also communicate with each other. From there, you end up with a whole host of other questions. How do you route all the traffic effectively? How do you do it securely? How do you know whether everything is working correctly?

The answer for Kubernetes microservices lies not in one but in two separate yet related technologies that work better together: API gateways and service meshes. With Edge Stack as your API gateway and an Istio service mesh (or another service mesh such as Linkerd), each handles a specific part of communication, and each comes with an array of other features to ensure secure, reliable, and observable interactions both within the cluster and with the outside world.

What’s a Service Mesh vs. an API Gateway?

Let's think about how microservice architectures work. You have numerous small, independently deployable services, each focusing on a specific capability.

Traffic needs to be routed to the appropriate service. That traffic can be “north-south” traffic from an external client or “east-west” traffic from other services. API gateways handle the former, while service meshes handle the latter.

Service Mesh vs. API Gateway

Route North-South Traffic With An API Gateway

An API Gateway is the entry point for external client requests into a microservices architecture. It handles the "north-south" traffic between clients and the backend services. Edge Stack is an example of a modern API gateway that provides these capabilities in a Kubernetes-native way. In a Kubernetes environment, an API Gateway serves several vital functions:

- **Routing and composition:** The API Gateway routes incoming requests to the appropriate backend services based on URL paths, headers, or other criteria (see the example below). It can also aggregate responses from multiple services to fulfill a single client request.
- **Protocol translation:** The API Gateway can translate between different protocols used by clients and backend services, such as HTTP/REST, gRPC, or WebSocket.
- **Security:** API Gateways often handle authentication, authorization, and rate limiting for external client requests, providing an additional layer of protection for the backend services.
- **API management:** API Gateways enable centralized management of APIs, including versioning, documentation, and lifecycle management.

The API gateway provides a unified interface for clients to interact with the microservices, abstracting away the internal service architecture and exposing well-defined APIs.
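
As a rough illustration of the routing function, here is a minimal sketch of an Edge Stack Mapping resource that sends requests under a URL prefix to a backend Kubernetes service. The `quote` service name and namespace are placeholders, not part of the original article:

```yaml
# Hypothetical Mapping: route /quote/ traffic to the quote service (names are placeholders)
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
  namespace: default
spec:
  hostname: "*"             # accept any Host header
  prefix: /quote/           # match requests whose path starts with /quote/
  service: quote.default:80 # Kubernetes Service (and port) to route to
```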

Route East-West Traffic With A Service Mesh

A service mesh is a dedicated infrastructure layer that handles service-to-service communication within a microservices architecture. It manages the internal "east-west" traffic between services within a single cluster. Istio is a popular open-source service mesh with a rich feature set for managing and securing microservices communication in a Kubernetes environment.

In a Kubernetes environment, services are dynamically scheduled across a cluster, which makes managing communication challenging. A service mesh addresses this complexity by deploying lightweight network proxies (sidecars) alongside each service instance, providing a consistent and transparent way to handle service-to-service communication. It offers several benefits:

- **Service discovery:** A service mesh automatically discovers services and tracks their locations, allowing services to communicate with each other without having to hardcode network details.
- **Traffic management:** Fine-grained control over traffic routing, allowing for canary deployments, A/B testing, and traffic splitting based on weights or percentages (see the sketch after this list).
- **Resilience:** Features like circuit breaking, retries, and timeouts help improve the resilience of inter-service communication, preventing cascading failures and ensuring graceful degradation.
- **Security:** Service meshes like Istio can enforce mutual TLS (mTLS) authentication and encryption for all service-to-service communication, enhancing security within the cluster.
- **Observability:** By capturing detailed metrics, logs, and traces for all service interactions, service meshes provide deep visibility into the behavior and performance of the microservices architecture.

A service mesh enables all these features without requiring changes to the application code.
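
To make the traffic-management point concrete, here is a minimal sketch (not from the original article) of an Istio VirtualService that splits traffic between two versions of a hypothetical `reviews` service for a canary rollout. The `v1` and `v2` subsets would be defined in a matching DestinationRule:

```yaml
# Hypothetical canary split: 90% of traffic to v1, 10% to v2 of the reviews service
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-canary
spec:
  hosts:
    - reviews            # logical service name inside the mesh
  http:
    - route:
        - destination:
            host: reviews
            subset: v1   # subsets are defined in a DestinationRule
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```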

How API Gateways and Service Meshes Work Together

API gateways and service meshes complement each other. They work together to provide a comprehensive solution for managing and securing traffic in a Kubernetes microservices architecture.

The value of having both an API gateway and a service mesh lies in their ability to address different aspects of communication within a microservices architecture. By leveraging the strengths of each technology, you can achieve a more secure, reliable, and observable system.

Security

API gateways act as the first line of defense for external client requests, handling authentication, authorization, and rate limiting. They validate JWT tokens, API keys, or OAuth credentials to ensure only authorized clients can access the backend services. The API Gateway can protect against common security threats like denial-of-service (DoS) attacks.

An API gateway can also enforce access control policies for external client requests, determining which clients can access specific APIs or services. It can apply role-based access control (RBAC) or attribute-based access control (ABAC) based on client identities, scopes, or permissions. The API Gateway can also implement IP allowlisting or blocklisting to restrict access from specific network locations.
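
As one illustration of edge authentication, the sketch below shows roughly how a JWT validation filter can be attached to incoming routes in Edge Stack. The resource layout follows Edge Stack's Filter and FilterPolicy types, but the exact field names and the jwksURI value here are assumptions, so check the Edge Stack documentation before relying on them:

```yaml
# Hypothetical JWT validation at the edge (field names and URL are assumptions)
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: jwt-auth
spec:
  JWT:
    jwksURI: "https://auth.example.com/.well-known/jwks.json"
---
# Apply the filter to all incoming requests
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: jwt-policy
spec:
  rules:
    - host: "*"
      path: "*"
      filters:
        - name: jwt-auth
```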

A service mesh provides security for inter-service communication within the cluster. It can also apply fine-grained access control policies based on service identities and attributes and enforce least-privilege access, ensuring that services can only communicate with the necessary dependencies and limiting the blast radius in case of a security breach.
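
On the service-mesh side, here is a minimal sketch of enforcing mutual TLS across the mesh with Istio's PeerAuthentication resource. The example is illustrative rather than taken from the article; placing it in Istio's root namespace makes it apply mesh-wide:

```yaml
# Require mTLS for all service-to-service traffic in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace makes this policy mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic between sidecars
```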

Resilience

API gateways implement resilience patterns like timeouts, retries, and circuit breakers to handle failures and latency issues when communicating with backend services. They can route requests to healthy service instances and prevent cascading failures.

A service mesh then provides advanced resilience features for inter-service communication. It can automatically detect and handle service failures, perform load balancing across service instances, and implement circuit breaking and fault injection. The service mesh ensures the system can gracefully handle and recover from failures without impacting overall functionality.
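
A minimal sketch of these resilience settings in Istio, assuming a hypothetical `ratings` service: the VirtualService below adds retries and a per-request deadline, while circuit breaking would live in a DestinationRule's outlier-detection settings.

```yaml
# Hypothetical retry and timeout policy for calls to the ratings service
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings-resilience
spec:
  hosts:
    - ratings
  http:
    - route:
        - destination:
            host: ratings
      retries:
        attempts: 3                    # retry a failed call up to three times
        perTryTimeout: 2s
        retryOn: 5xx,connect-failure
      timeout: 10s                     # overall deadline for the request
```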

Observability

An API gateway captures and logs all incoming client requests and outgoing responses, providing visibility into the usage and performance of the exposed APIs. It can generate detailed access logs, including request metadata, response status codes, and latency metrics. The API Gateway can also integrate with centralized logging and monitoring solutions to enable real-time analytics and alerting.

A service mesh provides deep observability of inter-service communication within the cluster. It captures fine-grained metrics, distributed traces, and logs for all service-to-service interactions. The service mesh can generate detailed telemetry data for performance monitoring, troubleshooting, and anomaly detection.
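
As one illustration, recent Istio versions expose a Telemetry resource for tuning this data. The sketch below raises trace sampling mesh-wide; treat the resource and field names as illustrative of Istio's telemetry API rather than as a canonical configuration:

```yaml
# Illustrative: sample 100% of requests for distributed tracing across the mesh
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
    - randomSamplingPercentage: 100.0
```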

If you must prioritize between implementing an API Gateway or a service mesh, starting with the API Gateway is recommended. The API Gateway acts as the entry point for external client requests, and implementing the API Gateway first provides essential security, access control, and traffic management capabilities at the edge of your system. For more on how they work together, watch our recent webinar for a demo.

How Edge Stack API Gateway and Istio’s Service Mesh Work Together

If you have already implemented Edge Stack, Istio’s service mesh is one option that layers onto your existing application transparently. Its key capabilities are precisely those of the ideal service mesh described above, including:

- Secure service-to-service communication with mutual TLS encryption, strong identity-based authentication and authorization
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic
- Fine-grained traffic control with rich routing rules, retries, failovers, and fault injection
- Pluggable policy layer and configuration API supporting access controls, rate limits, and quotas
- Automatic metrics, logs, and traces for all traffic within a cluster

Under the hood, Istio, like Edge Stack, is built on the Envoy Proxy, making coordination between the two services seamless. Istio is implemented by deploying an Envoy sidecar proxy alongside each service instance in the mesh. The sidecars intercept all network communication between services and are managed by Istio's control plane.
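
In practice, those sidecars are usually added by Istio's automatic injection, which is switched on per namespace. A minimal sketch, with a placeholder namespace name:

```yaml
# Label a namespace so Istio injects an Envoy sidecar into every new pod in it
apiVersion: v1
kind: Namespace
metadata:
  name: prod                  # placeholder namespace
  labels:
    istio-injection: enabled
```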

*Edge Stack API Gateway and Istio integration*

Edge Stack is the Ingress point and API Gateway, handling north-south traffic from external clients into the Kubernetes cluster. Istio handles east-west traffic between services within the mesh.

When external traffic comes in:

1. Edge Stack authenticates the request and applies configured edge policies like rate limiting.
2. It routes the request to the appropriate backend service based on URL path, headers, etc.
3. The Istio sidecar next to that service receives the request, applies Istio traffic management rules, and forwards it to the service container.
4. The sidecar intercepts the service's outbound requests to other services and applies relevant Istio policies before routing them over mTLS to the destination service's sidecar.
5. Metrics and traces are collected at both the Edge Stack and Istio layers and can be exported to Prometheus or Jaeger.

Once Edge Stack routes the external request to the appropriate backend service, Istio takes over the traffic management.

Istio maintains a service registry that tracks all services in the mesh and their locations. It automatically discovers services and updates the registry as they are added, removed, or scaled. Services can communicate with each other using logical service names instead of IP addresses.

Configuration is handled through Istio's Custom Resource Definitions (CRDs):

- **Traffic routing:** Routing rules are configured using CRDs such as VirtualService. These rules allow fine-grained control over traffic routing, including canary deployments, A/B testing, and traffic mirroring.
- **Load balancing:** Load balancing can be configured using Istio's DestinationRule CRD, specifying the load balancing algorithm (e.g., round-robin, least-request) and any circuit breaking or outlier detection settings, as in the example below.

```yaml
# Example DestinationRule with the LEAST_REQUEST load balancer
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
```

- **Access control:** Fine-grained access control policies can be applied using Istio's AuthorizationPolicy CRD, defining which services can communicate with each other based on attributes like service identity, namespace, or labels (see the sketch below).
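
A minimal sketch of such a policy, using hypothetical service names: it allows only the `productpage` service account to call the `ratings` workload, and because the action is ALLOW, requests that match no rule are denied for that workload:

```yaml
# Only productpage (identified by its service account) may call ratings
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ratings-allow-productpage
  namespace: prod
spec:
  selector:
    matchLabels:
      app: ratings          # applies to workloads labeled app=ratings
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/prod/sa/productpage"]
```
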
Together, Edge Stack and Istio provide defense-in-depth for the entire application. Edge Stack handles north-south edge security concerns like authenticating external requests and DDoS protection. Istio secures service-to-service east-west traffic with automatic mTLS encryption and fine-grained identity-based access policies.

Failures are isolated and recoverable at both layers. Edge Stack applies resilience policies to traffic entering the cluster. Istio enables client-side load balancing, circuit breaking, retries, and fault injection for inter-service communication.

Edge Stack and Istio, in concert, give you end-to-end observability and the ability to visualize service dependencies. Edge Stack collects detailed telemetry at the edge on north-south traffic. Istio generates granular metrics, distributed traces, and access logs for all east-west service interactions.

Installing Istio’s Service Mesh and Edge Stack for Maximum Results

Implementing an API gateway like Edge Stack with a service mesh like Istio represents a mature and advanced approach to managing microservices architectures. It enables you to handle the intricacies of inter-service communication, enforce consistent policies, and gain deep visibility into your system's behavior. This powerful combination empowers development teams to confidently build and deploy microservices, knowing that their applications are secure, reliable, and observable at every level.

As the complexity of modern applications continues to grow, adopting an API gateway and service mesh becomes increasingly crucial. By embracing Edge Stack and Istio, organizations can future-proof their Kubernetes deployments, enabling them to easily scale and evolve their microservices architectures. This winning combination provides a solid foundation for building robust, resilient, and observable applications. For more, check out Edge Stack in action.
