Chintan Thakker

Originally published at getenroute.io

Where Is My API Gateway?

As the monolithic application gets broken up into several smaller parts, the infrastructure to support it has to change.

Applications are changing, and so is the infrastructure required to support them. When applications were developed and deployed as monoliths, the network infrastructure around them was built the same way: a monolith needed services from a perimeter proxy, such as security and monitoring. But as the monolith is broken into smaller parts, the supporting infrastructure has to change with it.

At the center of this change is how the proxy has adapted to provide infrastructure for services, the smaller pieces of the former monolith. The proliferation of services has also given rise to the service mesh pattern, where functions that used to be baked into the application (such as a tracing library linked into the code) have moved into the proxy.

So far, the community has split network infrastructure into components that serve either North-South or East-West traffic needs. The East-West needs are dictated by functionality that used to live inside the application.

What’s a Gateway?

A gateway is typically associated with a proxy that manages North-South traffic; your traditional perimeter proxy is the gateway proxy. For a monolith, every network function is implemented at this single point of enforcement.

What’s a Mesh?

A mesh is associated with a fleet of proxies that manage East-West traffic. Services (or applications, or APIs) call other services to provide a business function. This East-West communication has become the natural place to implement tracing and observability, and it also serves as the encryption/decryption point for enforcing security.

A Proxy With a Network Function Injection Point

Both a gateway and a service proxy provide a way to inject network functions. Can the functionality implemented at the edge today be run next to a service instead? Can it be run for a set of services rather than at the perimeter?

The boundary between these proxies is gradually diminishing. Consider rate limiting, a network function typically associated with API gateways. There are several reasons an API may be rate-limited: business logic to meter a service, or the need to protect a service from abuse.

The rate-limiting network function can be implemented in a proxy, typically at the perimeter, or embedded as a library inside the service.
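As a concrete illustration of the proxy-side option, here is a minimal sketch of Envoy's HTTP local rate-limit filter (Enroute, discussed below, is built on Envoy). The stat prefix and the 100-requests-per-second token bucket are arbitrary values chosen for the example, not a recommendation:

```yaml
# Sketch: Envoy HTTP local rate-limit filter (v3 API).
http_filters:
- name: envoy.filters.http.local_ratelimit
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
    stat_prefix: ingress_rate_limiter   # illustrative name
    token_bucket:
      max_tokens: 100        # bucket capacity
      tokens_per_fill: 100   # tokens added per interval
      fill_interval: 1s      # refill once per second
    filter_enabled:          # evaluate the filter for 100% of requests
      default_value: { numerator: 100, denominator: HUNDRED }
    filter_enforced:         # enforce (not just report) for 100% of requests
      default_value: { numerator: 100, denominator: HUNDRED }
```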

The Perimeter-Proxy/Cloud-Proxy, the Ingress Controller Proxy, the Node Side-Car, and the Pod Side-Car Proxy

A microservice running in a Kubernetes cluster in a cloud environment can potentially pass through several network proxies.

First is the perimeter proxy, or cloud provider proxy, also called the edge proxy. It is typically created by a Service of type LoadBalancer inside the Kubernetes cluster.
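For example, a minimal Service of type LoadBalancer (names and ports are hypothetical) asks the cloud provider to provision an external load balancer at the edge:

```yaml
# Sketch: the cloud provider provisions an edge load balancer for this Service.
apiVersion: v1
kind: Service
metadata:
  name: my-gateway        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-gateway       # pods backing the edge proxy
  ports:
  - port: 443
    targetPort: 8443
```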

Next is the Kubernetes ingress controller, which is responsible for getting traffic into the cluster. It is typically associated with the Ingress resource in Kubernetes.
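A minimal Ingress resource (the hostname and backend Service name are hypothetical) that routes traffic through the ingress controller might look like this:

```yaml
# Sketch: an Ingress routing one host to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress             # hypothetical name
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service   # hypothetical backend Service
            port:
              number: 80
```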

Some service mesh implementations, such as Linkerd, can run one proxy per node (the node side-car) to implement mesh functionality. Encryption and decryption of traffic is implemented at this level.

And then there is the side-car service proxy that runs in the pod alongside the actual service. This, too, primarily runs functions like encryption and decryption.
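In manifest terms, the side-car pattern is just a second container in the pod. A bare-bones sketch follows; the names and image tags are illustrative, and real meshes usually inject this container automatically rather than having you write it by hand:

```yaml
# Sketch: a pod running the service plus a side-car proxy container.
apiVersion: v1
kind: Pod
metadata:
  name: api-with-sidecar          # hypothetical name
spec:
  containers:
  - name: api                     # the actual service
    image: example/api:1.0        # hypothetical image
    ports:
    - containerPort: 8080
  - name: proxy-sidecar           # proxies traffic to/from the service
    image: envoyproxy/envoy:v1.30-latest  # illustrative tag
```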

Every application is different, and the available choices are generally either too opinionated or too complex.

Opinionated solutions come with assumptions about where network functions run. This comes at the cost of flexibility and simplicity: a team cannot decide at what level to run its infrastructure services and is often forced to swallow all the complexity even when its needs are minimal.

Melting Boundaries

Proxies are a way to implement network functions. There should be a clear path to running a network function at the place that is most convenient for the service. Some functions that today typically run on North-South traffic may eventually be pushed closer to services using a side-car pattern. The network functions associated with an API gateway are candidates for such a transition.

The underlying network infrastructure choices should not constrain the flexibility available to the operator when adopting microservices, the cloud, or a service mesh, or when choosing which proxy provides a given network function.


Enroute Universal Gateway

We built the Enroute Universal Gateway to simplify network function injection points along the traffic path of a service (or an application, or an API). This flexibility is critical when breaking a monolith into microservices, adopting DevOps processes, moving to the cloud, or running Kubernetes in the organization.

The Enroute Universal Gateway supports advanced rate limiting and Lua scripting at the perimeter proxy, at Kubernetes ingress, or in a side-car proxy. It can run at Kubernetes ingress without any external database, or with a database when running elsewhere. It is completely containerized and supports non-REST protocols such as gRPC. Since it is built on top of Envoy Proxy, it provides the necessary performance for all of the above use cases.
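To give a flavor of the Lua scripting, here is a minimal sketch of the underlying Envoy Lua HTTP filter that Enroute builds on. Note this is raw Envoy v3 configuration, not Enroute's own filter syntax, and the injected header name is made up for the example:

```yaml
# Sketch: Envoy's Lua HTTP filter; the script adds a request header.
http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        -- hypothetical header, for illustration only
        request_handle:headers():add("x-example-injected", "true")
      end
```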

The boundaries between different proxies, load balancers, and application delivery controllers are shrinking, and there needs to be a solution that simplifies and unites these technologies. A unified solution reduces complexity in an organization and provides the flexibility to realize its digital transformation objectives.
