
kelseyevans for Datawire

Originally published at blog.getambassador.io

Building Ambassador, an Open Source API Gateway on Kubernetes and Envoy

API Gateways are a popular pattern for exposing your service endpoints to the consumer. At Datawire, we wanted to expose a number of our cloud services to our end users via an API Gateway. All of our cloud services run in Kubernetes, so we wanted to deploy the API gateway on Kubernetes as well. And finally, we wanted something that was open source.

We took a look at Tyk, Kong, and a few other open source API Gateways, and found that they had a common architecture — a persistent data store (e.g., Cassandra, Redis, MongoDB), a proxy server to do the actual traffic management, and REST APIs for configuration. While all of these seemed like they would work, we asked ourselves if there was a simpler, more Kubernetes-native approach.

Our API Gateway wish list

We scribbled a list of requirements on a whiteboard:

  • Reliability, availability, scalability. Duh.
  • Declarative configuration. We had committed to the declarative model of configuration (here's an article on the contrast), and didn't like the idea of mixing imperative configuration (via REST) and declarative configuration for our operational infrastructure.
  • Easy introspection. When something didn't work, we wanted to be able to introspect the gateway and figure out why.
  • Easy to use.
  • Authentication.
  • Performance.
  • All the features you need for a modern distributed application, e.g., rate limiting, circuit breaking, gRPC, observability, and so forth.

We realized that Kubernetes gave us the reliability, availability, and scalability. And we knew that the Envoy Proxy gave us the performance and features we wanted. So we asked ourselves if we could just marry Envoy and Kubernetes, and then fill in the remaining gaps.

Ambassador = Envoy + Kubernetes

We started writing some prototype code, shared it with the Kubernetes community, and iterated on the feedback. We ended up with the open source Ambassador API Gateway. At its core, Ambassador has one basic function: it watches for configuration changes in your Kubernetes manifests, and then safely passes the necessary configuration changes to Envoy. All of the L7 networking is performed directly by Envoy, and Kubernetes takes care of reliability, availability, and scalability.

[Diagram: Ambassador running on Kubernetes]

To that core function, we’ve added a few other key features: introspection via a diagnostics UI (see below), and a single Docker image that integrates Envoy and all the necessary bits to get it running in production (as of 0.23, it’s a 113MB Alpine Linux-based image).

[Screenshot: Ambassador diagnostics interface]

The combination of Envoy and Kubernetes has enabled Ambassador to be production ready in a short period of time.

  • Ambassador uses Kubernetes for persistence, so there is no need to run, scale, or maintain a database. (And as such, we don't need to test or tune database queries.)
  • Scaling Ambassador is done by Kubernetes, so you can use a horizontal pod autoscaler or just add replicas as needed.
  • Ambassador uses Kubernetes liveness and readiness probes, so Kubernetes automatically restarts Ambassador if it detects a problem (a sketch of the probe configuration follows this list).
  • All of the actual L7 routing is done by Envoy, so our performance is the same as Envoy. (In fact, you could actually delete the Ambassador code from the pod, and your Envoy instance would keep on routing traffic.)
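To make the probe wiring concrete, here is a rough sketch of liveness and readiness probes on the Ambassador container. The /ambassador/v0/check_alive and /ambassador/v0/check_ready paths and the admin port 8877 are assumptions based on the stock Ambassador deployment manifest; verify them against the manifest for your version.

```yaml
# Sketch: probe configuration for the Ambassador container in its Deployment.
# The health-check paths and admin port (8877) are assumptions taken from the
# stock Ambassador manifest; check your version's manifest for the exact values.
livenessProbe:
  httpGet:
    path: /ambassador/v0/check_alive
    port: 8877
  initialDelaySeconds: 30
  periodSeconds: 3
readinessProbe:
  httpGet:
    path: /ambassador/v0/check_ready
    port: 8877
  initialDelaySeconds: 30
  periodSeconds: 3
```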

And thanks to Envoy's extremely robust feature set, we've been able to add features such as rate limiting, gRPC support, WebSockets, and more in a short period of time.

What about Ingress?

We considered making Ambassador an ingress controller. In this model, the user would define Ingress resources in Kubernetes, which would then be processed by the Ambassador ingress controller. After investigating this approach further, we decided against it because Ingress resources have a very limited feature set. In particular, Ingress resources can only define basic HTTP routing rules. Many of the features we wanted to use in Envoy (e.g., gRPC, timeouts, rate limiting, CORS support, routing based on HTTP method, etc.) simply can't be expressed with Ingress resources.
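To make that limitation concrete, here is roughly what an Ingress resource of that era (the extensions/v1beta1 schema) could express: a host, a path, and a backend service, with everything else pushed into controller-specific annotations. The hostname and service name below are placeholders.

```yaml
# A typical extensions/v1beta1 Ingress: host + path + backend is about all it
# can say. There are no first-class fields for timeouts, gRPC, rate limiting,
# CORS, or routing on the HTTP method.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service
          servicePort: 80
```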

Using Ambassador

Our goal with Ambassador is to make it idiomatic with Kubernetes. Installing Ambassador is a matter of creating a Kubernetes deployment (e.g., kubectl apply -f https://www.getambassador.io/yaml/ambassador/ambassador-rbac.yaml if you're using RBAC) and creating a Kubernetes service that points to the deployment.
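As a rough sketch of the second step, the service below assumes the Ambassador pods are labeled service: ambassador and that Envoy listens on port 80; check the deployment manifest you installed for the actual labels and ports.

```yaml
# Sketch: a LoadBalancer service in front of the Ambassador deployment.
# The selector label and ports are assumptions; match them to the deployment
# manifest you applied.
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    service: ambassador
```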

Once that's done, you configure Ambassador via Kubernetes annotations. One of the advantages of this approach is that the actual metadata about how your service is published is all in one place: your Kubernetes service object.
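Here's a minimal sketch of an annotated service, assuming the getambassador.io/config annotation and the v0 Mapping schema from the Ambassador documentation; the service name, prefix, mapping name, and ports are placeholders.

```yaml
# Sketch: publishing a service through Ambassador via an annotation on the
# service object. The annotation key and Mapping fields assume the v0 schema
# documented at the time; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: my_service_mapping
      prefix: /my-service/
      service: my-service
      weight: 10
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-service
```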

Ambassador supports a rich set of annotations that map to various features of Envoy. The weight attribute used above routes 10% of incoming traffic to this particular version of the service. Other useful attributes include method, which lets you define the HTTP method for a mapping; grpc, for gRPC-based services; and tls, which tells Ambassador to contact the service over TLS. The full list is in the configuration documentation.

What's next for Ambassador

At KubeCon NA 2017, one of the themes was how applications can best take advantage of all that Kubernetes has to offer. Kubernetes is evolving at an incredibly fast rate and introducing new APIs and abstractions. While Ingress resources didn't fit our use cases, we're exploring Custom Resources and the Operator pattern as a potentially interesting direction for Ambassador, since this would address the limitations we encountered with Ingress. More generally, we're interested in understanding how people would like to build cloud-native applications with Ambassador.

We're also adding more features as more and more users deploy Ambassador in production, such as setting arbitrary request headers, single-namespace support, WebSockets, and more. Finally, some of our users are deploying Ambassador with Istio. In this setup, Ambassador is configured to handle so-called "north/south" traffic (into and out of the cluster) while Istio handles "east/west" traffic (between services).

If you're interested, we'd love to hear from you and have you contribute. Join our Gitter chat, open a GitHub issue, or just try Ambassador (it's just one line to try it locally with Docker!).
