<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: kelseyevans</title>
    <description>The latest articles on DEV Community by kelseyevans (@kelseyevans).</description>
    <link>https://dev.to/kelseyevans</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F31597%2Ffd7336ac-d0ec-4f4d-8136-cda9d1215cf8.jpeg</url>
      <title>DEV Community: kelseyevans</title>
      <link>https://dev.to/kelseyevans</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kelseyevans"/>
    <language>en</language>
    <item>
      <title>Eliminating Local Resource Constraints for Building Cloud Native Applications</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Wed, 10 Feb 2021 22:44:13 +0000</pubDate>
      <link>https://dev.to/kelseyevans/eliminating-local-resource-constraints-for-building-cloud-native-applications-1fja</link>
      <guid>https://dev.to/kelseyevans/eliminating-local-resource-constraints-for-building-cloud-native-applications-1fja</guid>
      <description>&lt;p&gt;Is Minikube melting your laptop? Are your local integration tests suffering because you can’t run dependencies on your development machine?&lt;/p&gt;

&lt;p&gt;As organizations adopt Kubernetes and cloud native architectures, development teams will often run into resource constraints as their architectures get more complex. Additionally, Kubernetes presents new challenges for configuring local development environments in comparison with legacy monolithic applications.&lt;/p&gt;

&lt;p&gt;With external dependencies and complex testing infrastructures, application developers are forced to either make sacrifices on realistic testing or wait long periods of time to get their local environments configured properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up development environments for Kubernetes
&lt;/h2&gt;

&lt;p&gt;The process for developing traditional monolithic web applications has been refined and optimized over many years. However, microservices and cloud native architectures still present challenges for developers looking to configure realistic development environments efficiently.&lt;/p&gt;

&lt;p&gt;Some common challenges for developers working on Kubernetes applications include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend developers can't run Kubernetes on their typically resource-limited development machines. Oftentimes, they will have to buy a more powerful laptop or SSH to a remote server that can run the tools they need&lt;/li&gt;
&lt;li&gt;Frontend developers can't run the entire backend stack on their laptops because it’s overly resource intensive. Instead they have to rely on accessing/consuming backend services that are running in a shared remote (staging) environment. This creates challenges with contention on the remote environment (sharing environments can increase latency on service calls etc), challenges with managing state (e.g. two or more developers mutating the same data store), and the environment being out of sync with production (or out of sync with assumptions/expectations)&lt;/li&gt;
&lt;li&gt;Backend developers can't run all of the dependencies locally that are required for integration testing. Instead, they often create mocks/stubs/virtualised services that codify (potentially incorrect) assumptions about the real systems&lt;/li&gt;
&lt;li&gt;Backend developers have to selectively spin up and spin down local dependencies that can't be run at the same time (due to resource issues) in order to integration test the system piece by piece. It’s impossible to test the entire system running locally, and the piece by piece tests may hide a bigger integration problem that will only be found once code is pushed to production&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Solutions to Address Local Resource Constraints
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Buy More Powerful Hardware
&lt;/h3&gt;

&lt;p&gt;When resources are the limitation, there’s always the option to buy more powerful hardware. This can get expensive, however, especially when considering upgrading the resources of an entire team. You’ll also have to account for the time lost as developers get used to new machines. As your organization grows, your systems will continue to become more complex, and it’s easy to find yourself in need of more powerful hardware yet again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Migrate to Developing Code Remotely with a Cloud-Based IDE
&lt;/h3&gt;

&lt;p&gt;There are an increasing number of open source and commercial cloud-based IDE products that can be used to effectively negate any local development machine resource constraints. The underlying hardware powering the cloud IDE can be scaled vertically, and the integrated cluster networking allows easy horizontal scaling.&lt;/p&gt;

&lt;p&gt;The challenges with this approach often relate to customizability, with limited access to the underlying OS and hardware, and to cost, which is recurring and scales with the size of the development team.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Infinite-Scale Development Environments with Ambassador Cloud &amp;amp; Telepresence
&lt;/h3&gt;

&lt;p&gt;Ambassador Cloud is powered by Telepresence, an OSS tool focused on improving the developer workflow for individual Kubernetes developers. Telepresence uses a smart proxy to intercept traffic bound for a service running in the cloud and reroute it to your local machine. With this model, you can run and develop any service, no matter its size, on your local machine.&lt;/p&gt;

&lt;p&gt;This demo shows a Java microservice running in Kubernetes. Thanks to the power of Telepresence, the developer can make changes to their service without having to run all of the dependencies on their local machine. Even better, they can see the changes immediately! &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/W_a3aErN3NU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If this is a problem you’ve faced while adopting cloud native technologies, we’d love to hear about your story and how you’ve addressed it. Please drop us a line at &lt;a href="https://www.twitter.com/ambassadorlabs"&gt;@ambassadorlabs&lt;/a&gt; on Twitter or &lt;a href="http://d6e.co/slack"&gt;join our Slack channel&lt;/a&gt; to share your story.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn More
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Check out &lt;a href="https://www.getambassador.io/products/telepresence"&gt;Ambassador Cloud powered by Telepresence&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Learn more about &lt;a href="https://www.getambassador.io/use-case/local-kubernetes-development"&gt;fast, efficient development of Kubernetes microservices&lt;/a&gt; ​&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.getambassador.io/cloud/preview"&gt;Get started today&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>development</category>
    </item>
    <item>
      <title>From Monolith to Service Mesh, via a Front Proxy-- Learnings from stories of building the Envoy Proxy</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Fri, 24 Aug 2018 18:53:42 +0000</pubDate>
      <link>https://dev.to/datawireio/from-monolith-to-service-mesh-via-a-front-proxy---learnings-from-stories-of-building-the-envoy-proxy-2gda</link>
      <guid>https://dev.to/datawireio/from-monolith-to-service-mesh-via-a-front-proxy---learnings-from-stories-of-building-the-envoy-proxy-2gda</guid>
      <description>&lt;p&gt;The concept of a “&lt;a href="https://youtu.be/t_hfoAKMgOo"&gt;service mesh&lt;/a&gt;” is getting a lot of traction within the microservice and container ecosystems. This technology promise to homogenise internal network communication between services and provide cross-cutting nonfunctional concerns such as observability and fault-tolerance. However, the underlying proxy technology that powers a service mesh can also provide a lot of value at the edge of your systems — the point of ingress — particularly within an API gateway like the &lt;a href="https://www.getambassador.io"&gt;open source Kubernetes-native Ambassador gateway&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The State of SOA Networking
&lt;/h2&gt;

&lt;p&gt;In a talk last year, &lt;a href="https://twitter.com/mattklein123"&gt;Matt Klein&lt;/a&gt;, one of the creators of the Envoy Proxy, described the state of service-oriented architecture (SOA) and microservice networking in 2013 as “&lt;a href="https://www.microservices.com/talks/lyfts-envoy-monolith-service-mesh-matt-klein/"&gt;a really big and confusing mess&lt;/a&gt;”. Debugging was difficult or impossible, with each application exposing different statistics and logging, and providing no ability to trace how requests were handled across the entire chain of services that took part in generating a response. There was also limited visibility into infrastructure components such as hosted load balancers, caches and network topologies.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s a lot of pain. I think most companies and most organizations know that SOA [microservices] is kind of the future and that there’s a lot of agility that comes from actually doing it, but on a rubber meets the road kind of day in and day out basis, people are feeling a lot of hurt. That hurt is mostly around debugging.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Maintaining reliability and high-availability of distributed web-based applications was a core challenge for large-scale organisations. Solutions to the challenges frequently included either multiple or partial implementations of retry logic, timeouts, rate limiting and circuit breaking. Many custom and open source solutions used a language-specific (and potentially even framework-specific) solution that meant engineers inadvertently locked themselves into a technology stack “essentially forever”. Klein and his team at Lyft thought there must be a better way.&lt;/p&gt;
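
&lt;p&gt;As a rough illustration of the kind of resilience logic teams kept reimplementing per language and framework, here is a minimal circuit breaker sketch in Python (the class and thresholds are hypothetical, not Envoy's implementation):&lt;/p&gt;

```python
import time

class CircuitBreaker:
    """Minimal sketch: open the circuit after a run of consecutive
    failures, then fail fast until a reset timeout has elapsed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            elapsed = time.monotonic() - self.opened_at
            if elapsed >= self.reset_timeout:
                self.opened_at = None  # half-open: let one trial call through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

&lt;p&gt;Envoy moves this logic out of application code and into the proxy, so every service gets one consistent implementation regardless of language.&lt;/p&gt;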

&lt;blockquote&gt;
&lt;p&gt;Ultimately, robust observability and easy debugging are everything. As SOAs become more complicated, it is critical that we provide a common solution to all of these problems or developer productivity grinds to a halt (and the site goes down… often)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ultimately the &lt;a href="https://www.envoyproxy.io"&gt;Envoy proxy&lt;/a&gt; was created to be this better way, and this project was released at open source by Matt and the Lyft team in &lt;a href="https://eng.lyft.com/announcing-envoy-c-l7-proxy-and-communication-bus-92520b6c8191"&gt;September 2016&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Evolution of Envoy
&lt;/h2&gt;

&lt;p&gt;I’ve talked about the core features of Envoy in a previous post that covers another of Matt’s talks, but here I want to touch on the advanced load balancing. The proxy implements “zone aware least request load balancing”, and provides Envoy metrics per zone. As the Buoyant team have stated in their blog post “&lt;a href="https://blog.buoyant.io/2016/03/16/beyond-round-robin-load-balancing-for-latency/"&gt;Beyond Round Robin: Load Balancing for Latency&lt;/a&gt;”, performing load balancing at this point in the application/networking stack allows for more advanced algorithms than have typically been seen within SOA networking. Envoy also provides traffic shadowing, which can be used to fork (and clone) traffic to a test cluster, which is proving to be a popular approach for &lt;a href="https://medium.com/@copyconstruct/testing-in-production-the-safe-way-18ca102d0ef1"&gt;testing microservice-based applications in production&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--go-frNcO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datawire.io/wp-content/uploads/2018/08/service-mesh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--go-frNcO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datawire.io/wp-content/uploads/2018/08/service-mesh.png" alt="lyft today"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A core feature offered by Layer 7 (L7) proxies like Envoy is the ability to provide intelligent deployment control by basing routing decisions on application-specific data, such as HTTP headers. This allows a relatively easy implementation of blue/green deployments and canary testing, which also have the benefit of being controllable in near real time (in comparison with, say, an approach that uses the deployment mechanism to initialise and decommission VMs or pods to determine which services serve traffic).&lt;/p&gt;
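
&lt;p&gt;The core idea can be sketched in a few lines of Python: pick a backend per request based on an HTTP header. The header and backend names below are hypothetical, and real Envoy routing is expressed as declarative configuration rather than coded like this:&lt;/p&gt;

```python
def choose_backend(headers, stable="app-v1", canary="app-v2"):
    """Send requests carrying an opt-in header to the canary backend;
    everything else goes to the stable version."""
    if headers.get("x-canary", "").lower() == "true":
        return canary
    return stable
```

&lt;p&gt;Because the decision is made per request at the proxy, shifting traffic between versions is a configuration change rather than a redeployment.&lt;/p&gt;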

&lt;h2&gt;
  
  
  Observability, Observability, Observability
&lt;/h2&gt;

&lt;p&gt;Matt states in the talk that observability is by far the most important thing that Envoy provides. Having all service traffic transit through Envoy provides a single place where you can: produce consistent statistics for every hop; create and propagate a stable request identifier (which also required a lightweight application library to fully implement); and provide consistent logging and distributed tracing.&lt;/p&gt;

&lt;p&gt;Being built around Envoy, the &lt;a href="https://www.getambassador.io"&gt;Ambassador API gateway&lt;/a&gt; embraces the &lt;a href="https://www.getambassador.io/reference/statistics"&gt;same principles&lt;/a&gt;. Metrics are exposed via the ubiquitous and well-tested &lt;a href="https://github.com/etsy/statsd"&gt;StatsD&lt;/a&gt; protocol, and Ambassador automatically sends statistics information to a Kubernetes service called statsd-sink using typical StatsD protocol settings, UDP to port 8125. The popular &lt;a href="https://www.getambassador.io/reference/statistics#prometheus"&gt;Prometheus&lt;/a&gt; open source monitoring system is also supported, and a StatsD exporter can be deployed as a sidecar on each Ambassador pod. More details are provided in the Datawire blog “&lt;a href="https://www.datawire.io/faster/ambassador-prometheus/"&gt;Monitoring Envoy and Ambassador on Kubernetes with the Prometheus Operator&lt;/a&gt;”.&lt;/p&gt;
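
&lt;p&gt;The StatsD protocol itself is just short text datagrams over UDP. As a sketch, here is roughly what a counter increment sent towards statsd-sink on port 8125 looks like in Python (the metric name is illustrative):&lt;/p&gt;

```python
import socket

def statsd_increment(metric, host="statsd-sink", port=8125):
    """Format a StatsD counter increment ("name:1|c") and send it
    as a single UDP datagram; returns the payload for inspection."""
    payload = f"{metric}:1|c".encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload
```

&lt;p&gt;Fire-and-forget UDP is what makes this style of instrumentation cheap enough to apply to every request.&lt;/p&gt;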

&lt;p&gt;Creating effective dashboards is an art in itself, and Matt shared several screenshots of dashboards that he and his team have created to show Envoy data at Lyft. If you want to explore a real world example of this type of dashboard, &lt;a href="https://www.linkedin.com/in/alexandregervais/"&gt;Alex Gervais&lt;/a&gt;, staff software developer at AppDirect and author of “&lt;a href="https://www.appdirect.com/blog/evolution-of-the-appdirect-kubernetes-network-infrastructure"&gt;Evolution of the AppDirect Kubernetes Network Infrastructure&lt;/a&gt;”, recently shared the AppDirect team’s &lt;a href="https://grafana.com/dashboards/4698"&gt;Grafana dashboard for Ambassador&lt;/a&gt; via the Grafana website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X0QxI8wP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datawire.io/wp-content/uploads/2018/08/envoy-dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X0QxI8wP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.datawire.io/wp-content/uploads/2018/08/envoy-dashboard.png" alt="envoy_dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of Envoy
&lt;/h2&gt;

&lt;p&gt;The best place to learn about the future direction of Envoy is the &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/"&gt;Envoy documentation&lt;/a&gt; itself. In the talk that I’ve covered in this post Matt hinted at several future directions that have since been realised. This includes more &lt;a href="https://www.envoyproxy.io/docs/envoy/v1.5.0/configuration/rate_limit"&gt;rate limiting options&lt;/a&gt; (be sure to check both the v1 and v2 APIs), and an &lt;a href="https://github.com/lyft/ratelimit"&gt;open source Go-based rate limit service&lt;/a&gt;. The Datawire team have followed suit with a series on how to implement &lt;a href="https://blog.getambassador.io/from-monolith-to-service-mesh-via-a-front-proxy-learnings-from-stories-of-building-the-envoy-333711bfd60c"&gt;rate limiting on the Ambassador API gateway&lt;/a&gt; (effectively an Envoy front proxy), and also released demonstration open source code for a &lt;a href="https://github.com/danielbryantuk/ambassador-java-rate-limiter"&gt;Java rate limiting service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Undeniably the community has evolved at a fantastic pace since Matt gave the talk. The communities for &lt;a href="https://www.envoyproxy.io/community.html"&gt;Envoy&lt;/a&gt;, &lt;a href="https://istio.io/community/"&gt;Istio&lt;/a&gt; and &lt;a href="https://blog.getambassador.io/growing-the-ambassador-api-gateway-community-7fa7ed064c9c"&gt;Ambassador&lt;/a&gt; (and several other Envoy-based services) are extremely active and helpful. So, what are you waiting for? Get involved and help steer the future of what are shaping up to be core components of modern cloud native application architectures. You can join the conversation &lt;a href="https://blog.getambassador.io/from-monolith-to-service-mesh-via-a-front-proxy-learnings-from-stories-of-building-the-envoy-333711bfd60c"&gt;here&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://blog.getambassador.io"&gt;Ambassador blog&lt;/a&gt; by &lt;a href="https://www.twitter.com/danielbryantuk"&gt;Daniel Bryant&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Using API Gateways to Facilitate Your Transition from Monolith to Microservices</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Fri, 08 Jun 2018 20:22:35 +0000</pubDate>
      <link>https://dev.to/datawireio/using-api-gateways-to-facilitate-your-transition-from-monolith-to-microservices-4n4g</link>
      <guid>https://dev.to/datawireio/using-api-gateways-to-facilitate-your-transition-from-monolith-to-microservices-4n4g</guid>
      <description>&lt;p&gt;In my consulting working I bump into a lot of engineering teams that are migrating from a monolithic application to a microservices-based application. “So what?” you may say, “and the sky is blue”, and yes, while I understand that this migration pattern is almost becoming a cliche, there are often aspects of a migration get forgotten. I’m keen to talk about one of these topics today — the role of an edge gateway, or API gateway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migrating to Microservices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Typically at the start of a migration the obvious topics are given plenty of attention: domain modelling via defining &lt;a href="https://www.infoq.com/articles/ddd-contextmapping" rel="noopener noreferrer"&gt;Domain-Driven Design&lt;/a&gt; inspired "&lt;a href="https://martinfowler.com/bliki/BoundedContext.html" rel="noopener noreferrer"&gt;bounded contexts&lt;/a&gt;", the creation of &lt;a href="https://continuousdelivery.com/" rel="noopener noreferrer"&gt;continuous delivery&lt;/a&gt; pipelines, automated &lt;a href="http://amzn.to/2Iq1HlU" rel="noopener noreferrer"&gt;infrastructure provisioning&lt;/a&gt;, enhanced &lt;a href="http://amzn.to/2IlxHr8" rel="noopener noreferrer"&gt;monitoring and logging&lt;/a&gt;, and sprinkling in some shiny new technology (&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;, &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, and perhaps currently a &lt;a href="https://buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/" rel="noopener noreferrer"&gt;service mesh&lt;/a&gt; or &lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;two&lt;/a&gt;?). However, the less obvious aspects can cause a lot of pain if they are ignored. A case in point is how to orchestrate the evolution of the system and the migration of the existing user traffic. Although you want to refactor the existing application architecture and potentially bring in some new technology, you do not want to disrupt your end users.&lt;/p&gt;

&lt;p&gt;As I wrote in a previous article "&lt;a href="https://blog.getambassador.io/continuous-delivery-how-can-an-api-gateway-help-or-hinder-1ff15224ec4d" rel="noopener noreferrer"&gt;Continuous Delivery: How Can an API Gateway Help (or Hinder)&lt;/a&gt;", patterns like the "dancing skeleton" can greatly help in proving the end-to-end viability of new applications and infrastructure. However, the vast majority of underlying customer interaction is funneled via a single point within your system — the ingress or edge gateway — and therefore to enable experimentation and evolution of the existing systems, you will need to focus considerable time and effort here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every (User) Journey Begins at the Edge&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm obviously not the first person to talk about the need for an effective edge solution when moving towards a microservices-based application. In fact, in Phil Calcado's proposed extension of Martin Fowler's original &lt;a href="https://martinfowler.com/bliki/MicroservicePrerequisites.html" rel="noopener noreferrer"&gt;Microservices Prerequisites&lt;/a&gt; article — &lt;a href="http://philcalcado.com/2017/06/11/calcados_microservices_prerequisites.html" rel="noopener noreferrer"&gt;Calcado's Microservices Prerequisites&lt;/a&gt; — his fifth prerequisite is "&lt;a href="http://philcalcado.com/2017/06/11/calcados_microservices_prerequisites.html#5-easy-access-to-the-edge" rel="noopener noreferrer"&gt;easy access to the edge&lt;/a&gt;". Phil notes, based on his experience, that many organisations' first foray into deploying a new microservice alongside their monolith consists of simply exposing the service directly to the internet. This can work well for a single (simple) service, but the approach tends not to scale, and can also force the calling clients to jump through hoops in regard to authorization or aggregation of data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_8gJ0imO9IXdidzb7h4QL9Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_8gJ0imO9IXdidzb7h4QL9Q.png" alt="image1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is possible to use the existing monolithic application as a gateway, and if you have complex and highly-coupled authorization and authentication code, then this can be the only viable solution until the security components are refactored out into a new module or service. This approach has obvious downsides, including the requirement that you must "update" the monolith with any new routing information (which can involve a full redeploy), and the fact that all traffic must pass through the monolith. This latter issue can be particularly costly if you are deploying your microservices to a separate new fabric or platform (such as Kubernetes), as now any request that comes into your application has to be routed through the old stack before it even touches the new stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_IjQyv_orZeYqvtxv33S-RA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_IjQyv_orZeYqvtxv33S-RA.png" alt="image2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may already be using an edge gateway or reverse proxy — for example, NGINX or HAProxy — as these can provide many advantages when working with any type of backend architecture. Features typically provided include transparent routing to multiple backend components, header rewriting, TLS termination, and the handling of crosscutting concerns regardless of how the requests are ultimately being served. The question to ask in this scenario is whether you want to keep using this gateway for your microservices implementation, and if you do, should it be used in the same way?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From VMs to Containers (via Orchestration)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As I mentioned in the introduction to this article, many engineering teams also make the decision to migrate to new infrastructure at the same time as changing the application architecture. The benefits and challenges for doing this are heavily context-dependent, but I see many teams migrating away from VMs and pure Infrastructure as a Service (IaaS) to containers and Kubernetes.&lt;/p&gt;

&lt;p&gt;Assuming that you are deciding to package your shiny new microservices within containers and deploy these into Kubernetes, what challenges do you face in regards to handling traffic at the edge? In essence there are three choices, one of which you have already read about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Use the existing monolithic application to act as an edge gateway that routes traffic to either the monolith or new services. Any kind of routing logic can be implemented here (because all requests are travelling via the monolith) and calls to authn/authz can be made in process&lt;/li&gt;
&lt;li&gt;  Deploy and operate an edge gateway in your existing infrastructure that routes traffic based on URIs and headers to either the monolith or new services. Authn and authz are typically done via calling out to the monolith or a refactored security service.&lt;/li&gt;
&lt;li&gt;  Deploy and operate an edge gateway in your new Kubernetes infrastructure that routes traffic based on URIs and headers to either the monolith or new services. Authn and authz are typically done via calling out to a refactored security service running in Kubernetes.&lt;/li&gt;
&lt;/ul&gt;
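
&lt;p&gt;Whichever option you choose, the routing decision at the heart of the gateway is conceptually simple. A minimal Python sketch, with hypothetical path prefixes and service names (a real gateway would express this as declarative route configuration):&lt;/p&gt;

```python
def route(path, service_prefixes, default="monolith"):
    """Longest matching path prefix wins; any request that matches
    no extracted service falls through to the monolith."""
    best = None
    for prefix, service in service_prefixes.items():
        if path.startswith(prefix):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, service)
    return best[1] if best else default
```

&lt;p&gt;The "fall through to the monolith" default is what lets you extract services incrementally without breaking existing URLs.&lt;/p&gt;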

&lt;p&gt;The choice of where to deploy and operate your edge gateway involves tradeoffs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_mioM6QEThogmQjwfkfBNiQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_mioM6QEThogmQjwfkfBNiQ.png" alt="comparison_table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have made your choice on how to implement the edge gateway, the next decision you will have to make is how to evolve your system. Broadly speaking, you can either try to "strangle" the monolith as-is, or you can put the "monolith-in-a-box" and start chipping away there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strangling the Monolith&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Martin Fowler has written a great article about the principles of the &lt;a href="https://www.martinfowler.com/bliki/StranglerApplication.html" rel="noopener noreferrer"&gt;Strangler Application Pattern&lt;/a&gt;, and even though the writing is over ten years old, the same guidelines apply when attempting to migrate functionality out from a monolith into smaller services. At its core, the pattern describes how functionality should be extracted from the monolith in the form of services that interact with the monolith via RPC or REST-like "&lt;a href="http://amzn.to/2pdQCvc" rel="noopener noreferrer"&gt;seams&lt;/a&gt;" or via &lt;a href="https://www.infoq.com/news/2018/03/asynchronous-event-architectures" rel="noopener noreferrer"&gt;messaging and events&lt;/a&gt;. Over time, functionality (and associated code) within the monolith is retired, which leads to the new microservices "strangling" the existing codebase. The main downside of this pattern is that you will still have to maintain your existing infrastructure alongside any new platform you are deploying your microservices to, for as long as the monolith is still in service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_7ifdA0aqe2seM_1DcpEkBQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_7ifdA0aqe2seM_1DcpEkBQ.png" alt="image4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_F0f-vipeVyOqs33fFtGgNw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_F0f-vipeVyOqs33fFtGgNw.png" alt="image5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the first companies to talk in-depth about using this pattern with microservices was Groupon, back in 2013, with "&lt;a href="https://engineering.groupon.com/2013/misc/i-tier-dismantling-the-monoliths/" rel="noopener noreferrer"&gt;I-Tier: Dismantling the Monolith&lt;/a&gt;". There are many lessons to be learnt from their work, but we definitely don't need to write a custom NGINX module in 2018, as Groupon originally did with "Grout". Now modern open source API gateways like &lt;a href="https://www.getambassador.io/" rel="noopener noreferrer"&gt;Ambassador&lt;/a&gt; and &lt;a href="https://traefik.io/" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt; exist, which provide this functionality using simple declarative configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monolith-in-a-Box: Simplifying Continuous Delivery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An increasingly common pattern I am seeing with teams migrating to microservices and deploying onto Kubernetes is what I refer to as a "monolith-in-a-box". I talked about this alongside &lt;a href="https://twitter.com/sheriffjackson?lang=en" rel="noopener noreferrer"&gt;Nic Jackson&lt;/a&gt; when we shared the story of migrating notonthehighstreet.com's monolithic Ruby on Rails application — affectionately referred to as the &lt;a href="https://www.slideshare.net/dbryant_uk/containersched-2015-our-journey-to-world-gifting-domination-how-notonthehighstreetcom-embraced-docker/10" rel="noopener noreferrer"&gt;MonoNOTH&lt;/a&gt; — towards a microservice-based architecture back in 2015 at the ContainerSched conference.&lt;/p&gt;

&lt;p&gt;In a nutshell, this migration pattern consists of packaging your existing monolithic application within a container and running it like any other new service. If you are implementing a new deployment platform, such as Kubernetes, then you will run the monolith here too. The primary benefit of this pattern is the homogenisation of your continuous delivery pipeline — each application and service may require customised build steps (or a build container) in order to compile and package the code correctly, but after the runtime container has been created, all of the other steps in the pipeline can use the container abstraction as the deployment artifact.&lt;/p&gt;

&lt;p&gt;The ultimate goal of the monolith-in-a-box pattern is to deploy your monolith to your new infrastructure, and gradually move all of your traffic over to this new platform. This allows you to decommission your old infrastructure before completing the full decomposition of the monolith. If you are following this pattern then I would argue that running your edge gateway within Kubernetes makes even more sense, as this is ultimately where all of the traffic will be routed.&lt;/p&gt;
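&lt;p&gt;As a sketch of how this incremental traffic shifting might look with a declaratively configured gateway such as Ambassador (the service names here are hypothetical, not from a real migration), two mappings for the same prefix can split traffic between the boxed monolith and a newly extracted service:&lt;/p&gt;

```yaml
---
apiVersion: ambassador/v0
kind: Mapping
name: monolith_mapping
prefix: /checkout/
service: mono-app          # the containerised monolith receives the remaining traffic
---
apiVersion: ambassador/v0
kind: Mapping
name: checkout_canary_mapping
prefix: /checkout/
service: checkout-service  # newly extracted microservice
weight: 10                 # send roughly 10% of /checkout/ traffic here
```

As confidence in the new service grows, the weight can be increased until the monolith mapping is removed entirely.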

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_fobilwK9fJRNSoIflWIuTQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F06%2F1_fobilwK9fJRNSoIflWIuTQ.png" alt="image6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parting Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When moving from Virtual Machine (VM)-based infrastructure to a cloud native platform like Kubernetes it is well worth investing time in implementing an effective edge/ingress solution to help with the migration. You have multiple options to implement this: using the existing monolithic application as a gateway; deploying or using an edge gateway in your existing infrastructure to route traffic between the current and new services; or deploying an edge gateway within your new Kubernetes platform.&lt;/p&gt;

&lt;p&gt;Deploying an edge gateway within Kubernetes can provide more flexibility when implementing migration patterns like the "monolith-in-a-box", and can make the transition towards a fully microservice-based application much more rapid.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on the &lt;a href="https://blog.getambassador.io/using-api-gateways-to-facilitate-your-transition-from-monolith-to-microservices-5e630da24717" rel="noopener noreferrer"&gt;Ambassador blog&lt;/a&gt; by &lt;a href="https://www.twitter.com/danielbryantuk" rel="noopener noreferrer"&gt;Daniel Bryant&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>gRPC and the open source Ambassador API Gateway</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Fri, 23 Feb 2018 22:46:29 +0000</pubDate>
      <link>https://dev.to/datawireio/grpc-and-the-open-source-ambassador-api-gateway--55k5</link>
      <guid>https://dev.to/datawireio/grpc-and-the-open-source-ambassador-api-gateway--55k5</guid>
      <description>&lt;p&gt;&lt;a href="https://grpc.io/"&gt;gRPC&lt;/a&gt; is a high performance RPC protocol built on HTTP/2. We're seeing more and more companies &lt;a href="https://www.sajari.com/blog/grpc-and-displacement-of-rest-apis"&gt;move their APIs&lt;/a&gt; from HTTP/JSON to gRPC. gRPC offers numerous benefits over HTTP/JSON:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Performance. gRPC uses HTTP/2, with support for streaming, multiplexing, and more (&lt;a href="https://imagekit.io/demo/http2-vs-http1"&gt;see the difference in action&lt;/a&gt;). In addition, gRPC has native support for protobuf, which is much faster at serialization / deserialization than JSON.&lt;/li&gt;
&lt;li&gt;  Streaming. Despite its name, gRPC also supports streaming, which opens up a much wider range of use cases.&lt;/li&gt;
&lt;li&gt;  Code generation of endpoints. gRPC uses code generation from API definitions (&lt;code&gt;.proto&lt;/code&gt; files) to stub out your endpoints.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;gRPC does have a few downsides, the biggest of which is probably that it's much newer than HTTP/REST, and as such, the maturity of tools and libraries around gRPC isn't anywhere close to that of HTTP.&lt;/p&gt;

&lt;p&gt;One of these gaps is in API Gateways. Many API Gateways don't support gRPC. Fortunately, &lt;a href="https://www.getambassador.io/"&gt;Ambassador&lt;/a&gt; does &lt;a href="https://www.getambassador.io/how-to/grpc"&gt;support gRPC&lt;/a&gt;, thanks to the fact that it uses &lt;a href="https://www.envoyproxy.io/"&gt;Envoy&lt;/a&gt; as its core proxying engine.&lt;/p&gt;

&lt;h3&gt;
  
  
  API Gateways and gRPC
&lt;/h3&gt;

&lt;p&gt;An API Gateway implements cross-cutting functionality such as authentication, logging, rate limiting, and load balancing. By using an API Gateway with your gRPC APIs, you are able to deploy this functionality outside of your core gRPC service(s). Moreover, Ambassador is able to provide this functionality for both your HTTP and gRPC services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ambassador and gRPC
&lt;/h3&gt;

&lt;p&gt;Deploying a gRPC service with Ambassador is straightforward. After &lt;a href="https://www.getambassador.io/user-guide/getting-started"&gt;installing Ambassador&lt;/a&gt;, create an Ambassador mapping for your gRPC service. An example mapping would look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--------
apiVersion: ambassador/v0
kind: Mapping
name: grpc_mapping
grpc: true
prefix: /helloworld.Greeter/
rewrite: /helloworld.Greeter/
service: grpc-greet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that gRPC services are not routed like HTTP services. Instead, gRPC requests include the package and service name. This information is used to route the request to the appropriate service. We set the &lt;code&gt;prefix&lt;/code&gt; and &lt;code&gt;rewrite&lt;/code&gt; fields accordingly based on the &lt;a href="https://github.com/grpc/grpc-go/blob/master/examples/helloworld/helloworld/helloworld.proto"&gt;.proto&lt;/a&gt; file. Also, note that we set &lt;code&gt;grpc: true&lt;/code&gt; to tell Ambassador that the service speaks gRPC.&lt;/p&gt;
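&lt;p&gt;To make the routing concrete: for the &lt;code&gt;helloworld&lt;/code&gt; package and &lt;code&gt;Greeter&lt;/code&gt; service defined in that &lt;code&gt;.proto&lt;/code&gt; file (abridged below), each RPC travels as an HTTP/2 request whose path combines the package, service, and method names — which is exactly why the mapping's prefix is &lt;code&gt;/helloworld.Greeter/&lt;/code&gt;:&lt;/p&gt;

```protobuf
// helloworld.proto (abridged from the grpc-go examples)
syntax = "proto3";
package helloworld;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// A SayHello call is sent as an HTTP/2 request with path:
//   POST /helloworld.Greeter/SayHello
// so Ambassador can route on the /helloworld.Greeter/ prefix.
```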

&lt;h3&gt;
  
  
  Deploying a gRPC service
&lt;/h3&gt;

&lt;p&gt;We can deploy a simple Hello, World gRPC service to illustrate all this functionality. Copy-and-paste the full YAML below into a file called &lt;code&gt;helloworld.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--------
apiVersion: v1
kind: Service
metadata:
  labels:
    service: grpc-greet
  name: grpc-greet
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: grpc_mapping
      grpc: true
      prefix: /helloworld.Greeter/
      rewrite: /helloworld.Greeter/
      service: grpc-greet
spec:
  type: ClusterIP
  ports:
  - port: 80
    name: grpc-greet
    targetPort: grpc-api
  selector:
    service: grpc-greet
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-greet
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: grpc-greet
    spec:
      containers:
      - name: grpc-greet
        image: enm10k/grpc-hello-world
        ports:
        - name: grpc-api
          containerPort: 9999
        env:
          - name: PORT
            value: "9999"
        command:
          - greeter_server
      restartPolicy: Always
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, run &lt;code&gt;kubectl apply -f helloworld.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Hello World client
&lt;/h3&gt;

&lt;p&gt;In order to test that Ambassador and the service are working properly, we'll need to run a gRPC client. First, get the external IP address of Ambassador. You can do this with &lt;code&gt;kubectl get svc ambassador&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We'll use a Docker image that already contains the appropriate Hello World gRPC client. Type the following, where $AMBASSADOR_IP is set to the external IP address above, and $AMBASSADOR_PORT is set to 80 (for an HTTP-based Ambassador) or 443 (for a TLS-configured Ambassador).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -e ADDRESS=${AMBASSADOR_IP}:${AMBASSADOR_PORT} enm10k/grpc-hello-world greeter_client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -e ADDRESS=35.29.51.15:80 enm10k/grpc-hello-world greeter_client
2018/02/02 20:34:35 Greeting: Hello world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.getambassador.io/"&gt;Ambassador&lt;/a&gt; makes it easy to publish gRPC services to your consumers, thanks to Envoy's &lt;a href="https://www.envoyproxy.io/docs/envoy/v1.5.0/intro/arch_overview/grpc.html"&gt;robust gRPC support&lt;/a&gt;. By publishing your services through Ambassador, you're able to add authentication, rate limiting, and other functionality to your gRPC services.&lt;/p&gt;

&lt;p&gt;If you're interested in using Ambassador with gRPC, join our &lt;a href="https://gitter.im/datawire/ambassador"&gt;Gitter chat&lt;/a&gt; or visit &lt;a href="https://www.getambassador.io/"&gt;https://www.getambassador.io&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>showdev</category>
      <category>programming</category>
      <category>grpc</category>
    </item>
    <item>
      <title>Building Ambassador, an Open Source API Gateway on Kubernetes and Envoy</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Thu, 22 Feb 2018 22:12:09 +0000</pubDate>
      <link>https://dev.to/datawireio/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy--2n4l</link>
      <guid>https://dev.to/datawireio/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy--2n4l</guid>
      <description>&lt;p&gt;API Gateways are a popular pattern for exposing your service endpoints to the consumer. At &lt;a href="https://www.datawire.io/" rel="noopener noreferrer"&gt;Datawire&lt;/a&gt;, we wanted to expose a number of our cloud services to our end users via an API Gateway. All of our cloud services run in Kubernetes, so we wanted to deploy the API gateway on Kubernetes as well. And finally, we wanted something that was open source.&lt;/p&gt;

&lt;p&gt;We took a look at &lt;a href="https://github.com/TykTechnologies/tyk-kubernetes" rel="noopener noreferrer"&gt;Tyk&lt;/a&gt;, &lt;a href="https://getkong.org/install/kubernetes/" rel="noopener noreferrer"&gt;Kong&lt;/a&gt;, and a few other open source API Gateways, and found that they had a common architecture — a persistent data store (e.g., Cassandra, Redis, MongoDB), a proxy server to do the actual traffic management, and REST APIs for configuration. While all of these seemed like they would work, we asked ourselves if there was a simpler, more Kubernetes-native approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our API Gateway wish list
&lt;/h3&gt;

&lt;p&gt;We scribbled a list of requirements on a whiteboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Reliability, availability, scalability. Duh.&lt;/li&gt;
&lt;li&gt;  Declarative configuration. We had committed to the declarative model of configuration (here's &lt;a href="https://ttboj.wordpress.com/2017/05/05/declarative-vs-imperative-paradigms/" rel="noopener noreferrer"&gt;an article&lt;/a&gt; on the contrast), and didn't like the idea of mixing imperative configuration (via REST) and declarative configuration for our operational infrastructure.&lt;/li&gt;
&lt;li&gt;  Easy introspection. When something didn't work, we wanted to be able to introspect the gateway to figure out what didn't work.&lt;/li&gt;
&lt;li&gt;  Easy to use.&lt;/li&gt;
&lt;li&gt;  Authentication.&lt;/li&gt;
&lt;li&gt;  Performance.&lt;/li&gt;
&lt;li&gt;  All the features you need for a modern distributed application, e.g., rate limiting, circuit breaking, gRPC, observability, and so forth.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We realized that Kubernetes gave us the reliability, availability, and scalability. And we knew that the &lt;a href="https://www.envoyproxy.io/" rel="noopener noreferrer"&gt;Envoy Proxy&lt;/a&gt; gave us the performance and features we wanted. So we asked ourselves if we could just marry Envoy and Kubernetes, and then fill in the remaining gaps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ambassador = Envoy + Kubernetes
&lt;/h3&gt;

&lt;p&gt;We started writing some prototype code, shared this with the Kubernetes community, and iterated on the feedback. We ended up with the &lt;a href="https://www.getambassador.io/" rel="noopener noreferrer"&gt;open source Ambassador API Gateway&lt;/a&gt;. At its core, Ambassador has one basic function: it watches for configuration changes to your Kubernetes manifests, and then safely passes the necessary configuration changes to Envoy. All the L7 networking is performed directly by Envoy, and Kubernetes takes care of reliability, availability, and scalability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Fbuilding-ambassador-photo-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Fbuilding-ambassador-photo-1.png" alt="local diagram amb on kube"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To that core function, we’ve added a few other core features: introspection via a &lt;a href="https://www.getambassador.io/user-guide/running#diagnostics" rel="noopener noreferrer"&gt;diagnostics&lt;/a&gt; UI (see above), and a single Docker image that integrates Envoy and all the necessary bits to get it running in production (as of &lt;code&gt;0.23&lt;/code&gt;, it’s a 113MB Alpine Linux based image).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Fbuilding-ambassador-photo-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Fbuilding-ambassador-photo-2.png" alt="Ambassador diagnostics interface"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The combination of Envoy and Kubernetes has enabled Ambassador to be production ready in a short period of time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Ambassador uses Kubernetes for persistence, so there is no need to run, scale, or maintain a database. (And as such, we don't need to test or tune database queries.)&lt;/li&gt;
&lt;li&gt;  Scaling Ambassador is done by Kubernetes, so you can use a &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noopener noreferrer"&gt;horizontal pod autoscaler&lt;/a&gt; or just add replicas as needed.&lt;/li&gt;
&lt;li&gt;  Ambassador uses Kubernetes liveness and readiness probes, so Kubernetes automatically restarts Ambassador if it detects a problem.&lt;/li&gt;
&lt;li&gt;  All of the actual L7 routing is done by Envoy, so our performance is the same as Envoy. (In fact, you could actually delete the Ambassador code from the pod, and your Envoy instance would keep on routing traffic.)&lt;/li&gt;
&lt;/ul&gt;
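&lt;p&gt;The health checks mentioned above are ordinary Kubernetes probes. A minimal sketch of what they might look like in the Ambassador deployment manifest (the paths and port reflect Ambassador's diagnostics endpoints of this era, but treat the exact values as illustrative rather than the canonical manifest):&lt;/p&gt;

```yaml
# Illustrative container-level probe configuration for the Ambassador pod.
livenessProbe:
  httpGet:
    path: /ambassador/v0/check_alive  # restart the pod if this stops responding
    port: 8877
  initialDelaySeconds: 3
readinessProbe:
  httpGet:
    path: /ambassador/v0/check_ready  # only route traffic once this succeeds
    port: 8877
  initialDelaySeconds: 3
```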

&lt;p&gt;And thanks to Envoy's extremely robust feature set, we've been able to add features such as rate limiting, gRPC support, web sockets, and more in a short period of time.&lt;/p&gt;

&lt;h3&gt;
  
  
  What about Ingress?
&lt;/h3&gt;

&lt;p&gt;We considered making Ambassador an ingress controller. In this model, the user would define &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress resources&lt;/a&gt; in Kubernetes, which would then be processed by the Ambassador ingress controller. After investigating this approach a bit further, we decided against this method because Ingress resources have a &lt;em&gt;very&lt;/em&gt; limited set of features. In particular, Ingress resources can only define basic HTTP routing rules. Many of the features we wanted to use in Envoy (e.g., gRPC, timeouts, rate limiting, CORS support, routing based on HTTP method, etc.) are not possible to express with Ingress resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Ambassador
&lt;/h3&gt;

&lt;p&gt;Our goal with Ambassador is to make it idiomatic with Kubernetes. Installing Ambassador is a matter of creating a Kubernetes deployment (e.g., &lt;code&gt;kubectl apply -f https://www.getambassador.io/yaml/ambassador/ambassador-rbac.yaml&lt;/code&gt; if you're using RBAC) and creating a Kubernetes service that points to the deployment.&lt;/p&gt;
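&lt;p&gt;That accompanying service is plain Kubernetes YAML. A minimal sketch (assuming the &lt;code&gt;service: ambassador&lt;/code&gt; label used by the install manifest of this era; check the current docs for the canonical version) would be:&lt;/p&gt;

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer   # exposes the gateway outside the cluster
  ports:
  - port: 80
    targetPort: 80
  selector:
    service: ambassador  # matches the pods created by the install manifest
```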

&lt;p&gt;Once that's done, configuration is done via Kubernetes annotations. One of the advantages of this approach is that the actual metadata about how your service is published is all in one place — your Kubernetes service object.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Ambassador supports a rich set of annotations that map to various features of Envoy. The &lt;code&gt;weight&lt;/code&gt; annotation, for example, routes a specified percentage of incoming traffic to a particular version of the service. Other useful annotations include &lt;code&gt;method&lt;/code&gt;, which lets you define the HTTP method for mapping; &lt;code&gt;grpc&lt;/code&gt;, for gRPC-based services; and &lt;code&gt;tls&lt;/code&gt;, which tells Ambassador to contact the service over TLS. The full list is in the &lt;a href="https://www.getambassador.io/reference/configuration" rel="noopener noreferrer"&gt;configuration documentation&lt;/a&gt;.&lt;/p&gt;
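&lt;p&gt;As an illustration (the service name and prefix here are hypothetical), a canary service annotated with a &lt;code&gt;weight&lt;/code&gt; mapping that receives 10% of traffic might look like this:&lt;/p&gt;

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: shopfront-canary
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: shopfront_canary_mapping
      prefix: /shopfront/
      service: shopfront-canary
      weight: 10        # route 10% of /shopfront/ traffic to this version
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: shopfront-canary
```

Note how the routing metadata lives directly on the service object, so it travels through version control and the CI/CD pipeline alongside the rest of the manifest.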

&lt;h3&gt;
  
  
  What's next for Ambassador
&lt;/h3&gt;

&lt;p&gt;At KubeCon NA 2017, one of the themes was how applications can best take advantage of all that Kubernetes has to offer. Kubernetes is evolving at an incredibly fast rate, and introducing new APIs and abstractions. While Ingress resources didn't fit our use cases, we're exploring &lt;a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/" rel="noopener noreferrer"&gt;Custom Resources&lt;/a&gt; and the &lt;a href="https://coreos.com/blog/introducing-operators.html" rel="noopener noreferrer"&gt;Operator&lt;/a&gt; pattern as a potentially interesting place to take Ambassador, as this would address the limitations we encountered with Ingress resources. More generally, we're interested in understanding how people would like to build cloud-native applications with Ambassador.&lt;/p&gt;

&lt;p&gt;We're also adding more features as more and more users deploy Ambassador in production, such as &lt;a href="https://github.com/datawire/ambassador/pull/226" rel="noopener noreferrer"&gt;arbitrary request headers&lt;/a&gt;, single namespace support, WebSockets, and more. Finally, some of our users are deploying &lt;a href="https://www.getambassador.io/user-guide/with-istio" rel="noopener noreferrer"&gt;Ambassador with Istio&lt;/a&gt;. In this setup, Ambassador is configured to handle so-called "north/south" traffic and Istio handles "east/west" traffic.&lt;/p&gt;

&lt;p&gt;If you're interested, we'd love to hear from you and have you contribute. Join our &lt;a href="https://gitter.im/datawire/ambassador" rel="noopener noreferrer"&gt;Gitter chat&lt;/a&gt;, &lt;a href="https://github.com/datawire/ambassador/issues" rel="noopener noreferrer"&gt;open a GitHub issue&lt;/a&gt;, or just &lt;a href="https://www.getambassador.io/" rel="noopener noreferrer"&gt;try Ambassador&lt;/a&gt; (it's just one line to try it locally with Docker!).&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>opensource</category>
      <category>kubernetes</category>
      <category>grpc</category>
    </item>
    <item>
      <title>Deploying Java Apps with Kubernetes and the Ambassador API Gateway</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Tue, 20 Feb 2018 18:57:03 +0000</pubDate>
      <link>https://dev.to/datawireio/deploying-java-apps-with-kubernetes-and-the-ambassador-api-gateway--6pn</link>
      <guid>https://dev.to/datawireio/deploying-java-apps-with-kubernetes-and-the-ambassador-api-gateway--6pn</guid>
      <description>&lt;p&gt;In this article you’ll learn how to deploy three simple Java services into Kubernetes (running locally via the new Docker for Mac/Windows integration), and expose the frontend service to end-users via the Kubernetes-native Ambassador API Gateway. So, grab your caffeinated beverage of choice and get comfy in front of your terminal!&lt;/p&gt;

&lt;h2&gt;
  
  
  A Quick Recap: Architecture and Deployment
&lt;/h2&gt;

&lt;p&gt;In October last year I extended my simple Java microservice-based "&lt;a href="https://github.com/danielbryantuk/oreilly-docker-java-shopping" rel="noopener noreferrer"&gt;Docker Java Shopping&lt;/a&gt;" container deployment demonstration with &lt;a href="https://www.oreilly.com/ideas/how-to-manage-docker-containers-in-kubernetes-with-java" rel="noopener noreferrer"&gt;Kubernetes support&lt;/a&gt;. If you found the time to complete the tutorial you would have packaged three simple Java services — the shopfront and stockmanager Spring Boot services, and the product catalogue Java EE DropWizard service — within Docker images, and deployed the resulting containers into a local &lt;a href="https://github.com/kubernetes/minikube" rel="noopener noreferrer"&gt;minikube-powered&lt;/a&gt; Kubernetes cluster. I also showed you how to open the shopfront service to end-users by mapping and exposing a Kubernetes cluster port using a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noopener noreferrer"&gt;NodePort Service&lt;/a&gt;. Although this was functional for the demonstration, many of you asked how you could deploy the application behind an API Gateway. This is a great question, and accordingly I was keen to add another article in this tutorial series with the goal of deploying the "Docker Java Shopping" Java application behind the open source Kubernetes-native &lt;a href="https://www.getambassador.io/" rel="noopener noreferrer"&gt;Ambassador API Gateway&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Fambassador-tutorial.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Fambassador-tutorial.png" alt="Docker Java Shopping app"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Figure 1. “Docker Java Shopping” application deployed with Ambassador API Gateway&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Quick Aside: Why Use an API Gateway?
&lt;/h2&gt;

&lt;p&gt;I'm confident that many of you will have used (or at least bumped into) the concept of an API Gateway before. Chris Richardson has written a good overview of the details at &lt;a href="http://microservices.io/patterns/apigateway.html" rel="noopener noreferrer"&gt;microservices.io&lt;/a&gt;, and the team behind the creation of the Ambassador API Gateway, &lt;a href="https://www.datawire.io/" rel="noopener noreferrer"&gt;Datawire&lt;/a&gt;, have also talked about the benefits of using a &lt;a href="https://www.getambassador.io/about/why-ambassador" rel="noopener noreferrer"&gt;Kubernetes-native API Gateway&lt;/a&gt;. In short, an API Gateway allows you to centralise a lot of the cross-cutting concerns for your application, such as load balancing, security and rate-limiting. Running a Kubernetes-native API Gateway also allows you to offload several of the operational issues associated with deploying and maintaining a gateway — such as implementing resilience and scalability — to Kubernetes itself.&lt;/p&gt;

&lt;p&gt;There are many API Gateway choices for Java developers, such as the open source &lt;a href="https://github.com/Netflix/zuul" rel="noopener noreferrer"&gt;Netflix's Zuul&lt;/a&gt;, &lt;a href="https://cloud.spring.io/spring-cloud-gateway/" rel="noopener noreferrer"&gt;Spring Cloud Gateway&lt;/a&gt;, and &lt;a href="https://getkong.org/" rel="noopener noreferrer"&gt;Mashape's Kong&lt;/a&gt;; there are cloud vendors' implementations (such as &lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;Amazon's API Gateway&lt;/a&gt;); of course the traditional favourites of &lt;a href="https://www.nginx.com/" rel="noopener noreferrer"&gt;NGINX&lt;/a&gt; and &lt;a href="http://www.haproxy.org/" rel="noopener noreferrer"&gt;HAProxy&lt;/a&gt;; and finally the more modern variants like &lt;a href="https://traefik.io/" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt;. Choosing the best API Gateway for your use case can involve a lot of work — this is a critical piece of your infrastructure, and it will touch every bit of traffic coming into your application. As with any critical tech choice, there are many tradeoffs to be considered. In particular, watch out for potential high-coupling points — for example, I've seen how the ability to &lt;a href="https://github.com/Netflix/zuul/wiki/zuul-simple-webapp" rel="noopener noreferrer"&gt;dynamically deploy "Filters"&lt;/a&gt; (Groovy scripts) into Netflix's Zuul allows business logic to become spread (coupled) between the service and the gateway — and also for the need to deploy complicated datastores as end-user traffic increases — for example, Kong requires a &lt;a href="https://getkong.org/about/faq/#how-does-it-work" rel="noopener noreferrer"&gt;Cassandra cluster or Postgres installation&lt;/a&gt; to scale horizontally.&lt;/p&gt;

&lt;p&gt;For the sake of simplicity in this article I'm going to use the open source Kubernetes-native Ambassador API Gateway. I like Ambassador because the simplicity of the implementation reduces the ability to accidentally couple any business logic to it, and the fact that I can specify service routing via a declarative approach (which I use for all of my other Kubernetes config) feels more "cloud native" — I can also store the routes easily in version control, and send this down the CI/CD build pipeline with all the other code changes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting Started: NodePorts and LoadBalancers 101
&lt;/h2&gt;

&lt;p&gt;First, ensure you are starting with a fresh (empty) Kubernetes cluster. Because I like to embrace my inner-hipster every once in a while, I am going to run this demonstration using the new Kubernetes integration within Docker for Mac. If you want to follow along you will need to ensure that you have installed the Edge version of &lt;a href="https://blog.docker.com/2018/01/docker-mac-kubernetes/" rel="noopener noreferrer"&gt;Docker for Mac&lt;/a&gt; or &lt;a href="https://blog.docker.com/2018/01/docker-windows-desktop-now-kubernetes/" rel="noopener noreferrer"&gt;Docker for Windows&lt;/a&gt;, and also enabled Kubernetes support by following the instructions within the &lt;a href="https://docs.docker.com/docker-for-mac/#kubernetes" rel="noopener noreferrer"&gt;Docker Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next clone my &lt;a href="https://github.com/danielbryantuk/oreilly-docker-java-shopping" rel="noopener noreferrer"&gt;"Docker Java Shopfront" GitHub repository&lt;/a&gt;. If you want to explore the directory structure and learn more about each of the three services that make up the application, then I recommend having a look at the &lt;a href="https://www.oreilly.com/ideas/how-to-manage-docker-containers-in-kubernetes-with-java" rel="noopener noreferrer"&gt;previous article&lt;/a&gt; in this series or the associated mini-book "&lt;a href="https://www.nginx.com/resources/library/containerizing-continuous-delivery-java/" rel="noopener noreferrer"&gt;Containerizing Continuous Delivery in Java&lt;/a&gt;" that started all of this. When the repo has been successfully cloned you can navigate into the kubernetes directory. If you are following along with the tutorial then you will be making modifications within this directory, and so you are welcome to fork your own copy of the repo and create a branch that you can push your work to. I don't recommend skipping ahead (or cheating), but the &lt;a href="https://github.com/danielbryantuk/oreilly-docker-java-shopping/tree/master/kubernetes-ambassador" rel="noopener noreferrer"&gt;kubernetes-ambassador&lt;/a&gt; directory contains the complete solution, in case you want to check your work!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone git@github.com:danielbryantuk/oreilly-docker-java-shopping.git
$ cd oreilly-docker-java-shopping/kubernetes
(master) kubernetes $ ls -lsa
total 24
0 drwxr-xr-x   5 danielbryant  staff  160  5 Feb 18:18 .
0 drwxr-xr-x  18 danielbryant  staff  576  5 Feb 18:17 ..
8 -rw-r--r--   1 danielbryant  staff  710  5 Feb 18:22 productcatalogue-service.yaml
8 -rw-r--r--   1 danielbryant  staff  658  5 Feb 18:11 shopfront-service.yaml
8 -rw-r--r--   1 danielbryant  staff  677  5 Feb 18:22 stockmanager-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you open up the &lt;a href="https://github.com/danielbryantuk/oreilly-docker-java-shopping/blob/master/kubernetes/shopfront-service.yaml" rel="noopener noreferrer"&gt;shopfront-service.yaml&lt;/a&gt; in your editor/IDE of choice, you will see that I am exposing the shopfront service as a NodePort accessible via TCP port 8010. This means that the service can be accessed via port 8010 on any of the cluster node IPs that are made public (and not blocked by a firewall).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
 name: shopfront
 labels:
 app: shopfront
spec:
 type: NodePort
 selector:
 app: shopfront
 ports:
 — protocol: TCP
 port: 8010
 name: http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;When running this service via minikube, NodePort allows you to access the service via the cluster external IP. When running the service via Docker, NodePort allows you to access the service via localhost and the Kubernetes allocated port. Assuming that Docker for Mac or Windows has been configured to run Kubernetes successfully you can now deploy this service:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master) kubernetes $ kubectl apply -f shopfront-service.yaml
service "shopfront" created
replicationcontroller "shopfront" created
(master) kubernetes $
(master) kubernetes $ kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP          19h
shopfront    NodePort    10.110.74.43   &amp;lt;none&amp;gt;        8010:31497/TCP   0s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can see the shopfront service has been created, and although there is no external-ip listed, you can see that the port specified in the shopfront-service.yaml (8010) has been mapped to port 31497 (your port number may differ here). If you are using Docker for Mac or Windows you can now curl data from localhost (as the Docker app works some magic behind the scenes), and if you are using minikube you can get the cluster IP address by typing &lt;code&gt;minikube ip&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;p&gt;Assuming you are using Docker, and that you have only deployed the single shopfront service you should see this response from a curl using the port number you can see from the &lt;code&gt;kubectl get svc&lt;/code&gt; command (31497 for me):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master) kubernetes $ curl -v localhost:31497
* Rebuilt URL to: localhost:31497/
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 31497 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: localhost:31497
&amp;gt; User-Agent: curl/7.54.0
&amp;gt; Accept: */*
&amp;gt;
&amp;lt; HTTP/1.1 500
&amp;lt; X-Application-Context: application:8010
&amp;lt; Content-Type: application/json;charset=UTF-8
&amp;lt; Transfer-Encoding: chunked
&amp;lt; Date: Tue, 06 Feb 2018 17:20:19 GMT
&amp;lt; Connection: close
&amp;lt;
* Closing connection 0
{"timestamp":1517937619690,"status":500,"error":"Internal Server Error","exception":"org.springframework.web.client.ResourceAccessException","message":"I/O error on GET request for \"http://productcatalogue:8020/products\": productcatalogue; nested exception is java.net.UnknownHostException: productcatalogue","path":"/"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You’ll notice that you are getting an HTTP 500 error response with this curl, and this is to be expected as you haven’t deployed all of the supporting services yet. However, before you deploy the rest of the services you’ll want to change the NodePort configuration to ClusterIP for all of your services. This means that each service will only be accessible over the network within the cluster. You could of course use a firewall to restrict a service exposed by NodePort, but by using ClusterIP in your local development environment you prevent yourself from cheating and accessing services via anything other than the API gateway you will deploy.&lt;/p&gt;

&lt;p&gt;Open shopfront-service.yaml in your editor, and change the NodePort to ClusterIP. You can see the relevant part of the file contents below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
 name: shopfront
 labels:
 app: shopfront
spec:
 type: ClusterIP
 selector:
 app: shopfront
 ports:
 — protocol: TCP
 port: 8010
 name: http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now you can modify the services contained within the productcatalogue-service.yaml and stockmanager-service.yaml files to also be ClusterIP.&lt;/p&gt;
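&lt;p&gt;As a quick sanity check that all three manifests now specify ClusterIP, you can grep the service files (an illustrative command; adjust the filenames if yours differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ grep "type:" shopfront-service.yaml productcatalogue-service.yaml stockmanager-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;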

&lt;p&gt;You can also now delete the existing shopfront service, ready for the deployment of the full stack in the next section of the tutorial.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl delete -f shopfront-service.yaml
service "shopfront" deleted
replicationcontroller "shopfront" deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Deploying the Full Stack
&lt;/h2&gt;

&lt;p&gt;With a once again empty Kubernetes cluster, you can now deploy the full three-service stack and get the associated Kubernetes information on each service:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl apply -f .
service "productcatalogue" created
replicationcontroller "productcatalogue" created
service "shopfront" created
replicationcontroller "shopfront" created
service "stockmanager" created
replicationcontroller "stockmanager" created
(master *) kubernetes $
(master *) kubernetes $ kubectl get services
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes         ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP    20h
productcatalogue   ClusterIP   10.106.8.35     &amp;lt;none&amp;gt;        8020/TCP   1s
shopfront          ClusterIP   10.98.189.230   &amp;lt;none&amp;gt;        8010/TCP   1s
stockmanager       ClusterIP   10.96.207.245   &amp;lt;none&amp;gt;        8030/TCP   1s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can see that the port declared in each service is available as specified (i.e. 8010, 8020, 8030). Each running pod gets its own cluster IP and associated port range (i.e. each pod gets its own “network namespace”). You can’t access these ports from outside of the cluster (as you can with NodePort), but within the cluster everything works as expected.&lt;/p&gt;
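&lt;p&gt;If you want to confirm that the services are reachable from inside the cluster, one option is to run a short-lived pod and curl from there (a sketch; the &lt;code&gt;tutum/curl&lt;/code&gt; image is just one example of an image with curl installed, and the exact &lt;code&gt;kubectl run&lt;/code&gt; flags vary between kubectl versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl run curl-test -i --tty --rm --restart=Never --image=tutum/curl -- curl -s shopfront:8010
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;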

&lt;p&gt;You can also see that using ClusterIP does not expose the service externally by trying to curl the endpoint (this time you should receive a “connection refused”):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ curl -v localhost:8010
* Rebuilt URL to: localhost:8010/
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8010 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connection failed
* connect to 127.0.0.1 port 8010 failed: Connection refused
* Failed to connect to localhost port 8010: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8010: Connection refused
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Deploying the Ambassador API Gateway
&lt;/h2&gt;

&lt;p&gt;Now is the time to deploy the Ambassador API gateway in order to expose your shopfront service to end-users. The other two services can remain private within the cluster, as they are supporting services, and don’t have to be exposed publicly.&lt;/p&gt;

&lt;p&gt;First, create a LoadBalancer service that uses Kubernetes annotations to route requests from outside the cluster to the appropriate services. Save the following content within a new file named &lt;code&gt;ambassador-service.yaml&lt;/code&gt;. Note the &lt;code&gt;getambassador.io/config&lt;/code&gt; annotation. You can use &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/" rel="noopener noreferrer"&gt;Kubernetes annotations&lt;/a&gt; to attach arbitrary non-identifying metadata to objects, and clients such as Ambassador can retrieve this metadata. Can you figure out what this annotation is doing?&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The Ambassador annotation is key to how the gateway works — how it routes “ingress” traffic from outside the cluster (e.g. an end-user request) to services within the cluster. Let’s break this down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;getambassador.io/config: |&lt;/code&gt; specifies that this annotation is for Ambassador&lt;/li&gt;
&lt;li&gt;&lt;code&gt;---&lt;/code&gt; simply declares how much you love YAML! (it marks the start of the embedded YAML document)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;apiVersion: ambassador/v0&lt;/code&gt; specifies the Ambassador API/schema version&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kind: Mapping&lt;/code&gt; specifies that you are creating a “mapping” (routing) configuration&lt;/li&gt;
&lt;li&gt;&lt;code&gt;name: shopfront&lt;/code&gt; is the name for this mapping (which will show up in the debug UI)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;prefix: /shopfront/&lt;/code&gt; is the external prefix of the URI that you want to route internally&lt;/li&gt;
&lt;li&gt;&lt;code&gt;service: shopfront:8010&lt;/code&gt; is the Kubernetes service (and port) you want to route to&lt;/li&gt;
&lt;/ul&gt;
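&lt;p&gt;Putting those fields together, &lt;code&gt;ambassador-service.yaml&lt;/code&gt; looks roughly like the following. This is a sketch reconstructed from the description above rather than the exact original file; in particular the &lt;code&gt;selector&lt;/code&gt; and port mapping are assumptions based on Ambassador's default manifests and the gateway listening on port 80:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: shopfront
      prefix: /shopfront/
      service: shopfront:8010
spec:
  type: LoadBalancer
  selector:
    service: ambassador
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;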

&lt;p&gt;In a nutshell, this annotation states that any request to the external IP of the LoadBalancer service (which will be “localhost” in your Docker for Mac/Windows example) with the prefix &lt;code&gt;/shopfront/&lt;/code&gt; will be routed to the Kubernetes shopfront service running on the (ClusterIP) port 8010. In this example, when you enter &lt;a href="http://localhost/shopfront/" rel="noopener noreferrer"&gt;http://localhost/shopfront/&lt;/a&gt; in your web browser you should see the UI provided by the shopfront service. Hopefully this all makes sense, but if it doesn’t then please visit the &lt;a href="https://gitter.im/datawire/ambassador" rel="noopener noreferrer"&gt;Ambassador Gitter&lt;/a&gt; and ask any questions, or ping me on Twitter!&lt;/p&gt;

&lt;p&gt;With your newfound understanding of Ambassador routing (and world domination of all API Gateways merely a few steps away), you can deploy the Ambassador service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl apply -f ambassador-service.yaml
service "ambassador" created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will also need to deploy the Ambassador Admin service (and associated pods/containers) that are responsible for the heavy-lifting associated with the routing. It's worth noting that the routing is conducted by a "sidecar" proxy, which in this case is the &lt;a href="https://www.envoyproxy.io/" rel="noopener noreferrer"&gt;Envoy proxy&lt;/a&gt;. Envoy is responsible for all of the production network traffic within Lyft, and its creator, &lt;a href="https://twitter.com/mattklein123?lang=en" rel="noopener noreferrer"&gt;Matt Klein&lt;/a&gt;, has written lots of &lt;a href="https://eng.lyft.com/envoy-7-months-later-41986c2fd443" rel="noopener noreferrer"&gt;very interesting&lt;/a&gt; &lt;a href="https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc" rel="noopener noreferrer"&gt;content&lt;/a&gt; about the &lt;a href="https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310" rel="noopener noreferrer"&gt;details&lt;/a&gt;. You may have also heard about the emerging "&lt;a href="https://buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/" rel="noopener noreferrer"&gt;service mesh&lt;/a&gt;" technologies, and the popular &lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt; project also uses Envoy.&lt;/p&gt;

&lt;p&gt;Anyway, back to the tutorial! You can find a pre-prepared Kubernetes config file for &lt;a href="https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml" rel="noopener noreferrer"&gt;Ambassador Admin&lt;/a&gt; on the &lt;a href="https://www.getambassador.io/" rel="noopener noreferrer"&gt;getambassador.io&lt;/a&gt; website (for this demo you will be using the "no RBAC" version of the service, but you can also find an RBAC-enabled version of the &lt;a href="https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml" rel="noopener noreferrer"&gt;config file&lt;/a&gt; if you are running a Kubernetes cluster with Role-Based Access Control (RBAC) enabled). You can download a copy of the config file and look at it before applying, or you can apply the service directly via the Interwebs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
service "ambassador-admin" created
deployment "ambassador" created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you issue a &lt;code&gt;kubectl get svc&lt;/code&gt; you can see that your Ambassador LoadBalancer and Ambassador Admin services have been deployed successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
ambassador         LoadBalancer   10.102.81.42    &amp;lt;pending&amp;gt;     80:31053/TCP     5m
ambassador-admin   NodePort       10.105.58.255   &amp;lt;none&amp;gt;        8877:31516/TCP   1m
kubernetes         ClusterIP      10.96.0.1       &amp;lt;none&amp;gt;        443/TCP          20h
productcatalogue   ClusterIP      10.106.8.35     &amp;lt;none&amp;gt;        8020/TCP         22m
shopfront          ClusterIP      10.98.189.230   &amp;lt;none&amp;gt;        8010/TCP         22m
stockmanager       ClusterIP      10.96.207.245   &amp;lt;none&amp;gt;        8030/TCP         22m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will notice on the ambassador service that the external-ip is listed as &lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt;, and this is a &lt;a href="https://www.datawire.io/docker-mac-kubernetes-ingress/" rel="noopener noreferrer"&gt;known bug with Docker for Mac/Windows&lt;/a&gt;. You can still access a LoadBalancer service via localhost, although you may need to wait a minute or two while everything deploys successfully behind the scenes.&lt;/p&gt;

&lt;p&gt;Let’s try to access the shopfront now using the &lt;code&gt;/shopfront/&lt;/code&gt; route you configured previously within the Ambassador annotations. You can curl &lt;code&gt;localhost/shopfront/&lt;/code&gt; (with no need to specify a port, as you configured the Ambassador LoadBalancer service to listen on port 80):&lt;/p&gt;

&lt;p&gt;You can see the expected curl output in &lt;a href="https://gist.github.com/kelseyevans/1ad64d89409c1deeb5ee985b7f30a1aa" rel="noopener noreferrer"&gt;this gist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That’s it! You are now accessing the shopfront service that is hidden away in the Kubernetes cluster via Ambassador. You can also visit the shopfront UI via your browser, and this provides a much more friendly view!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2F0_bYOzcSjda6cSBmNT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2F0_bYOzcSjda6cSBmNT.png" alt="shopfront"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: Ambassador Diagnostics
&lt;/h2&gt;

&lt;p&gt;If you want to look at the Ambassador Diagnostic UI then you can use port-forwarding. I’ll explain more about how to use this in a future post, but for the moment you can have a look around by yourself. First you will need to find the name of an ambassador pod:&lt;/p&gt;

&lt;p&gt;You can see the pod listing output in &lt;a href="https://gist.github.com/kelseyevans/a8fd8d73dcbc97191ec71b55514b7d90" rel="noopener noreferrer"&gt;this gist&lt;/a&gt;.&lt;/p&gt;
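&lt;p&gt;Alternatively, you can list the pods directly with a label selector (the &lt;code&gt;service=ambassador&lt;/code&gt; label is an assumption based on Ambassador's default manifests; a plain &lt;code&gt;kubectl get pods&lt;/code&gt; works too):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl get pods -l service=ambassador
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;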

&lt;p&gt;Here I’ll pick &lt;code&gt;ambassador-6d9f98bc6c-5sppl&lt;/code&gt;. You can now port-forward from your local network adapter to inside the cluster and expose the Ambassador Diagnostic UI that is running on port 8877.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl port-forward ambassador-6d9f98bc6c-5sppl 8877:8877
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can now visit &lt;a href="http://localhost:8877/ambassador/v0/diag" rel="noopener noreferrer"&gt;http://localhost:8877/ambassador/v0/diag&lt;/a&gt; in your browser and have a look around!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2F0_DrzPvjuOoUpAO_jd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2F0_DrzPvjuOoUpAO_jd.png" alt="ambassador diagnostic"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you are finished you can exit the port-forward via ctrl-c. You can also delete all of the services you have deployed into your Kubernetes cluster by issuing a &lt;code&gt;kubectl delete -f .&lt;/code&gt; within the kubernetes directory. You will also need to delete the ambassador-admin service you have deployed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(master *) kubernetes $ kubectl delete -f .
service "ambassador" deleted
service "productcatalogue" deleted
replicationcontroller "productcatalogue" deleted
service "shopfront-canary" deleted
replicationcontroller "shopfront-canary" deleted
service "shopfront" deleted
replicationcontroller "shopfront" deleted
service "stockmanager" deleted
replicationcontroller "stockmanager" deleted
(master *) kubernetes $
(master *) kubernetes $ kubectl delete -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
service "ambassador-admin" deleted
deployment "ambassador" deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;I'm planning on creating another article soon that discusses how to canary launch/test a service, as Ambassador makes this very easy. Other topics I'm keen to explore are integrating all of this into a CD pipeline, and also how best to set up a local development workflow. Closely related to this, I'm also keen to look into debugging Java applications deployed via Kubernetes.&lt;/p&gt;

&lt;p&gt;You can also read more details on Ambassador itself via the docs, including adding &lt;a href="https://www.getambassador.io/user-guide/auth-tutorial" rel="noopener noreferrer"&gt;auth/security&lt;/a&gt;, &lt;a href="https://www.getambassador.io/how-to/grpc" rel="noopener noreferrer"&gt;gRPC support&lt;/a&gt;, and &lt;a href="https://www.getambassador.io/how-to/tls-termination" rel="noopener noreferrer"&gt;TLS termination&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article originally appeared on the &lt;a href="https://blog.getambassador.io/deploying-java-apps-with-kubernetes-and-the-ambassador-api-gateway-c6e9d9618f1b" rel="noopener noreferrer"&gt;Ambassador blog&lt;/a&gt; written by &lt;a href="https://www.twitter.com/danielbryantuk" rel="noopener noreferrer"&gt;Daniel Bryant&lt;/a&gt;.&lt;/em&gt; &lt;/p&gt;

</description>
      <category>java</category>
      <category>tutorial</category>
      <category>kubernetes</category>
      <category>opensource</category>
    </item>
    <item>
      <title>In search of an effective developer experience with Kubernetes</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Thu, 15 Feb 2018 17:14:06 +0000</pubDate>
      <link>https://dev.to/datawireio/in-search-of-an-effective-developer-experience-with-kubernetes--42nn</link>
      <guid>https://dev.to/datawireio/in-search-of-an-effective-developer-experience-with-kubernetes--42nn</guid>
      <description>&lt;p&gt;At &lt;a href="https://www.datawire.io/" rel="noopener noreferrer"&gt;Datawire&lt;/a&gt; we are helping many organisations with &lt;a href="https://www.datawire.io/faster/" rel="noopener noreferrer"&gt;deploying applications&lt;/a&gt; to Kubernetes. Often our most important input is working closely alongside development teams helping them to build effective continuous integration and continuous delivery (CI/CD) pipelines. This is primarily because creating an effective &lt;a href="https://www.datawire.io/faster/dev-workflow-intro/" rel="noopener noreferrer"&gt;developer workflow&lt;/a&gt; on Kubernetes can be challenging -- the ecosystem is still evolving, and not all the platform components are plug-and-play -- and also because many engineering teams fail to realise that in order to "close the loop" on business ideas and hypotheses, you also need to instrument applications for observability. We often argue that the first deployment of an application into production through the pipeline is only the start of the continuous delivery process, not the end as some think.&lt;/p&gt;

&lt;p&gt;All of us are creating software to support the &lt;a href="https://itrevolution.com/book/the-art-of-business-value/" rel="noopener noreferrer"&gt;delivery of value&lt;/a&gt; to our customers and to the business, and therefore the &lt;a href="https://www.infoq.com/news/2017/07/remove-friction-dev-ex" rel="noopener noreferrer"&gt;"developer experience" (DevEx)&lt;/a&gt; -- from idea generation to running (and observing) in production -- must be fast, reliable and provide good feedback. As we have helped our customers create effective continuous delivery pipelines for Kubernetes (and the associated workflows), we have seen several patterns emerge. We are keen to share our observations on these patterns, and also explain how we have captured some of the best patterns within a &lt;a href="https://www.datawire.io/reference-architecture/" rel="noopener noreferrer"&gt;collection of open source tools&lt;/a&gt; for deploying applications to Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Freference-architecture-diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F02%2Freference-architecture-diagram.png" alt="reference-architecture-diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  From idea to (observable) value
&lt;/h2&gt;

&lt;p&gt;Everything we do as engineers begins with an idea. From this idea a hypothesis emerges -- for example, modifying the layout of a web form will improve conversion, or improving the site's p99 latency will result in more revenue -- and we can extract appropriate metrics for observation -- conversion and page load latency in our example.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building and packaging for Kubernetes
&lt;/h2&gt;

&lt;p&gt;Once we have agreed our hypothesis and metrics we can then begin to write code and package this ready for deployment on Kubernetes. We have created the open source &lt;a href="https://forge.sh/" rel="noopener noreferrer"&gt;Forge&lt;/a&gt; framework to assist with the entire development process, from automatically creating and managing boilerplate Kubernetes configuration, to allowing us to parameterise runtime properties and resources that can facilitate deploying applications to Kubernetes with a &lt;a href="https://forge.sh/docs/tutorials/quickstart#deploy-a-service" rel="noopener noreferrer"&gt;single CLI instruction&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If we are working on a hypothesis that requires "exploration" -- for example, refactoring existing functionality, or solving a technical integration issue -- we often whiteboard ideas and begin coding using techniques like Test-Driven Development (TDD), taking care to design observability (business metrics, monitoring and logging etc) in as we go. If we are working on a hypothesis that requires "experimentation" -- for example, a new business feature -- we typically define Behaviour-Driven Development (BDD)-style tests in order to help keep us focused towards building functionality "outside-in".&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing friction from the code-deploy-experiment cycle
&lt;/h2&gt;

&lt;p&gt;We attempt to develop within environments that are as production-like as possible, and so frequently we build local services that interact with a more-complete remote Kubernetes cluster deployment. We have created the open source tool &lt;a href="https://www.telepresence.io/" rel="noopener noreferrer"&gt;Telepresence&lt;/a&gt; that allows us to execute and debug a local service that acts as if it is part of the remote environment (effectively two-way proxying from our local development machine to the remote Kubernetes cluster).&lt;/p&gt;
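&lt;p&gt;As a sketch of what this looks like in practice (the flags below are from the Telepresence 1.x CLI, and &lt;code&gt;my-service&lt;/code&gt; plus the run command are hypothetical; substitute your own deployment name and local process):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ telepresence --swap-deployment my-service --run python3 my-service.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;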

&lt;p&gt;We like to "release early, and release often", and so favour running tests in production using &lt;a href="https://www.getambassador.io/about/microservices-api-gateways#testing-and-updates" rel="noopener noreferrer"&gt;canary and dark launches&lt;/a&gt;. This way we can expose new functionality to small numbers of real users, and observe their behaviour in relation to our hypothesis. As with any deployment to a production environment, there is obviously a certain amount of risk, and we mitigate this by introducing alerting and automated rollbacks when serious issues are detected. We have created the open source API gateway &lt;a href="https://www.getambassador.io/" rel="noopener noreferrer"&gt;Ambassador&lt;/a&gt; for this purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smart routing and monitoring with Ambassador API Gateway
&lt;/h2&gt;

&lt;p&gt;Ambassador is built using the popular &lt;a href="https://www.envoyproxy.io/" rel="noopener noreferrer"&gt;Envoy proxy&lt;/a&gt; that emerged from the work by &lt;a href="https://twitter.com/mattklein123?lang=en" rel="noopener noreferrer"&gt;Matt Klein&lt;/a&gt; and &lt;a href="https://eng.lyft.com/envoy-7-months-later-41986c2fd443" rel="noopener noreferrer"&gt;his team at Lyft&lt;/a&gt;. Ambassador allows "smart routing" of traffic when deploying applications to Kubernetes, and the underlying technology has been proven to operate at scale within Lyft. Once an application is receiving production traffic we can observe metrics based on our earlier hypothesis. We typically use &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; to collect data and &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; to display the results via dashboards. We've created the open source &lt;a href="https://github.com/datawire/prometheus-ambassador" rel="noopener noreferrer"&gt;prometheus-ambassador&lt;/a&gt; project to enable the easy export of metrics from Ambassador, for example latency and the number of 5xx HTTP response codes returned.&lt;/p&gt;

&lt;h2&gt;
  
  
  And the cycle begins again
&lt;/h2&gt;

&lt;p&gt;Once we have analysed our metrics the development cycle can begin again, either iterating on our existing solution and running additional canary experiments, or if we have proven (or disproven) our original hypothesis we can generate another new idea and hypothesis.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article originally appeared as part of the &lt;a href="https://www.datawire.io/faster" rel="noopener noreferrer"&gt;Code Faster Guides&lt;/a&gt; on &lt;a href="https://www.datawire.io/" rel="noopener noreferrer"&gt;Datawire.io&lt;/a&gt; written by &lt;a href="https://twitter.com/danielbryantuk" rel="noopener noreferrer"&gt;Daniel Bryant&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Why your development workflow is so important for microservices</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Thu, 25 Jan 2018 20:10:36 +0000</pubDate>
      <link>https://dev.to/datawireio/why-your-development-workflow-is-so-important-for-microservices-3c9m</link>
      <guid>https://dev.to/datawireio/why-your-development-workflow-is-so-important-for-microservices-3c9m</guid>
      <description>&lt;p&gt;Your development workflow is the process by which your organization develops software. A typical development workflow starts with product definition, and then moves through development, testing, release, and production stages.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stability vs velocity tradeoff
&lt;/h2&gt;

&lt;p&gt;Organizations tune this workflow for their given business needs and application. Typically, this involves optimizing the workflow to provide the right balance of stability versus velocity. As the application becomes more popular, ensuring that updates don't negatively impact users becomes more important. More stringent release criteria, better testing, and development reviews are typical strategies that improve stability. Yet these strategies aren't free, as they reduce velocity.&lt;/p&gt;

&lt;p&gt;(Haven't you ever said "we used to ship software so much faster, and now it's slowed down even though we have twice as many engineers?")&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling your development workflow
&lt;/h2&gt;

&lt;p&gt;The problem with the development workflow is that no amount of optimization can overcome the fact that there is no single development workflow that works for every part of the application.&lt;/p&gt;

&lt;p&gt;The reality is that some parts of your application demand stability, while other parts of your application require velocity. What you really need is multiple workflows that work for different parts of your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservices
&lt;/h2&gt;

&lt;p&gt;Microservices is a distributed development workflow, enabled by splitting your application up into smaller services. By splitting your application into smaller components, you're able to run independent development workflows for each of your services.&lt;/p&gt;

&lt;p&gt;You want to run a prototyping workflow for early feature development. As your service matures, you'll want a workflow that supports rapid updates in production. And as it becomes a mission-critical service that other services or users really depend on, you'll need a workflow that ensures rock-solid stability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a workflow is both easy and hard
&lt;/h2&gt;

&lt;p&gt;The challenge of building a microservices workflow is that you need a standard set of tools and processes that support all these different modes of development. You don't want one set of tools for prototyping, and another set of tools and workflow for production.&lt;/p&gt;

&lt;p&gt;In addition, in a microservices architecture, the development teams are typically responsible for a service and not just the code. This implies that the development teams need operational skills and capabilities, e.g., monitoring and deployment.&lt;/p&gt;

&lt;p&gt;Luckily, Kubernetes has become a de facto standard for running cloud native applications. The Kubernetes ecosystem provides a robust set of operational infrastructure with which to run microservices. So you don't need to start from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open Source Tools
&lt;/h2&gt;

&lt;p&gt;The following open source tools may be helpful for optimizing your development workflow on Kubernetes: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kubernetes.io/"&gt;Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.telepresence.io/"&gt;Telepresence&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.envoyproxy.io/"&gt;Envoy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://forge.sh/"&gt;Forge&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.getambassador.io/"&gt;Ambassador&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What do you think?
&lt;/h2&gt;

&lt;p&gt;What does your development workflow look like?  What tools do your developers use, and what pain points do they face? &lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Monitoring Envoy and Ambassador on Kubernetes with the Prometheus Operator</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Thu, 18 Jan 2018 21:09:25 +0000</pubDate>
      <link>https://dev.to/kelseyevans/monitoring-envoy-and-ambassador-on-kubernetes-with-the-prometheus-operator-j86</link>
      <guid>https://dev.to/kelseyevans/monitoring-envoy-and-ambassador-on-kubernetes-with-the-prometheus-operator-j86</guid>
      <description>&lt;p&gt;In the Kubernetes ecosystem, one of the emerging themes is how applications can best take advantage of the various capabilities of Kubernetes. The Kubernetes community has also introduced new concepts such as &lt;a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/" rel="noopener noreferrer"&gt;Custom Resources&lt;/a&gt; to make it easier to build Kubernetes-native software.&lt;/p&gt;

&lt;p&gt;In late 2016, CoreOS introduced the &lt;a href="https://coreos.com/blog/introducing-operators.html" rel="noopener noreferrer"&gt;Operator pattern&lt;/a&gt; and released the &lt;a href="https://coreos.com/operators/prometheus/docs/latest/" rel="noopener noreferrer"&gt;Prometheus Operator&lt;/a&gt; as a working example of the pattern. The Prometheus Operator automatically creates and manages Prometheus monitoring instances.&lt;/p&gt;

&lt;p&gt;The operator model is especially powerful for cloud-native organizations deploying multiple services. In this model, each team can deploy their own &lt;a href="https://www.prometheus.io" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; instance as necessary, instead of relying on a central SRE team to implement monitoring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Envoy, Ambassador, and Prometheus
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we'll show how the Prometheus Operator can be used to monitor an Envoy proxy deployed at the edge. &lt;a href="https://www.envoyproxy.io" rel="noopener noreferrer"&gt;Envoy&lt;/a&gt; is an open source L7 proxy. One of the (many) reasons for Envoy's growing popularity is its emphasis on observability. Envoy emits its &lt;a href="https://www.envoyproxy.io/docs/envoy/v1.5.0/intro/arch_overview/statistics.html" rel="noopener noreferrer"&gt;statistics&lt;/a&gt; in StatsD format.&lt;/p&gt;

&lt;p&gt;Instead of using Envoy directly, we'll use &lt;a href="https://www.getambassador.io" rel="noopener noreferrer"&gt;Ambassador&lt;/a&gt;. Ambassador is a Kubernetes-native API Gateway built on Envoy. Similar to the Prometheus Operator, Ambassador configures and manages Envoy instances in Kubernetes, so that the end user doesn't need to do that work directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;This tutorial assumes you're running Kubernetes 1.8 or later, with RBAC enabled.&lt;/p&gt;

&lt;p&gt;Note: If you're running on Google Kubernetes Engine, you'll need to grant &lt;code&gt;cluster-admin&lt;/code&gt; privileges to the account that will be installing Prometheus and Ambassador. You can do this with the commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud info | grep Account
Account: [username@example.org]
$ kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=username@example.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Deploy the Prometheus Operator
&lt;/h2&gt;

&lt;p&gt;The Prometheus Operator itself runs as a Kubernetes &lt;code&gt;deployment&lt;/code&gt;. We'll deploy the Operator first.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f prom-operator.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We'll also want to create additional &lt;code&gt;ServiceAccount&lt;/code&gt;s for the actual Prometheus instances.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f prom-rbac.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The Operator functions as your virtual SRE. At all times, the Prometheus Operator ensures that you have a set of Prometheus servers running with the appropriate configuration.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy Ambassador
&lt;/h2&gt;

&lt;p&gt;Ambassador also functions as your virtual SRE. At all times, Ambassador ensures that you have a set of Envoy proxies running with the appropriate configuration.&lt;/p&gt;

&lt;p&gt;We're going to deploy Ambassador into Kubernetes. On each Ambassador pod, we'll also deploy an additional container that runs the &lt;a href="https://github.com/prometheus/statsd_exporter" rel="noopener noreferrer"&gt;Prometheus StatsD exporter&lt;/a&gt;. The exporter will collect the StatsD metrics emitted by Envoy over UDP, and expose them over TCP in Prometheus's metrics format so that Prometheus can scrape them.&lt;/p&gt;
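&lt;p&gt;As an illustrative sketch (the container name and ports here are assumptions; the actual definition lives in &lt;code&gt;ambassador-rbac.yaml&lt;/code&gt;), the sidecar portion of the Ambassador pod spec might look roughly like:&lt;/p&gt;

```yaml
# Hypothetical sidecar container for the Ambassador pod spec.
# Envoy sends StatsD metrics over UDP; the exporter re-exposes them
# on TCP port 9102 in Prometheus format for scraping.
- name: statsd-sink
  image: prom/statsd-exporter
  ports:
    - name: prometheus-metrics
      containerPort: 9102
    - name: statsd-listener
      containerPort: 8125   # assumed StatsD UDP port
      protocol: UDP
```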


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ambassador-rbac.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Ambassador is typically deployed as an API Gateway at the edge of your network. We'll deploy a Kubernetes service that maps to the Ambassador &lt;code&gt;deployment&lt;/code&gt;. Note: if you're not on AWS or GKE, you'll need to update the service below to be a &lt;code&gt;NodePort&lt;/code&gt; instead of a &lt;code&gt;LoadBalancer&lt;/code&gt;.&lt;/p&gt;
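&lt;p&gt;A minimal sketch of such a service (the ports and selector are assumptions; the actual manifest is &lt;code&gt;ambassador.yaml&lt;/code&gt;) might look like:&lt;/p&gt;

```yaml
# Hypothetical Ambassador service of type LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer   # use NodePort instead if LoadBalancer is unsupported
  ports:
    - port: 80
      targetPort: 80
  selector:
    service: ambassador
```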


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ambassador.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You should now have a working Ambassador and StatsD/Prometheus exporter that is accessible from outside your cluster.&lt;/p&gt;
&lt;h2&gt;
  
  
  Configure Prometheus
&lt;/h2&gt;

&lt;p&gt;We now have Ambassador/Envoy running, along with the Prometheus Operator. How do we hook this all together? Logically, all the metrics data flows from Envoy to Prometheus in the following way:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F01%2FMonitoring_Envoy_and_Ambassador_on_Kubernetes_with_the_Prometheus_Operator___Faster_Guides_by_Datawire.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.datawire.io%2Fwp-content%2Fuploads%2F2018%2F01%2FMonitoring_Envoy_and_Ambassador_on_Kubernetes_with_the_Prometheus_Operator___Faster_Guides_by_Datawire.png" alt="statsd-logical-flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far, we've deployed Envoy and the StatsD exporter, so now it's time to deploy the other components of this flow.&lt;/p&gt;

&lt;p&gt;We'll first create a Kubernetes &lt;code&gt;service&lt;/code&gt; that points to the StatsD exporter. We'll then create a &lt;code&gt;ServiceMonitor&lt;/code&gt; that tells Prometheus to add the service as a target.&lt;/p&gt;
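&lt;p&gt;A sketch of what these two resources might look like (names, labels, and ports are assumptions; the actual manifests are in &lt;code&gt;statsd-sink-svc.yaml&lt;/code&gt;):&lt;/p&gt;

```yaml
# Hypothetical headless service pointing at the StatsD exporter sidecar,
# plus a ServiceMonitor that registers it as a Prometheus target.
apiVersion: v1
kind: Service
metadata:
  name: ambassador-monitor
  labels:
    service: ambassador-monitor
spec:
  clusterIP: None
  ports:
    - name: prometheus-metrics
      port: 9102
  selector:
    service: ambassador
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ambassador-monitor
  labels:
    ambassador: monitoring   # the label the Prometheus instance selects on
spec:
  selector:
    matchLabels:
      service: ambassador-monitor
  endpoints:
    - port: prometheus-metrics
```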


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f statsd-sink-svc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next, we need to tell the Prometheus Operator to create a Prometheus cluster for us. The Prometheus cluster is configured to collect data from any &lt;code&gt;ServiceMonitor&lt;/code&gt; with the &lt;code&gt;ambassador:monitoring&lt;/code&gt; label.&lt;/p&gt;
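&lt;p&gt;A sketch of the Prometheus custom resource (field values are assumptions; the actual manifest is &lt;code&gt;prometheus.yaml&lt;/code&gt;):&lt;/p&gt;

```yaml
# Hypothetical Prometheus custom resource; the Operator turns this
# into a running, fully configured Prometheus server.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  replicas: 1
  serviceAccountName: prometheus     # the ServiceAccount created earlier
  serviceMonitorSelector:
    matchLabels:
      ambassador: monitoring         # scrape matching ServiceMonitors
```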


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f prometheus.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Finally, we can create a service to expose Prometheus to the rest of the world. Again, if you're not on AWS or GKE, you'll want to use a &lt;code&gt;NodePort&lt;/code&gt; instead.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f prom-svc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;We've now configured Prometheus to monitor Envoy, so let's test it out. Get the external IP address for Prometheus.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get services
NAME                  CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
ambassador            10.11.255.93    35.221.115.102   80:32079/TCP     3h
ambassador-admin      10.11.246.117   &amp;lt;nodes&amp;gt;          8877:30366/TCP   3h
ambassador-monitor    None            &amp;lt;none&amp;gt;           9102/TCP         3h
kubernetes            10.11.240.1     &amp;lt;none&amp;gt;           443/TCP          3h
prometheus            10.11.254.180   35.191.39.173    9090:32134/TCP   3h
prometheus-operated   None            &amp;lt;none&amp;gt;           9090/TCP         3h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the example above, this is &lt;code&gt;35.191.39.173&lt;/code&gt;. Now, go to http://$PROM_IP:9090 to see the Prometheus UI. You should see a number of metrics automatically populate in Prometheus.&lt;/p&gt;
&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;If the above doesn't work, there are a few things to investigate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure all your pods are running (&lt;code&gt;kubectl get pods&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Check the logs on the Prometheus cluster (&lt;code&gt;kubectl logs $PROM_POD prometheus&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Check &lt;a href="https://www.getambassador.io/user-guide/running#diagnostics" rel="noopener noreferrer"&gt;Ambassador diagnostics&lt;/a&gt; to verify Ambassador is working correctly&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Get a service running in Envoy
&lt;/h2&gt;

&lt;p&gt;The metrics so far haven't been very interesting, since we haven't routed any traffic through Envoy. We'll use Ambassador to set up a route from Envoy to the &lt;a href="http://httpbin.org" rel="noopener noreferrer"&gt;httpbin&lt;/a&gt; service. Ambassador is configured through Kubernetes annotations, so we'll add the route as an annotation on a Kubernetes service.&lt;/p&gt;
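&lt;p&gt;A sketch of the annotated service (the mapping shown is illustrative; the actual manifest is &lt;code&gt;httpbin.yaml&lt;/code&gt;):&lt;/p&gt;

```yaml
# Hypothetical service carrying an Ambassador Mapping annotation that
# routes /httpbin/ to the public httpbin.org service.
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  ports:
    - port: 80
```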


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f httpbin.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if we get the external IP address of Ambassador, we can route requests through Ambassador to the httpbin service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get services
NAME                  CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
ambassador            10.11.255.93    35.221.115.102   80:32079/TCP     3h
ambassador-admin      10.11.246.117   &amp;lt;nodes&amp;gt;          8877:30366/TCP   3h
ambassador-monitor    None            &amp;lt;none&amp;gt;           9102/TCP         3h
kubernetes            10.11.240.1     &amp;lt;none&amp;gt;           443/TCP          3h
prometheus            10.11.254.180   35.191.39.173    9090:32134/TCP   3h
prometheus-operated   None            &amp;lt;none&amp;gt;           9090/TCP         3h

$ curl http://35.221.115.102/httpbin/ip
{
  "origin": "35.214.10.110"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the &lt;code&gt;curl&lt;/code&gt; command a few times, as shown above. Going back to the Prometheus dashboard, you'll see that a bevy of new metrics containing &lt;code&gt;httpbin&lt;/code&gt; has appeared. Pick any of these metrics to explore further. For more information on Envoy stats, Matt Klein has written a &lt;a href="https://blog.envoyproxy.io/envoy-stats-b65c7f363342" rel="noopener noreferrer"&gt;detailed overview&lt;/a&gt; of Envoy's stats architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Microservices, as you know, are distributed systems. The key to scaling distributed systems is creating loose coupling between each of the components. In a microservices architecture, the most painful source of coupling is actually &lt;em&gt;organizational&lt;/em&gt; and not architectural. Design patterns such as the Prometheus Operator enable teams to be more self-sufficient, and reduce organizational coupling, enabling teams to code faster.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Ambassador and Istio: Edge proxy and service mesh</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Fri, 12 Jan 2018 16:15:41 +0000</pubDate>
      <link>https://dev.to/datawireio/ambassador-and-istio-edge-proxy-and-service-mesh-nla</link>
      <guid>https://dev.to/datawireio/ambassador-and-istio-edge-proxy-and-service-mesh-nla</guid>
      <description>&lt;p&gt;&lt;a href="https://www.getambassador.io"&gt;Ambassador&lt;/a&gt; is a Kubernetes-native API Gateway for microservices. Ambassador is deployed at the edge of your network, and routes incoming traffic to your internal services (aka "north-south" traffic).  &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt; is a service mesh for microservices, and designed to add L7 observability, routing, and resilience to service-to-service traffic (aka "east-west" traffic). Both Istio and Ambassador are built using &lt;a href="https://www.envoyproxy.io"&gt;Envoy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ambassador and Istio can be deployed together on Kubernetes. In this configuration, incoming traffic from outside the cluster is first routed through Ambassador, which then routes the traffic to Istio. Ambassador handles authentication, edge routing, TLS termination, and other traditional edge functions.&lt;/p&gt;

&lt;p&gt;This allows the operator to have the best of both worlds: a high performance, modern edge service (Ambassador) combined with a state-of-the-art service mesh (Istio). Istio's basic &lt;a href="https://istio.io/docs/tasks/traffic-management/ingress.html"&gt;ingress controller&lt;/a&gt; is very limited, with no support for authentication or many of the other features of Ambassador.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Ambassador working with Istio
&lt;/h2&gt;

&lt;p&gt;Getting Ambassador working with Istio is straightforward. In this example, we'll use the &lt;code&gt;bookinfo&lt;/code&gt; sample application from Istio.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Istio on Kubernetes, following &lt;a href="https://istio.io/docs/setup/kubernetes/quick-start.html"&gt;the default instructions&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Next, install the Bookinfo sample application, following the &lt;a href="https://istio.io/docs/guides/bookinfo.html"&gt;instructions&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Verify that the sample application is working as expected.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By default, the Bookinfo application uses the Istio ingress. To use Ambassador, we need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Ambassador. See the &lt;a href="https://www.getambassador.io/user-guide/getting-started"&gt;quickstart&lt;/a&gt; guide.&lt;/li&gt;
&lt;li&gt;Update the &lt;code&gt;bookinfo.yaml&lt;/code&gt; manifest to include the necessary Ambassador annotations. See below.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Optionally, delete the Istio ingress resource created by the &lt;code&gt;bookinfo.yaml&lt;/code&gt; manifest by typing &lt;code&gt;kubectl delete ingress gateway&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test Ambassador by going to &lt;code&gt;$AMBASSADOR_IP/productpage/&lt;/code&gt;. You can get the actual IP address for Ambassador by typing &lt;code&gt;kubectl get services ambassador&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Automatic sidecar injection
&lt;/h2&gt;

&lt;p&gt;Newer versions of Istio support Kubernetes initializers to &lt;a href="https://istio.io/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection"&gt;automatically inject the Istio sidecar&lt;/a&gt;. With Ambassador, you don't need to inject the Istio sidecar -- Ambassador's Envoy instance will automatically route to the appropriate service(s). If you're using automatic sidecar injection, you'll need to configure Istio to not inject the sidecar automatically for Ambassador pods. There are several approaches to doing this that are &lt;a href="https://istio.io/docs/setup/kubernetes/sidecar-injection.html#configuration-options"&gt;explained in the documentation&lt;/a&gt;.&lt;/p&gt;
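&lt;p&gt;For example, one documented approach is to set a pod-template annotation on the Ambassador deployment so the automatic injector skips those pods:&lt;/p&gt;

```yaml
# Pod-template annotation telling Istio's automatic injector to skip
# these pods (fragment of the Ambassador deployment spec).
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
```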

</description>
      <category>showdev</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>Tutorial: Getting started with Ambassador - a Kubernetes-native API gateway for microservices</title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Wed, 03 Jan 2018 19:48:25 +0000</pubDate>
      <link>https://dev.to/datawireio/tutorial-getting-started-with-ambassador---a-kubernetes-native-api-gateway-for-microservices-1849</link>
      <guid>https://dev.to/datawireio/tutorial-getting-started-with-ambassador---a-kubernetes-native-api-gateway-for-microservices-1849</guid>
      <description>&lt;p&gt;Ambassador is a Kubernetes-native API gateway for microservices built on the &lt;a href="https://www.envoyproxy.io/"&gt;Envoy Proxy&lt;/a&gt;. Ambassador is designed for self-service. Developers should be able to manage basic aspects of Ambassador without requiring operations. Ambassador accomplishes this by enabling developers to configure it through Kubernetes annotations. This allows developers to easily manage Ambassador using their existing Kubernetes deployment workflow.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll do a quick tour of Ambassador with a demo configuration before walking through how to deploy Ambassador in Kubernetes with a custom configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Running the demo configuration
&lt;/h2&gt;

&lt;p&gt;By default, Ambassador uses a demo configuration to show some of its basic features. Get it running with Docker, and expose Ambassador on port 8080:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ambassador &lt;span class="nt"&gt;--rm&lt;/span&gt; datawire/ambassador:&lt;span class="o"&gt;{&lt;/span&gt;VERSION&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--demo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  2. Ambassador's Diagnostics
&lt;/h2&gt;

&lt;p&gt;Ambassador provides live diagnostics viewable with a web browser. While this would normally not be exposed to the public network, the Docker demo publishes the diagnostics service at the following URL:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://localhost:8080/ambassador/v0/diag/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Some of the most important information - your Ambassador version, how recently Ambassador's configuration was updated, and how recently Envoy last reported status to Ambassador - is right at the top. The diagnostics overview can show you what it sees in your configuration map, and which Envoy objects were created based on your configuration.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. The Quote of the Moment service
&lt;/h2&gt;

&lt;p&gt;Since Ambassador is an API gateway, its primary purpose is to provide access to microservices. The demo is preconfigured with a mapping that connects the &lt;code&gt;/qotm/&lt;/code&gt; resource to the "Quote of the Moment" service -- a demo service that supplies quotations. You can try it out here:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/qotm/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This request will route to the &lt;code&gt;qotm&lt;/code&gt; service at &lt;code&gt;demo.getambassador.io&lt;/code&gt;, and return a quote in a JSON object.&lt;/p&gt;

&lt;p&gt;You can also see the mapping by clicking the &lt;code&gt;mapping-qotm.yaml&lt;/code&gt; link from the diagnostic overview, or by opening&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://localhost:8080/ambassador/v0/diag/mapping-qotm.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Authentication
&lt;/h2&gt;

&lt;p&gt;On the diagnostic overview, you can also see that Ambassador is configured to do authentication -- click the &lt;code&gt;auth.yaml&lt;/code&gt; link, or open&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://localhost:8080/ambassador/v0/diag/auth.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;for the details. Ambassador uses a demo authentication service at &lt;code&gt;demo.getambassador.io&lt;/code&gt; to mediate access to the Quote of the Moment: simply getting a random quote is allowed without authentication, but to get a specific quote, you'll have to authenticate:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-v&lt;/span&gt; http://localhost:8080/qotm/quote/5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;will return a 401, but&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; username:password http://localhost:8080/qotm/quote/5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;will succeed. (Note that that's literally "username" and "password" -- the demo auth service is deliberately not very secure!)&lt;/p&gt;

&lt;p&gt;Note that it's up to the auth service to decide what needs authentication -- teaming Ambassador with an authentication service can be as flexible or strict as you need it to be.&lt;/p&gt;
&lt;h2&gt;
  
  
  5. Ambassador in Kubernetes
&lt;/h2&gt;

&lt;p&gt;So far, we've used a demo configuration, and run everything in our local Docker instance. We'll now switch to Kubernetes, using service annotations to configure Ambassador to map &lt;code&gt;/httpbin/&lt;/code&gt; to &lt;code&gt;httpbin.org&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  5.1 Defining the Ambassador Service
&lt;/h3&gt;

&lt;p&gt;Ambassador is deployed as a Kubernetes service. Create the following YAML and put it in a file called &lt;code&gt;ambassador-service.yaml&lt;/code&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;Then, apply it to the Kubernetes cluster with &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ambassador-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The YAML above does several things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It creates a Kubernetes service for Ambassador, of type &lt;code&gt;LoadBalancer&lt;/code&gt;. Note that if you're not deploying in an environment where &lt;code&gt;LoadBalancer&lt;/code&gt; is a supported type, you'll need to change this to a different type of service, e.g., &lt;code&gt;NodePort&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;It creates a test route that will route traffic from &lt;code&gt;/httpbin/&lt;/code&gt; to the public &lt;code&gt;httpbin.org&lt;/code&gt; service. In Ambassador, Kubernetes annotations (as shown above) are used for configuration. More commonly, you'll want to configure routes as part of your service deployment process, as shown in &lt;a href="https://www.datawire.io/faster/canary-workflow/"&gt;this more advanced example&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, note that we are using the &lt;code&gt;host_rewrite&lt;/code&gt; attribute for the &lt;code&gt;httpbin_mapping&lt;/code&gt; -- this rewrites the HTTP &lt;code&gt;Host&lt;/code&gt; header, and is often a good idea when mapping to external services. Ambassador supports &lt;a href="https://www.getambassador.io/reference/configuration"&gt;many different configuration options&lt;/a&gt;.&lt;/p&gt;
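&lt;p&gt;Putting the pieces described above together, &lt;code&gt;ambassador-service.yaml&lt;/code&gt; might look roughly like this (the selector and ports are assumptions):&lt;/p&gt;

```yaml
# Hypothetical ambassador-service.yaml: a LoadBalancer service whose
# annotation defines the /httpbin/ test route with host_rewrite.
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  type: LoadBalancer   # change to NodePort where LoadBalancer is unsupported
  ports:
    - port: 80
      targetPort: 80
  selector:
    service: ambassador
```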
&lt;h3&gt;
  
  
  5.2 Deploying Ambassador
&lt;/h3&gt;

&lt;p&gt;Once that's done, we need to get Ambassador actually running. It's simplest to use the YAML files we have online for this (though of course you can download them and use them locally if you prefer!). If you're using a cluster with RBAC enabled, you'll need to use:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Without RBAC, you can use:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;When Ambassador starts, it will notice the &lt;code&gt;getambassador.io/config&lt;/code&gt; annotation on its own service, and use the &lt;code&gt;Mapping&lt;/code&gt; contained in it to configure itself. (There's no restriction on what kinds of Ambassador configuration can go into the annotation, but it's important to note that Ambassador only looks at annotations on Kubernetes &lt;code&gt;service&lt;/code&gt;s.)&lt;/p&gt;

&lt;p&gt;Note: If you're using Google Kubernetes Engine with RBAC, you'll need to grant permissions to the account that will be setting up Ambassador. To do this, get your official GKE username, and then grant &lt;code&gt;cluster-admin&lt;/code&gt; Role privileges to that username:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud info | grep Account
Account: [username@example.org]
$ kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=username@example.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  5.3 Testing the Mapping
&lt;/h3&gt;

&lt;p&gt;To test things out, we'll need the external IP for Ambassador (it might take some time for this to be available):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-o&lt;/span&gt; wide ambassador
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Eventually, this should give you something like:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME         CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
ambassador   10.11.12.13     35.36.37.38     80:31656/TCP   1m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You should now be able to use &lt;code&gt;curl&lt;/code&gt; to &lt;code&gt;httpbin&lt;/code&gt; (don't forget the trailing &lt;code&gt;/&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl 35.36.37.38/httpbin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  6. Adding a Service
&lt;/h2&gt;

&lt;p&gt;You can add a service just by deploying it with an appropriate annotation. For example, we can deploy the QoTM service locally in this cluster and automatically map it through Ambassador by creating &lt;code&gt;qotm.yaml&lt;/code&gt; with the following:&lt;/p&gt;
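&lt;p&gt;A sketch of the service portion of &lt;code&gt;qotm.yaml&lt;/code&gt; (the ports and labels are assumptions; the full manifest would also include a deployment running the QoTM container):&lt;/p&gt;

```yaml
# Hypothetical qotm service with an Ambassador Mapping annotation; once
# applied, Ambassador picks up the /qotm/ route automatically.
apiVersion: v1
kind: Service
metadata:
  name: qotm
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: qotm_mapping
      prefix: /qotm/
      service: qotm
spec:
  ports:
    - port: 80
      targetPort: 5000   # assumed container port
  selector:
    app: qotm
```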


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;and then applying it with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f qotm.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few seconds after the QoTM service is running, Ambassador should be configured for it. Try it with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl 35.36.37.38/qotm/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. The Diagnostics Service in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Note that we did not expose the diagnostics port for Ambassador, since we don't want to expose it on the Internet. To view it, we'll need to get the name of one of the Ambassador pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
ambassador-3655608000-43x86   1/1       Running   0          2m
ambassador-3655608000-w63zf   1/1       Running   0          2m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Forwarding local port 8877 to one of the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward ambassador-3655608000-43x86 8877
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;will then let us view the diagnostics at &lt;a href="http://localhost:8877/ambassador/v0/diag/"&gt;http://localhost:8877/ambassador/v0/diag/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Next
&lt;/h2&gt;

&lt;p&gt;We've just done a quick tour of some of the core features of Ambassador: diagnostics, routing, configuration, and authentication.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Join us on &lt;a href="https://gitter.im/datawire/ambassador"&gt;Gitter&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;Learn how to &lt;a href="https://www.getambassador.io/user-guide/auth-tutorial"&gt;add authentication&lt;/a&gt; to existing services;&lt;/li&gt;
&lt;li&gt;Learn how to &lt;a href="https://www.getambassador.io/how-to/grpc"&gt;use gRPC with Ambassador&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;Learn how to &lt;a href="https://www.getambassador.io/user-guide/with-istio"&gt;use Ambassador with Istio&lt;/a&gt;; or&lt;/li&gt;
&lt;li&gt;Read about &lt;a href="https://www.getambassador.io/reference/configuration"&gt;configuring Ambassador&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A development workflow for Kubernetes services </title>
      <dc:creator>kelseyevans</dc:creator>
      <pubDate>Tue, 21 Nov 2017 18:46:02 +0000</pubDate>
      <link>https://dev.to/datawireio/a-development-workflow-for-kubernetes-services-7fo</link>
      <guid>https://dev.to/datawireio/a-development-workflow-for-kubernetes-services-7fo</guid>
      <description>&lt;p&gt;A basic development workflow for Kubernetes services lets a developer write some code, commit it, and get it running on Kubernetes. It's also important that your development environment be as similar as possible to production, since having two different environments will inevitably introduce bugs. In this tutorial, we'll walk through a basic development workflow that is built around Kubernetes, Docker, and Envoy/Ambassador.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your cloud infrastructure
&lt;/h2&gt;

&lt;p&gt;This tutorial relies on two components in the cloud, Kubernetes and Ambassador. If you haven't already, go ahead and set them up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.getambassador.io"&gt;Ambassador&lt;/a&gt;, a self-service API Gateway for Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. A development environment for Kubernetes services
&lt;/h2&gt;

&lt;p&gt;You need a development environment for Kubernetes services. We recommend the following approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A containerized &lt;em&gt;build/runtime&lt;/em&gt; environment, where your service is always run and built. Containerizing your environment helps ensure environmental parity across different development and production environments. It also simplifies the onboarding process for new developers.&lt;/li&gt;
&lt;li&gt;Developing your microservice locally, outside of the cluster. You want a fast code/build/test cycle. If you develop remotely, the additional step of deploying to a Kubernetes cluster introduces significant latency.&lt;/li&gt;
&lt;li&gt;Deploying your service into Kubernetes once you need to share your service with others (e.g., canary testing, internal development, etc.).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll need the following tools installed on your laptop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;git, for source control&lt;/li&gt;
&lt;li&gt;Docker, to build and run your containers&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt;, to manage your deployment&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://forge.sh"&gt;Forge&lt;/a&gt;, for deploying your service into Kubernetes&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.telepresence.io"&gt;Telepresence&lt;/a&gt;, for locally developing your service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go ahead and install them now, if you haven't already.&lt;/p&gt;

&lt;h2&gt;2. Deploy service to Kubernetes&lt;/h2&gt;

&lt;p&gt;In a traditional application, the release/operations team manages the deployment of application updates to production. In a microservices architecture, each service team is responsible for deploying updates to its own services.&lt;/p&gt;

&lt;p&gt;We're going to deploy and publish a microservice, from source, into Kubernetes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We've created a simple Python microservice that you can use as a template for your service. This template includes:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;a &lt;code&gt;Dockerfile&lt;/code&gt; that specifies how your development environment and runtime environment are configured and built.&lt;/li&gt;
&lt;li&gt;a &lt;code&gt;service.yaml&lt;/code&gt; file that customizes deployments for different scenarios (e.g., production, canary, development).&lt;/li&gt;
&lt;li&gt;a Kubernetes manifest (&lt;code&gt;k8s/deployment.yaml&lt;/code&gt;) that defines how the service is run in Kubernetes. It also contains the annotations necessary to configure Ambassador for the given service.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   git clone https://github.com/datawire/hello-world-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
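
&lt;p&gt;For reference, a minimal &lt;code&gt;Dockerfile&lt;/code&gt; for a Python service of this kind might look like the following. This is an illustrative sketch, not the template's actual contents; the file names and port are assumptions.&lt;/p&gt;

```dockerfile
# Illustrative sketch only; the template's real Dockerfile may differ.
FROM python:3-alpine
WORKDIR /service
# Install dependencies first so they are cached across code changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```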



&lt;ol start="2"&gt;
&lt;li&gt;We're going to use Forge to automate and template-ize the deployment process. Run the Forge configuration process:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   forge setup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Getting a service running on a Kubernetes cluster involves a number of steps: building a Docker image, pushing the image to a repository, instantiating a Kubernetes manifest that points to the image, and applying the manifest to the cluster. Forge automates this entire deployment process:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   cd hello-world-python
   forge deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Now, we're going to test the service. Get the external IP address of Ambassador:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   kubectl get services ambassador
   NAME         CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
   ambassador   10.11.250.208   35.190.189.139   80:31622/TCP   4d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Access the service via Ambassador:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   curl 35.190.189.139/hello/
   Hello World (Python)! (up 0:03:13)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;3. Live coding&lt;/h2&gt;

&lt;p&gt;When developing, you want a fast feedback cycle. You'd like to make a code change, and immediately be able to build and test your code. The deployment process we just went through adds latency into the process, since building and deploying a container with your latest changes takes time. Yet, running a service in Kubernetes lets that service access other cloud resources (e.g., other services, databases, etc.).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.telepresence.io"&gt;Telepresence&lt;/a&gt; lets you develop your service locally, while creating a bi-directional proxy to a remote Kubernetes cluster.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You'd like your development environment to be identical to your runtime environment. We're going to do that by using the exact same Dockerfile we use for production to build a development image. Make sure you're in the &lt;code&gt;hello-world-python&lt;/code&gt; directory, and type:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   docker build . -t hello-world-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Now, we can swap the existing &lt;code&gt;hello-world&lt;/code&gt; service on Kubernetes for a version of the same service, running in a local container.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   telepresence --swap-deployment hello-world-stable --docker-run \
    --rm -it -v $(pwd):/service hello-world-dev:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Note that Forge has automatically appended a &lt;code&gt;stable&lt;/code&gt; suffix to the deployment name to indicate that the service has been deployed with the &lt;code&gt;stable&lt;/code&gt; profile specified in the &lt;code&gt;service.yaml&lt;/code&gt;.) &lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Telepresence invokes &lt;code&gt;docker run&lt;/code&gt; to start the container. It also mounts the local filesystem containing the Python source tree into the container. Change the "Hello World" message in &lt;code&gt;app.py&lt;/code&gt; to a different value:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   def root(): 
    return "Hello World via Telepresence! (up %s)\n" % elapsed()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
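
&lt;p&gt;For context, the handler can also be exercised outside the cluster. Below is a minimal, self-contained sketch of &lt;code&gt;root()&lt;/code&gt; together with a hypothetical &lt;code&gt;elapsed()&lt;/code&gt; helper; the template's actual &lt;code&gt;app.py&lt;/code&gt; presumably wires &lt;code&gt;root()&lt;/code&gt; into a web framework route.&lt;/p&gt;

```python
import time

START = time.time()

def elapsed():
    # Hypothetical helper: format uptime as H:MM:SS, e.g. "0:04:13".
    secs = int(time.time() - START)
    return "%d:%02d:%02d" % (secs // 3600, (secs % 3600) // 60, secs % 60)

def root():
    # The handler edited in the step above.
    return "Hello World via Telepresence! (up %s)\n" % elapsed()

print(root())
```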



&lt;ol start="4"&gt;
&lt;li&gt;Now, if we test our service via Ambassador, we'll see that requests are routed to the modified version of our service.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   curl 35.190.189.139/hello/
   Hello World via Telepresence! (up 0:04:13)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Want to learn more?&lt;/h2&gt;

&lt;p&gt;This article originally appeared on &lt;a href="https://www.datawire.io/"&gt;Datawire&lt;/a&gt;'s &lt;a href="https://www.datawire.io/faster/"&gt;Code Faster Guides&lt;/a&gt;. Check out the other tutorials in this series: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.datawire.io/faster/why-workflow/"&gt;Why your development workflow is so important for microservices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datawire.io/faster/canary-workflow/"&gt;Canary deployments, A/B testing, and microservices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datawire.io/faster/shared-dev/"&gt;Shared development models and multi-service applications&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or try out the open source projects mentioned in this tutorial: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.getambassador.io"&gt;Ambassador&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://forge.sh"&gt;Forge&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.telepresence.io/"&gt;Telepresence&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any questions, reach out to us on &lt;a href="https://gitter.im/datawire/home"&gt;Gitter&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>programming</category>
      <category>docker</category>
      <category>tutorial</category>
      <category>introduction</category>
    </item>
  </channel>
</rss>
