<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eduardo Romero</title>
    <description>The latest articles on DEV Community by Eduardo Romero (@foxteck).</description>
    <link>https://dev.to/foxteck</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F35632%2F96b2d60d-2941-48df-875a-f921cae2f5fa.jpg</url>
      <title>DEV Community: Eduardo Romero</title>
      <link>https://dev.to/foxteck</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/foxteck"/>
    <language>en</language>
    <item>
      <title>Serverless Architectural Patterns</title>
      <dc:creator>Eduardo Romero</dc:creator>
      <pubDate>Mon, 04 Mar 2019 15:18:52 +0000</pubDate>
      <link>https://dev.to/foxteck/serverless-architectural-patterns-5ge1</link>
      <guid>https://dev.to/foxteck/serverless-architectural-patterns-5ge1</guid>
      <description>

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zcgYT0wO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Awz3dUQYEOulqAI-A9H4-wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zcgYT0wO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Awz3dUQYEOulqAI-A9H4-wg.png" alt=""&gt;&lt;/a&gt;Black and white clouds, atmosphere — Photo by &lt;a href="https://unsplash.com/photos/LzJZOJZtgmc?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Bryan Minear&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’re building more and more complex platforms while trying to address ever-changing business requirements and deliver on time to an increasingly large number of users.&lt;/p&gt;

&lt;p&gt;What we deliver is inherently unique. A combination of different services, technologies, features, and teams with their own contexts and competing priorities.&lt;/p&gt;

&lt;p&gt;Sometimes it just feels like you’re designing and building an aircraft carrier while navigating a rough ocean, occupied, and under attack.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;
&lt;div class="ltag__twitter-tweet__media"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kI1ls8zl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/C-XkCunXoAU-Se1.jpg"&gt;&lt;/div&gt;
&lt;div class="ltag__twitter-tweet__main"&gt;
&lt;div class="ltag__twitter-tweet__header"&gt;
&lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--MmfS78fS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/979937791290195968/lyg6WFps_normal.jpg"&gt;&lt;div class="ltag__twitter-tweet__full-name"&gt;Eduardo Romero&lt;/div&gt;
&lt;div class="ltag__twitter-tweet__username"&gt;@foxteck&lt;/div&gt;
&lt;div class="ltag__twitter-tweet__twitter-logo"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kX-SksTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-eb8b335b75231c6443385ac04fdfcaed8ca5423c3990e89dc0178a4090ac1908.svg"&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div class="ltag__twitter-tweet__body"&gt;"Writing software is like designing and building a new type of aircraft carrier while it is in rough ocean, occupied, and under attack." &lt;/div&gt;
&lt;div class="ltag__twitter-tweet__date"&gt;21:14 - 26 Apr 2017&lt;/div&gt;
&lt;/div&gt;
&lt;/blockquote&gt;

&lt;p&gt;Continuously delivering quality software fast is a core business advantage for our clients. Accelerating the development process with modern architectures, frameworks, and practices is strategic.&lt;/p&gt;

&lt;p&gt;The serverless paradigm is great at enabling fast, continuous software delivery. You don’t have to think about managing infrastructure, provisioning, or planning for demand and scale.&lt;/p&gt;

&lt;p&gt;And with the Function as a Service model we structure our code in smaller, simpler units that are easy to understand, change, and deploy to production, allowing us to deliver business value and iterate quickly.&lt;/p&gt;
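&lt;p&gt;As a minimal sketch of how small such a unit can be (the handler name and event shape are illustrative, not from a specific project), a single-purpose Lambda function fits in a few lines:&lt;/p&gt;

```python
import json

def create_order(event, context):
    """A single-purpose Lambda handler: one small unit of business logic."""
    body = json.loads(event.get("body") or "{}")
    order = {"id": "ord-1", "items": body.get("items", [])}
    # API Gateway proxy integrations expect a statusCode/body response shape.
    return {"statusCode": 201, "body": json.dumps(order)}
```

&lt;p&gt;A function this small is easy to review, test in isolation, and redeploy without touching the rest of the platform.&lt;/p&gt;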

&lt;blockquote&gt;
&lt;p&gt;Serverless is a great enabler for experimenting, learning, and out-experimenting your competition.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Architectural Patterns are a powerful way to promote best practices, robust solutions, and a shared architectural vision across our engineering organization.&lt;/p&gt;

&lt;p&gt;Working on different projects leveraging serverless, I’ve seen patterns that we can adopt to solve common problems found in modern cloud architectures. While this is not an exhaustive collection, it can be used as a catalog of architectural-level building blocks for the next platform we build.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless Architectural Patterns Catalog
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Simple Web Service.&lt;/li&gt;
&lt;li&gt;Decoupled Messaging.&lt;/li&gt;
&lt;li&gt;Robust API.&lt;/li&gt;
&lt;li&gt;Aggregator.&lt;/li&gt;
&lt;li&gt;Pub/Sub.&lt;/li&gt;
&lt;li&gt;Strangler.&lt;/li&gt;
&lt;li&gt;Queue-based Load Leveling.&lt;/li&gt;
&lt;li&gt;Read-heavy reporting engine.&lt;/li&gt;
&lt;li&gt;Streams and Pipelines.&lt;/li&gt;
&lt;li&gt;Fan-in and Fan-out.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Simple Web Service
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“A client needs to consume a service via a Public or Internal API.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A simple web service is the most standard use-case for AWS Lambda as a backend service. It represents the &lt;em&gt;logic&lt;/em&gt; or &lt;em&gt;domain&lt;/em&gt; layer of an &lt;a href="https://en.wikipedia.org/wiki/Multitier_architecture"&gt;&lt;em&gt;n-tiered&lt;/em&gt;&lt;/a&gt; or &lt;a href="https://martinfowler.com/bliki/PresentationDomainDataLayering.html"&gt;&lt;em&gt;layered&lt;/em&gt;&lt;/a&gt; &lt;em&gt;architecture&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N-1rPFDM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/882/1%2AFBgxD-qR66DTnjHA1ASJEA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N-1rPFDM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/882/1%2AFBgxD-qR66DTnjHA1ASJEA.png" alt=""&gt;&lt;/a&gt;Public API — Lambda Functions exposed via HTTP(s)&lt;/p&gt;

&lt;p&gt;For Public APIs, API Gateway exposes lambda functions via HTTPS. API Gateway can handle authorization, authentication, routing, and versioning.&lt;/p&gt;

&lt;p&gt;For internal APIs, clients invoke lambda functions directly from within the client app using &lt;a href="https://aws.amazon.com/tools/#sdk"&gt;AWS’ SDK&lt;/a&gt;.&lt;/p&gt;
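&lt;p&gt;A hedged sketch of that direct-invocation path: with boto3 the caller uses the Lambda client’s invoke call. The function name and payload here are invented, and the AWS client is stubbed so only the call shape matters:&lt;/p&gt;

```python
import io
import json

def invoke_internal(lambda_client, function_name, payload):
    """Call a Lambda function directly, as an internal API."""
    response = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",  # synchronous invocation
        Payload=json.dumps(payload),
    )
    return json.loads(response["Payload"].read())

# In real code: lambda_client = boto3.client("lambda")
class StubLambdaClient:
    """Stands in for boto3's Lambda client in this sketch."""
    def invoke(self, FunctionName, InvocationType, Payload):
        echoed = {"function": FunctionName, "received": json.loads(Payload)}
        return {"Payload": io.BytesIO(json.dumps(echoed).encode())}
```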

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E-hH8qqf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/858/1%2AiPIAqcjx6gwXO2uvdh67Ag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E-hH8qqf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/858/1%2AiPIAqcjx6gwXO2uvdh67Ag.png" alt=""&gt;&lt;/a&gt;Internal API — The client invokes Lambda functions via the AWS SDK of your favorite language.&lt;/p&gt;

&lt;p&gt;AWS Lambda scales automatically and can handle fluctuating loads. Execution time is limited to the maximum allowed by API Gateway (&lt;em&gt;29 seconds&lt;/em&gt;), or the hard limit of &lt;em&gt;15 minutes&lt;/em&gt; if invoked with the SDK.&lt;/p&gt;

&lt;p&gt;It’s highly recommended that functions are &lt;a href="https://read.acloud.guru/lambda-for-alexa-skills-7-tips-from-the-trenches-684c963e6ad1#5c39"&gt;stateless&lt;/a&gt;, &lt;a href="https://serverless.com/blog/strategies-implementing-user-authentication-serverless-applications/"&gt;sessions&lt;/a&gt; are &lt;a href="https://medium.com/@tjholowaychuk/aws-lambda-lifecycle-and-in-memory-caching-c9cd0844e072"&gt;cached&lt;/a&gt; and connections to downstream services are &lt;a href="https://www.jeremydaly.com/reuse-database-connections-aws-lambda/"&gt;handled appropriately&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Decoupled Messaging
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“As we break out the monolith or continue to build services for our platform, different services have to interact. We want to avoid bottlenecks, synchronous I/O, and shared state.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Asynchronous messaging is the foundation for most service integrations. It’s proven to be the &lt;a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/toc.html"&gt;best strategy for enterprise architectures&lt;/a&gt;. It allows building loosely-coupled architectures that overcome the limits of remote service communication, like latency and unreliability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZKJzg5zF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AM0nPEazhCK9DiKurpjw9CQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZKJzg5zF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AM0nPEazhCK9DiKurpjw9CQ.png" alt=""&gt;&lt;/a&gt;Decoupled Messaging with SNS&lt;/p&gt;

&lt;p&gt;In the example above, Service A triggers events and SNS distributes them to other services. When new services are deployed they can subscribe to the channel and start getting messages as events happen.&lt;/p&gt;
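&lt;p&gt;The publishing side of this setup can be a single publish call. In this sketch the SNS client is replaced with an in-memory stub, and the topic ARN and event shape are made up for illustration:&lt;/p&gt;

```python
import json

def publish_event(sns_client, topic_arn, event_type, detail):
    """Service A announces an event; SNS fans it out to the subscribers."""
    return sns_client.publish(
        TopicArn=topic_arn,
        Message=json.dumps({"type": event_type, "detail": detail}),
    )

# In real code: sns_client = boto3.client("sns")
class StubSNS:
    """In-memory stand-in for boto3's SNS client."""
    def __init__(self):
        self.published = []
    def publish(self, TopicArn, Message):
        self.published.append((TopicArn, Message))
        return {"MessageId": str(len(self.published))}
```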

&lt;p&gt;Messaging infrastructure is reliable, offers better encapsulation than a shared database, and is perfect for asynchronous, event-based communication.&lt;/p&gt;

&lt;p&gt;Different technologies have different constraints and offer specific guarantees. It’s important to be aware of these trade-offs. The most common services for messaging on AWS are &lt;a href="https://aws.amazon.com/sns/faqs/"&gt;SNS&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-limits.html"&gt;SQS&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html"&gt;Kinesis&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations&lt;/em&gt;&lt;/strong&gt;: Besides the known limitations and guarantees of each messaging service, there should be conscious consideration of message duplication, message ordering, poisonous messages, sharding, and data retention. Microsoft’s “&lt;a href="https://docs.microsoft.com/en-us/previous-versions/msp-n-p/dn589781(v=pandp.10)"&gt;Asynchronous Messaging Primer&lt;/a&gt;” article discusses these topics in detail.&lt;/p&gt;

&lt;h4&gt;
  
  
  Robust API
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“As the number of services of a platform grows clients need to interact with more and more services. Because each service might have different endpoints, belong to different teams, and have different release cycles, having the client manage these connections can be challenging.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this pattern we have a single point of entry that exposes a well-defined API and routes requests to downstream services based on endpoints, routes, methods, and &lt;a href="https://blog.apisyouwonthate.com/api-versioning-has-no-right-way-f3c75457c0b7#0f4b"&gt;client features&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---KGRBPTo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AkN_ykbVbX1Ln57wnPdtUqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---KGRBPTo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AkN_ykbVbX1Ln57wnPdtUqg.png" alt=""&gt;&lt;/a&gt;A Robust API composed of different downstream services, exposed via API Gateway.&lt;/p&gt;

&lt;p&gt;The downstream services could be lambda functions, external third-party APIs, &lt;a href="https://aws.amazon.com/fargate/"&gt;Fargate&lt;/a&gt; containers, full-blown microservices, even internal APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/api-gateway/"&gt;API Gateway&lt;/a&gt;, &lt;a href="https://aws.amazon.com/elasticloadbalancing/features/#Details_for_Elastic_Load_Balancing_Products"&gt;App Load Balancer&lt;/a&gt;, or &lt;a href="https://aws.amazon.com/appsync/"&gt;AppSync&lt;/a&gt; can be leveraged as the routing engine.&lt;/p&gt;
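&lt;p&gt;Whichever service implements it, the routing engine is conceptually a table from method and path to a downstream target. A toy sketch, with hypothetical routes and handlers:&lt;/p&gt;

```python
# Each route maps to the downstream service that owns it.
ROUTES = {
    ("GET", "/orders"): lambda request: {"service": "orders", "items": []},
    ("GET", "/menu"): lambda request: {"service": "menu", "drinks": ["v60"]},
}

def gateway(method, path, request=None):
    """Single point of entry: route the request to its downstream service."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"statusCode": 404}
    return handler(request)
```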

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IoIe9gqR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/747/1%2AcY03nWEJaD4RysmM6YLIMw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IoIe9gqR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/747/1%2AcY03nWEJaD4RysmM6YLIMw.png" alt=""&gt;&lt;/a&gt;AppSync (GraphQL) can be seen as an implementation of the Robust API and Aggregator patterns.&lt;/p&gt;

&lt;p&gt;Having an abstraction layer between clients and downstream services facilitates incremental updates, rolling releases, and parallel versioning. Downstream services can be owned by different teams, decoupling release cycles, reducing the need for cross-team coordination, and improving API lifecycle and &lt;a href="https://blog.apisyouwonthate.com/api-evolution-for-rest-http-apis-b4296519e564"&gt;evolvability&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This pattern is also known as the &lt;a href="https://microservices.io/patterns/apigateway.html"&gt;API Gateway&lt;/a&gt; or &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/gateway-routing"&gt;Gateway Router&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations&lt;/em&gt;&lt;/strong&gt;: The Gateway can be a single point of failure. Managing resources and updating the client-facing interface can be tricky. It could become a cross-team bottleneck if it’s not managed via code automation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Aggregator
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“A client might need to make multiple calls to various backend services to perform a single operation. This chattiness impacts performance and scale.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Microservices have made communication overhead a common problem. Applications need to communicate with many smaller services, resulting in a higher number of cross-service calls.&lt;/p&gt;

&lt;p&gt;A service centralizes client requests to reduce the impact of communication overhead. It decomposes each request, makes the calls to downstream services, collects and stores responses as they arrive, aggregates them, and returns them to the caller as one response.&lt;/p&gt;
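&lt;p&gt;The aggregation step can be sketched with a thread pool. The three downstream calls are faked here; in practice each would be a Lambda invocation or an HTTP request:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def get_customer(order_id):   return {"name": "Ada"}     # fake downstream call
def get_line_items(order_id): return [{"sku": "v60"}]    # fake downstream call
def get_payments(order_id):   return {"status": "paid"}  # fake downstream call

def invoice(order_id):
    """Fan the calls out in parallel, collect the responses, return one payload."""
    with ThreadPoolExecutor() as pool:
        customer = pool.submit(get_customer, order_id)
        items = pool.submit(get_line_items, order_id)
        payments = pool.submit(get_payments, order_id)
        return {
            "customer": customer.result(),
            "items": items.result(),
            "payments": payments.result(),
        }
```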

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jeYOHE_b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/934/1%2AcHSxPOODaR1itN4A_W8Mcg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jeYOHE_b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/934/1%2AcHSxPOODaR1itN4A_W8Mcg.png" alt=""&gt;&lt;/a&gt;The invoice lambda function aggregating downstream services.&lt;/p&gt;

&lt;p&gt;Microsoft calls this pattern &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/gateway-aggregation"&gt;Gateway Aggregation&lt;/a&gt;. It can be implemented as a service with &lt;em&gt;some&lt;/em&gt; business logic, that is able to cache responses and knows what to do when downstream services fail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations&lt;/em&gt;&lt;/strong&gt;: Calls to the API should work as one operation. The API of the aggregator should not expose details of the services behind it to the client. The aggregator can be a single point of failure, and if it’s not &lt;em&gt;close enough&lt;/em&gt; to other services it can cause performance issues. The aggregator is responsible for handling retries, circuit breaking, caching, tracing, and logging.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pub/Sub
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“Services rarely live in isolation. The platform grows and services proliferate. We need services to interact without creating interdependency.&lt;/p&gt;

&lt;p&gt;Asynchronous messaging enables services to announce events to multiple interested consumers without coupling.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/publisher-subscriber"&gt;Publisher-Subscriber&lt;/a&gt;, services publish events through a channel as &lt;em&gt;messages&lt;/em&gt;. Multiple interested consumers listen for the events by subscribing to these channels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v_YCAeFB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1016/1%2A-EPTUG9Umkl2UZTw7XqYDw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v_YCAeFB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1016/1%2A-EPTUG9Umkl2UZTw7XqYDw.png" alt=""&gt;&lt;/a&gt;The Orders service interacting with the Video Wall, Leaderboard, and Recommendations Engine.&lt;/p&gt;

&lt;p&gt;In the example the Orders service will publish an event when an order is created in the mobile application.&lt;/p&gt;

&lt;p&gt;The VideoWall service listens for OrderCreated events. It takes the order, breaks apart its items, separating what &lt;em&gt;needs&lt;/em&gt; preparation (like a v60) from what &lt;em&gt;does not&lt;/em&gt; (like cookies), and updates the screen so our &lt;em&gt;Baristas&lt;/em&gt; see which coffee to start brewing next.&lt;/p&gt;

&lt;p&gt;The Leaderboard service will receive the same event and update the tally: who is the &lt;em&gt;Coffee Aficionado&lt;/em&gt; of the month, and what are the &lt;em&gt;Top Selling Coffee Origins, Methods&lt;/em&gt; and &lt;em&gt;Goodies&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The Recommendations service will keep track of who is drinking what, things that seem to go great together, and which method you should try next. (&lt;em&gt;Hint&lt;/em&gt;: try the Japanese Syphon.)&lt;/p&gt;


&lt;p&gt;Services are decoupled. They work together by observing and reacting to the environment, and each other — like rappers &lt;a href="https://www.youtube.com/watch?v=1qR8zFJ2vHI&amp;amp;t=36s"&gt;freestyling&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When new services and features are available they can subscribe, get events, and evolve independently.&lt;/p&gt;
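&lt;p&gt;The whole interaction can be sketched as an in-memory bus, with the bus standing in for SNS and the service names following the example above:&lt;/p&gt;

```python
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    """Deliver the event to every subscriber; the publisher knows none of them."""
    for handler in subscribers[event_type]:
        handler(payload)

# VideoWall and Leaderboard each subscribe independently of the Orders service.
wall, leaderboard = [], []
subscribe("OrderCreated", wall.append)
subscribe("OrderCreated", leaderboard.append)
publish("OrderCreated", {"orderId": 1, "items": ["v60", "cookie"]})
```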

&lt;p&gt;Teams can focus on delivering value and improving their core capabilities, without having to focus on the complexity of the platform as a whole.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations:&lt;/em&gt;&lt;/strong&gt; Publisher/Subscriber is a great match for event-driven architectures. There are lots of different options for messaging: &lt;a href="https://aws.amazon.com/sns/"&gt;SNS&lt;/a&gt;, &lt;a href="https://aws.amazon.com/kinesis/"&gt;Kinesis&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/services/service-bus/"&gt;Azure’s Service Bus&lt;/a&gt;, &lt;a href="https://cloud.google.com/pubsub/"&gt;Google’s Cloud Pub/Sub&lt;/a&gt;, &lt;a href="https://kafka.apache.org/"&gt;Kafka&lt;/a&gt;, &lt;a href="https://pulsar.apache.org/"&gt;Pulsar&lt;/a&gt;, etc. These messaging services take care of the infrastructure part of pub/sub, but given the asynchronous nature of messaging, all the issues discussed previously (message ordering, duplication, expiration, idempotency, and eventual consistency) should be considered in the implementation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Strangler
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“Systems grow and evolve over time. As complexity increases adding new features can be challenging. Completely replacing a system can take time, and starting from scratch is almost universally a bad idea.&lt;/p&gt;

&lt;p&gt;Migrating gradually to a new system, while keeping the old system to handle the features that haven’t been implemented is a better path.&lt;/p&gt;

&lt;p&gt;But clients need to know about both systems, and update every time a feature has been migrated.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;a href="https://www.martinfowler.com/bliki/StranglerApplication.html"&gt;Strangler pattern&lt;/a&gt; is a technique to gradually migrate legacy systems, popularized by the rise of microservices. In this pattern a service acts as a &lt;em&gt;façade&lt;/em&gt; that intercepts requests from clients and routes them to either the legacy system or the new services.&lt;/p&gt;

&lt;p&gt;Clients continue to use the same interface, unaware of the migration, which minimizes its possible impact and risk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sntrcYr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Audq9hy-EMH0qSNZmjpBvlA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sntrcYr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Audq9hy-EMH0qSNZmjpBvlA.png" alt=""&gt;&lt;/a&gt;ALB routing requests between the legacy app and the new microservice.&lt;/p&gt;

&lt;p&gt;An Application Load Balancer routes clients’ requests to the Orders Service, the first microservice the team implemented. Everything else continues to go to the Legacy application.&lt;/p&gt;
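&lt;p&gt;The façade’s routing rule stays simple: send migrated paths to the new service and everything else to the monolith. A sketch with invented paths and responses:&lt;/p&gt;

```python
MIGRATED_PREFIXES = ("/orders",)  # grows as features are ported over

def new_orders_service(path): return {"backend": "orders-service"}
def legacy_app(path):         return {"backend": "legacy"}

def strangler_facade(path):
    """Route migrated paths to new services; the rest still hits the monolith."""
    if path.startswith(MIGRATED_PREFIXES):
        return new_orders_service(path)
    return legacy_app(path)
```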

&lt;p&gt;Orders has its own data store, and implements all the business logic for Orders. Because some of the features on the legacy app use orders, we need to push the data back to the legacy app to stay in sync (an &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/anti-corruption-layer"&gt;Anti-corruption Layer&lt;/a&gt; of sorts).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--44E7qPCx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALzhxzJ5M0zlNkgFm4SixrA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--44E7qPCx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALzhxzJ5M0zlNkgFm4SixrA.png" alt=""&gt;&lt;/a&gt;New feature — Leaderboard Microservice now available.&lt;/p&gt;

&lt;p&gt;As the project evolves, new features come in and we create new services.&lt;/p&gt;

&lt;p&gt;The Leaderboard Service is now available. It’s a completely new feature that the brand new &lt;em&gt;Engagement Team&lt;/em&gt; created so there’s no need to interact with the Legacy app.&lt;/p&gt;

&lt;p&gt;Teams will continue to create new features and port existing features to new services. When all the required functionality has been phased out of the legacy monolithic app it can be decommissioned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations:&lt;/em&gt;&lt;/strong&gt; The &lt;em&gt;façade&lt;/em&gt; keeps evolving side by side with the new services. Some data stores can potentially be used by both new and legacy services. New services should be structured in a way that they can be intercepted easily. At some point the migration should be complete, and the strangler &lt;em&gt;façade&lt;/em&gt; should either go away, or evolve into a gateway or an adaptor.&lt;/p&gt;

&lt;h4&gt;
  
  
  Queue-based Load Leveling
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“Load can be unpredictable, some services cannot scale when there’s an intermittent heavy load. Peaks in demand can cause overload, flooding downstream services causing them to fail.&lt;/p&gt;

&lt;p&gt;Introducing a queue between the services that acts as a buffer can alleviate the issues. Storing messages and allowing consumers to process the load at their own pace.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Buffering services with the help of a queue is very common. In Microsoft’s Cloud Architecture Patterns it’s called the &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/queue-based-load-leveling"&gt;queue-based load leveling pattern&lt;/a&gt;, Yan Cui calls it &lt;a href="https://read.acloud.guru/applying-the-decoupled-invocation-pattern-with-aws-lambda-2f5f7e78d18"&gt;Decoupled Invocation&lt;/a&gt;, and Jeremy Daly calls it &lt;a href="https://www.jeremydaly.com/serverless-microservice-patterns-for-aws/#scalablewebhook"&gt;the Scalable Webhook&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A queue decouples &lt;em&gt;tasks&lt;/em&gt; from &lt;em&gt;services,&lt;/em&gt; creating a buffer that holds requests for less scalable backends or third-party services.&lt;/p&gt;

&lt;p&gt;Regardless of the volume of requests the processing load is driven by the consumers. Low concurrency and batch size can control the workload.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4l--rYJT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A-6QcmhOklq1FcD81Fof_iQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4l--rYJT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A-6QcmhOklq1FcD81Fof_iQ.png" alt=""&gt;&lt;/a&gt;A migration process that consumes items really fast from Elasticsearch, pushes them in a queue while a worker slowly feeds them one by one into a Legacy app.&lt;/p&gt;

&lt;p&gt;In the example we have a Migration worker process that reads the content of an Elasticsearch index. Elasticsearch is fast. The worker can read thousands of Articles and fetch their dependencies (Authors, Categories, Tags, Shows, Assets, etc.) in less than a second.&lt;/p&gt;

&lt;p&gt;On the right side we have a service that needs to ingest all the content. But before we can create an Article we have to create all its relationships in a specific order, double-checking whether they exist and need to be updated, which is slower. Even if we scale the service horizontally (and we did), the relational database behind it becomes the bottleneck.&lt;/p&gt;

&lt;p&gt;After a point (~100k to 500k Articles) querying the database slows down to a crawl because there’s some locking on the &lt;a href="http://archive.oreilly.com/oreillyschool/courses/Rails2/many_to_many.html"&gt;Has-and-Belongs-to-Many&lt;/a&gt; relationship tables.&lt;/p&gt;

&lt;p&gt;By limiting the batch size and the number of workers running concurrently we can maintain a slow but steady flow that reduces lock contention in the database.&lt;/p&gt;
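&lt;p&gt;The leveling mechanics can be sketched with a plain queue drained in bounded batches; the batch size caps how hard the slow side is pushed (the numbers are illustrative):&lt;/p&gt;

```python
from collections import deque

def drain(queue, batch_size, process_batch):
    """Consume at the worker's own pace, one bounded batch at a time."""
    while queue:
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        process_batch(batch)

work = deque(range(10))  # the producer filled the buffer in a burst
seen_batches = []
drain(work, batch_size=3, process_batch=seen_batches.append)
```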

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KQkH5O55--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AH_dlsZOyQcka3GbEPiAh-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KQkH5O55--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AH_dlsZOyQcka3GbEPiAh-Q.png" alt=""&gt;&lt;/a&gt;Buffering requests of a busy public API.&lt;/p&gt;

&lt;p&gt;Another common example is using an SQS queue to buffer API requests to amortize spikes in traffic, like in the diagram above.&lt;/p&gt;

&lt;p&gt;The endpoint returns &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202"&gt;202 — Accepted&lt;/a&gt; to the client, with a transaction id and a location for the result. On the client-side the UI can give feedback to the user emulating the expected behavior.&lt;/p&gt;
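&lt;p&gt;The accepting endpoint only enqueues the work and acknowledges it. A sketch of the response shape, with made-up field names and paths:&lt;/p&gt;

```python
import uuid
from collections import deque

pending = deque()  # stands in for the SQS buffer

def accept_request(payload):
    """Enqueue the work and answer 202 with a transaction id and result location."""
    tx_id = str(uuid.uuid4())
    pending.append({"id": tx_id, "payload": payload})
    return {
        "statusCode": 202,
        "headers": {"Location": "/results/" + tx_id},
        "body": {"transactionId": tx_id},
    }
```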

&lt;p&gt;The service can process the requests in the background at its own pace. Even if there are long-running processes involved, an increase in load on the client side will never affect the throughput and responsiveness of the system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations:&lt;/em&gt;&lt;/strong&gt; This pattern is not helpful when services require a synchronous response (waiting for a reply). It’s also important to note that not using concurrency limits can diminish the effectiveness of the pattern: AWS Lambda and Kinesis can scale quickly, overwhelming downstream services that are less elastic or slower to scale. Zalando’s API Guidelines include a &lt;a href="https://opensource.zalando.com/restful-api-guidelines/#events"&gt;full section&lt;/a&gt; about Events that covers some important considerations for this pattern.&lt;/p&gt;

&lt;h4&gt;
  
  
  Read-heavy reporting engine
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“It’s very common with read-heavy applications to hit the limits of downstream data engines that are not specialized for the different querying patterns that clients use.&lt;/p&gt;

&lt;p&gt;Caching data and creating specialized views of the most queried data can help mitigate the load impact of a read-heavy service.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Previous patterns help address pushing data and events from one service to others, optimizing scale for write-heavy services. This pattern optimizes the data for different &lt;em&gt;accessing&lt;/em&gt; and &lt;em&gt;querying patterns&lt;/em&gt; that clients need, optimizing scale for read-heavy services.&lt;/p&gt;

&lt;p&gt;Most applications are read-intensive. This is particularly true for our Big Media clients, where there are far fewer users generating content than users consuming it. The ratio between them can be huge, like 1:100,000.&lt;/p&gt;

&lt;p&gt;Caching data and creating specialized views of the most frequent access patterns help services scale effectively.&lt;/p&gt;

&lt;p&gt;Caching means temporarily copying frequently used data into memory or a shared repository. &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/cache-aside"&gt;Caching data&lt;/a&gt; is one of the most frequently used strategies to improve performance and scale reads.&lt;/p&gt;
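&lt;p&gt;A minimal cache-aside sketch: check the cache first, fall back to the store on a miss, and populate the cache for the next read (a dict stands in for the real database):&lt;/p&gt;

```python
cache = {}
database = {"article:1": {"title": "Serverless Patterns"}}
misses = []

def get_article(key):
    """Cache-aside read: serve from cache, loading from the store on a miss."""
    if key in cache:
        return cache[key]
    misses.append(key)  # track which reads actually hit the database
    value = database[key]
    cache[key] = value
    return value
```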

&lt;p&gt;In &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/materialized-view"&gt;Materialized views&lt;/a&gt; the data is replicated and transformed in a way that is optimized for querying and displaying. Materialized views can be new tables or completely different data stores where the data is mapped to be displayed in a new format or limited to a specific subset of the data.&lt;/p&gt;

&lt;p&gt;Materialized views are also useful to “bridge” different stores to take advantage of their stronger capabilities.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/index-table"&gt;Index Tables&lt;/a&gt; the data is replicated in a new table using specialized indexes specific to common queries. Specialized indexes can be composite keys, secondary keys, and partially denormalized data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ivbT47Je--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AhibCn1a_yqdkprm8TkgCbQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ivbT47Je--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AhibCn1a_yqdkprm8TkgCbQ.png" alt=""&gt;&lt;/a&gt;Using Materialized Views and Index Tables for popular querying patterns on the social app.&lt;/p&gt;

&lt;p&gt;Dynamo streams and lambda functions are the perfect tools to create specialized views. In the example we have three endpoints — search, tweet and timeline. Each one needs a slightly different querying pattern where the data needs to be optimized in a particular way.&lt;/p&gt;

&lt;p&gt;/search queries the Tweets Index in Elasticsearch, providing things like phonetic search, typo tolerance, related terms, and suggestions. There is no need to index all the data from the original tweet; it might only include the text, location, media url (for a pretty preview), and hashtags. We use the stream to trigger a lambda on TweetCreated that strips all the data we don’t need and indexes the tweet.&lt;/p&gt;
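&lt;p&gt;A sketch of that stripping lambda, assuming the stream record has already been unmarshalled into plain JSON (a real handler would decode record.dynamodb.NewImage first); the field names and the injected indexTweet writer are illustrative, not the actual implementation:&lt;/p&gt;

```javascript
// Keep only what /search needs from a tweet; everything else stays in
// the source-of-truth table. Field names are illustrative assumptions.
function toSearchDocument(tweet) {
  return {
    id: tweet.id,
    text: tweet.text,
    location: tweet.location,
    mediaUrl: tweet.mediaUrl, // kept so results can show a pretty preview
    hashtags: tweet.hashtags || [],
  };
}

// Stream-triggered handler; indexTweet would wrap the Elasticsearch
// client (injected here so the flow stays visible).
async function onTweetCreated(records, indexTweet) {
  for (const tweet of records) {
    await indexTweet(toSearchDocument(tweet));
  }
}
```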

&lt;p&gt;/timeline is created from the most interesting tweets on the network, and the activity of my connections.&lt;/p&gt;

&lt;p&gt;We use a Dynamo table to keep the Top Tweets — an &lt;em&gt;indexed table&lt;/em&gt; limited to the 1000 most viewed items. Tweets are updated via the stream on the TweetViewed event. A lambda function receives the event, queries the Tweets Collection, and saves the result.&lt;/p&gt;
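&lt;p&gt;The bookkeeping for that indexed table can be sketched as a pure function; a real implementation would persist the result to the Dynamo table instead of returning an array, and the cap mirrors the 1000-item limit above:&lt;/p&gt;

```javascript
// Maintain the Top Tweets indexed table: insert or refresh one tweet's
// view count and keep only the N most viewed items.
const MAX_TOP_TWEETS = 1000;

function updateTopTweets(topTweets, tweet) {
  return topTweets
    .filter((t) => t.id !== tweet.id) // drop a previous entry for this tweet
    .concat(tweet)
    .sort((a, b) => b.views - a.views)
    .slice(0, MAX_TOP_TWEETS);
}
```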

&lt;p&gt;Getting the activity of someone’s connection is easier on a Graph Database like &lt;a href="https://aws.amazon.com/neptune/"&gt;Neptune&lt;/a&gt;. Another lambda triggered by the TweetCreated event creates a record on Neptune maintaining activities for our connection’s streams of tweets.&lt;/p&gt;

&lt;p&gt;In Microsoft’s Cloud Architecture this is a mix of different patterns — Data pipeline, &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/cache-aside"&gt;Cache-aside&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/materialized-view"&gt;Materialized views&lt;/a&gt;, and &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/index-table"&gt;Index Table&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations:&lt;/em&gt;&lt;/strong&gt; Handling caches can be &lt;a href="https://en.wikipedia.org/wiki/Cache_invalidation"&gt;hard&lt;/a&gt;; remember to follow industry &lt;a href="https://en.wikipedia.org/wiki/Cache_invalidation"&gt;best practices&lt;/a&gt;. Maintaining materialized views and index tables can be complicated. Duplicating the data adds cost, effort, and logic. With very large datasets it can be very difficult to maintain consistency, and keeping the data in sync could slow down the system. Views and index tables are only eventually consistent.&lt;/p&gt;

&lt;h4&gt;
  
  
  Streams and Pipelines
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“A continuous stream processor that captures large volumes of events or data, and distributes them to different services or data stores as fast as they come.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An important feature we see often is processing streams of data as fast as the data is being generated, scaling quickly to meet the demand of large volumes of events. Downstream services receive the stream and apply business logic to transform, analyze, or distribute the data.&lt;/p&gt;

&lt;p&gt;Common examples include capturing user behavior (clickstreams and UI interactions), data for analytics, and data from IoT sensors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--04y0CDd0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A0oPw3DvpApBTZ50HNrnw-g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--04y0CDd0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A0oPw3DvpApBTZ50HNrnw-g.png" alt=""&gt;&lt;/a&gt;Using Kinesis Streams, Analytics and Athena to build a data pipeline.&lt;/p&gt;

&lt;p&gt;In the example a &lt;a href="https://aws.amazon.com/kinesis/data-streams/"&gt;Kinesis Stream&lt;/a&gt; receives events from Service A. The data is transformed with lambda functions, stored in DynamoDB for fast reading, and indexed in Elasticsearch for a good search user-experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/kinesis/data-analytics/"&gt;Kinesis Analytics&lt;/a&gt; provides fast &lt;a href="https://docs.aws.amazon.com/kinesisanalytics/latest/dev/streaming-sql-concepts.html"&gt;querying data&lt;/a&gt; in the stream in real time. With the S3 integration all the data is stored for future analysis and real insight. &lt;a href="https://aws.amazon.com/athena/"&gt;Athena&lt;/a&gt; provides querying for all the historical data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations:&lt;/em&gt;&lt;/strong&gt; Stream processing can be expensive; it might not be worth it if the dataset is small or there are only a few events. All cloud providers have offerings for data pipelines, stream processing, and analytics, but they might not integrate well with services that are not part of their ecosystem.&lt;/p&gt;

&lt;h4&gt;
  
  
  Fan-out and Fan-in
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“Large jobs or tasks can easily exceed the execution time limit on Lambda functions. Using a divide-and-conquer strategy can help mitigate the issue.&lt;/p&gt;

&lt;p&gt;The work is split between different lambda workers. Each worker will process the job asynchronously and save its subset of the result in a common repository.&lt;/p&gt;

&lt;p&gt;The final result can be gathered and stitched together by another process or it can be queried from the repository itself.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Fan-out&lt;/strong&gt; and &lt;strong&gt;Fan-in&lt;/strong&gt; refer to breaking a task into subtasks, executing multiple functions concurrently, and then aggregating the results.&lt;/p&gt;

&lt;p&gt;They’re two patterns that are used together. In the Fan-out pattern, messages are delivered to workers, each receiving a partitioned subtask of the original task. The Fan-in pattern collects the results of all individual workers, aggregating them, storing them, and sending an event signaling that the work is done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mwrdm2Hq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A8bBnJqOLyZmkiwNeSpUVHQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mwrdm2Hq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A8bBnJqOLyZmkiwNeSpUVHQ.png" alt=""&gt;&lt;/a&gt;The backend engine of a Digital Assets Management System (DAM) — A classic example of serverless workloads with a twist.&lt;/p&gt;

&lt;p&gt;Resizing images is &lt;a href="https://cloudonaut.io/serverless-image-resizing-at-any-scale/"&gt;one&lt;/a&gt; of the &lt;a href="https://read.acloud.guru/serverless-image-optimization-and-delivery-510b6c311fe5"&gt;most&lt;/a&gt; &lt;a href="https://sketchboard.io/blog/serverless-image-resize-with-amazon-lambda"&gt;common&lt;/a&gt; &lt;a href="https://aws.amazon.com/solutions/serverless-image-handler/"&gt;examples&lt;/a&gt; on using serverless. This is the fan-out approach.&lt;/p&gt;

&lt;p&gt;A client uploads a raw image to the Assets S3 Bucket. API Gateway has an integration to handle uploading directly to S3.&lt;/p&gt;

&lt;p&gt;A lambda function is triggered by S3. Having one lambda function to do all the work can lead to limit issues. Instead, the lambda pushes an Asset Created event to SNS so our processing lambdas get to work.&lt;/p&gt;

&lt;p&gt;There are three lambda functions for resizing — on the right. Each creates a different image size, writing the result on the Renditions bucket.&lt;/p&gt;
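&lt;p&gt;The sizes and the key scheme for those workers might look like the sketch below; the names are illustrative assumptions, and the actual pixel work (with a library like sharp or ImageMagick) is left out:&lt;/p&gt;

```javascript
// Each worker is subscribed to the Asset Created SNS topic and
// produces one rendition size, writing it to the Renditions bucket.
// The sizes and the key naming scheme are illustrative assumptions.
const RENDITIONS = [
  { name: 'thumb', width: 150 },
  { name: 'medium', width: 800 },
  { name: 'large', width: 1600 },
];

// assets/photo.jpg -> renditions/photo-thumb.jpg
function renditionKey(sourceKey, renditionName) {
  const file = sourceKey.split('/').pop();
  const dot = file.lastIndexOf('.');
  const base = dot === -1 ? file : file.slice(0, dot);
  const ext = dot === -1 ? 'jpg' : file.slice(dot + 1);
  return `renditions/${base}-${renditionName}.${ext}`;
}
```

&lt;p&gt;A deterministic key scheme like this is what lets the fan-in side later check whether every rendition exists.&lt;/p&gt;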

&lt;p&gt;The lambda on the bottom reads the metadata from the original source — location, author, date, camera, size, etc. and adds the new asset to the DAM’s Assets Table on DynamoDB, but doesn’t mark it as ready for use. Smart auto-tagging, text extraction and content moderation could be added to processing lambdas with &lt;a href="https://aws.amazon.com/rekognition/image-features/"&gt;Rekognition&lt;/a&gt; later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qAx3mzoS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AQMsQOv8R3JB4XfLx2WJ80Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qAx3mzoS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AQMsQOv8R3JB4XfLx2WJ80Q.png" alt=""&gt;&lt;/a&gt;The Manager App consumes assets via the DAM API.&lt;/p&gt;

&lt;p&gt;For the Fan-in part of the DAM, we have a lambda function listening to the renditions bucket. When there’s a change on the bucket, it checks whether all renditions are ready and marks the asset as ready for use.&lt;/p&gt;
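&lt;p&gt;That completeness check can be sketched like this. The expected rendition names mirror the fan-out side, and listKeys / markReady are hypothetical stand-ins for an S3 prefix listing and a DynamoDB update:&lt;/p&gt;

```javascript
// Fan-in: on every change in the renditions bucket, check whether all
// expected renditions for the asset exist; only then flip it to ready.
const EXPECTED_RENDITIONS = ['thumb', 'medium', 'large'];

function isComplete(assetId, existingKeys) {
  return EXPECTED_RENDITIONS.every((name) =>
    existingKeys.includes(`renditions/${assetId}-${name}.jpg`)
  );
}

async function onRenditionWritten(assetId, listKeys, markReady) {
  // listKeys would wrap s3.listObjectsV2 on the renditions prefix;
  // markReady would update the Assets Table in DynamoDB.
  if (isComplete(assetId, await listKeys(assetId))) {
    await markReady(assetId);
  }
}
```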

&lt;p&gt;With the event-driven nature of serverless, and given the resource limits of lambda functions we favor this type of choreography &lt;a href="https://specify.io/concepts/microservices#choreography"&gt;over orchestration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Considerations:&lt;/em&gt;&lt;/strong&gt; Not all workloads can be split into pieces small enough for lambda functions. Failure should be considered in both flows; otherwise a task might stay unfinished forever. Leverage lambda’s &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.html"&gt;retry strategies&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/dlq.html"&gt;Dead-Letter Queues&lt;/a&gt;. Any task that can take over 15 minutes should use containers instead of lambda functions, sticking to the &lt;em&gt;choreography approach&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  More resources 📚
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft’s Azure Architecture Center has an extensive list of &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/"&gt;Cloud Patterns&lt;/a&gt;, a solid guide for &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/best-practices/api-design"&gt;best practices&lt;/a&gt;, a very good overview of &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/antipatterns/"&gt;performance anti-patterns&lt;/a&gt; and some &lt;a href="https://docs.microsoft.com/en-us/dotnet/standard/serverless-architecture/serverless-design-examples"&gt;examples&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;AWS also has great resources on &lt;a href="https://docs.aws.amazon.com/aws-technical-content/latest/microservices-on-aws/distributed-systems-components.html"&gt;Distributed Systems Components&lt;/a&gt;, &lt;a href="https://aws.amazon.com/serverless/"&gt;Serverless in general&lt;/a&gt;, examples in the &lt;a href="https://aws.amazon.com/serverless/serverlessrepo/"&gt;Serverless Application Repository&lt;/a&gt;, and several &lt;a href="https://aws.amazon.com/about-aws/events/monthlywebinarseries/on-demand/?awsf.ott-on-demand=categories%23serverless&amp;amp;awsf.ott-on-demand-master=categories%23serverless"&gt;webinar videos&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/611b6bf539d2"&gt;Jeremy Daly&lt;/a&gt; has an extensive list of &lt;a href="https://medium.com/@jeremydaly/serverless-microservice-patterns-for-aws-6dadcd21bc02"&gt;Serverless Patterns&lt;/a&gt;. He’s the author of &lt;a href="https://www.jeremydaly.com/projects/lambda-api/"&gt;Lambda API&lt;/a&gt; a lightweight API framework, and the &lt;a href="https://www.jeremydaly.com/projects/serverless-mysql/"&gt;Serverless MySQL&lt;/a&gt; &lt;a href="https://www.npmjs.com/package/serverless-mysql"&gt;module&lt;/a&gt; for managing connections on your lambda functions.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/d00f1e6b06a2"&gt;Yan Cui&lt;/a&gt; has a tonne of content about &lt;a href="https://theburningmonk.com/topics/programming/aws/serverless/"&gt;serverless&lt;/a&gt;. Including &lt;a href="https://theburningmonk.com/topics/programming/aws/serverless/"&gt;Design patterns&lt;/a&gt;, &lt;a href="https://theburningmonk.com/topics/programming/performance-programming/"&gt;performance&lt;/a&gt;, &lt;a href="https://www.realworlddevops.com/episodes/the-business-value-of-serverless-with-yan-cui"&gt;lots&lt;/a&gt; of &lt;a href="https://www.protego.io/the-serverless-show-ft-yan-cui-do-we-ever-learn/"&gt;points&lt;/a&gt; of &lt;a href="https://www.trek10.com/blog/think-faas-serverless-in-production/"&gt;view&lt;/a&gt;, and the best &lt;a href="https://productionreadyserverless.com/"&gt;training on serverless&lt;/a&gt;. He’s must-follow on the serverless scene.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/9f7737a6c0e7"&gt;Mike Roberts&lt;/a&gt; &lt;a href="https://martinfowler.com/articles/serverless.html"&gt;writes&lt;/a&gt; and &lt;a href="http://thoughtworks.libsyn.com/diving-into-serverless-architecture"&gt;talks&lt;/a&gt; about serverless in depth.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/4858a304914"&gt;Sascha ☁ Möllering&lt;/a&gt; talks about &lt;a href="https://www.infoq.com/presentations/serverless-architecture-patterns"&gt;Serverless Architectural Patterns and Best Practices&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/de7bb49141c8"&gt;Rob Gruhl&lt;/a&gt;’s &lt;a href="https://medium.com/tech-at-nordstrom/adventures-in-event-sourced-architecture-part-1-cc21d06187c7"&gt;Event-sourcing at Nordstrom&lt;/a&gt;, the post about &lt;a href="https://read.acloud.guru/serverless-event-sourcing-at-nordstrom-ea69bd8fb7cc"&gt;Hello Retail!&lt;/a&gt;, and the video about &lt;a href="https://www.youtube.com/watch?v=O7PTtm_3Os4"&gt;their architecture&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/280dd9543ba9"&gt;swardley&lt;/a&gt; and &lt;a href="https://medium.com/u/b8d1e8a0e0d8"&gt;Forrest Brazeal&lt;/a&gt; discussion about &lt;a href="https://read.acloud.guru/simon-wardley-is-a-big-fan-of-containers-despite-what-you-might-think-18c9f5352147"&gt;Containers vs Serverless&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Gregor Hohpe’s &lt;a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/"&gt;Enterprise Integration Patterns&lt;/a&gt; is a catalog of 65 technology-independent patterns, some of which he’s ported to &lt;a href="https://www.enterpriseintegrationpatterns.com/ramblings/google_cloud_functions.html"&gt;serverless&lt;/a&gt; on Google’s cloud.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/48b36b89f6d6"&gt;Twitch&lt;/a&gt;’s &lt;a href="https://www.twitch.tv/videos/335360204?collection=8zVaW4B4YhU1pA"&gt;Design Patterns for Twitch Scale&lt;/a&gt; touches on several interesting points about patterns that can apply for scale.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/u/73d9ee9c08c8"&gt;Capital One Tech&lt;/a&gt; &lt;a href="https://medium.com/capital-one-tech/microservices-when-to-react-vs-orchestrate-c6b18308a14c"&gt;“When to React vs. Orchestrate”&lt;/a&gt; is an excellent post about choreography vs orchestration.&lt;/li&gt;
&lt;/ul&gt;


</description>
      <category>awslambda</category>
      <category>serverless</category>
      <category>softwareengineering</category>
      <category>serverlessarchitect</category>
    </item>
    <item>
      <title>A tale of a dying monolith: The complexity of replacing something simple</title>
      <dc:creator>Eduardo Romero</dc:creator>
      <pubDate>Tue, 25 Sep 2018 14:06:02 +0000</pubDate>
      <link>https://dev.to/foxteck/a-tale-of-a-dying-monolith-2j1d</link>
      <guid>https://dev.to/foxteck/a-tale-of-a-dying-monolith-2j1d</guid>
      <description>

&lt;h3&gt;A tale of a dying monolith: The complexity of replacing something simple&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pNyTcfXc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AD6ltU71DfGMyGkYGPaeBYA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pNyTcfXc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AD6ltU71DfGMyGkYGPaeBYA.jpeg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/photos/wgjG86EuubE?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Thor Alvis&lt;/a&gt; on &lt;a href="https://unsplash.com/@terminath0r?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve been working on Big Media for almost two years now. Two of my favorite projects were building publishing platforms. Both leveraged the Headless CMS architecture and developed a platform around it.&lt;/p&gt;

&lt;p&gt;While the stacks were entirely different, the patterns between architectural components seemed the same.&lt;/p&gt;

&lt;p&gt;I started to identify the same patterns in older systems I had worked with before, and I wanted to know if I could apply the same patterns to a legacy system.&lt;/p&gt;

&lt;p&gt;A friend was building new versions of the Android and iOS Apps for an old CakePHP app in the Healthcare industry I had helped develop almost five years ago. He wanted to add new features to the apps and needed the backend to support the new features.&lt;/p&gt;

&lt;p&gt;The CakePHP App interacted with data coming from an external system. It was the source of truth for most of the entities in the app. Both systems were tightly coupled, sharing data via database tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QoAOWD-k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ad-yytiQAYljF5Y8zoa9h6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QoAOWD-k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ad-yytiQAYljF5Y8zoa9h6g.png" alt=""&gt;&lt;/a&gt;CakePHP Monolith tightly-coupled to an external system via the Database.&lt;/p&gt;

&lt;p&gt;Data for things like product names, system ids, pricing, discounts, product status, user accounts, membership status, and stock levels come from the external system.&lt;/p&gt;

&lt;p&gt;Doctors and practitioners would log in to the CakePHP App and add extra information to the data coming from the external system.&lt;/p&gt;

&lt;p&gt;Models for product descriptions, pictures, calendars, procedures, locations, profiles, etc. only existed in the App, and that data was created and managed there. The App would render the content and expose the data via a REST API to mobile and third-party apps. One big, mighty monolith from the good ole’ days.&lt;/p&gt;

&lt;p&gt;I convinced my friend to let me start building experiments around the project. I began &lt;a href="https://www.martinfowler.com/bliki/StranglerApplication.html"&gt;&lt;em&gt;strangling&lt;/em&gt;&lt;/a&gt; the monolith. After several iterations, this is the resulting architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CqDq3gF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3Rw5Hd7AFof_Uq7fOA3ACw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CqDq3gF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3Rw5Hd7AFof_Uq7fOA3ACw.png" alt=""&gt;&lt;/a&gt;The New Stack — A Highly decoupled architecture with a Headless CMS, an external search engine and real-time API.&lt;/p&gt;

&lt;h3&gt;The Platform — 🕸&lt;/h3&gt;

&lt;p&gt;The new Platform is an event-driven, distributed system, with a FeathersJS RESTful layer in the middle. I offloaded all content-related responsibilities to Prismic.io, and the CMS functionality on CakePHP was deprecated. I used Algolia to power the search engine on all apps, including the API.&lt;/p&gt;

&lt;p&gt;Based on the events of the API I can do things like index new content or remove content from the search engine, update pricing, stock levels, enable or disable content, and bake in some default data in case an entity does not exist on the CMS yet.&lt;/p&gt;

&lt;p&gt;As a nice bonus, the mobile apps can now give notifications in real time to the users about things that are happening in other parts of the platform.&lt;/p&gt;

&lt;h4&gt;No shared databases — 💥&lt;/h4&gt;

&lt;p&gt;I started by removing the database dependency between systems. Now all systems integrate via RESTful API endpoints.&lt;/p&gt;

&lt;p&gt;The coupled integration was the source of a lot of pain-points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Schema changes from external updates&lt;/strong&gt; : Whenever the external system’s schema was updated, the backend needed to be updated too. Some of those changes could trickle all the way up to the mobile apps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cascaded down-time&lt;/strong&gt; : Because the external system was the source of truth for things like membership status and identity, whenever it went down it would cause partial failures in other systems. Local cache helps, but only to some extent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch syncs&lt;/strong&gt; : Because the systems integrated via the database, some things needed synchronization with daily jobs. Stuff like price lists, stock levels, status (i.e., discontinued items), and search indexing on Lucene all had to run every day at the end or beginning of the day.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ownership&lt;/strong&gt; : Because there was very little visibility when something went wrong, there was a lot of blame-game between maintainers of the external system and the backend.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now the external system pushes changes via the REST API. They exposed a SOAP API for User Membership which the Feathers backend consumes and proxies as JSON API for external clients.&lt;/p&gt;

&lt;h4&gt;The Headless CMS — 🖋&lt;/h4&gt;

&lt;p&gt;I modeled all Content entities in Prismic. Users can now create and manage content and see it magically appear on all apps in near real-time.&lt;/p&gt;

&lt;p&gt;I kept Cake’s frontend as is. Keeping it as the rendering engine means no extra effort is needed in the frontend. In the future, they want to migrate the frontend to VueJS or React.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EQHbfP72--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEQuvgF_2UDjHO-HeVULang.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EQHbfP72--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEQuvgF_2UDjHO-HeVULang.png" alt=""&gt;&lt;/a&gt;Modeling Content in Prismic (sample data model)&lt;/p&gt;

&lt;p&gt;I set up webhooks to sync that data back to CakePHP models. The content coming from Prismic will update existing models, handle external assets (backing them up locally) and update current state. Eventually, all content will exist in Prismic.&lt;/p&gt;
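&lt;p&gt;A minimal sketch of such a webhook endpoint, assuming Prismic posts a shared secret and a list of affected document ids (the payload shape is simplified, and syncDocument is a hypothetical helper that fetches the document from Prismic’s API and upserts the CakePHP model):&lt;/p&gt;

```javascript
// Prismic calls this Express-style endpoint on publish; verify the
// shared secret, then sync each affected document back into the
// CakePHP models. Payload shape and syncDocument are assumptions.
function makeWebhookHandler(sharedSecret, syncDocument) {
  return async (req, res) => {
    if (req.body.secret !== sharedSecret) {
      return res.status(401).end(); // reject forged calls
    }
    for (const id of req.body.documents || []) {
      await syncDocument(id); // fetch from Prismic, upsert local model
    }
    res.status(200).end();
  };
}
```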

&lt;p&gt;From previous experience, data migration is always a lot of work. In this case no data migration is needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7S8BK7SC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiQzHdPP6blK8mjp8PSLCpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7S8BK7SC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiQzHdPP6blK8mjp8PSLCpg.png" alt=""&gt;&lt;/a&gt;JSON Response from Prismic’s API (from same model above).&lt;/p&gt;

&lt;h4&gt;The Search Engine — ️🔍&lt;/h4&gt;

&lt;p&gt;Working with Algolia is a breeze. It’s blazing fast and easy to use. Having an event-driven architecture turned out to be extremely beneficial. Events on the API use &lt;a href="https://www.algolia.com/doc/api-client/javascript/getting-started/"&gt;NodeJS’ client&lt;/a&gt; to index the data asynchronously.&lt;/p&gt;
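&lt;p&gt;The wiring can be as small as subscribing to the API’s events and pushing the change to Algolia off the request path. saveObject and deleteObject are real Algolia client methods; the index is injected here (rather than constructed from the Algolia SDK) so the flow stays visible in a sketch:&lt;/p&gt;

```javascript
// Keep the search index in sync off the hot path: whenever the API
// emits created/removed, mirror the change into Algolia.
function registerSearchSync(emitter, index) {
  emitter.on('created', (doc) =>
    index.saveObject({ objectID: doc.id, ...doc }) // Algolia requires objectID
  );
  emitter.on('removed', (doc) => index.deleteObject(doc.id));
}
```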

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oHWjFzQU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AX4KslL8YmYKF3SawbqT17Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oHWjFzQU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AX4KslL8YmYKF3SawbqT17Q.png" alt=""&gt;&lt;/a&gt;Search response time — Blazing fast, under 10ms.&lt;/p&gt;

&lt;p&gt;In a few milliseconds, the search features on all applications will reflect the changes and show (or hide) the same results. SDKs for PHP, Swift, and Java made the integration easy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pt8E4GSf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2yPg3LOAVmbuDrkwsHQ-AA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pt8E4GSf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2yPg3LOAVmbuDrkwsHQ-AA.png" alt=""&gt;&lt;/a&gt;Search Operations — Averages to 88k monthly operations.&lt;/p&gt;

&lt;p&gt;For existing clients that were not updated, the CakePHP search continued to work as before, now powered by Algolia behind the scenes. This means that other integrations, like Chrome’s Omnibox search, continued to work.&lt;/p&gt;

&lt;h4&gt;Continuous Deployment — 🚀&lt;/h4&gt;

&lt;p&gt;Having more pieces in the architecture means that deploying new versions of the platform can no longer be done by hand. I set up pipelines on &lt;a href="https://buddy.works/"&gt;&lt;em&gt;Buddy&lt;/em&gt;&lt;/a&gt; to handle the deployments automatically.&lt;/p&gt;

&lt;p&gt;The CakePHP app was already being deployed to an EC2 instance on AWS. I just had to automate the process (and added &lt;a href="https://buddy.works/blog/introducing-atomic-deployments"&gt;atomic deployments&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IDWk_0Pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApR3dB9EC9KU3wPsyfJGKTw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IDWk_0Pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApR3dB9EC9KU3wPsyfJGKTw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The FeathersJS app is deployed to the cloud with Zeit’s &lt;a href="https://zeit.co/now"&gt;Now&lt;/a&gt;. A service for immutable infrastructure that makes it simple to deploy features quickly without hassle (or fear).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QjlaQwhr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AJ1bKV4gUxhpNxCa8mdFo8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QjlaQwhr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AJ1bKV4gUxhpNxCa8mdFo8w.png" alt=""&gt;&lt;/a&gt;NodeJS Deployment Pipeline with Now&lt;/p&gt;

&lt;p&gt;Merging a branch will create a new container tied to a new domain name where you can do any tests you need.&lt;/p&gt;

&lt;p&gt;If everything works as expected, the last step will alias the current deploy to the backend’s public DNS name and it will automatically start receiving live traffic. It will also automatically scale the app and make sure it’s running in all available &lt;a href="https://zeit.co/cdn"&gt;regions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AsaBIAj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AytMmI0AOgDf36g9zmuXNxQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AsaBIAj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AytMmI0AOgDf36g9zmuXNxQ.png" alt=""&gt;&lt;/a&gt;now scale&lt;/p&gt;

&lt;p&gt;The configuration for aliasing to production and the scaling rules are set in the repository as code. Things like the database connection and the access keys for third-party services are set as environment variables that are injected into Now from the pipeline.&lt;/p&gt;

&lt;p&gt;The MySQL database lives on Amazon’s RDS. The RethinkDB database is hosted on &lt;a href="https://www.compose.com/databases/rethinkdb"&gt;&lt;em&gt;compose.io&lt;/em&gt;&lt;/a&gt;, using a three-node cluster configuration plus a proxy portal node (and automated backups).&lt;/p&gt;

&lt;h4&gt;Debugging and Tracing —🕵🐛&lt;/h4&gt;

&lt;p&gt;Breaking the monolith apart made the overall architecture easier to understand, maintain, and evolve. But at the same time, it made debugging very hard. Trying to figure out what’s going on when an event crosses system boundaries was very challenging.&lt;/p&gt;

&lt;p&gt;I already had some idea about observability and traceability because of my experience with AWS Lambda, X-Ray, and IOPipe. I really like IOPipe, but it only works for lambda. I remembered reading great insight on observability via &lt;a href="https://medium.com/u/5587d135a397"&gt;Charity Majors&lt;/a&gt;’ &lt;a href="https://twitter.com/mipsytipsy"&gt;twitter&lt;/a&gt;. I decided to try Honeycomb.io.&lt;/p&gt;

&lt;p&gt;I instrumented the backend and the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DzB7sUJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AR902TgSBNMZ_ipuhWT69PA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DzB7sUJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AR902TgSBNMZ_ipuhWT69PA.png" alt=""&gt;&lt;/a&gt;Request per second (purple) and Releases from GitHub (blue).&lt;/p&gt;

&lt;p&gt;Feathers uses ExpressJS under the hood, so I just had to add their &lt;a href="https://www.honeycomb.io/blog/2018/05/the-fastest-most-direct-route-to-instrumented-code-a-honeycomb-beeline/"&gt;beeline for NodeJS&lt;/a&gt; and HTTP requests were handled out of the box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9OrHqlWt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/868/1%2AxAwhcEaIseApND9I0t4uig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9OrHqlWt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/868/1%2AxAwhcEaIseApND9I0t4uig.png" alt=""&gt;&lt;/a&gt;NodeJS Honeycomb Beeline&lt;/p&gt;

&lt;p&gt;I used &lt;em&gt;before&lt;/em&gt; and &lt;em&gt;after&lt;/em&gt; &lt;a href="https://docs.feathersjs.com/api/hooks.html"&gt;&lt;em&gt;hooks&lt;/em&gt;&lt;/a&gt; to add extra context to the tracing information. That helped get better insight into what was going on in the platform and made it easier to build meaningful queries and create visualizations with values that made sense business-wise.&lt;/p&gt;
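&lt;p&gt;The hooks themselves aren’t shown here. As a rough sketch (field names are made up, and &lt;em&gt;beeline.addContext&lt;/em&gt; is assumed from Honeycomb’s NodeJS beeline), a before hook could look like this, with the tracing call injected so the hook stays easy to test:&lt;/p&gt;

```javascript
// Sketch of a Feathers "before" hook that adds business context to a trace.
// `record` stands in for a tracing call such as beeline.addContext (assumed
// API from Honeycomb's NodeJS beeline); the field names are illustrative.
const traceContext = (record) => (context) => {
  record({
    'app.service': context.path,                        // e.g. "articles"
    'app.method': context.method,                       // e.g. "find", "create"
    'app.provider': context.params.provider || 'internal',
  });
  return context;
};

// In the real app this would be registered app-wide, e.g.:
//   app.hooks({ before: { all: [traceContext(beeline.addContext)] } });
module.exports = traceContext;
```

Injecting the recorder keeps the hook decoupled from any particular tracing library.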

&lt;p&gt;There is no beeline for PHP yet. But the &lt;a href="https://docs.honeycomb.io/api/events/"&gt;Events API&lt;/a&gt; can be used with all other languages.&lt;/p&gt;
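&lt;p&gt;For illustration, an event is just an authenticated JSON POST. A minimal sketch of building one from NodeJS (dataset and field names are made up; the write-key header is &lt;em&gt;X-Honeycomb-Team&lt;/em&gt;):&lt;/p&gt;

```javascript
// Sketch: building a request for Honeycomb's Events API, which any language
// with an HTTP client can use. Dataset and field names are illustrative.
const buildEvent = (dataset, writeKey, fields) => ({
  url: `https://api.honeycomb.io/1/events/${encodeURIComponent(dataset)}`,
  headers: {
    'X-Honeycomb-Team': writeKey,        // per-team write key
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(fields),
});

// An actual send would then be, e.g.:
//   const req = buildEvent('php-backend', process.env.HONEYCOMB_KEY,
//     { 'request.path': '/articles', duration_ms: 12 });
//   await fetch(req.url, { method: 'POST', headers: req.headers, body: req.body });
module.exports = buildEvent;
```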

&lt;p&gt;I also instrumented the pipeline so all releases are marked on honeycomb (the blue combs on the image above). I used the &lt;a href="https://docs.honeycomb.io/working-with-data/markers/#honeymarker-installation"&gt;marker CLI&lt;/a&gt; for that. With pipelines already set up to run when code is merged to master and tagged for release, it was just a matter of running one extra step with the following:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ honeymarker -k $HONEYCOMB\_KEY -d feathers-ws \
    -t release \
    -m "$EXECUTION\_TAG" \
    add
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Triggers can be leveraged to keep track of any unexpected behavior across the platform. I expect that in the future &lt;a href="https://docs.honeycomb.io/api/triggers/"&gt;Triggers&lt;/a&gt; will be used to track behavior after an update, automatically rolling back or aliasing the release.&lt;/p&gt;

&lt;p&gt;I learned a lot! The platform is robust, decoupled and scalable. It helped me validate my theory around content platforms and helped recognize reusable patterns I can apply in different projects easily.&lt;/p&gt;

&lt;p&gt;On the other hand, it made me realize that replacing a monolith with a distributed system helps in the long run &lt;em&gt;but&lt;/em&gt; adds a lot of complexity that can be cumbersome if you don’t have the right tools to support it, like CI/CD, event logs, and tracing.&lt;/p&gt;

&lt;p&gt;I think observability is a key success factor when building service-oriented architectures like microservices or Lambda-based architectures. I’ll continue to dig more into it.&lt;/p&gt;

&lt;p&gt;Finally, the team that will continue to support the platform was also delighted with the outcome. It’s now easier to add new features, and understanding the components is more straightforward than it was with the original codebase. They are already thinking of what the next refactor will be: &lt;em&gt;the frontend&lt;/em&gt;. The organization has less aversion to change, new ideas are already in the pipeline, they are moving faster overall, and there’s a new sense of trust in the engineering team.&lt;/p&gt;

&lt;h3&gt;More resources 📚&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://martinfowler.com/articles/break-monolith-into-microservices.html"&gt;&lt;strong&gt;How to break a Monolith&lt;/strong&gt;&lt;/a&gt; by &lt;a href="https://medium.com/u/f2c25544d719"&gt;Zhamak Dehghani&lt;/a&gt; explains how to migrate a monolith to a microservices architecture. The folks from Nginx have a great &lt;a href="https://www.nginx.com/blog/introduction-to-microservices/"&gt;article series about&lt;/a&gt; Microservices, &lt;a href="https://www.nginx.com/blog/refactoring-a-monolith-into-microservices/"&gt;last one&lt;/a&gt; explains how to refactor a Monolith into Microservices.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://buddy.works/"&gt;&lt;strong&gt;Buddy&lt;/strong&gt;&lt;/a&gt; is my favorite CI/CD tool. It’s easy to use and it has all the cool features like integration with GitHub, deploying to k8s (ECS, GKS), running steps in containers and ephemeral environments they call Sandboxes. Read their &lt;a href="https://buddy.works/guides"&gt;guides&lt;/a&gt; for more information.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.feathersjs.com"&gt;&lt;strong&gt;FeathersJS&lt;/strong&gt;&lt;/a&gt; is a Real-time REST API framework for NodeJS. It’s my favorite framework to develop NodeJS APIs. &lt;a href="https://nestjs.com/"&gt;&lt;strong&gt;NestJS&lt;/strong&gt;&lt;/a&gt; is second.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Honeycomb’s&lt;/strong&gt; &lt;a href="https://www.honeycomb.io/blog/"&gt;&lt;strong&gt;blog&lt;/strong&gt;&lt;/a&gt; has very good pointers on Observability. They even sponsored this year’s (and first ever) o11ycon.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://newrelic.com/nodejs"&gt;&lt;strong&gt;New Relic&lt;/strong&gt;&lt;/a&gt; has support for &lt;a href="https://docs.newrelic.com/docs/agents/nodejs-agent/supported-features/nodejs-custom-metrics"&gt;Custom Metrics&lt;/a&gt; and &lt;a href="https://docs.newrelic.com/docs/apm/distributed-tracing/getting-started/introduction-distributed-tracing"&gt;Distributed tracing&lt;/a&gt;, but I haven’t tried. Seems more complicated. I will give it a try because my clients use it.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://martinfowler.com/aboutMe.html"&gt;&lt;strong&gt;Martin Fowler&lt;/strong&gt;&lt;/a&gt; has &lt;a href="https://martinfowler.com/articles/201701-event-driven.html"&gt;a very good post&lt;/a&gt; about Event-driven patterns and a &lt;a href="https://martinfowler.com/articles/two-stack-cms/"&gt;presentation&lt;/a&gt; about “headless cms”&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://humanmade.com/"&gt;&lt;strong&gt;Human Made&lt;/strong&gt;&lt;/a&gt; has a good &lt;a href="https://gallery.mailchimp.com/afdf80ec9bf56213ada4edf20/files/7a1013d4-35da-4b5a-9082-02e317b098fb/Headless_WordPress_The_Future_CMS_1.pdf"&gt;booklet&lt;/a&gt; about Headless CMS with WordPress. Concepts apply to any CMS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Thanks to&lt;/em&gt; &lt;a href="https://medium.com/u/bd35c34ebe66"&gt;&lt;em&gt;Felipe Guizar Diaz&lt;/em&gt;&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://medium.com/u/4517f8f364b1"&gt;&lt;em&gt;Luis Daniel&lt;/em&gt;&lt;/a&gt;&lt;em&gt;, and&lt;/em&gt; &lt;a href="https://medium.com/u/62df9fbbb4d8"&gt;&lt;em&gt;Miguel Lomelí&lt;/em&gt;&lt;/a&gt; &lt;em&gt;for helping proof-read this article.&lt;/em&gt;&lt;/p&gt;


</description>
      <category>feathersjs</category>
      <category>microservices</category>
      <category>softwareengineering</category>
      <category>architecturecompone</category>
    </item>
    <item>
      <title>Fast API Prototyping with Webtask.io and Serverless.</title>
      <dc:creator>Eduardo Romero</dc:creator>
      <pubDate>Thu, 28 Sep 2017 14:01:02 +0000</pubDate>
      <link>https://dev.to/foxteck/fast-api-prototyping-with-webtaskio-and-serverless-43i</link>
      <guid>https://dev.to/foxteck/fast-api-prototyping-with-webtaskio-and-serverless-43i</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsgxcfkus749s6vaf8sj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsgxcfkus749s6vaf8sj.jpeg"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/photos/WR-ifjFy4CI?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;ShiroÂ hatori&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Function as a Service (FaaS), commonly referred to as &lt;em&gt;serverless computing&lt;/em&gt;, feels like the answer to every engineer’s prayers. Simple functions, doing only one small thing and running on infrastructure that scales automatically, for a small price or even for free 😍!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://webtask.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Webtask&lt;/strong&gt;&lt;/a&gt; is the Function as a Service implementation from the Auth0 team. It’s perfect for prototyping and easy to get started with. They just released their Serverless integration &lt;a href="https://serverless.com/blog/serverless-webtasks/" rel="noopener noreferrer"&gt;last week&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’m working with &lt;a href="https://serverless.com" rel="noopener noreferrer"&gt;&lt;strong&gt;Serverless&lt;/strong&gt;&lt;/a&gt; on my current project, and I’ve wanted to take Webtask for a spin for a while, so I decided this was the perfect time for it.&lt;/p&gt;

&lt;p&gt;In a few minutes, I had a REST API deployed and running, with the help of Express, RethinkDB, Webtask, and Serverless.&lt;/p&gt;

&lt;h3&gt;Getting started&lt;/h3&gt;

&lt;p&gt;Install the serverless framework and create a new project:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
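&lt;p&gt;The embedded snippet didn’t survive the import. The steps were most likely along these lines (the template name is an assumption):&lt;/p&gt;

```shell
# Install the Serverless framework globally, then scaffold a project
# from the Webtasks NodeJS template (template name assumed).
npm install -g serverless
serverless create --template webtasks-nodejs --path my-api
cd my-api
```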


&lt;p&gt;Go into the project’s folder and add the dependencies with NPM:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
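&lt;p&gt;The dependency list from the missing snippet boils down to one install command:&lt;/p&gt;

```shell
# The four packages described below
npm install --save express body-parser webtask-tools rethinkdbdash
```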


&lt;ul&gt;
&lt;li&gt;express and body-parser will handle routing and JSON responses.&lt;/li&gt;
&lt;li&gt;webtask-tools wraps the express app and binds it to the webtask handler.&lt;/li&gt;
&lt;li&gt;rethinkdbdash is a RethinkDB &lt;a href="https://github.com/neumino/rethinkdbdash" rel="noopener noreferrer"&gt;driver&lt;/a&gt;; it has some cool features like connection pooling, an easier interface, and outstanding performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Update serverless.yml. Change the service name and add the IP of the RethinkDB server:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
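&lt;p&gt;The original config isn’t rendered here; based on the description below, it would look roughly like this (service name, plugin name, and the exact variable fallback chain are assumptions):&lt;/p&gt;

```yaml
# Rough sketch of the serverless.yml described in the text.
service: my-api

provider:
  name: webtasks
  environment:
    # env var, then the --rethinkdb-server CLI option, then the defaults section
    RETHINKDB_SERVER: ${env:RETHINKDB_SERVER, opt:rethinkdb-server, self:defaults.rethinkdb-server}

defaults:
  rethinkdb-server: 203.0.113.10   # public IP of the RethinkDB host

plugins:
  - serverless-webtasks
```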


&lt;p&gt;The only differences from the default config should be the service name and the environment and defaults options.&lt;/p&gt;

&lt;p&gt;The options tell the &lt;em&gt;provider&lt;/em&gt; (webtasks in this case) to set RETHINKDB_SERVER as an environment variable and to take its value from either the environment (env), an option from the command line when deploying (--rethinkdb-server) or, if neither is set, from the defaults section.&lt;/p&gt;

&lt;p&gt;The API will be running from Auth0’s FaaS infrastructure. RethinkDB needs to be reachable from the internet. I launched a Digital Ocean instance for that.&lt;/p&gt;

&lt;p&gt;The final step is to set up a Webtask account:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
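&lt;p&gt;The account-setup snippet is missing; with the Serverless integration it was most likely of this shape (the command is an assumption — Webtask verifies the account with a code sent to you):&lt;/p&gt;

```shell
# First-time setup: link the Serverless CLI to a Webtask account
serverless config credentials --provider webtasks
```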


&lt;h3&gt;Show me the code! 🤓&lt;/h3&gt;

&lt;p&gt;At this point, there is already a Function that can be deployed, run and tested. Give it a try:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
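&lt;p&gt;The deploy snippet didn’t render; it comes down to:&lt;/p&gt;

```shell
npm update          # populate node_modules so dependencies ship with the function
serverless deploy   # prints the function URL when it finishes
```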


&lt;p&gt;It is important to run npm update to install all packages before deploying a function. Dependencies will be pulled from node_modules and uploaded with the function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpxnl46c6b55zestkye2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpxnl46c6b55zestkye2.png"&gt;&lt;/a&gt;Running &lt;strong&gt;sls deploy &lt;/strong&gt;will return the URL of the function ðŸŽ‰â€Š–â€ŠReturns a JSON response with a successÂ message.&lt;/p&gt;

&lt;p&gt;The service is already running, with zero code written. Wow! That is really something!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I realized that this was an excellent first approach to the FaaS paradigm. Definitely easier than setting up an account for AWS λ.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now into the actual code…&lt;/p&gt;

&lt;h4&gt;Express + RethinkDB&lt;/h4&gt;

&lt;p&gt;Like most express apps, start by adding express, body-parser, and webtask-tools. Create the app and add the database as a middleware to all routes:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
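&lt;p&gt;That snippet isn’t rendered either. A sketch of the idea, with the middleware pulled out so it is easy to test (the commented wiring assumes webtask-tools’ &lt;em&gt;fromExpress&lt;/em&gt; and the rethinkdbdash constructor options):&lt;/p&gt;

```javascript
// Sketch of the bootstrap described above. The middleware attaches the
// RethinkDB handle to every request as req.db.
const attachDb = (db) => (req, res, next) => {
  req.db = db;
  next();
};

// In the real app (module names from the dependency list above):
//   const express = require('express');
//   const bodyParser = require('body-parser');
//   const fromExpress = require('webtask-tools').fromExpress;
//   const r = require('rethinkdbdash')({ host: process.env.RETHINKDB_SERVER });
//   const app = express();
//   app.use(bodyParser.json());
//   app.use(attachDb(r));
//   module.exports = fromExpress(app);   // bind the app to the webtask handler
module.exports = attachDb;
```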


&lt;p&gt;The RethinkDB instance is now available as db through every endpoint.&lt;/p&gt;

&lt;h4&gt;The API&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;GET /: Retrieves all documents from RethinkDB.&lt;/li&gt;
&lt;li&gt;GET /:id: Returns a particular document with the given id.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
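&lt;p&gt;The gist with the handlers isn’t rendered here. A rough reconstruction, with the rethinkdbdash handle written as an injected &lt;em&gt;db&lt;/em&gt; so the shape of the logic is visible (table and query-parameter names are assumptions):&lt;/p&gt;

```javascript
// Hypothetical reconstruction of the two GET endpoints. `db` is the
// rethinkdbdash instance the middleware attached to the request.
const listShares = (db) => async (req, res) => {
  try {
    const skip = Number(req.query.skip) || 0;     // basic pagination
    const limit = Number(req.query.limit) || 20;
    const items = await db.table('shares').skip(skip).limit(limit).run();
    res.json(items);
  } catch (err) {
    res.status(500).json(err);
  }
};

const getShare = (db) => async (req, res) => {
  try {
    res.json(await db.table('shares').get(req.params.id).run());
  } catch (err) {
    res.status(500).json(err);
  }
};

// Wiring: app.get('/', (req, res) => listShares(req.db)(req, res)); etc.
module.exports = { listShares, getShare };
```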


&lt;p&gt;It is a simplistic version with no filtering and basic pagination.&lt;/p&gt;

&lt;p&gt;The endpoint will get the response from RethinkDB and return it to the client. If anything goes wrong, it will send the error instead.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;POST /: Creates a new document.&lt;/li&gt;
&lt;li&gt;PUT /: Updates an existing document.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
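&lt;p&gt;Again, the gist is missing; a hypothetical reconstruction of the shared handler, matching the upsert and normalization behavior described below (names are made up):&lt;/p&gt;

```javascript
// Hypothetical reconstruction of the handler shared by POST and PUT.
// PUT passes conflict: 'update' so an existing id replaces the document;
// POST keeps RethinkDB's default ('error'), which fails if the id exists.
const upsertShare = (db, conflict = 'error') => async (req, res) => {
  try {
    const result = await db
      .table('shares')
      .insert(req.body, { conflict, returnChanges: true })
      .run();
    const changes = result.changes.map((c) => c.new_val);
    res.json({
      // one item in, one item out; an array in, an array out
      data: Array.isArray(req.body) ? changes : changes[0],
      response: result, // RethinkDB's own summary (inserted, replaced, ...)
    });
  } catch (err) {
    res.status(500).json(err);
  }
};
module.exports = upsertShare;
```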


&lt;p&gt;Creating and modifying data is just as simple.&lt;/p&gt;

&lt;p&gt;The endpoint gets the JSON from the request and saves it into the shares table. The code is almost the same for both functions.&lt;/p&gt;

&lt;p&gt;RethinkDB has support for &lt;em&gt;upserts&lt;/em&gt;. When the conflict option is set to update, a request body with an existing id replaces the document. The default behavior is to throw an error if a document with that id already exists, which is what the POST method does.&lt;/p&gt;

&lt;p&gt;With returnChanges, the query will return an Array with the resulting changes of the operation in a special object with two properties: new_val and old_val. We use new_val to return the &lt;em&gt;upserted&lt;/em&gt; item in the response. In the case of a POST, the object will include the id of the new document.&lt;/p&gt;

&lt;p&gt;RethinkDB accepts either an array or a single item when inserting, and the handler normalizes the response accordingly. If the request had an array, it returns an array with the changes. If it was just one item, it returns only that item.&lt;/p&gt;

&lt;p&gt;Besides returning the actual data, it includes the response from RethinkDB. It contains an object with helpful information that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"response": {
    "changes": [
        {
            "new_val": {
                "id": "044ec21c-78b9-47aa-9eb6-e7e5658237db",
                "name": "Tania 😍"
            },
            "old_val": {
                "id": "044ec21c-78b9-47aa-9eb6-e7e5658237db",
                "name": "Tania"
            }
        }
    ],
    "deleted": 0,
    "errors": 0,
    "inserted": 0,
    "replaced": 1,
    "skipped": 0,
    "unchanged": 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It can be helpful for the requester to know what happened to the data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DELETE /:id: Removes the document with the given id from the table.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The endpoint gets the id from the request and asks RethinkDB to remove that document. If anything goes wrong (e.g., the id doesn’t exist), it sends the error back to the client.&lt;/p&gt;

&lt;p&gt;That’s all. A fully working, barebones, function-as-a-service REST API. The code is straightforward: small functions that are easy to read and understand.&lt;/p&gt;

&lt;p&gt;Next steps will be adding Authentication to the requests and some business logic so the service is more helpful.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I will continue working on it and will elaborate about it in another post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Webtask and Serverless make it easy to start &lt;em&gt;“dipping your toes”&lt;/em&gt; into the &lt;em&gt;Function as a Service&lt;/em&gt; world. It focuses on NodeJS, with only two event sources: HTTP requests and scheduled events. Compared to AWS Lambda, it’s faster to get started with and easier to approach.&lt;/p&gt;

&lt;p&gt;Webtask’s free tier is limited to one request per second, but that should be enough for testing; maybe even for a basic service.&lt;/p&gt;

&lt;h3&gt;More resources 📚&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://webtask.io/docs/101" rel="noopener noreferrer"&gt;&lt;strong&gt;Webtask.io&lt;/strong&gt;&lt;/a&gt;, the FaaS from the Auth0 guys. It’s easy and straightforward. Works with Node 8 out of the box and it takes less than a minute to get started. Super useful to get started with Slack hooks, Bots, and APIs. It has a fully-featured Web Editor that makes things even simpler. It has real-time logs, 500k of JSON storage, a cron-like scheduler, and Auth0 support is baked in.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://serverless.com/framework/" rel="noopener noreferrer"&gt;&lt;strong&gt;Serverless&lt;/strong&gt;&lt;/a&gt; is a toolkit to work with serverless architectures from any provider (AWS Lambda, Google Functions, Azure Functions, OpenWhisk, Webtask, etc.). With the help of its Event Gateway, you can even combine different providers and make them all work together.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.rethinkdb.com/api/javascript/skip/" rel="noopener noreferrer"&gt;&lt;strong&gt;RethinkDB&lt;/strong&gt;&lt;/a&gt; is an Open Source NoSQL Database with a very nice querying language (&lt;a href="https://www.rethinkdb.com/docs/introduction-to-reql/" rel="noopener noreferrer"&gt;ReQL&lt;/a&gt;) and advanced features like clustering, near-linear scaling, and real-time feeds. It’s my go-to NoSQL data store.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/neumino/thinky" rel="noopener noreferrer"&gt;&lt;strong&gt;Thinky&lt;/strong&gt;&lt;/a&gt; a Javascript ORM for RethinkDB. An easier way to map skinny JSON objects to RethinkDB documents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Thanks to&lt;/em&gt; &lt;a href="https://medium.com/u/ff471ed75d8d" rel="noopener noreferrer"&gt;&lt;em&gt;Rafa Salazar&lt;/em&gt;&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://medium.com/u/96892ff0fe88" rel="noopener noreferrer"&gt;&lt;em&gt;David Núñez&lt;/em&gt;&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://medium.com/u/ad3d4c815e6a" rel="noopener noreferrer"&gt;&lt;em&gt;Andres Cespedes&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://medium.com/u/d4888e26f384" rel="noopener noreferrer"&gt;&lt;em&gt;Martin Moscosa&lt;/em&gt;&lt;/a&gt; &lt;em&gt;for helping with this article.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webtask</category>
      <category>rest</category>
      <category>softwareengineering</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Continuous Engineering with the Serverless Framework</title>
      <dc:creator>Eduardo Romero</dc:creator>
      <pubDate>Thu, 03 Aug 2017 19:58:48 +0000</pubDate>
      <link>https://dev.to/foxteck/continuous-engineering-with-the-serverless-framework-cg4</link>
      <guid>https://dev.to/foxteck/continuous-engineering-with-the-serverless-framework-cg4</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AmESgrjzutfFUg2dhEZBEsw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AmESgrjzutfFUg2dhEZBEsw.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are working with microservices, and I’m enjoying it. Our implementation is based on &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; serverless compute. The technology launched in 2014, yet now in 2017 there are still no well-established engineering best practices for it.&lt;/p&gt;

&lt;p&gt;We always follow best practices when we develop a product. In this post, I describe how we apply our workflow to the function as a service paradigm.&lt;/p&gt;

&lt;h3&gt;Workflow&lt;/h3&gt;

&lt;p&gt;We use feature branches. Each story gets its branch. When the code is ready, we create a &lt;a href="https://help.github.com/articles/about-pull-requests/" rel="noopener noreferrer"&gt;&lt;em&gt;Pull Request&lt;/em&gt;&lt;/a&gt; (PR) to have changes merged to develop.&lt;/p&gt;

&lt;p&gt;The team &lt;em&gt;peer-reviews&lt;/em&gt; the code and we suggest changes when needed. We require at least two approvals before merging changes.&lt;/p&gt;

&lt;p&gt;Before changes get merged, CI/CD runs. It checks that we stick to our coding &lt;a href="https://standardjs.com/" rel="noopener noreferrer"&gt;standard&lt;/a&gt; and runs our tests. If everything passes, you can merge your branch.&lt;/p&gt;

&lt;p&gt;When a branch is merged to develop, it gets auto-deployed to our development environment on AWS.&lt;/p&gt;

&lt;h3&gt;Code Structure&lt;/h3&gt;

&lt;p&gt;Working with &lt;em&gt;Functions as a Service&lt;/em&gt; &lt;em&gt;(FaaS)&lt;/em&gt; means coding simpler, easy to understand modules.&lt;/p&gt;

&lt;p&gt;Each microservice has its own repository. Inside it, we structured our code around the &lt;a href="http://blog.jonathanoliver.com/ddd-strategic-design-core-supporting-and-generic-subdomains/" rel="noopener noreferrer"&gt;&lt;em&gt;subdomain&lt;/em&gt;&lt;/a&gt; it belongs to, clustering all related functions of a &lt;em&gt;service&lt;/em&gt; together in the same folder.&lt;/p&gt;

&lt;p&gt;We are using the &lt;a href="https://serverless.com/framework/docs/providers/aws/guide/quick-start/" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt; to manage, build, and deploy our function-as-a-service microservices.&lt;/p&gt;

&lt;p&gt;We create one &lt;a href="https://serverless.com/framework/docs/providers/aws/guide/services/" rel="noopener noreferrer"&gt;&lt;em&gt;service&lt;/em&gt;&lt;/a&gt; per subdomain, with its own &lt;em&gt;serverless.yml&lt;/em&gt; config file, external packages it needs (&lt;em&gt;package.json&lt;/em&gt;) and several JavaScript files that implement its functionality.&lt;/p&gt;

&lt;h3&gt;Event Definitions&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Function as a Service&lt;/em&gt; is an event-driven paradigm. Functions get spawned in response to an event.&lt;/p&gt;

&lt;p&gt;The event definition goes in the &lt;em&gt;serverless.yml&lt;/em&gt; config file for each service. It can be an HTTP request through API Gateway, a file uploaded to S3, a record updated in Aurora DB, a document inserted in DynamoDB, etc.&lt;/p&gt;

&lt;p&gt;We are currently only using HTTP, Kinesis streams and SNS events for our project. The framework has support for more &lt;a href="https://serverless.com/framework/docs/providers/aws/" rel="noopener noreferrer"&gt;event sources&lt;/a&gt;.&lt;/p&gt;
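&lt;p&gt;As an illustration, the three event sources we use map to &lt;em&gt;serverless.yml&lt;/em&gt; entries like these (function names, paths, and ARNs are made up):&lt;/p&gt;

```yaml
# Illustrative event definitions for one service's serverless.yml
functions:
  createOrder:
    handler: orders/create.handler
    events:
      - http:                 # HTTP request through API Gateway
          path: orders
          method: post
  processStream:
    handler: orders/stream.handler
    events:
      - stream:               # Kinesis stream
          type: kinesis
          arn: ${env:ORDERS_STREAM_ARN}
  notify:
    handler: orders/notify.handler
    events:
      - sns: order-events     # SNS topic
```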

&lt;h3&gt;Environment Variables&lt;/h3&gt;

&lt;p&gt;We use environment variables as much as we can: ARNs, endpoints, AWS credentials, AWS region, etc. Serverless lets you define resources from environment variables, and it will let you know if any of the variables are not defined in your environment before it tries to deploy your lambdas.&lt;/p&gt;
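&lt;p&gt;A sketch of how that looks in &lt;em&gt;serverless.yml&lt;/em&gt; (variable names are made up); if a referenced variable is missing, deployment stops before anything is pushed:&lt;/p&gt;

```yaml
provider:
  name: aws
  region: ${env:AWS_REGION}
  environment:
    ORDERS_TABLE: ${env:ORDERS_TABLE}    # re-exported to the lambda at runtime
custom:
  streamArn: ${env:ORDERS_STREAM_ARN}    # used when wiring event sources
```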

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Alx0zENu-ZsIg-zWGsVhHCA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Alx0zENu-ZsIg-zWGsVhHCA.png"&gt;&lt;/a&gt;Serverless Framework complaining about missing environment variables&lt;/p&gt;

&lt;p&gt;The team shares environment variables with the help of &lt;a href="https://www.torus.sh/" rel="noopener noreferrer"&gt;Torus&lt;/a&gt;. Our CI/CD tool reads these environment variables to deploy our lambdas automatically when we approve a PR.&lt;/p&gt;

&lt;h3&gt;Release / Deploy&lt;/h3&gt;

&lt;p&gt;We use &lt;em&gt;Continuous Integration&lt;/em&gt; and &lt;em&gt;Deployment&lt;/em&gt;. Code linting and tests run before a PR can be merged. Deployments to &lt;em&gt;dev&lt;/em&gt; and &lt;em&gt;staging&lt;/em&gt; are automated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AClvMVSJxCklXjg8BwVKyTg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AClvMVSJxCklXjg8BwVKyTg.png"&gt;&lt;/a&gt;Deploying with CI/CD + Bash + Serverless (staging)&lt;/p&gt;

&lt;p&gt;Our CI/CD pipeline runs a basic NodeJS docker image, finds all our services, goes into each folder, runs &lt;a href="https://yarnpkg.com/lang/en/" rel="noopener noreferrer"&gt;&lt;em&gt;yarn&lt;/em&gt;&lt;/a&gt; to install all dependencies and deploys to AWS. Nothing particularly sophisticated.&lt;/p&gt;

&lt;p&gt;A few bash commands get run, &lt;em&gt;et voilà!&lt;/em&gt; A fully functional set of lambdas is running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Al5MSQ4f0xIeW8jb31T6K_A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Al5MSQ4f0xIeW8jb31T6K_A.png"&gt;&lt;/a&gt;Result of our CI/CD &lt;strong&gt;&lt;em&gt;yarn&lt;/em&gt;&lt;/strong&gt;&lt;em&gt; + &lt;/em&gt;&lt;strong&gt;&lt;em&gt;sls&lt;/em&gt;&lt;/strong&gt;Â run.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Everything’s been taken care of 🎉.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If everything goes well, we will get a message on Slack that everything’s been taken care of. All services are up to date.&lt;/p&gt;

&lt;p&gt;At this point we are ready to work on the next feature and continue the development cycle.&lt;/p&gt;

&lt;h3&gt;More resources 📚&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;a href="https://serverless.com/" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt; and many &lt;a href="https://github.com/serverless/examples" rel="noopener noreferrer"&gt;examples&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/zeit/micro" rel="noopener noreferrer"&gt;Micro&lt;/a&gt; async http node microservices. Great first approach to self-hosted simple functions to build services.&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://apex.run/" rel="noopener noreferrer"&gt;Apex&lt;/a&gt; another framework for managing FaaS. It has runtimes for Go, Clojure, and Rust, so you can write functions on those languages that are not supported by AWS Lambda.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://webtask.io/" rel="noopener noreferrer"&gt;WebTasks&lt;/a&gt; FaaS by the Auth0 team. Integrates really well with Slack and GitHub. Has Cron-like support for scheduled tasks.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://stdlib.com/" rel="noopener noreferrer"&gt;stdlib&lt;/a&gt; a slightly different approach. They strive to become a Standard Library for FaaS. Has a Service Directory and functions can be invoked from regular NodeJS, Python or Ruby projects.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>awslambda</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
