<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alex Casalboni</title>
    <description>The latest articles on DEV Community by Alex Casalboni (@alexcasalboni).</description>
    <link>https://dev.to/alexcasalboni</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F56169%2F7e931eb7-c043-427d-9d74-22549f854869.png</url>
      <title>DEV Community: Alex Casalboni</title>
      <link>https://dev.to/alexcasalboni</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexcasalboni"/>
    <language>en</language>
    <item>
      <title>Graceful degradation in practice: how FeatureOps builds real resilience</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Mon, 19 Jan 2026 13:57:00 +0000</pubDate>
      <link>https://dev.to/alexcasalboni/graceful-degradation-in-practice-how-featureops-builds-real-resilience-1p4i</link>
      <guid>https://dev.to/alexcasalboni/graceful-degradation-in-practice-how-featureops-builds-real-resilience-1p4i</guid>
      <description>&lt;p&gt;Modern software systems fail in interesting and unpredictable ways. A payment provider slows down, an analytics service times out, a third-party API rate-limits you, or a new frontend component crashes on only half your users’ browsers. None of this is unusual anymore. What matters is whether your product collapses with those failures or bends without breaking.&lt;/p&gt;

&lt;p&gt;That ability to “bend” is what resilience engineering calls &lt;strong&gt;graceful degradation&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The ability of a system to maintain at least some of its functionality when portions are not working, or when certain features are not available. [source: &lt;a href="https://en.wiktionary.org/wiki/graceful_degradation" rel="noopener noreferrer"&gt;Wiktionary&lt;/a&gt;]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of failing catastrophically, your system falls back to a reduced but still functional experience. It keeps users moving, it protects your brand, and it buys your team time to react instead of firefighting in panic.&lt;/p&gt;

&lt;p&gt;Graceful degradation isn’t a single technique. It’s a mindset supported by a handful of practices such as circuit breakers, timeouts, bulkheads, retries with backoff, and load shedding. These patterns keep systems stable under stress, and &lt;a href="https://www.getunleash.io/blog/featureops-standardize-scale-software-delivery" rel="noopener noreferrer"&gt;FeatureOps&lt;/a&gt; becomes the layer that lets you control or modify those behaviors dynamically at runtime. Instead of relying solely on hard-coded logic, you gain the ability to toggle fallback paths, disable risky integrations, or reduce load instantly when conditions change.&lt;/p&gt;
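
&lt;p&gt;To make one of these patterns concrete, here is a minimal sketch of retries with exponential backoff sitting behind a runtime switch. The flag lookup is stubbed out, and the flag name is illustrative; in a real setup it would come from your feature flag client:&lt;/p&gt;

```python
import random
import time


def is_enabled(flag_name):
    # Stub for a feature flag lookup; in practice this would call
    # a flag client's is_enabled() with the current request context.
    return True


def call_with_backoff(operation, max_attempts=4, base_delay=0.1):
    """Retry a flaky operation with exponential backoff and jitter,
    but only while the retry behavior is enabled at runtime."""
    if not is_enabled("retries-enabled"):
        return operation()  # degraded mode: fail fast, no retries
    for attempt in range(max_attempts):
        try:
            return operation()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter: ~0.1s, ~0.2s, ~0.4s, ...
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
```

&lt;p&gt;Flipping the flag off turns the retry loop into a fail-fast path, which is exactly the kind of lever you want during an incident when retries would only amplify load.&lt;/p&gt;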

&lt;p&gt;In other words: your resilience plan is only as good as your ability to control behavior at runtime. This is where feature flags, kill switches, and progressive rollouts become the foundation of engineering resilience.&lt;/p&gt;

&lt;p&gt;Let’s break down how FeatureOps makes graceful degradation real for both frontend and backend systems.&lt;/p&gt;

&lt;h2&gt;Why graceful degradation matters for modern engineering teams&lt;/h2&gt;

&lt;p&gt;Most systems today are distributed by default. A typical application might have a frontend calling several internal APIs, backend services relying on third-party platforms, workers consuming external queues, or browser features behaving differently across devices.&lt;/p&gt;

&lt;p&gt;On top of that, many products now run &lt;a href="https://www.getunleash.io/blog/experimentation-is-more-than-a-b-testing" rel="noopener noreferrer"&gt;A/B tests or ship experimental UI variants&lt;/a&gt; directly in production. Every one of these moving parts introduces opportunities for unpredictable behavior, and any one of them can slow down, fail temporarily, or start returning inconsistent results.&lt;/p&gt;

&lt;p&gt;The goal of &lt;strong&gt;graceful degradation&lt;/strong&gt; is simple: when something goes wrong, users keep moving and you stay in control.&lt;/p&gt;

&lt;p&gt;In practice, that means &lt;strong&gt;absorbing those failures without derailing the user experience&lt;/strong&gt;. Instead of crashing or blocking an entire workflow, your system should fall back to cached or partial data, disable a problematic UI element, route traffic to safer fallback logic, or temporarily skip a slow backend dependency.&lt;/p&gt;

&lt;p&gt;Sometimes it means turning off a resource-intensive algorithm during peak load, or isolating an experiment variant that behaves differently than expected. The specifics vary, but the outcome is always the same: users keep moving, and you remain in control.&lt;/p&gt;

&lt;p&gt;What makes graceful degradation especially important is that it requires action in the moment, before a deployment is possible. When something goes wrong, you rarely have time for a rebuild or a redeploy. You need levers you can pull instantly to adjust behavior in production.&lt;/p&gt;

&lt;p&gt;This is exactly where FeatureOps becomes essential. Feature flags, kill switches, and progressive rollout controls turn resilience from an improvised reaction into a deliberate runtime capability.&lt;/p&gt;

&lt;h2&gt;Everything fails all the time&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Everything fails all the time. [Werner Vogels, CTO @ Amazon]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Fault tolerance tries to mask failures so users never notice them, while graceful degradation accepts that &lt;strong&gt;failures will happen&lt;/strong&gt; and focuses on controlling their impact.&lt;/p&gt;

&lt;p&gt;Most modern distributed systems rely on a mix of both approaches. In practice, FeatureOps leans toward the graceful degradation side by giving teams control levers that keep the product usable even when parts of the system are not.&lt;/p&gt;

&lt;h2&gt;FeatureOps as the backbone of graceful degradation&lt;/h2&gt;

&lt;p&gt;FeatureOps is all about turning runtime behavior into something you can adjust like a control panel. It gives engineering teams the ability to shape system behavior dynamically, which is exactly what graceful degradation relies on.&lt;/p&gt;

&lt;p&gt;When something breaks, slows down, or starts acting strangely, you need a set of controls that let you respond immediately without touching the deployment pipeline. Feature flags help isolate risky functionality so issues stay contained. &lt;a href="https://www.getunleash.io/feature-flag-use-cases-software-kill-switches" rel="noopener noreferrer"&gt;Kill switches&lt;/a&gt; give you a fast way to disable dependencies or non-critical features when they misbehave. &lt;a href="https://www.getunleash.io/feature-flag-use-cases-progressive-or-gradual-rollouts" rel="noopener noreferrer"&gt;Progressive rollouts&lt;/a&gt; let you limit blast radius by shifting only a portion of traffic onto new code paths until you’re confident in their stability. And &lt;a href="https://docs.getunleash.io/concepts/activation-strategies#targeting" rel="noopener noreferrer"&gt;targeting rules&lt;/a&gt; help you protect specific segments of users or environments if a problem surfaces.&lt;/p&gt;
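
&lt;p&gt;The progressive rollout idea boils down to deterministic bucketing: the same user always lands in the same bucket, so ramping a flag from 10% to 50% only ever adds users. Here is a rough sketch of the concept; it is not Unleash’s exact algorithm (which uses MurmurHash under the hood), and the names are illustrative:&lt;/p&gt;

```python
import hashlib


def in_rollout(user_id: str, flag_name: str, percentage: int) -> bool:
    """Deterministically bucket a user into 0-99 and compare against
    the rollout percentage. Stable: the same user/flag pair always
    yields the same bucket, so raising the percentage only adds users."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage
```

&lt;p&gt;Hashing on the flag name as well as the user means different flags get independent 10% populations, so one risky rollout doesn’t keep hitting the same unlucky users.&lt;/p&gt;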

&lt;p&gt;Together, these capabilities turn runtime behavior into something engineers can manage intentionally rather than reactively. Instead of scrambling to patch production during an incident, teams can adjust traffic, disable unstable features, or reduce load with a few controlled changes. Graceful degradation stops being an emergency tactic and becomes part of your regular operating model.&lt;/p&gt;

&lt;h3&gt;Designing the right fallback&lt;/h3&gt;

&lt;p&gt;Designing graceful degradation often comes down to choosing the right fallback. A fallback might be cached data when an API slows down, a simplified UI when a component becomes unstable, or stubbed responses when a dependency is temporarily unavailable.&lt;/p&gt;

&lt;p&gt;Feature flags act as the switch deciding when to activate those fallbacks, which keeps the complexity out of your core logic and allows you to adjust behavior without redeploying.&lt;/p&gt;
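
&lt;p&gt;As a minimal sketch, the pattern looks something like this; the flag client, cache, and names are all stand-ins:&lt;/p&gt;

```python
def get_dashboard_data(user_id, flags, cache, fetch_live):
    """Serve live data when the flag is on and the upstream call works;
    otherwise fall back to the last cached snapshot."""
    if flags.get("live-dashboard", False):
        try:
            return {"source": "live", "data": fetch_live(user_id)}
        except Exception:
            pass  # upstream failed: fall through to the cached fallback
    return {"source": "cache", "data": cache.get(user_id)}
```

&lt;p&gt;Note that the fallback path is reachable two ways: automatically when the upstream call throws, and deliberately when you flip the flag off. The core logic never needs to know which one happened.&lt;/p&gt;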

&lt;h2&gt;Frontend graceful degradation with FeatureOps&lt;/h2&gt;

&lt;p&gt;Frontend failures tend to be very visible. A single broken component can block checkout flows, onboarding screens, or dashboards entirely. But with flags, you can disable or replace individual UI behaviors in seconds.&lt;/p&gt;

&lt;p&gt;Let’s walk through examples for popular frameworks.&lt;/p&gt;

&lt;h3&gt;React example: disabling a failing component&lt;/h3&gt;

&lt;p&gt;Imagine a performance-heavy chart is causing the page to freeze for some users.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useFlag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useUnleashClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@unleash/proxy-client-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useUnleashClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;updateContext&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// e.g. "DE"&lt;/span&gt;
      &lt;span class="na"&gt;plan&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;plan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;// optional segmentation&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;showAdvancedChart&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useFlag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;advanced-chart-enabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;showAdvancedChart&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;AdvancedChart&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;FallbackChart&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the chart starts throwing errors in production, you flip the flag off. Users immediately see the fallback component. No redeploy. No panic.&lt;/p&gt;

&lt;h3&gt;Next.js example: gracefully degrading an API-dependent UI component&lt;/h3&gt;

&lt;p&gt;Suppose you rely on a third-party analytics API that starts timing out.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@unleash/nextjs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.example.com/analytics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetchUserFromSession&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;unleash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// e.g. "UK"&lt;/span&gt;
      &lt;span class="na"&gt;accountType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;accountType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;useLive&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;unleash&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isEnabled&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;live-analytics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;analyticsData&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;useLive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;analyticsData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;analyticsData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;analyticsData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the provider has an outage, you disable live-analytics and ship cached or partial UI instantly.&lt;/p&gt;

&lt;h3&gt;Angular example: disabling expensive UI logic&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Component&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;app-map&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`
    &amp;lt;app-basic-map&amp;gt;&amp;lt;/app-basic-map&amp;gt;
    &amp;lt;app-heatmap *ngIf="heatmapEnabled"&amp;gt;&amp;lt;/app-heatmap&amp;gt;
  `&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MapComponent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;heatmapEnabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kr"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;unleash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UnleashService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UserService&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// e.g. "UK"&lt;/span&gt;
        &lt;span class="na"&gt;subscription&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;plan&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;heatmapEnabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;unleash&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isEnabled&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;heatmap-feature&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The heatmap renders only when stable. If it starts freezing low-memory devices, flip the flag off globally or for targeted browser segments.&lt;/p&gt;

&lt;h2&gt;Backend graceful degradation with FeatureOps&lt;/h2&gt;

&lt;p&gt;Backend systems often face stress and cascading failures when upstream dependencies degrade. Kill switches and fallback flags prevent complete meltdowns.&lt;/p&gt;

&lt;p&gt;Not all degradation is equal. Sometimes you only need to reduce functionality slightly, like showing cached analytics instead of live data. Other times, you disable a full subsystem while keeping the rest of the product operational. Feature flags make both soft and hard degradation possible, and the strategy you choose depends on how critical the failing component is.&lt;/p&gt;
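
&lt;p&gt;One way to think about soft versus hard degradation is as a small decision ladder driven by two independent kill switches. A sketch, with illustrative flag names:&lt;/p&gt;

```python
def degradation_level(flags):
    """Map two kill switches to a degradation tier:
    'full' = live data, 'soft' = cached data, 'hard' = subsystem off."""
    if not flags.get("analytics-subsystem", True):
        return "hard"  # hide the whole analytics section
    if not flags.get("live-analytics", True):
        return "soft"  # keep the section, serve cached numbers
    return "full"
```

&lt;p&gt;Ordering matters here: the harder switch wins, so turning off the whole subsystem overrides any soft fallback that might otherwise apply.&lt;/p&gt;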

&lt;p&gt;Let’s look at common examples.&lt;/p&gt;

&lt;h3&gt;Node.js example: kill switching a dependency&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;unleash&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./unleash&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/payments&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// however you attach auth info&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// e.g. "UK"&lt;/span&gt;
      &lt;span class="na"&gt;customerTier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tier&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isLive&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;unleash&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isEnabled&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;billing-live&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;isLive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;degraded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Billing temporarily unavailable for your region&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;billingService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the billing provider slows down, disable “billing-live” and your API stays responsive.&lt;/p&gt;

&lt;p&gt;Most backend systems also rely on timeouts, retries, and circuit breakers. These help react to issues automatically, but they don’t give you fine-grained control when things go badly. Feature flag kill switches complement these mechanisms by giving engineers the ability to intervene proactively when automated recovery isn’t enough.&lt;/p&gt;
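
&lt;p&gt;A toy example of how the two interact: an automatic circuit breaker opens after a run of consecutive failures, while a manual kill switch (the feature flag) can force it open regardless of what the automation thinks. This is a sketch, not a production-grade breaker:&lt;/p&gt;

```python
class CircuitBreaker:
    """Tiny circuit breaker: opens after `threshold` consecutive
    failures; a manual kill switch can force it open at any time."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def allow(self, kill_switch_on):
        # The manual kill switch always wins over the automatic state.
        if kill_switch_on:
            return False
        return self.failures < self.threshold

    def record(self, success):
        # A success resets the streak; a failure extends it.
        self.failures = 0 if success else self.failures + 1
```

&lt;p&gt;The breaker reacts on its own, but the flag gives you a deliberate override: you can keep traffic off a dependency that only looks healthy, or while you wait for a vendor’s incident to actually resolve.&lt;/p&gt;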

&lt;h3&gt;Python example: degrading an AI or ML feature&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;unleash_client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;UnleashClient&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;UnleashClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://unleash.example.com/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recommender-service&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initialize_client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;recommend_products&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;userId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;properties&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;country&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# e.g. "US"
&lt;/span&gt;            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;segment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;segment&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_enabled&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recommender-live&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;fallback_recommendations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;call_ml_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;TimeoutError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;fallback_recommendations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This keeps recommendations available even when the ML model becomes slow or overloaded.&lt;/p&gt;

&lt;h3&gt;
  
  
   Go example: isolate an unstable microservice
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"net/http"&lt;/span&gt;
    &lt;span class="n"&gt;unleash&lt;/span&gt; &lt;span class="s"&gt;"github.com/Unleash/unleash-client-go/v4"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;userFromRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Properties&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"country"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Country&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c"&gt;// e.g. "DE"&lt;/span&gt;
            &lt;span class="s"&gt;"plan"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Plan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;q&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;URL&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"q"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Unleash&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IsEnabled&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"use-search-service"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;unleash&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;fallbackSearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;writeJSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;callSearchService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fallbackSearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;writeJSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same logic. Grace under pressure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rust example: controlling CPU-intensive workflows
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;unleash_api_client&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;UnleashClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;process_image&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;UnleashClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Context&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="py"&gt;.id&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
        &lt;span class="n"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="s"&gt;"country"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="py"&gt;.country&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;())]&lt;/span&gt;  &lt;span class="c1"&gt;// e.g. "FR"&lt;/span&gt;
                &lt;span class="nf"&gt;.into_iter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="nf"&gt;.collect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="nn"&gt;Default&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="nf"&gt;.is_enabled&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"image-optimizer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;simple_resize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;optimized&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;heavy_optimization&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;optimized&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If errors spike, progressive rollout lets you pause or revert instantly without deploying a fix.&lt;/p&gt;

&lt;h3&gt;
  
  
  Java example: controlling risky workflows
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;io.getunleash.Unleash&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;io.getunleash.UnleashContext&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CheckoutService&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;Unleash&lt;/span&gt; &lt;span class="n"&gt;unleash&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;CheckoutService&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Unleash&lt;/span&gt; &lt;span class="n"&gt;unleash&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;unleash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;unleash&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;CheckoutResult&lt;/span&gt; &lt;span class="nf"&gt;checkout&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Order&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;UnleashContext&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;UnleashContext&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"country"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getCountry&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;      &lt;span class="c1"&gt;// e.g. "NO"&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"accountType"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAccountType&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

        &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;unleash&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isEnabled&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"new-checkout-flow"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;legacyCheckout&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;newCheckout&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perfect for isolating checkout problems without a redeploy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Graceful degradation vs chaos engineering
&lt;/h2&gt;

&lt;p&gt;Chaos engineering is often described as the practice of deliberately introducing failures to ensure systems can withstand them. It focuses on exposing weaknesses in real-world conditions so you don’t discover them for the first time in production at 3 a.m.&lt;/p&gt;

&lt;p&gt;Graceful degradation, on the other hand, is what keeps the system usable when those failures actually occur. It’s the safety net that prevents isolated problems from cascading into full outages.&lt;/p&gt;

&lt;p&gt;The two ideas complement each other. Chaos experiments reveal where your system is too brittle, while graceful degradation strategies give you a way to absorb that brittleness without harming users. Many teams pair the two by running controlled failure injections and observing how feature flags, fallbacks, kill switches, and rollout strategies behave under stress.&lt;/p&gt;

&lt;p&gt;One powerful pattern here is using feature flags to run chaos experiments safely in production. Whether it’s adding network latency, forcing an external dependency to fail, or simulating high CPU load, you can wrap failure injection behind a flag and enable it only for a specific subset of users, services, or environments. That means you can test real integrations and real traffic without exposing your entire customer base to the experiment.&lt;/p&gt;
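&lt;p&gt;As a sketch of that pattern, here is how flag-gated latency injection might look in Python. The flag name, the &lt;code&gt;client&lt;/code&gt; object (an initialized Unleash client, or anything with the same &lt;code&gt;is_enabled&lt;/code&gt; method), and the &lt;code&gt;fetch&lt;/code&gt; callable are illustrative assumptions, not part of any particular SDK:&lt;/p&gt;

```python
import random
import time

# Hypothetical flag name; client and fetch are illustrative collaborators.
CHAOS_FLAG = "chaos-inject-latency"

def fetch_profile(user_id, client, fetch):
    """Call a dependency, injecting latency only for flagged users."""
    context = {"userId": str(user_id)}
    if client.is_enabled(CHAOS_FLAG, context):
        # Failure injection: 0.5 to 2 seconds of artificial delay,
        # applied only to the subset of traffic the flag targets.
        time.sleep(random.uniform(0.5, 2.0))
    return fetch(user_id)
```

&lt;p&gt;Turning the experiment off is just disabling the flag: no redeploy, no leftover chaos code running in production.&lt;/p&gt;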

&lt;p&gt;If a dependent API goes down during a chaos test and you can flip a flag to route traffic to a fallback path, you’ve validated both your resilience design and your operational readiness.&lt;/p&gt;

&lt;p&gt;FeatureOps provides the runtime switches that make this pairing practical. You can simulate outages or degraded conditions safely, monitor how your system responds, and recover instantly if the experiment uncovers unexpected behavior.&lt;/p&gt;

&lt;p&gt;Instead of chaos engineering being a risky exercise, FeatureOps turns it into a controlled, reversible workflow where every failure has an escape hatch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Progressive rollouts as part of graceful degradation
&lt;/h2&gt;

&lt;p&gt;Graceful degradation isn’t always about turning features off. Sometimes the best way to keep a system stable is to control how much traffic a new component or behavior receives.&lt;/p&gt;

&lt;p&gt;Progressive rollouts make this possible by letting you adjust exposure gradually instead of pushing all users onto a new path at once. This helps you understand how a feature behaves under increasing load, identify performance issues early, and contain failures before they affect everyone.&lt;/p&gt;

&lt;p&gt;For example, you can release a new search algorithm to a small percentage of users, observe real performance, and increase traffic only when you’re confident it behaves correctly. If you start to see latency spikes or error rates climbing, you can pause the rollout or dial it back to a safer percentage without undoing deployments or reverting code.&lt;/p&gt;
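&lt;p&gt;Under the hood, gradual rollout strategies typically bucket each user deterministically, so the same user stays on the same side of the percentage as you dial it up or down. The sketch below illustrates the idea with a hash-based bucket; it is a simplification, not Unleash’s actual algorithm (which normalizes a murmur3 hash):&lt;/p&gt;

```python
import hashlib

def rollout_bucket(flag_name, user_id):
    """Map a user to a stable bucket from 1 to 100 for a given flag.
    Simplified sketch: real implementations use a faster hash."""
    digest = hashlib.md5(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 + 1

def is_rolled_out(flag_name, user_id, percentage):
    # Buckets are stable, so raising the percentage only ever adds
    # users to the new path, and dialing it back only removes them.
    return percentage >= rollout_bucket(flag_name, user_id)
```

&lt;p&gt;Because the bucket depends on the flag name too, being in the first 10% for one feature says nothing about your bucket for another.&lt;/p&gt;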

&lt;p&gt;This kind of real-time control becomes especially useful when features behave well in staging but reveal unexpected performance characteristics under production traffic. With a progressive rollout, you can treat capacity limits, integration issues, or dependency failures as adjustable variables.&lt;/p&gt;

&lt;p&gt;Instead of all-or-nothing decisions, you move through a spectrum of exposure levels. It’s a graceful way to test stability under real conditions while keeping the user experience protected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kill switches: the emergency lever every team needs
&lt;/h2&gt;

&lt;p&gt;A kill switch is one of the simplest tools in FeatureOps, yet it often delivers the biggest impact during real incidents.&lt;/p&gt;

&lt;p&gt;At its core, a kill switch is just a feature flag dedicated to disabling a specific part of your system when it starts misbehaving. That might mean turning off a third-party integration that’s returning errors, skipping a non-critical workflow that’s consuming too many resources, or shutting down an experimental feature that’s affecting only a portion of users.&lt;/p&gt;

&lt;p&gt;The moment a dependency slows down or an external service becomes unreliable, you can flip the switch and instantly redirect your application to a safer fallback path.&lt;/p&gt;

&lt;p&gt;What makes kill switches so effective is their flexibility. You can disable functionality for all users or limit the change to a specific region, environment, or percentage of traffic if you need a more controlled response. You can even restrict the impact to certain browsers or API consumers when a problem only manifests under specific conditions. Instead of hotfixing production under pressure, you regain stability with a single, deliberate action.&lt;/p&gt;
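&lt;p&gt;A minimal Python sketch of that idea, assuming an Unleash-style client and a hypothetical &lt;code&gt;kill-marketing-emails&lt;/code&gt; flag (both are illustrative, not from a specific codebase):&lt;/p&gt;

```python
def send_marketing_email(user, client, deliver):
    """Skip a non-critical workflow while its kill switch is on.
    The flag name and user fields are illustrative assumptions."""
    context = {
        "userId": str(user["id"]),
        "properties": {"country": user["country"]},
    }
    # Server-side, the flag can be constrained to one region or a
    # percentage of traffic, so this same code supports both a full
    # and a partial shutdown.
    if client.is_enabled("kill-marketing-emails", context):
        return None  # degraded but stable: the email is simply skipped
    return deliver(user)
```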

&lt;p&gt;A well-designed kill switch gives engineers a reliable safety mechanism that buys time, protects customers, and keeps the system usable while deeper issues are investigated.&lt;/p&gt;

&lt;h2&gt;
  
  
  FeatureOps makes graceful degradation repeatable, not improvisational
&lt;/h2&gt;

&lt;p&gt;The real power of FeatureOps is that it turns graceful degradation into a &lt;strong&gt;predictable operating model&lt;/strong&gt; instead of something teams improvise during an outage.&lt;/p&gt;

&lt;p&gt;With flags isolating risky behavior, kill switches ready to shut down failing dependencies, and rollout controls that shape how traffic flows through new code, teams gain the ability to manage production conditions intentionally.&lt;/p&gt;

&lt;p&gt;Instead of relying on tribal knowledge or frantic Slack threads when something goes wrong, engineers can react with well-understood patterns: shift traffic away from unstable features, disable problematic integrations, or reduce load by rolling back a percentage of users to a safer path.&lt;/p&gt;

&lt;p&gt;As teams mature, &lt;a href="https://www.getunleash.io/blog/automated-featureops-impact-metrics-mcp-server" rel="noopener noreferrer"&gt;graceful degradation can also become partially automated&lt;/a&gt;. With &lt;a href="https://docs.getunleash.io/concepts/impact-metrics" rel="noopener noreferrer"&gt;impact metrics&lt;/a&gt; like error rates, latency, or saturation thresholds, Unleash can pause rollouts or trigger fallbacks automatically. This reduces the time between problem and response even further, especially during off-hours or high-load periods.&lt;/p&gt;

&lt;p&gt;This structured approach means resilience isn’t something added later or reserved for crisis moments. It becomes part of day-to-day development.&lt;/p&gt;

&lt;p&gt;This is how engineering teams protect uptime without slowing innovation.&lt;/p&gt;

&lt;p&gt;And because Unleash is open source and self-hostable, you can embed these patterns deeply into your architecture.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>architecture</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deep dive: finding the optimal resources allocation for your Lambda functions</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Thu, 17 Sep 2020 13:39:25 +0000</pubDate>
      <link>https://dev.to/aws/deep-dive-finding-the-optimal-resources-allocation-for-your-lambda-functions-35a6</link>
      <guid>https://dev.to/aws/deep-dive-finding-the-optimal-resources-allocation-for-your-lambda-functions-35a6</guid>
      <description>&lt;p&gt;Building with a serverless mindset brings &lt;a href="https://aws.amazon.com/serverless/" rel="noopener noreferrer"&gt;many benefits&lt;/a&gt;, from high availability to resiliency, pay for value, managed operational excellence, and many more.&lt;/p&gt;

&lt;p&gt;You can often achieve cost and performance improvements as well, with respect to more traditional computing platforms.&lt;/p&gt;

&lt;p&gt;At the same time, the best practices that allow you to design well-architected serverless applications have been evolving in the last five years. Many techniques have emerged such as avoiding "monolithic" functions, optimizing runtime dependencies, minifying code, filtering out uninteresting events, externalizing orchestration, etc. You can read about many of these practices in the &lt;a href="https://d1.awsstatic.com/whitepapers/architecture/AWS-Serverless-Applications-Lens.pdf" rel="noopener noreferrer"&gt;AWS Serverless Application Lens&lt;/a&gt; whitepaper (last update: Dec 2019).&lt;/p&gt;

&lt;p&gt;In this article, I'd like to dive deep into an optimization technique that I consider particularly useful as it doesn't require any code or architecture refactoring.&lt;/p&gt;

&lt;p&gt;I'm referring to optimizing the resource allocation of your Lambda functions. &lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Lambda resource allocation (power)
&lt;/h2&gt;

&lt;p&gt;You can allocate memory to each individual Lambda function, from 128MB up to 3GB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flyps6exknud5em8en36a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flyps6exknud5em8en36a.png" alt="AWS Lambda memory handler" width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before you stop reading because "who cares about memory utilization?" let me clarify that it's much more appropriate to talk about &lt;strong&gt;power&lt;/strong&gt; rather than memory. Because with more memory &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html#function-configuration" rel="noopener noreferrer"&gt;also comes more CPU, I/O throughput, etc&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So for the rest of this article, I'm going to call it power. When I say "512MB of power" it will correspond to 512MB of memory for your Lambda function.&lt;/p&gt;

&lt;h3&gt;
  
  
  So why does it matter?
&lt;/h3&gt;

&lt;p&gt;It matters because more power means that your function might run faster. And with AWS Lambda, faster executions mean cheaper executions too. Since you are charged in 100ms intervals, reducing the execution time often reduces the average execution cost.&lt;/p&gt;

&lt;p&gt;For example, let's assume that by doubling the power of your Lambda function from 128MB to 256MB you could reduce the execution time from 310ms to 160ms. This way, you've reduced the billed time from 400ms to 200ms, achieving a 49% performance improvement for the same cost. If you double the power again to 512MB, you could reduce the execution time even further from 160ms to 90ms. So you've halved the billed time again, from 200ms to 100ms, achieving another 44% performance improvement. In total, that's a 71% performance improvement, without changing a single line of code, for the very same cost.&lt;/p&gt;
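&lt;p&gt;You can verify this arithmetic with a few lines of Python, assuming 2020’s 100ms billing granularity and a cost proportional to allocated memory:&lt;/p&gt;

```python
MS_PER_BILLING_UNIT = 100  # Lambda was billed in 100ms increments in 2020

def billed_ms(duration_ms):
    # Round the duration up to the next billing interval.
    units = -(-duration_ms // MS_PER_BILLING_UNIT)  # ceiling division
    return units * MS_PER_BILLING_UNIT

def relative_cost(memory_mb, duration_ms):
    # Cost is proportional to allocated memory times billed duration.
    return memory_mb * billed_ms(duration_ms)

for mb, ms in [(128, 310), (256, 160), (512, 90)]:
    print(mb, "MB:", billed_ms(ms), "ms billed, relative cost", relative_cost(mb, ms))
# → all three configurations have the same relative cost: 51200
```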

&lt;p&gt;I understand these numbers are quite hard to parse and visualize in your mind, so here's a chart:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyoqepvrl1jjwqhqvmbrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyoqepvrl1jjwqhqvmbrr.png" alt="Cost/Performance example" width="591" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The blue line represents our average execution time: 49% lower at 256MB and 71% lower at 512MB. Since Lambda's cost per 100ms is proportional to the allocated memory, we'd expect each doubling of power to double the cost. But because the billed time drops to 200ms and 100ms respectively, the orange line (cost) stays constant.&lt;/p&gt;

&lt;h3&gt;
  
  
  What if I don't need all that memory?
&lt;/h3&gt;

&lt;p&gt;It doesn't matter how much memory you need. This is the counterintuitive part, especially if you come from a more traditional way of thinking about cost and performance.&lt;/p&gt;

&lt;p&gt;Typically, over-provisioning memory means you're wasting resources. But remember, here memory means power 🚀&lt;/p&gt;

&lt;p&gt;Our function might need only 50MB of memory to run correctly, and yet we will allocate 512MB so it runs faster for the same money. In other cases, your function might even become faster AND cheaper.&lt;/p&gt;

&lt;h2&gt;
  
  
  Great, but how do I verify this in practice?
&lt;/h2&gt;

&lt;p&gt;I asked myself the very same question in 2017. One day (on March 27th, around 6 PM CEST), I started working on automating this power-tuning process so my team and I could finally make data-driven decisions instead of guessing.&lt;/p&gt;

&lt;p&gt;Meet AWS Lambda Power Tuning: &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning" rel="noopener noreferrer"&gt;github.com/alexcasalboni/aws-lambda-power-tuning&lt;/a&gt; 🎉&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS Lambda Power Tuning is an open-source tool that helps you visualize and fine-tune the power configuration of Lambda functions.&lt;/p&gt;

&lt;p&gt;It runs in your AWS account - powered by AWS Step Functions - and it supports multiple optimization strategies and use cases.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The tool will execute a given Lambda function a few times, parse the logs, crunch some numbers, and return the optimal power configuration.&lt;/p&gt;

&lt;p&gt;This process completes in a reasonable time because there is only one dimension to optimize. Today there are 46 different power values to choose from, and the tool lets you select which values you want to test. In most cases, you can also afford to run all the executions in parallel, so the whole process takes only a few seconds - depending on your function's average duration.&lt;/p&gt;

&lt;p&gt;Here's what you need to get started with Lambda Power Tuning:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Deploy the power tuning app&lt;/strong&gt; &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:451282441545:applications~aws-lambda-power-tuning" rel="noopener noreferrer"&gt;via Serverless Application Repository (SAR)&lt;/a&gt; - there are other deployment options &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning/blob/master/README-DEPLOY.md" rel="noopener noreferrer"&gt;documented here&lt;/a&gt; (for example, the &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning/blob/master/README-DEPLOY.md#option-4-deploy-with-the-lumigo-cli" rel="noopener noreferrer"&gt;Lumigo CLI&lt;/a&gt; or the &lt;a href="https://github.com/mattymoomoo/aws-power-tuner-ui" rel="noopener noreferrer"&gt;Lambda Power Tuner UI&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run the state machine&lt;/strong&gt; via the web console or API - here's where you provide your function's ARN and a few more options&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wait for the execution results&lt;/strong&gt; - you'll find the optimal power here&lt;/li&gt;
&lt;li&gt;You also get a handy &lt;strong&gt;visualization URL&lt;/strong&gt; - this is how you'll find the sweet spot visually before you fully automate the process&lt;/li&gt;
&lt;/ol&gt;
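&lt;p&gt;If you prefer to script step 2 instead of clicking through the web console, here's a minimal sketch using boto3 - the ARNs are placeholders and the helper names are mine, not part of the tool:&lt;/p&gt;

```python
import json


def build_tuning_input(lambda_arn, num=50, **options):
    # assemble the state machine input document (lambdaARN and num are required)
    doc = {"lambdaARN": lambda_arn, "num": num}
    doc.update(options)
    return doc


def start_tuning(state_machine_arn, tuning_input):
    # kick off the Step Functions execution and return its ARN
    import boto3  # AWS SDK for Python
    sfn = boto3.client("stepfunctions")
    response = sfn.start_execution(
        stateMachineArn=state_machine_arn,
        input=json.dumps(tuning_input),
    )
    return response["executionArn"]


# usage (with placeholder ARNs):
# start_tuning(
#     "arn:aws:states:us-east-1:123456789012:stateMachine:powerTuningStateMachine",
#     build_tuning_input(
#         "arn:aws:lambda:us-east-1:123456789012:function:my-function",
#         parallelInvocation=True,
#     ),
# )
```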

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frhv9kc9s4of1q8bxcu16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frhv9kc9s4of1q8bxcu16.png" alt="Step Function execution example" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I find the sweet spot visually?
&lt;/h3&gt;

&lt;p&gt;Let's have a look at the two examples below.&lt;/p&gt;

&lt;p&gt;The red curve is always (avg) execution time, while the blue curve is always (avg) execution cost.&lt;/p&gt;

&lt;p&gt;In both cases, I'm checking six common power values: 128MB, 256MB, 512MB, 1GB, 1.5GB, and 3GB.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example 1
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea10pg3uya2fv4ys2rx9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea10pg3uya2fv4ys2rx9.jpg" alt="Example 1" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, I'm power-tuning a long-running and CPU-intensive function. It runs in 35 seconds at 128MB and in about 3 seconds at 1.5GB. The cost curve is pretty flat and decreases a bit until 1.5GB, then increases at 3GB.&lt;/p&gt;

&lt;p&gt;The optimal power value is 1.5GB because it's 11x faster and 14% cheaper than 128MB.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example 2
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3egw8olmpq5mlye7ggzi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3egw8olmpq5mlye7ggzi.jpg" alt="Example 2" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The average execution time goes from 2.4 seconds at 128MB to about 300ms at 1GB. At the same time, cost stays precisely the same. So we run 8x faster for the same cost.&lt;/p&gt;
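&lt;p&gt;This "same cost, much faster" outcome is exactly what the pricing formula predicts: Lambda charges for allocated memory multiplied by billed duration, so if growing the power shrinks the duration proportionally, the product - and the cost - stays constant. A quick sanity check in Python (the per-GB-second price is the historical x86 rate, used here only for illustration):&lt;/p&gt;

```python
PRICE_PER_GB_SECOND = 0.0000166667  # historical rate, illustrative only


def invocation_cost(memory_mb, duration_s):
    # cost of one invocation: GB allocated x seconds billed x unit price
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_SECOND


# Example 2 above: ~2.4s at 128MB vs ~300ms at 1GB
cost_128 = invocation_cost(128, 2.4)    # 0.3 GB-seconds
cost_1024 = invocation_cost(1024, 0.3)  # 0.3 GB-seconds
speedup = 2.4 / 0.3                     # 8x faster, same GB-seconds billed
```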

&lt;h3&gt;
  
  
  Before we proceed with more examples...
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt;: we may not need 1GB or 1.5GB of memory to run the two functions above, but it doesn't matter because in both cases we get much better performance for similar (or even lower) cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Also note&lt;/strong&gt;: if you are a data geek like me, you've probably noticed two more things to remember when interpreting these charts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The two y-axes (speed and cost) are independent of each other, so the point where the two curves cross each other is not necessarily the optimal value.&lt;/li&gt;
&lt;li&gt;Don't assume that untested power values (e.g. 768MB) correspond to the curve's interpolated value - testing additional power values in between might reveal unexpected patterns.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What does the state machine input/output look like?
&lt;/h3&gt;

&lt;p&gt;Here's the minimal input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lambdaARN&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;your-lambda-function-arn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;num&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But I highly encourage you to check out some of the other input options too (full documentation &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning/blob/master/README-INPUT-OUTPUT.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lambdaARN&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;your-lambda-function-arn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;num&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;parallelInvocation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;payload&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;your&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;payload&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;powerValues&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1536&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3008&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;strategy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;speed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dryRun&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For special use cases - for example, when you need to power-tune functions with side-effects or varying payloads - you can provide weighted payloads or pre/post-processing functions.&lt;/p&gt;
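&lt;p&gt;For example, a weighted-payload input might look like the sketch below - the events and weights are made up, so check the input/output documentation for the exact format:&lt;/p&gt;

```python
import json

# hypothetical traffic mix: ~80% reads, ~20% writes
weighted_input = {
    "lambdaARN": "your-lambda-function-arn",
    "num": 50,
    "payload": [
        {"payload": {"action": "read"}, "weight": 40},
        {"payload": {"action": "write"}, "weight": 10},
    ],
}

# weights are relative: 40 / (40 + 10) = 80% of invocations use the "read" event
total_weight = sum(p["weight"] for p in weighted_input["payload"])
read_share = weighted_input["payload"][0]["weight"] / total_weight

print(json.dumps(weighted_input, indent=2))
```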

&lt;p&gt;Here's what the output will look like (full documentation &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning/blob/master/README-INPUT-OUTPUT.md#state-machine-output" rel="noopener noreferrer"&gt;here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;results&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;power&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;512&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cost&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.0000002083&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;duration&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;2.906&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;stateMachine&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;executionCost&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.00045&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lambdaCost&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.0005252&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;visualization&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://lambda-power-tuning.show/#&amp;lt;encoded_data&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;power&lt;/code&gt;, &lt;code&gt;cost&lt;/code&gt;, and &lt;code&gt;duration&lt;/code&gt; represent the optimal power value and its corresponding average cost and execution time.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;stateMachine&lt;/code&gt; contains details about the state machine execution itself, such as the costs related to Step Functions and Lambda. This information is particularly useful if you want to keep track of optimization costs without surprises - although typically we are talking about $0.001 for the whole execution (excluding additional costs that your function might generate by invoking downstream services).&lt;/p&gt;

&lt;p&gt;Last but not least, you'll find the visualization URL (under lambda-power-tuning.show), an &lt;a href="https://github.com/matteo-ronchetti/aws-lambda-power-tuning-ui" rel="noopener noreferrer"&gt;open-source static website&lt;/a&gt; hosted on AWS Amplify Console. If you don't visit that URL, nothing happens. And even when you do visit it, there is absolutely no data sharing with any external server or service. The &lt;code&gt;&amp;lt;encoded_data&amp;gt;&lt;/code&gt; mentioned above contains only the raw numbers needed for client-side visualization, without any additional information about your Account ID, function name, or tuning parameters. You are also free to build your own custom visualization website and provide it at deploy-time as a CloudFormation parameter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Show me more examples, please!
&lt;/h2&gt;

&lt;p&gt;Depending on what your function is doing, you'll find completely different cost/performance patterns. With time, you'll be able to identify at first glance which functions might benefit the most from power-tuning and which aren't likely to benefit much.&lt;/p&gt;

&lt;p&gt;I encourage you to build solid hands-on experience with some of the patterns below, so you'll learn how to categorize your functions intuitively while coding/prototyping. Until you reach that level of experience - and considering the low effort and cost required - I'd recommend power-tuning every function and playing a bit with the results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost/Performance patterns
&lt;/h3&gt;

&lt;p&gt;I've prepared a shortlist of 6 patterns you may encounter with your functions.&lt;/p&gt;

&lt;p&gt;Let's have a look at some sample Lambda functions and their corresponding power-tuning results. If you want to deploy all of them, you'll find the &lt;a href="https://gist.github.com/alexcasalboni/9ce2cef56a7d052d4f5e798b37083525" rel="noopener noreferrer"&gt;sample code and SAM template here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  1) The No-Op (trivial data manipulation)
&lt;/h4&gt;

&lt;p&gt;When I say No-op functions, I mean functions that do very little, and they are more common than you might think. It happens pretty often that a Lambda function is invoked by other services to customize their behavior, and all you need is some trivial data manipulation. Maybe a couple of &lt;code&gt;if&lt;/code&gt;'s or a simple format conversion - no API calls or long-running tasks.&lt;/p&gt;

&lt;p&gt;Here's a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NOOP&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;something&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;KO&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;KO&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;output&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This kind of function will never exceed 100ms of execution time. Therefore, we expect its average cost to increase linearly with power.&lt;/p&gt;
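&lt;p&gt;The reasoning is simple: a sub-100ms function always bills the same duration (one 100ms unit, given the billing granularity at the time of writing), so the only variable left in the cost formula is memory. A quick sketch, with an illustrative price:&lt;/p&gt;

```python
import math

PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate
BILLING_UNIT_S = 0.1                # 100ms billing increments, as at the time of writing


def billed_cost(memory_mb, duration_s):
    # round the duration up to the billing unit, then apply the GB-second price
    billed_s = math.ceil(duration_s / BILLING_UNIT_S) * BILLING_UNIT_S
    return (memory_mb / 1024) * billed_s * PRICE_PER_GB_SECOND


# any sub-100ms no-op bills exactly one unit...
c128 = billed_cost(128, 0.005)  # 5ms -> billed as 100ms
c256 = billed_cost(256, 0.002)  # 2ms -> billed as 100ms
# ...so doubling the memory exactly doubles the cost
```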

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#gAAAAQACAAQABsAL;upCOQJsLyT/AEco/TPC4P6uqwj+2870/;m1ZfNJtW3zSbVl81m1bfNfSAJzaaA6Q2" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2ape48ccfbwen577i7v5.png" alt="Example - No-Op" width="800" height="410"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;Even though there is no way to make no-op functions cheaper, sometimes you can make them run 3-5x faster. In this case, it might be worth considering 256MB of power, so it runs in less than 2ms instead of 5ms. If your function is doing something more than a simple &lt;code&gt;if&lt;/code&gt;, you might see a more significant drop - for example, from 30ms to 10ms.&lt;/p&gt;

&lt;p&gt;Does it make sense to pay a bit more just to run 20ms faster? It depends :) &lt;/p&gt;

&lt;p&gt;If your system is composed of 5-10 microservices that need to talk to each other, shaving 20ms off each microservice might allow you to speed up the overall API response by a perceivable amount, resulting in a better UX.&lt;/p&gt;

&lt;p&gt;On the other hand, if this function is entirely asynchronous and does not impact your final users' experience, you probably want to make it as cheap as possible (128MB).&lt;/p&gt;

&lt;h4&gt;
  
  
  2) The CPU-bound (numpy)
&lt;/h4&gt;

&lt;p&gt;This function requires &lt;a href="https://numpy.org/" rel="noopener noreferrer"&gt;numpy&lt;/a&gt;, a very common Python library for scientific computing - which is available as an official &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html" rel="noopener noreferrer"&gt;Lambda layer&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

&lt;span class="c1"&gt;# make this execution reproducible
&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;seed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# create a random matrix (1500x1500)
&lt;/span&gt;    &lt;span class="n"&gt;matrix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1500&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# invert it (this is CPU-intensive!)
&lt;/span&gt;    &lt;span class="n"&gt;inverted_matrix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;linalg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matrix&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inverted_matrix&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function creates a random matrix (1500 rows, 1500 columns) and then inverts it.&lt;/p&gt;

&lt;p&gt;So we are talking about a very CPU-intensive process that requires almost 10 seconds with only 128MB of power.&lt;/p&gt;

&lt;p&gt;The good news is that it will run &lt;em&gt;much&lt;/em&gt; faster with more memory. How much faster? Check the chart below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#gAAAAQACAAQABsAL;AJYTRjPalEUodxdFpKyZRKRATkT59eND;R8KlN/SApzepe643Xna1NxNxvDeABM03" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffhzn12errp0ccphgjtvj.png" alt="Example - numpy" width="800" height="412"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;Yes, it will run almost 21x faster (2100%) with 3GB of power. And that's for a cost increase of only 23%.&lt;/p&gt;

&lt;p&gt;Let me repeat that: we can run this function in 450ms instead of 10 seconds if we're happy about paying 23% more.&lt;/p&gt;

&lt;p&gt;If you can't afford a 23% cost increase, you can still run 2x faster for a 1% cost increase (256MB). Or 4x faster for a 5% cost increase (512MB). Or 7x faster for a 9% cost increase (1GB).&lt;/p&gt;

&lt;p&gt;Is it worth it? It depends :)&lt;/p&gt;

&lt;p&gt;If you need to expose this as a synchronous API, you probably want it to run in less than a second.&lt;/p&gt;

&lt;p&gt;If it's just part of some asynchronous ETL or ML training, you might be totally fine with 5 or 10 seconds.&lt;/p&gt;

&lt;p&gt;The important bit is that this data will help you find the optimal trade-off for your specific use case and make an informed decision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: the numbers above do not take cold starts into consideration. By default, Lambda Power Tuning ignores cold executions, so none of these averages are biased. This lets you reason about the vast majority of (warm) executions.&lt;/p&gt;
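&lt;p&gt;The filtering idea is straightforward: an invocation that reports an init phase is a cold start, so it's excluded before averaging. A simplified sketch (the real tool parses CloudWatch REPORT log lines; the field names here are made up):&lt;/p&gt;

```python
def warm_average_ms(executions):
    # keep only warm invocations (no init phase reported) and average their duration
    warm = [e["duration_ms"] for e in executions if e.get("init_ms") is None]
    if not warm:
        raise ValueError("no warm executions to average")
    return sum(warm) / len(warm)


executions = [
    {"duration_ms": 950, "init_ms": 240},  # cold start, discarded
    {"duration_ms": 310, "init_ms": None},
    {"duration_ms": 290, "init_ms": None},
]
# warm_average_ms(executions) -> 300.0
```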

&lt;h4&gt;
  
  
  3) The CPU-bound (prime numbers)
&lt;/h4&gt;

&lt;p&gt;Let's consider another long-running function. This one also uses numpy, to compute all the prime numbers below 1M, one thousand times in a row.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# do the same thing 1k times in a row
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# compute the first 1M prime numbers
&lt;/span&gt;        &lt;span class="n"&gt;primes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;compute_primes_up_to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;compute_primes_up_to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# this is the fastest single-threaded algorithm I could find =)
&lt;/span&gt;    &lt;span class="c1"&gt;# from https://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n-in-python/3035188#3035188
&lt;/span&gt;    &lt;span class="n"&gt;sieve&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ones&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;sieve&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;sieve&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
            &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
            &lt;span class="n"&gt;sieve&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;   &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      &lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
            &lt;span class="n"&gt;sieve&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)::&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;r_&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,((&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;nonzero&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sieve&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function takes almost 35 seconds to run with only 128MB of power. &lt;/p&gt;

&lt;p&gt;But good news again! We can make it run &lt;em&gt;much&lt;/em&gt; &lt;em&gt;much&lt;/em&gt; faster with more memory. How much faster? Check the chart below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#gAAAAQACAAQABsAL;CREHR2+bhEblnwBG2dduRfagGEW2lPtE;h+2WOINPlDgo0pA4xhiIOL/cgji6RNc4" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4cd55k9wfh94h80vv2w2.png" alt="Example - prime numbers" width="800" height="410"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;Yes, it will run more than 14x faster (1400%) with 1.5GB of power. And that's with a cost &lt;em&gt;DECREASE&lt;/em&gt; of 13.9%.&lt;/p&gt;

&lt;p&gt;Let me repeat that: we can run this function in 2 seconds instead of 35 seconds, while at the same time we make it cheaper to run.&lt;/p&gt;

&lt;p&gt;We could make it even faster (17x faster instead of 14x) with 3GB of power, but unfortunately the algorithm I found on StackOverflow cannot leverage multi-threading well enough (you get two cores above 1.8GB of power), so we'd end up spending 43% more.&lt;/p&gt;

&lt;p&gt;This could make sense in some edge cases, but I'd still recommend sticking to 1.5GB.&lt;/p&gt;

&lt;h5&gt;
  
  
  Unless...
&lt;/h5&gt;

&lt;p&gt;Unless there's an even better power value between 1.5GB and 3GB. We aren't testing all the possible power values - we're trying only six of them, just because they are easy to remember.&lt;/p&gt;

&lt;p&gt;What happens if we test all the possible values? We know that our best option is 1.5GB for now, but we might find something even better (faster and cheaper) if we increase the granularity around it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lambdaARN&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;your-lambda-function-arn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;....&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;powerValues&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ALL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;....&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what happens if you test all the possible values:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#gADAAAABQAGAAcABAAJAAoACwAIAA0ADgAPAAwAEQASABMAEAAVABYAFwAUABkAGgAbABgAHQAeAB8AHAAhACIAIwAgACUAJgAnACQAKQAqACsAKAAtAC4ALwAs=;1TcGR17qs0ZBcoRG9qRdRkBbNkbRVRVGXRIARprU30WaBMlFUSC3RY3ApUVrhpdFWrmKRWCvgEWF83BFT31jRXGPVEVu7UdFq4o8RSBGMkVyCCpF+4MhRZJIG0UqKRVFkPEPRa8+D0Uy8AZFUKIBRbdb/kSPVgBFwK//RPEE/kRbuP1Eagv+RBxz/ETf5PxEYR/9RDS+/ETqsvxEByr9RPFw/ERW0PpE0NX8RPJX+0SoJPxEAVv7RA==;MA6WOFwllziDT5Q4jNqaON8bmTjWkJI4exOPOM5UjTh5xI04pIyNOM5UjTig7oo4yGeJOMhniTjGGIg48y+JOMhniThsm4Q4cDmHOELThDgZWoY4E22COL/cgji/3II4xhiIOEVxhzgZWoY4QtOEOMhniThP/I041pCSOFwllzjjuZs4ak6gOPDipDh3d6k4/guuOISgsjgLNbc4ksm7OBhewDif8sQ4JofJOKwbzjgzsNI4ukTXOA==" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbrlt3s7krr1hyu1w0adr.png" alt="Example - prime numbers granular" width="800" height="410"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;It turns out the (global) sweet spot is 1.8GB - which allows us to run 16x faster and 12.5% cheaper.&lt;/p&gt;

&lt;p&gt;Or we could pick 2112MB - which is 17x faster for the same cost as 128MB (still 20ms slower than 3GB, but at a better average cost).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt;: when you see an increasing or decreasing trend (cost or speed), it's likely to continue for power values you aren't testing. Generally, I'd suggest increasing your tuning granularity to find globally optimal values.&lt;/p&gt;
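If testing every possible value feels excessive, the tuner's input also accepts an explicit list of power values, so you can zoom in around the current sweet spot. Here's a sketch in Python that builds such an input; the field names follow the configuration shown earlier, and the ARN is a placeholder:

```python
import json

# Zoom in around the current best value (1.5GB) in 64MB steps,
# up to the 3GB maximum available at the time of writing.
power_values = list(range(1280, 3009, 64))

tuning_input = {
    "lambdaARN": "your-lambda-function-arn",  # placeholder, as above
    "powerValues": power_values,  # explicit list instead of "ALL"
    "num": 10,  # invocations per power value
}

print(json.dumps(tuning_input, indent=2))
```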

&lt;h4&gt;
  
  
  4) The Network-bound (3rd-party API)
&lt;/h4&gt;

&lt;p&gt;Let's move on to our first network-bound example. This function interacts with an external API that's public and not hosted on AWS: &lt;a href="https://swapi.dev" rel="noopener noreferrer"&gt;The Star Wars API&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urllib.request&lt;/span&gt;

&lt;span class="c1"&gt;# this is my (public) third-party API
&lt;/span&gt;&lt;span class="n"&gt;URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://swapi.dev/api/people/?format=json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# prepare request
&lt;/span&gt;    &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;urllib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# fetch and parse JSON
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;urllib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urlopen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;json_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

    &lt;span class="c1"&gt;# extract value from JSON response
&lt;/span&gt;    &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json_response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;count&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;count&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function performs a GET request to fetch the number of characters available via the Star Wars API (we could have used the official &lt;a href="https://swapi-python.readthedocs.io/en/latest/readme.html" rel="noopener noreferrer"&gt;swapi-python library&lt;/a&gt; for a higher-level interface, but that wasn't the point).&lt;/p&gt;

&lt;p&gt;As we could have predicted, this external API's performance isn't impacted at all by the power of our Lambda function. Even though additional power means more I/O throughput, we are only fetching 5KB of data, so most of the execution time is spent waiting for the response, not transferring data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#gAAAAQACAAQABsAL;S14yRAeSHkTGNhVEACwbRCYLFESWdBNE;m1bfNcdrQzb0gKc2x2tDN25BezdnBfY3" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4xqbvg34qmthg6y6mnu6.png" alt="Example - API" width="800" height="411"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;The red curve above is pretty flat and the blue curve is always increasing, which means we cannot do much to speed up this function or make it cheaper.&lt;/p&gt;

&lt;p&gt;We might save 50-100 milliseconds with additional power, but usually that's not enough to reduce the cost or keep it constant.&lt;/p&gt;

&lt;p&gt;In this case, we can run a little bit faster with 256MB or 512MB of power - up to 16% faster if we're happy to triple the average execution cost.&lt;/p&gt;

&lt;p&gt;Is it worth it? It depends.&lt;/p&gt;

&lt;p&gt;If your monthly Lambda bill is something like $20, how do you feel about bumping it to $60 to run a customer-facing function 15-20% faster? I'd think about it.&lt;/p&gt;

&lt;p&gt;If it's not a customer-facing API, I'd stick to 128MB and make it as cheap as possible. And there might be other factors at play when it comes to third-party APIs. For example, you may need to comply with some sort of rate-limiting; if you're performing batches of API calls in series, a function that runs slower might be a good thing.&lt;/p&gt;
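For example, if the third-party API only allows a few requests per second, you can pace a batch of calls explicitly instead of relying on a slow function to do it for you. This is a minimal sketch of the idea; the helper and the dummy callables are illustrative, not part of the original example:

```python
import time

def call_with_pacing(calls, min_interval_s=0.5):
    """Invoke each callable in turn, enforcing a minimum interval between calls."""
    results = []
    last_call = None
    for call in calls:
        if last_call is not None:
            elapsed = time.monotonic() - last_call
            if elapsed < min_interval_s:
                time.sleep(min_interval_s - elapsed)  # wait out the remainder
        last_call = time.monotonic()
        results.append(call())
    return results

# e.g. three dummy "API calls", at most one every 10ms
results = call_with_pacing([lambda: 1, lambda: 2, lambda: 3], min_interval_s=0.01)
```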

&lt;h4&gt;
  
  
  5) The Network-bound (3x DynamoDB queries)
&lt;/h4&gt;

&lt;p&gt;This pattern is pretty common: it appears every time our function uses the AWS SDK to invoke a few AWS services and coordinate some business logic. We are still talking about a network-bound function, but it shows a different pattern. In this case, we are performing three &lt;code&gt;dynamodb:GetItem&lt;/code&gt; queries in sequence, but the same pattern holds with other services such as SNS or SQS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;dynamodb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dynamodb&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# three identical queries in series
&lt;/span&gt;    &lt;span class="c1"&gt;# this is just an example
&lt;/span&gt;    &lt;span class="c1"&gt;# usually you'd have 3 different queries :)
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;TableName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-table&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;S&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;test-id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Item&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are talking about AWS services, quite likely operating in the same AWS region, so our API calls won't leave the data center at all.&lt;/p&gt;

&lt;p&gt;Surprisingly, we can make this function run much faster with additional power: this pattern is very similar to the first example we analyzed at the beginning of this article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#gAAAAQACAAQABsAL;ALCnQ8ZyH0OkcDNCRIszQovsDkKJiN1B;m1ZfNZtWXzWbVl81m1bfNfSAJzaaA6Q2" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc1uy8ygprevz6g35r9hx.png" alt="Example - DDB" width="800" height="409"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;The function runs in about 350ms at 128MB, 160ms at 256MB, and 45ms at 512MB.&lt;/p&gt;

&lt;p&gt;In practice, every time we double its power we also halve the billed time, resulting in a constant cost up to 512MB.&lt;/p&gt;

&lt;p&gt;After that, we cannot make it cheaper, so 512MB is our sweet spot.&lt;/p&gt;

&lt;p&gt;But we could get an additional 40% performance improvement (28ms execution time) at 3GB, if we are ready to pay 6x more. As usual, this tradeoff is up to you and it depends on your business priorities. My suggestion is to adopt a data-driven mindset and evaluate your options case by case.&lt;/p&gt;
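The 6x figure also hides a detail worth knowing: at the time of writing, Lambda billed duration in 100ms increments, so a 28ms execution at 3GB was billed as a full 100ms. Here's a quick sketch of that math (the 100ms increment is an assumption about the billing model of the time, not something measured by the tuner):

```python
import math

def billed_gb_seconds(memory_mb, duration_s, increment_s=0.1):
    """Billed GB-seconds: duration rounds up to the billing increment."""
    billed_s = math.ceil(duration_s / increment_s) * increment_s
    return (memory_mb / 1024) * billed_s

# 3GB (3008MB) for 28ms vs 512MB for 45ms - both round up to 100ms billed
ratio = billed_gb_seconds(3008, 0.028) / billed_gb_seconds(512, 0.045)
```

The ratio comes out just shy of 6x, even though the raw (unrounded) GB-seconds differ by less than 4x.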

&lt;h4&gt;
  
  
  6) The Network-bound (S3 download - 150MB)
&lt;/h4&gt;

&lt;p&gt;This is not a very common pattern, as downloading large objects from S3 is not a typical requirement. But sometimes you really need to download a large image/video or a machine learning model, either because it wouldn't fit in your deployment package or because you receive a reference to it in the input event for processing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;s3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# from the Amazon Customer Reviews Dataset
# https://s3.amazonaws.com/amazon-reviews-pds/readme.html
&lt;/span&gt;&lt;span class="n"&gt;BUCKET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;amazon-reviews-pds&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;tsv/amazon_reviews_us_Watches_v1_00.tsv.gz&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;LOCAL_FILE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/tmp/test.gz&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# download 150MB (single thread)
&lt;/span&gt;    &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;download_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LOCAL_FILE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LOCAL_FILE&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;st_size&lt;/span&gt;

    &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
    &lt;span class="n"&gt;unit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MB&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
        &lt;span class="n"&gt;unit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GB&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

    &lt;span class="c1"&gt;# print "Downloaded 150MB"
&lt;/span&gt;    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Downloaded %s%s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;unit&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because we are trying to store a lot of data in memory, we won't test lower memory configurations such as 128MB and 256MB.&lt;/p&gt;

&lt;p&gt;At first glance, the cost/performance pattern looks quite similar to our first network-bound example: additional power doesn't seem to improve performance. Execution time is pretty flat around 5 seconds, therefore cost always increases proportionally to the allocated power (at 3GB it's almost 5x more expensive on average).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#AAIABAAGAAgACsAL;QTvQRfV6n0WG0JxFXVmVRQHTmkVs+5tF;qs5pOF52tTjDegU59IAnOZMaWjnQIoA5" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Futnb23hfh7iq97jyq2ta.png" alt="Example - S3" width="800" height="408"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;From this chart, it looks like we can't do much to improve cost and performance. If we go for 1GB of power, we'll run 23% faster for a cost increase of 55%.&lt;/p&gt;

&lt;h5&gt;
  
  
  Can we do better than this?
&lt;/h5&gt;

&lt;p&gt;Good news: this kind of workload will run much faster with a very simple code change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# download 150MB from S3 with 10 threads
&lt;/span&gt;&lt;span class="n"&gt;transfer_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transfer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;TransferConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;download_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LOCAL_FILE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;transfer_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the new code above, we're simply providing a custom &lt;code&gt;TransferConfig&lt;/code&gt; object to enable multi-threading.&lt;/p&gt;

&lt;p&gt;Now the whole process will complete a lot faster by parallelizing the file download with multiple threads, especially since we get two cores above 1.8GB of power.&lt;/p&gt;

&lt;p&gt;Here's the new cost/performance pattern:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lambda-power-tuning.show/#AAIABAAGAAgACsAL;dSWPRX9gKkWOaQhFDiD4RI1060RI9ZJE;P4YgOMdrQzhQUWY4IZaLOEfCpThnBXY4" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7x5unxaqz54xjt9vvhzc.png" alt="Example - S3 multi-thread" width="800" height="410"&gt;&lt;/a&gt; (click on the image to open the interactive visualization)&lt;/p&gt;

&lt;p&gt;Not only is the pattern very different, but the absolute numbers are much better too. We run in 4.5 seconds at minimum power (which is already 10% faster than what we could do before at maximum power). But then it gets even better: we run another 40% faster at 1GB for a cost increase of 23%.&lt;/p&gt;

&lt;p&gt;Surprisingly, we run almost 4x faster (1.1 seconds) at 3GB of power. And that's for the same cost (+5%) with respect to the single-threaded code at 512MB.&lt;/p&gt;

&lt;p&gt;Let me rephrase it: adding one line of code allowed us to run 4x faster for the same cost.&lt;/p&gt;

&lt;p&gt;And if performance didn't matter in this use case, the same change would allow us to make this function 45% faster &lt;strong&gt;AND&lt;/strong&gt; 47% cheaper with minimum power (512MB).&lt;/p&gt;

&lt;p&gt;I believe this is also an interesting example where picking a specific programming language might result in better performance without using additional libraries or dependencies (note: you can achieve the same in Java with the &lt;a href="https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html" rel="noopener noreferrer"&gt;TransferManager utility&lt;/a&gt; or in Node.js with the &lt;a href="https://github.com/awslabs/s3-managed-download-js" rel="noopener noreferrer"&gt;S3 Managed Download module&lt;/a&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;We've dived deep into the benefits of power-tuning for AWS Lambda: it helps you optimize your Lambda functions for performance or cost. Sometimes both.&lt;/p&gt;

&lt;p&gt;Remember that memory means power and there is no such thing as over-provisioning memory with AWS Lambda. There is always an optimal value that represents the best trade-off between execution time and execution cost.&lt;/p&gt;

&lt;p&gt;I've also introduced a mental framework to think in terms of workload categories and cost/performance patterns, so you'll be able to predict what pattern applies to your function while you are coding it. This will help you prioritize which functions might be worth optimizing and power-tuning.&lt;/p&gt;

&lt;p&gt;AWS Lambda Power Tuning is open-source and very cheap to run. It will provide the information and visualizations you need to make a data-driven decision.&lt;/p&gt;

&lt;p&gt;Thanks for reading, and let me know if you find new exciting patterns when power-tuning your functions.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>programming</category>
      <category>serverless</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to live-stream meetups on Twitch without any special equipment 🚀</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Mon, 16 Mar 2020 13:32:45 +0000</pubDate>
      <link>https://dev.to/aws/how-to-live-stream-meetups-on-twitch-without-any-special-equipment-56cb</link>
      <guid>https://dev.to/aws/how-to-live-stream-meetups-on-twitch-without-any-special-equipment-56cb</guid>
      <description>&lt;p&gt;&lt;strong&gt;Translations&lt;/strong&gt;: [&lt;a href="https://dev.to/hernangarcia/como-hacer-streaming-en-vivo-en-twitch-sin-ningun-equipo-especial-3cg4"&gt;Spanish&lt;/a&gt;]&lt;/p&gt;




&lt;p&gt;This is a true (fun) story that unfolded over the last 10 days. I hope it will be both useful and entertaining for everyone who wants to convert their in-person events into remote meetups on Twitch.&lt;/p&gt;

&lt;p&gt;TL;DR: direct link to technical &amp;amp; architectural details :)&lt;/p&gt;




&lt;p&gt;I'm a software engineer and AWS developer advocate based in Italy, and in the last few weeks I've been working from home because of COVID-19. I've been co-organizing the &lt;a href="https://www.meetup.com/serverless-italy/" rel="noopener noreferrer"&gt;serverless meetup&lt;/a&gt; in Milan since 2016 with a great team of four co-organizers.&lt;/p&gt;

&lt;p&gt;Last week we started experimenting with &lt;a href="//twitch.tv"&gt;Twitch&lt;/a&gt; as our platform of choice for live-streaming our upcoming meetups and giving our virtual attendees a chance to communicate and share doubts &amp;amp; ideas in the live chat. We already knew we should use &lt;a href="https://obsproject.com" rel="noopener noreferrer"&gt;OBS&lt;/a&gt; to handle the stream, personalize scenes, etc. - but didn't have much experience with it, so we ran a few tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test #1 - don't do this at home 💥
&lt;/h3&gt;

&lt;p&gt;Very naively, I tried to live-stream from my own laptop (MacBook) using OBS. Since OBS can't use any optimized encoder for live-streaming on macOS, I almost killed my laptop after only 8 minutes.&lt;/p&gt;

&lt;p&gt;So it was pretty clear that we also needed some kind of "special hardware". We asked around and were advised to run OBS on a PC (Windows) with better CPU performance and ideally a GPU as well.&lt;/p&gt;

&lt;p&gt;Because we wanted to allow multiple concurrent speakers, we understood pretty soon that we needed some sort of video call "bridge" and then we could simply stream that into Twitch. So I quickly set up a room on &lt;a href="https://aws.amazon.com/chime/" rel="noopener noreferrer"&gt;Amazon Chime&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test #2 — it worked, but low quality 👍👎
&lt;/h3&gt;

&lt;p&gt;Our co-organizer &lt;a href="https://twitter.com/Lanzone31" rel="noopener noreferrer"&gt;Simone&lt;/a&gt; live-streamed a Chime call from his office desktop PC (Windows) using OBS to capture the video stream from Chime. It worked quite well: both video and audio were streamed successfully while I shared my screen (PowerPoint + local IDE). However, since we were simply screencasting a Chime call (not a native video feed), the scene was pretty basic and the resulting video quality was too low to clearly see small text &amp;amp; code.&lt;/p&gt;

&lt;p&gt;So we were lost for a few hours, but another friend from Italy (thanks &lt;a href="https://twitter.com/GianArb" rel="noopener noreferrer"&gt;Gianluca&lt;/a&gt;!) told me that there was another option: since 2018, Skype supports the &lt;a href="https://en.wikipedia.org/wiki/Network_Device_Interface" rel="noopener noreferrer"&gt;NDI protocol&lt;/a&gt;, which allows you to feed the Skype video/audio streams directly into OBS as a native source and hopefully achieve better quality. This sounded promising so we decided to test it too.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test #3 — much better, but... 😱
&lt;/h3&gt;

&lt;p&gt;Simone and I created a new Skype room, installed the &lt;a href="https://obsproject.com/forum/resources/obs-ndi-newtek-ndi%E2%84%A2-integration-into-obs-studio.528/" rel="noopener noreferrer"&gt;NDI Plugin for OBS&lt;/a&gt; and ran another test from his office PC. It took a while to configure OBS correctly, but ultimately it worked pretty well and thanks to NDI the video quality was much better!&lt;/p&gt;

&lt;p&gt;So we went on, confirmed the two speakers, published the event on &lt;a href="https://www.meetup.com/Serverless-Italy/events/269256011/" rel="noopener noreferrer"&gt;Meetup.com&lt;/a&gt; and &lt;a href="https://www.linkedin.com/events/gioved-12marzo-serverlessontwitch-tv-22/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and couldn't wait to finally live-stream our first meetup online.&lt;/p&gt;

&lt;p&gt;But then the COVID-19 situation here in Italy escalated even more and overnight we found out that we were stuck at home for at least 3-4 weeks, which meant Simone couldn't reach his office and we were suddenly without a streaming PC 😱&lt;/p&gt;

&lt;h3&gt;
  
  
  Test #4 — a cloudy idea 💡
&lt;/h3&gt;

&lt;p&gt;This is the part of the story where my personal aversion for hardware and my job title become very useful: WE COULD USE THE CLOUD 💡☁️💡&lt;/p&gt;

&lt;p&gt;All we needed was a DaaS (Desktop as a Service) alternative to Simone's PC, so I decided to give &lt;a href="https://aws.amazon.com/workspaces/" rel="noopener noreferrer"&gt;Amazon Workspaces&lt;/a&gt; a try!&lt;/p&gt;

&lt;p&gt;Amazon Workspaces is a DaaS solution offered by AWS since 2014. It allows you to spin up Windows or Linux instances in the cloud and connect via a browser or a desktop client, available for macOS, Windows, and Linux. There are multiple bundles you can choose from depending on your performance requirements. Also, there is no up-front commitment and you can choose either the monthly or hourly billing option. For remote meetups, the hourly option (plus a much smaller monthly fee) is great because 3-4 hours per month should be more than enough for a couple of events.&lt;/p&gt;

&lt;p&gt;We knew that our performance requirements were quite high, so we started with a Windows PowerPro bundle — 8 vCPU, 32 GiB Memory for $7.25/month + $1.49/hour. This way, our first online meetup would cost us less than $10 and we were happy with that.&lt;/p&gt;

&lt;p&gt;The first test with Amazon Workspaces went better than expected: we managed to quickly install all the software on the cloud instance and launch a test stream in less than 30 minutes. Unfortunately, the lack of a GPU and video memory in the PowerPro bundle resulted in high frame loss and very high CPU usage, corresponding to lagging audio and video.&lt;/p&gt;

&lt;p&gt;We needed a more powerful bundle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test #5 — Workspaces Graphics to the rescue 💪
&lt;/h3&gt;

&lt;p&gt;The two most powerful bundles are Graphics and GraphicsPro (&lt;a href="https://aws.amazon.com/workspaces/pricing/" rel="noopener noreferrer"&gt;full list here&lt;/a&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Graphics&lt;/strong&gt; comes with 8 vCPU, 15 GiB Memory, 1 GPU, 4 GiB Video Memory - for $22/month + $1.75/hour&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GraphicsPro&lt;/strong&gt; comes with 16 vCPU, 122 GiB Memory, 1 GPU, 8 GiB Video Memory — for $66/month + $11.62/hour&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We were quite confident that a Graphics instance would be enough and still allow us to keep the monthly cost below $30 (versus $80+/month for the GraphicsPro instance). So we decided to give the Graphics instance a try.&lt;/p&gt;

&lt;p&gt;We ran the next test on Amazon Workspaces Graphics with the very same configuration and it worked great! No frame loss on Twitch, very good video and audio quality, and fairly low CPU usage on the instance.&lt;/p&gt;

&lt;p&gt;At this point we had only 24 hours to run a few additional tests before going live with &lt;a href="https://www.meetup.com/Serverless-Italy/events/269256011/" rel="noopener noreferrer"&gt;our first online meetup&lt;/a&gt; on Thursday, Mar 12th at 6.30 PM CET.&lt;/p&gt;

&lt;p&gt;So we ran a few more tests: multiple speakers sharing audio and video, as well as their screens with flawless take-over, a few backup scenes in OBS, and some background noise reduction filters. We were ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  We went live on Twitch without a hitch 🙌
&lt;/h3&gt;

&lt;p&gt;On Mar 12th we hosted two great speakers and welcomed more than 170 unique viewers during the stream of 1 hour and 42 minutes, with a peak of 109 concurrent viewers 🎉 (&lt;a href="https://www.youtube.com/watch?v=_Xzz2Se0L-8" rel="noopener noreferrer"&gt;recording here, in Italian 🇮🇹&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Thanks to the built-in Twitch chat we could also take live questions and host a quick Q&amp;amp;A at the end of the meetup, with the support of both co-organizers and speakers.&lt;/p&gt;

&lt;p&gt;Furthermore, as meetup organizers we loved this setup because our attendees didn't have to install any software or browser extensions. In fact, they didn't even have to sign up on Twitch to watch the stream.&lt;/p&gt;

&lt;p&gt;Last but not least, nobody depended on any special hardware which means we can organize as many events as we want, wherever we are in the world — as long as at least one of us can start the broadcast via the Workspaces client and connect to the video call to introduce the speakers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical and architectural details ⚙️
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpyc8ix8k9fotbf4hndb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpyc8ix8k9fotbf4hndb1.png" alt="Alt Text" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the final setup we used, with 2 co-organizers and 2 speakers connected to the bridge, plus one additional co-organizer connected to Workspaces to change scenes under the hood, and a fourth co-organizer helping with the Twitch chat moderation. You don't really need 4 people, but I'd say it really helps if you have at least 2.&lt;/p&gt;

&lt;p&gt;Here's a shortlist of technical steps you can follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an AWS Account&lt;/li&gt;
&lt;li&gt;Install the &lt;a href="https://clients.amazonworkspaces.com/" rel="noopener noreferrer"&gt;Workspaces client&lt;/a&gt; on your local machine&lt;/li&gt;
&lt;li&gt;Spin up the Workspaces instance — it will take a few minutes&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://obsproject.com/download" rel="noopener noreferrer"&gt;OBS (32-bit)&lt;/a&gt;, the &lt;a href="https://obsproject.com/forum/resources/obs-ndi-newtek-ndi%E2%84%A2-integration-into-obs-studio.528/" rel="noopener noreferrer"&gt;OBS NDI plugin&lt;/a&gt;, and Skype on the Workspaces instance&lt;/li&gt;
&lt;li&gt;Enable NDI in the Skype client 👉 Settings - Calling - Advanced - Allow NDI usage (&lt;a href="https://www.newtek.com/blog/tips/using-ndi-in-skype/" rel="noopener noreferrer"&gt;video instructions here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Configure OBS with scenes including audio &amp;amp; video from NDI source — this also takes a while but it's much easier if you do it during a video call so you can quickly see the audio/video preview in OBS&lt;/li&gt;
&lt;li&gt;Create a Twitch account&lt;/li&gt;
&lt;li&gt;Fetch the Twitch stream key from your account's settings, and configure it in OBS&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We also have a few lessons learned to share, so I recommend reading the tips &amp;amp; tricks section below before starting with your own setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips &amp;amp; Tricks for OBS &amp;amp; Amazon Workspaces
&lt;/h2&gt;

&lt;p&gt;The story above is a quick summary of the various steps which led us to the final solution. The list below contains some tips so you can achieve the same faster and without much frustration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You may need to request a service limit increase in your AWS account to be able to spin up 1 Graphics instance on Amazon Workspaces&lt;/li&gt;
&lt;li&gt;If you are using a Linux machine locally, you'll need to enable the &lt;a href="https://docs.aws.amazon.com/workspaces/latest/userguide/amazon-workspaces-linux-client.html" rel="noopener noreferrer"&gt;Workspaces Linux client&lt;/a&gt; in your AWS Console (it's just a checkbox)&lt;/li&gt;
&lt;li&gt;It looks like the OBS NDI plugin is incompatible with the 64-bit version of OBS, so make sure you've installed the 32-bit version if nothing works&lt;/li&gt;
&lt;li&gt;It's much easier if you have an independent Skype profile running on Workspaces (you can't join a call twice with the same profile)&lt;/li&gt;
&lt;li&gt;Remember to mute the Workspaces machine in the bridge and also in OBS&lt;/li&gt;
&lt;li&gt;If your speakers don't have a professional microphone, you can enable the OBS noise reduction filter (&lt;a href="https://medium.com/@Wootpeanuts/removing-background-noise-with-obs-studio-17214d967fe0" rel="noopener noreferrer"&gt;see instructions here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;If you are the only speaker, running everything on your laptop (Skype + Workspaces + PowerPoint + ...) might slow down the whole thing and eventually crash your laptop (true story!) — that's why we'd recommend at least two people/laptops&lt;/li&gt;
&lt;li&gt;The NDI plugin for OBS can use the "current speaker" webcam as audio/video source, so you don't have to prepare one scene per speaker and manually switch every time a new speaker starts talking&lt;/li&gt;
&lt;li&gt;You can use a service like &lt;a href="https://restream.io" rel="noopener noreferrer"&gt;restream.io&lt;/a&gt; to stream to Twitch, Periscope, and YouTube concurrently&lt;/li&gt;
&lt;li&gt;After the first meetup we have also tested in-browser solutions such as &lt;a href="https://studio.golightstream.com" rel="noopener noreferrer"&gt;LightStream Studio&lt;/a&gt; and &lt;a href="https://streamyard.com" rel="noopener noreferrer"&gt;StreamYard&lt;/a&gt; - these tools come with an easier in-browser setup, but we realized they don't really work for us because we want to have more control over the overall scenes/audio/video setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to configure OBS 📹
&lt;/h3&gt;

&lt;p&gt;Configuring OBS might be the most challenging step if you've never used it, so I'd recommend spending some time learning it. You can check &lt;a href="https://obsproject.com/wiki/OBS-Studio-Quickstart" rel="noopener noreferrer"&gt;this official quickstart&lt;/a&gt; or &lt;a href="https://obsproject.com/forum/resources/the-most-in-depth-obs-course-ever-made.601/" rel="noopener noreferrer"&gt;this in-depth series of videos&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To speed things up even more, you may want to reuse and adapt our OBS profile and scene collection. You can find both &lt;a href="http://bit.ly/serverless-meetup-obs-setup" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The zip file contains two important artifacts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;a folder named "Serverless Meetup" — this is the OBS profile that you can import (

&lt;code&gt;Profile &amp;gt; Import&lt;/code&gt;

) and then select from the same &lt;code&gt;Profile&lt;/code&gt; menu&lt;/li&gt;
&lt;li&gt;a JSON file named "serverless-meetup-obs-preset.json" — this is the OBS scene collection that you can import (

&lt;code&gt;Scene collection &amp;gt; Import&lt;/code&gt;

) and then select from the same &lt;code&gt;Scene collection&lt;/code&gt; menu&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Final thoughts 🤔
&lt;/h3&gt;

&lt;p&gt;We've been pretty happy with this setup and we have collected a lot of positive feedback from our online attendees. We'll keep using this setup for our next serverless meetups online and maybe try to virtually meet even more often now that we have a cloud-based and easy-to-share setup.&lt;/p&gt;

&lt;p&gt;I personally hope this article will help many meetup organizers to keep their groups active and motivated during these challenging times. We don't have to sacrifice our communities only because we can't meet in person.&lt;/p&gt;

&lt;p&gt;If you're curious about details I haven't mentioned in this article or if you have other ideas to share please feel free to reach out to me &lt;a href="https://twitter.com/alex_casalboni" rel="noopener noreferrer"&gt;on Twitter&lt;/a&gt; or drop a comment below.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Translations&lt;/strong&gt;: [&lt;a href="https://dev.to/hernangarcia/como-hacer-streaming-en-vivo-en-twitch-sin-ningun-equipo-especial-3cg4"&gt;Spanish&lt;/a&gt;]&lt;/p&gt;

</description>
      <category>aws</category>
      <category>discuss</category>
      <category>productivity</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How to FaaS like a pro: 12 less common ways to invoke your serverless functions on Amazon Web Services [Part 3]</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Wed, 30 Oct 2019 20:47:46 +0000</pubDate>
      <link>https://dev.to/aws/how-to-faas-like-a-pro-12-less-common-ways-to-invoke-your-serverless-functions-on-amazon-web-services-part-3-4589</link>
      <guid>https://dev.to/aws/how-to-faas-like-a-pro-12-less-common-ways-to-invoke-your-serverless-functions-on-amazon-web-services-part-3-4589</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4p5561kmnmcfu09n41o.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4p5561kmnmcfu09n41o.jpeg" width="800" height="535"&gt;&lt;/a&gt;“I really like the peace of mind of building in the cloud” cit. myself [Photo by &lt;a href="https://unsplash.com/@kaushikpanchal?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Kaushik Panchal&lt;/a&gt; on &lt;a href="https://unsplash.com" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;This is the last part of my FaaS like Pro series, where I discuss and showcase some less common ways to invoke your serverless functions with AWS Lambda.&lt;/p&gt;

&lt;p&gt;You can find &lt;a href="https://dev.to/aws/how-to-faas-like-a-pro-12-less-common-ways-to-invoke-your-serverless-functions-on-aws-part-1-4nbb"&gt;[Part 1] here&lt;/a&gt; — covering Amazon Cognito User Pools, AWS Config, Amazon Kinesis Data Firehose, and AWS CloudFormation.&lt;/p&gt;

&lt;p&gt;And &lt;a href="https://dev.to/aws/how-to-faas-like-a-pro-12-uncommon-ways-to-invoke-your-serverless-functions-on-aws-part-2-21hp"&gt;[Part 2] here&lt;/a&gt; — covering AWS IoT Button, Amazon Lex, Amazon CloudWatch Logs, and Amazon Aurora.&lt;/p&gt;

&lt;p&gt;In the third part I will describe four more:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
AWS CodeDeploy — pre &amp;amp; post deployment hooks
&lt;/li&gt;
&lt;li&gt;
AWS CodePipeline — custom pipeline actions
&lt;/li&gt;
&lt;li&gt;
Amazon Pinpoint — custom segments &amp;amp; channels
&lt;/li&gt;
&lt;li&gt;
AWS ALB (Application Load Balancer) — HTTP target&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  9. AWS CodeDeploy (pre/post-deployment hooks)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codedeploy/" rel="noopener noreferrer"&gt;CodeDeploy&lt;/a&gt; is part of the AWS Code Suite and allows you &lt;strong&gt;automate software deployments&lt;/strong&gt; to Amazon EC2, AWS Fargate, AWS Lambda, and even on-premises environments.&lt;/p&gt;

&lt;p&gt;Not only does it enable features such as &lt;a href="https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/" rel="noopener noreferrer"&gt;safe deployments&lt;/a&gt; for serverless functions, but it also integrates with Lambda to implement custom hooks. This means you can inject custom logic at different steps of a deployment to add validation, 3rd-party integrations, integration tests, etc. Each hook runs only once per deployment and can potentially trigger a rollback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ek2oa70m3fwaci71812.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ek2oa70m3fwaci71812.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure different lifecycle event hooks, depending on the compute platform (AWS Lambda, Amazon ECS, Amazon EC2 or on-premises).&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Lambda
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;BeforeAllowTraffic&lt;/em&gt;&lt;/strong&gt;  — runs before traffic is shifted to the deployed Lambda function&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;AfterAllowTraffic&lt;/em&gt;&lt;/strong&gt;  — runs after all traffic has been shifted&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Amazon ECS &amp;amp; Amazon EC2/on-premises
&lt;/h4&gt;

&lt;p&gt;See the &lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html" rel="noopener noreferrer"&gt;full documentation here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Amazon ECS and EC2 have a more complex deployment lifecycle, while Lambda follows a simple flow: Start &amp;gt; BeforeAllowTraffic &amp;gt; AllowTraffic &amp;gt; AfterAllowTraffic &amp;gt; End. In this flow, you can inject your custom logic before traffic is shifted to the new version of your Lambda function and after all traffic is shifted.&lt;/p&gt;

&lt;p&gt;For example, we could run some integration tests in the BeforeAllowTraffic hook. And we could implement a 3rd-party integration (JIRA, Slack, email, etc.) in the AfterAllowTraffic hook.&lt;/p&gt;

&lt;p&gt;Let’s have a look at a sample implementation of a Lambda hook for CodeDeploy:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The code snippet above doesn’t do much, but it shows you the overall hook structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It receives a DeploymentId and LifecycleEventHookExecutionId that you’ll use to invoke CodeDeploy’s PutLifecycleEventHookExecutionStatus API&lt;/li&gt;
&lt;li&gt;The execution status can be either Succeeded or Failed&lt;/li&gt;
&lt;li&gt;You can easily provide an environment variable to the hook function so that it knows which function we are deploying and what its ARN is&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’d recommend defining the hook functions in the same CloudFormation (or SAM) template of the function you’re deploying. This way it’s very easy to define fine-grained permissions and environment variables.&lt;/p&gt;

&lt;p&gt;For example, let’s define an AWS SAM template with a simple Lambda function and its corresponding Lambda hook:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
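&lt;p&gt;A sketch of such a template could look like this (resource names, runtime, and deployment type are illustrative; check the AWS SAM documentation for the full DeploymentPreference syntax):&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  myFunctionToBeDeployed:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Linear10PercentEvery1Minute
        Hooks:
          PreTraffic: !Ref preTrafficHook
  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: hook.handler
      Runtime: nodejs12.x
      DeploymentPreference:
        Enabled: false
      Environment:
        Variables:
          NewVersion: !Ref myFunctionToBeDeployed.Version
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: codedeploy:PutLifecycleEventHookExecutionStatus
              Resource: !Sub arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*
            - Effect: Allow
              Action: lambda:InvokeFunction
              Resource: !Ref myFunctionToBeDeployed.Version
```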


&lt;p&gt;The template above is defining two functions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;myFunctionToBeDeployed is our target function, the one we’ll be deploying with AWS CodeDeploy&lt;/li&gt;
&lt;li&gt;preTrafficHook is our hook, invoked before traffic is shifted to myFunctionToBeDeployed during the deployment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I’ve configured two special properties on myFunctionToBeDeployed called DeploymentPreference and AutoPublishAlias. These properties allow us to specify which deployment type we want (linear, canary, etc.), which hooks will be invoked, and which alias will be used to shift traffic in a weighted fashion.&lt;/p&gt;

&lt;p&gt;A few relevant details about the pre-traffic hook definition:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I am defining an &lt;strong&gt;environment variable&lt;/strong&gt; named NewVersion which will contain the ARN of the newly deployed function, so that we can invoke it and run some tests&lt;/li&gt;
&lt;li&gt;preTrafficHook needs &lt;strong&gt;IAM permissions&lt;/strong&gt; to invoke the codedeploy:PutLifecycleEventHookExecutionStatus API and I’m providing fine-grained permissions by referencing the deployment group via ${ServerlessDeploymentApplication}&lt;/li&gt;
&lt;li&gt;since we want to run some tests on the new version of myFunctionToBeDeployed, our hook will need &lt;strong&gt;IAM permissions&lt;/strong&gt; to invoke the lambda:InvokeFunction API, and I’m providing fine-grained permissions by referencing myFunctionToBeDeployed.Version&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a real-world scenario, you may want to set up a proper timeout based on which tests you’re planning to run and how long you expect them to take.&lt;/p&gt;

&lt;p&gt;In even more complex scenarios, you may even execute an AWS Step Functions state machine that will run multiple tasks in parallel before reporting the hook execution status back to CodeDeploy.&lt;/p&gt;

&lt;p&gt;Last but not least, don’t forget that you can implement a very similar behaviour for non-serverless deployments involving Amazon ECS or EC2. In this case, you’ll have many more hooks available such as BeforeInstall, AfterInstall, ApplicationStop, DownloadBundle, ApplicationStart, ValidateService, etc. (&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html" rel="noopener noreferrer"&gt;full documentation here&lt;/a&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  10. AWS CodePipeline (custom action)
&lt;/h3&gt;

&lt;p&gt;CodePipeline is part of the AWS Code Suite and allows you to &lt;strong&gt;design and automate release pipelines&lt;/strong&gt; (CI/CD). It integrates with the other Code Suite services such as CodeCommit, CodeBuild, and CodeDeploy, as well as popular 3rd-party services such as GitHub, CloudBees, Jenkins CI, TeamCity, BlazeMeter, Ghost Inspector, StormRunner Load, Runscope, and XebiaLabs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foz326e8lz3qdbgubyg71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foz326e8lz3qdbgubyg71.png" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In situations where built-in integrations don’t suit your needs, you can let CodePipeline invoke your own Lambda functions as a pipeline stage. For example, you can use a Lambda function to verify that a website has been deployed successfully, to create and delete resources on demand at different stages of the pipeline, to back up resources before deployments, to swap CNAME values during a blue/green deployment, and so on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstup9kpkvd6lm6xmclka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstup9kpkvd6lm6xmclka.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s have a look at a sample implementation of a Lambda stage for CodePipeline:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The function will receive three main inputs in the CodePipeline.job input:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;id — the JobID required to report success or failure via API&lt;/li&gt;
&lt;li&gt;data.actionConfiguration.configuration.UserParameters — the stage dynamic configuration; you can think of this as an environment variable that depends on the pipeline stage, so you could reuse the same function for dev, test, and prod pipelines&lt;/li&gt;
&lt;li&gt;context.invokeid — the invocation ID related to this pipeline execution, useful for tracing and debugging in case of failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the simple code snippet above I am doing the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify that the given URL is valid&lt;/li&gt;
&lt;li&gt;Fetch the URL via HTTP(S)&lt;/li&gt;
&lt;li&gt;Report success via the CodePipeline putJobSuccessResult API if the HTTP status is 200&lt;/li&gt;
&lt;li&gt;Report failure via the CodePipeline putJobFailureResult API in case of errors — using different error messages and contextual information&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Of course, we could extend and improve the validation step, as well as the URL verification. Receiving a 200 status is a very minimal way to verify that our website was deployed successfully. Here we could add automated browser testing and any other custom logic.&lt;/p&gt;

&lt;p&gt;It’s also worth remembering that you can implement this logic in any programming language supported by Lambda (&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html" rel="noopener noreferrer"&gt;or not&lt;/a&gt;). Here I’ve used Node.js but the overall structure wouldn’t change much in Python, Go, C#, Ruby, Java, PHP, etc.&lt;/p&gt;

&lt;p&gt;Now, let me show you how we can integrate all of this into a CloudFormation template (using AWS SAM as usual):&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
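&lt;p&gt;A sketch of the relevant resources could look like this (most pipeline stages and the CodePipeline IAM role are omitted, and all names are illustrative):&lt;/p&gt;

```yaml
Resources:
  myPipelineFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - codepipeline:PutJobSuccessResult
                - codepipeline:PutJobFailureResult
              Resource: '*'
  myPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt myPipelineRole.Arn  # role definition omitted for brevity
      ArtifactStore:
        Type: S3
        Location: !Ref myArtifactBucket    # bucket definition omitted for brevity
      Stages:
        # source/build/deploy stages omitted for brevity
        - Name: Verify
          Actions:
            - Name: VerifyWebsite
              ActionTypeId:
                Category: Invoke
                Owner: AWS
                Provider: Lambda
                Version: '1'
              Configuration:
                FunctionName: !Ref myPipelineFunction
                UserParameters: 'https://www.example.com'
  myPipelinePermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref myPipelineFunction
      Principal: codepipeline.amazonaws.com
```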


&lt;p&gt;In the template above I’ve defined three resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS::Serverless::Function to implement our custom pipeline stage; note that it will require IAM permissions to invoke the two CodePipeline APIs&lt;/li&gt;
&lt;li&gt;An AWS::CodePipeline::Pipeline where we’d normally add all our pipeline &lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts.html" rel="noopener noreferrer"&gt;stages and actions&lt;/a&gt;; plus, I’m adding an action of type Invoke with provider Lambda that will invoke the myPipelineFunction function&lt;/li&gt;
&lt;li&gt;An AWS::Lambda::Permission that grants CodePipeline permissions to invoke the Lambda function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One more thing to note: in this template I’m not including the IAM role for CodePipeline for brevity.&lt;/p&gt;

&lt;p&gt;You can find more details and step-by-step instructions in the &lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html" rel="noopener noreferrer"&gt;official documentation here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  11. Amazon Pinpoint (custom segments &amp;amp; channels)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/pinpoint/" rel="noopener noreferrer"&gt;Amazon Pinpoint&lt;/a&gt; is a managed service that allows you to send multi-channel personalized communications to your own customers.&lt;/p&gt;

&lt;p&gt;Pinpoint natively supports many channels including email, SMS (in over 200 countries), voice (audio messages), and push notifications (Apple Push Notification service, Amazon Device Messaging, Firebase Cloud Messaging, and Baidu Cloud Push).&lt;/p&gt;

&lt;p&gt;As you’d expect, Pinpoint allows you to define &lt;strong&gt;users/endpoints&lt;/strong&gt; and &lt;strong&gt;messaging campaigns&lt;/strong&gt; to communicate with your customers.&lt;/p&gt;

&lt;p&gt;And here’s where it nicely integrates with AWS Lambda for two interesting use cases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Custom segments&lt;/strong&gt;  — it allows you to dynamically &lt;strong&gt;modify the campaign’s segment at delivery-time&lt;/strong&gt; , which means you can implement a Lambda function to filter out some of the users/endpoints to engage a more narrowly defined subset of users, or even to enrich users’ data with custom attributes (maybe coming from external systems)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom channels&lt;/strong&gt;  — it allows you to integrate unsupported channels such as instant messaging services or web notifications, so you can implement a Lambda function that will take care of the message delivery outside of Amazon Pinpoint&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s dive into both use cases!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; both use cases are still in beta and some implementation details are still subject to change&lt;/p&gt;

&lt;h4&gt;
  
  
  11.A — How to define Custom Segments
&lt;/h4&gt;

&lt;p&gt;We can connect a Lambda function to our Pinpoint Campaign and dynamically modify, reduce, or enrich our segment’s endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqct90op11f9tcwf2s5ny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqct90op11f9tcwf2s5ny.png" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Lambda function will receive a structured event:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
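&lt;p&gt;Roughly, the event has the following shape (all values here are placeholders; check the Pinpoint documentation for the exact schema):&lt;/p&gt;

```json
{
  "MessageConfiguration": {},
  "ApplicationId": "your-pinpoint-app-id",
  "CampaignId": "your-campaign-id",
  "Endpoints": {
    "endpoint-id-1": {
      "ChannelType": "APNS",
      "Address": "a-device-token",
      "EndpointStatus": "ACTIVE",
      "Attributes": { "interests": ["serverless"] },
      "User": { "UserId": "user-1" }
    },
    "endpoint-id-2": {
      "ChannelType": "GCM",
      "Address": "another-device-token",
      "EndpointStatus": "INACTIVE"
    }
  }
}
```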


&lt;p&gt;The important section of the input event is the set of Endpoints. The expected output of our function is a new set of endpoints with the same structure. This new set might contain fewer endpoints and/or new attributes. Also note that our function will receive at most 50 endpoints in a batch fashion. If your segment contains more than 50 endpoints, the function will be invoked multiple times.&lt;/p&gt;

&lt;p&gt;For example, let’s implement a custom segment that will include only the APNS channel (Apple) and generate a new custom attribute named CreditScore:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
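&lt;p&gt;A minimal Node.js sketch of such a filtering hook could look like this (the scoring function is a placeholder for a real external call):&lt;/p&gt;

```javascript
// Custom segment hook (illustrative sketch): keep only APNS endpoints
// and enrich active ones with a made-up CreditScore attribute.
function filterEndpoints(endpoints) {
  const result = {};
  for (const [id, endpoint] of Object.entries(endpoints)) {
    if (endpoint.ChannelType !== 'APNS') continue; // drop non-Apple endpoints
    if (endpoint.EndpointStatus === 'ACTIVE') {
      endpoint.Attributes = endpoint.Attributes || {};
      // custom attributes are lists of strings in Pinpoint
      endpoint.Attributes.CreditScore = [String(computeCreditScore(id))];
    }
    result[id] = endpoint;
  }
  return result;
}

// Placeholder for a call to a real external scoring system
function computeCreditScore(endpointId) {
  return 500;
}

// Pinpoint expects the (possibly reduced/enriched) endpoint map back
exports.handler = async (event) => filterEndpoints(event.Endpoints);
```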


&lt;p&gt;The code snippet above iterates over the given endpoints and dynamically modifies the set before returning it to Amazon Pinpoint for delivery.&lt;/p&gt;

&lt;p&gt;We exclude each endpoint that is not on the APNS channel (just as an example), and then generate a new CreditScore attribute only for active endpoints.&lt;/p&gt;

&lt;p&gt;Let’s now define the CloudFormation template for our Pinpoint app:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
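&lt;p&gt;A sketch of the relevant resources could look like this (the segment and the hook function are assumed to be defined elsewhere, and all names are illustrative):&lt;/p&gt;

```yaml
Resources:
  myPinpointApp:
    Type: AWS::Pinpoint::App
    Properties:
      Name: my-pinpoint-app
  mySegmentCampaign:
    Type: AWS::Pinpoint::Campaign
    Properties:
      ApplicationId: !Ref myPinpointApp
      Name: my-campaign
      SegmentId: my-segment-id  # assumed to exist already
      Schedule:
        StartTime: IMMEDIATE
      CampaignHook:
        LambdaFunctionName: !GetAtt mySegmentFunction.Arn
        Mode: FILTER
      MessageConfiguration:
        APNSMessage:
          Title: Hello
          Body: Hi from Pinpoint
```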


&lt;p&gt;The important section of the template above is the CampaignHook attribute of the AWS::Pinpoint::Campaign resource. We are providing the Lambda function name and configuring it with Mode: FILTER. As we’ll see in the next section of this article, we are going to use Mode: DELIVERY to implement custom channels.&lt;/p&gt;

&lt;p&gt;In case we had multiple campaigns that required the same custom segment, we could centralize the CampaignHook definition into an AWS::Pinpoint::ApplicationSettings resource:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
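&lt;p&gt;For example (assuming the app and the hook function are defined elsewhere in the same template):&lt;/p&gt;

```yaml
Resources:
  myAppSettings:
    Type: AWS::Pinpoint::ApplicationSettings
    Properties:
      ApplicationId: !Ref myPinpointApp
      CampaignHook:
        LambdaFunctionName: !GetAtt mySegmentFunction.Arn
        Mode: FILTER
```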


&lt;p&gt;This way, all the campaigns in our Pinpoint application will inherit the same Lambda hook.&lt;/p&gt;

&lt;p&gt;You can find the &lt;a href="https://docs.aws.amazon.com/pinpoint/latest/developerguide/segments-dynamic.html" rel="noopener noreferrer"&gt;full documentation here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  11.B — How to define Custom Channels
&lt;/h4&gt;

&lt;p&gt;We can connect a Lambda function to our Pinpoint Campaign to integrate unsupported channels. For example, Facebook Messenger or even your own website backend to show in-browser notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2adawx71ijo3map98i5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2adawx71ijo3map98i5.png" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To define a custom channel we can use the same mechanism described above for custom segments, but using Mode: DELIVERY in our CampaignHook configuration. The biggest difference is that Pinpoint won’t deliver messages itself, as our Lambda hook will take care of that.&lt;/p&gt;

&lt;p&gt;Our function will receive batches of 50 endpoints, so if your segment contains more than 50 endpoints the function will be invoked multiple times (ceil(N/50) times, to be precise).&lt;/p&gt;

&lt;p&gt;We will receive the same input event:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Our Lambda function will need to iterate through all the given Endpoints and deliver messages via API.&lt;/p&gt;

&lt;p&gt;Let’s implement the Lambda function that will deliver messages to FB Messenger, in Node.js:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The code snippet above defines a few configuration parameters that I’d recommend storing in AWS SSM Parameter Store or AWS Secrets Manager; they are hard-coded here for brevity.&lt;/p&gt;

&lt;p&gt;The Lambda handler is simply iterating over event.Endpoints and generating an async API call for each one. Then we run all the API calls in parallel and wait for their completion using await Promise.all(...).&lt;/p&gt;

&lt;p&gt;You could start from this sample implementation for FB Messenger and adapt it for your own custom channel by editing the deliver(message, user) function.&lt;/p&gt;

&lt;p&gt;Let’s now define the CloudFormation template for our Pinpoint app:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The overall structure is the same as for custom segments, with only two main differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We don’t need to define a channel&lt;/li&gt;
&lt;li&gt;We are using DELIVERY for the campaign hook mode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find the &lt;a href="https://docs.aws.amazon.com/pinpoint/latest/developerguide/channels-custom.html" rel="noopener noreferrer"&gt;full documentation here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  12. AWS ALB (Application Load Balancer)
&lt;/h3&gt;

&lt;p&gt;AWS ALB is one of the three types of load balancers supported by Elastic Load Balancing on AWS, together with Network Load Balancers and Classic Load Balancers.&lt;/p&gt;

&lt;p&gt;ALB operates at Layer 7 of the &lt;a href="https://en.wikipedia.org/wiki/OSI_model" rel="noopener noreferrer"&gt;OSI model&lt;/a&gt;, which means it has the ability to inspect packets and HTTP headers to optimize its job. It was announced in August 2016 and introduced popular features such as content-based routing and support for container-based workloads, WebSockets, and HTTP/2.&lt;/p&gt;

&lt;p&gt;Since Nov 2018, ALB supports AWS Lambda too, which means you can invoke Lambda functions to serve HTTP(S) traffic behind your load balancer.&lt;/p&gt;

&lt;p&gt;For example — thanks to the content-based routing feature — you could configure your existing application load balancer to serve all traffic under /my-new-feature with AWS Lambda, while all other paths are still served by Amazon EC2, Amazon ECS, or even on-premises servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvjsklvmoiggyssa8rzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvjsklvmoiggyssa8rzo.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this is great for implementing new features, it also opens up interesting new ways to evolve your compute architecture over time without necessarily refactoring the whole application, for example by migrating one path/domain at a time, transparently for your web or mobile clients.&lt;/p&gt;

&lt;p&gt;If you’ve already used AWS Lambda with Amazon API Gateway, AWS ALB will look quite familiar, with a few minor differences.&lt;/p&gt;

&lt;p&gt;Let’s have a look at the request/response structure:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;AWS ALB will invoke our Lambda functions synchronously and the event structure looks like the JSON object above, which includes all the request headers, its body, and some additional metadata about the request itself such as HTTP method, query string parameters, etc.&lt;/p&gt;

&lt;p&gt;ALB expects our Lambda function to return a JSON object similar to the following:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
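&lt;p&gt;For reference, a minimal sketch of the two shapes (field names follow the ALB-Lambda integration; all values here are made up):&lt;/p&gt;

```python
# Illustrative ALB -> Lambda event (all values are made up)
alb_event = {
    "requestContext": {
        "elb": {"targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-tg/abc123"}
    },
    "httpMethod": "GET",
    "path": "/my-new-feature",
    "queryStringParameters": {"name": "Alex"},
    "headers": {"host": "my-alb.example.com", "user-agent": "curl/7.64.1"},
    "body": "",
    "isBase64Encoded": False,
}

# Shape of the JSON object ALB expects back from the function
alb_response = {
    "statusCode": 200,
    "statusDescription": "200 OK",
    "isBase64Encoded": False,
    "headers": {"Content-Type": "text/plain"},
    "body": "Hello world!",
}
```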


&lt;p&gt;That’s it! As long as you apply a few minor changes to your Lambda function’s code, it’s quite straightforward to switch from Amazon API Gateway to AWS ALB. Most differences are related to the way you extract information from the input event and the way you compose the output object before it’s converted into a proper HTTP response. I’d personally recommend structuring your code by separating your business logic from the platform-specific input/output details (or the “adapter”). This way, your business logic won’t change at all and you’ll just need to adapt how its inputs and outputs are provided.&lt;/p&gt;

&lt;p&gt;For example, here’s how you could implement a simple Lambda function to work with both API Gateway and ALB:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
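&lt;p&gt;A minimal sketch of such a dual-source function (the detection relies on requestContext.elb, which only ALB events carry; helper names are illustrative):&lt;/p&gt;

```python
def greet(name):
    # Platform-agnostic business logic
    return "Hello %s!" % (name or "world")

def extract_name(event):
    # Both ALB and API Gateway deliver the query string under
    # "queryStringParameters"; guard against None/missing
    params = event.get("queryStringParameters") or {}
    return params.get("name")

def build_response(body, is_alb):
    response = {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": body,
    }
    if is_alb:
        # ALB also accepts these extra fields
        response["statusDescription"] = "200 OK"
        response["isBase64Encoded"] = False
    return response

def lambda_handler(event, context):
    # ALB events carry requestContext.elb; API Gateway events don't
    is_alb = "elb" in (event.get("requestContext") or {})
    return build_response(greet(extract_name(event)), is_alb)
```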


&lt;p&gt;Now, I wouldn’t recommend this coding exercise unless you have a real-world use case where your function needs to handle both API Gateway and ALB requests. But keep this in mind when you implement your business logic so that switching in the future won’t be such a painful refactor.&lt;/p&gt;

&lt;p&gt;For example, here’s how I would implement a simple Lambda function that returns Hello Alex! when I invoke the endpoint with a query string such as ?name=Alex and returns Hello world! if no name is provided:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
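&lt;p&gt;A minimal sketch of what that function could look like (helper names are illustrative):&lt;/p&gt;

```python
def build_response(body):
    # ALB-specific response shape; for API Gateway, only minor changes
    # would be needed here (e.g. statusDescription is not required)
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": body,
    }

def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name") or "world"
    return build_response("Hello %s!" % name)
```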


&lt;p&gt;In this case, I’d only need to apply very minor changes to build_response if I wanted to integrate the same function with API Gateway.&lt;/p&gt;

&lt;p&gt;Now, let’s have a look at how we’d build our CloudFormation template. AWS SAM does not support ALB natively yet, so we’ll need to define a few raw CloudFormation resources:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
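&lt;p&gt;An abbreviated sketch of the raw resources involved (the subnet/security-group parameters and the function itself are assumed to be defined elsewhere in the template):&lt;/p&gt;

```yaml
Resources:
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets: !Ref SubnetIds            # parameter defined elsewhere
      SecurityGroups: [!Ref AlbSecurityGroup]
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    DependsOn: AlbPermission
    Properties:
      TargetType: lambda
      Targets:
        - Id: !GetAtt HelloFunction.Arn
  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
  AlbPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt HelloFunction.Arn
      Principal: elasticloadbalancing.amazonaws.com
```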


&lt;p&gt;The Application Load Balancer definition requires a list of EC2 subnets and a VPC. This is a good time to remind you that AWS ALB is not fully serverless, as it requires some infrastructure/networking to be managed and it’s priced by the hour. Also, it’s worth noting that we need to grant ALB permissions to invoke our function with a proper AWS::Lambda::Permission resource.&lt;/p&gt;

&lt;p&gt;That said, let me share a few use cases where you may want to use AWS ALB to trigger your Lambda functions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You need a “hybrid” compute architecture including EC2, ECS, and Lambda under the same hostname — maybe to implement new features for a legacy system or to cost-optimize some infrequently used sub-systems&lt;/li&gt;
&lt;li&gt;Your APIs are under constant load and you are more comfortable with a &lt;a href="https://aws.amazon.com/elasticloadbalancing/pricing/" rel="noopener noreferrer"&gt;by-the-hour pricing&lt;/a&gt; (ALB) than a &lt;a href="https://aws.amazon.com/api-gateway/pricing/" rel="noopener noreferrer"&gt;pay-per-request model&lt;/a&gt; (API Gateway) — this might be especially true if you don’t need many of the advanced features of API Gateway such as input validation, Velocity templates, DDoS protection, canary deployments, etc.&lt;/li&gt;
&lt;li&gt;You need to implement some advanced routing logic — with ALB’s &lt;a href="https://aws.amazon.com/blogs/aws/new-advanced-request-routing-for-aws-application-load-balancers/" rel="noopener noreferrer"&gt;content-based routing rules&lt;/a&gt; you can route requests to different Lambda functions based on the request content (hostname, path, HTTP headers, HTTP method, query string, and source IP)&lt;/li&gt;
&lt;li&gt;You want to build a global multi-region and highly resilient application powered by &lt;a href="https://aws.amazon.com/global-accelerator/" rel="noopener noreferrer"&gt;AWS Global Accelerator&lt;/a&gt; — ALB can be configured as an accelerated endpoint using the AWS global network&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let me know if you can think of a different use case for ALB + Lambda.&lt;/p&gt;

&lt;p&gt;You can read more about this topic on the &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
Also, here you can find an &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:072567720793:applications~ALB-Lambda-Target-HelloWorld" rel="noopener noreferrer"&gt;ALB app&lt;/a&gt; on the Serverless Application Repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;That’s all for part 3!&lt;/p&gt;

&lt;p&gt;I sincerely hope you’ve enjoyed diving deep into &lt;a href="https://aws.amazon.com/codedeploy/" rel="noopener noreferrer"&gt;AWS CodeDeploy&lt;/a&gt;, &lt;a href="https://aws.amazon.com/codepipeline/" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt;, &lt;a href="https://aws.amazon.com/pinpoint/" rel="noopener noreferrer"&gt;Amazon Pinpoint&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/elasticloadbalancing/" rel="noopener noreferrer"&gt;AWS Application Load Balancer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now you can customize your CI/CD pipelines, implement custom segments or channels for Amazon Pinpoint, and serve HTTP traffic through AWS ALB.&lt;/p&gt;

&lt;p&gt;This is the last episode of this series and I’d recommend checking out the first two articles &lt;a href="https://dev.to/aws/how-to-faas-like-a-pro-12-less-common-ways-to-invoke-your-serverless-functions-on-aws-part-1-4nbb"&gt;here&lt;/a&gt; and &lt;a href="https://dev.to/aws/how-to-faas-like-a-pro-12-uncommon-ways-to-invoke-your-serverless-functions-on-aws-part-2-21hp#8-amazon-aurora-triggers-amp-external-data"&gt;here&lt;/a&gt; if you haven’t read them yet, where I talked about integrating Lambda with Amazon Cognito User Pools, AWS Config, Amazon Kinesis Data Firehose, AWS CloudFormation, AWS IoT Button, Amazon Lex, Amazon CloudWatch Logs, and Amazon Aurora.&lt;/p&gt;

&lt;p&gt;Thank you all for reading and sharing your feedback!&lt;br&gt;&lt;br&gt;
As usual, feel free to share and/or drop a comment below :)&lt;/p&gt;




&lt;p&gt;Originally published on &lt;a href="https://medium.com/hackernoon/how-to-faas-like-a-pro-12-uncommon-ways-to-invoke-your-serverless-functions-on-aws-part-3-2ae2b8def4d7" rel="noopener noreferrer"&gt;HackerNoon&lt;/a&gt; on Oct 30, 2019.&lt;/p&gt;




</description>
      <category>aws</category>
      <category>serverless</category>
      <category>node</category>
      <category>python</category>
    </item>
    <item>
      <title>How to FaaS like a pro: 12 less common ways to invoke your serverless functions on Amazon Web Services [Part 2]</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Mon, 01 Jul 2019 15:16:01 +0000</pubDate>
      <link>https://dev.to/aws/how-to-faas-like-a-pro-12-uncommon-ways-to-invoke-your-serverless-functions-on-aws-part-2-21hp</link>
      <guid>https://dev.to/aws/how-to-faas-like-a-pro-12-uncommon-ways-to-invoke-your-serverless-functions-on-aws-part-2-21hp</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gnqegmu8iz7gice91qh.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gnqegmu8iz7gice91qh.jpeg" width="800" height="532"&gt;&lt;/a&gt;I was told my articles contain too much code &amp;amp; I had to make it sweeter [Photo by &lt;a href="https://unsplash.com/photos/2N0enFKzDe8" rel="noopener noreferrer"&gt;Valerian KOo&lt;/a&gt; on &lt;a href="https://unsplash.com/" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;A few weeks ago I shared the first part of this series, where I analyzed in depth some less common ways to invoke AWS Lambda such as &lt;em&gt;Cognito User Pools&lt;/em&gt;, &lt;em&gt;AWS Config&lt;/em&gt;, &lt;em&gt;Amazon Kinesis Data Firehose&lt;/em&gt;, and &lt;em&gt;AWS CloudFormation&lt;/em&gt;. You can &lt;a href="https://dev.to/aws/how-to-faas-like-a-pro-12-less-common-ways-to-invoke-your-serverless-functions-on-aws-part-1-4nbb"&gt;find [Part 1] here&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;In the second part I will describe four more:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
AWS IoT Button — 1-Click handlers
&lt;/li&gt;
&lt;li&gt;
Amazon Lex — Fulfillment activities
&lt;/li&gt;
&lt;li&gt;
Amazon CloudWatch Logs — Subscriptions
&lt;/li&gt;
&lt;li&gt;
Amazon Aurora — Triggers and external data&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  5. AWS IoT Button (1-Click)
&lt;/h3&gt;

&lt;p&gt;Since early 2018, you can trigger Lambda functions from simple IoT devices, with one click. We called it &lt;a href="https://aws.amazon.com/iot-1-click/" rel="noopener noreferrer"&gt;AWS IoT 1-Click&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z9q9dvcltqjincxek3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z9q9dvcltqjincxek3c.png" width="572" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nu24sqnd69v06h12wrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nu24sqnd69v06h12wrk.png" width="312" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4jbrlay2s7xmzwkup2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4jbrlay2s7xmzwkup2q.png" width="544" height="266"&gt;&lt;/a&gt;AWS IoT Enterprise Button (&lt;a href="https://www.amazon.com/dp/B075FPHHGG" rel="noopener noreferrer"&gt;link&lt;/a&gt;), AT&amp;amp;T LTE-M Button (&lt;a href="https://marketplace.att.com/products/att-lte-m-button" rel="noopener noreferrer"&gt;link&lt;/a&gt;), and SORACOM LTE-M Button (&lt;a href="https://soracom.jp/products/gadgets/aws_button/" rel="noopener noreferrer"&gt;link&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;All you need is one of the IoT buttons above and a few lines of code in your favorite programming language to implement a Lambda Action.&lt;/p&gt;

&lt;p&gt;Of course, these devices encrypt outbound data using TLS and communicate with AWS via API to invoke your functions.&lt;/p&gt;

&lt;p&gt;Once you’ve claimed your devices on the &lt;a href="https://console.aws.amazon.com/iot1click/" rel="noopener noreferrer"&gt;AWS Console&lt;/a&gt; — or via the &lt;a href="https://docs.aws.amazon.com/iot-1-click/latest/developerguide/1click-mobile-app.html" rel="noopener noreferrer"&gt;mobile app &lt;/a&gt;— they will appear on your AWS Console and you can map their click events to a specific action.&lt;/p&gt;

&lt;p&gt;There are three possible action types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Send SMS&lt;/strong&gt;  — it lets you configure the phone number and message&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Send Email&lt;/strong&gt;  — it lets you configure the email address, subject, and body&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trigger Lambda function&lt;/strong&gt;  — it lets you select a Lambda function in any region&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, you can configure custom attributes (similar to tags) for each deployed device.&lt;/p&gt;

&lt;p&gt;Please note that SMS and emails are two very common scenarios that AWS provides as built-in options, but under the hood there will always be a Lambda Function implementing the click logic (in these two cases, using &lt;a href="https://aws.amazon.com/sns/" rel="noopener noreferrer"&gt;Amazon SNS&lt;/a&gt; for delivering the message).&lt;/p&gt;

&lt;p&gt;If you need something more sophisticated than SMS or email, you can &lt;strong&gt;implement your own logic with AWS Lambda&lt;/strong&gt;. For example, you may want to invoke a 3rd-party API, send a voice message rendered by &lt;a href="https://aws.amazon.com/polly/" rel="noopener noreferrer"&gt;Amazon Polly&lt;/a&gt;, or simply store a new item on Amazon DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9t8fug5fm80obcztqemw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9t8fug5fm80obcztqemw.png" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we start coding, let’s mention a few important details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our Lambda function will receive two types of events: buttonClicked and deviceHealthMonitor&lt;/li&gt;
&lt;li&gt;The input event always contains useful information about the device such as its ID, its custom attributes, the remaining lifetime, etc.&lt;/li&gt;
&lt;li&gt;For buttonClicked events we receive two additional pieces of information: clickType (&lt;strong&gt;&lt;em&gt;SINGLE&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;DOUBLE&lt;/em&gt;&lt;/strong&gt;, or &lt;strong&gt;&lt;em&gt;LONG&lt;/em&gt;&lt;/strong&gt;) and reportedTime (an ISO-formatted date). The idea is that we may want to implement different behaviors for single, double, and long clicks. Alternatively, we could simply ignore some click types, or even treat them as a generic click event&lt;/li&gt;
&lt;li&gt;As you can imagine, deviceHealthMonitor events are triggered when the health parameters are below a given threshold; they allow you to take appropriate actions when the device’s expected lifetime is too low&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is what the typical click event will look like:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
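&lt;p&gt;For reference, a sketch of that event shape (field names follow the IoT 1-Click documentation; the values are made up):&lt;/p&gt;

```python
# Illustrative buttonClicked event (IDs and times are made up)
click_event = {
    "deviceInfo": {
        "deviceId": "G030PM0123456789",
        "type": "button",
        "remainingLife": 99.7,
        "attributes": {
            "projectName": "my-project",
            "deviceTemplateName": "button",
        },
    },
    "deviceEvent": {
        "buttonClicked": {
            "clickType": "SINGLE",
            "reportedTime": "2019-07-01T15:16:01.001Z",
        }
    },
}
```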


&lt;p&gt;Let’s now implement a simple Lambda function that will store a new (daily) item into DynamoDB on click, and delete it on double-click.&lt;/p&gt;

&lt;p&gt;Because we may want to run the same business logic on other computing platforms — such as EC2 instances, ECS containers, or even Kubernetes — here is a &lt;strong&gt;platform-agnostic implementation in Python&lt;/strong&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;In the code snippet above, I’m defining a new ClickHandler class, which abstracts away some details for the concrete DailyClickHandler class. Its constructor will receive the buttonClicked event as input. Once we create a new DailyClickHandler object, we can invoke its run() method to perform the correct logic for single, double, or long clicks.&lt;/p&gt;

&lt;p&gt;On single click, I am creating a new DynamoDB item, using the current date as the primary key and storing the most recent reported time as well. On double click, I am deleting the same daily item.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; I am &lt;em&gt;not&lt;/em&gt; keeping track of the number of daily clicks for this simple use case, but that would be a nice improvement and a useful exercise for you — let me know if you manage to implement it and share your results!&lt;/p&gt;

&lt;p&gt;Since I’ve encapsulated the main business logic into a stand-alone class/module, my Lambda handler will be pretty minimal, just a simple adapter:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The Lambda handler above will check if the current event is a health-check or an actual click. If it’s an actual click, it will create a new DailyClickHandler object and invoke its run() method.&lt;/p&gt;
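&lt;p&gt;A minimal, self-contained sketch of the pattern described above (the table object is injected so the business logic stays platform-agnostic; on AWS it would be a boto3 DynamoDB Table created outside the handler):&lt;/p&gt;

```python
import datetime

class ClickHandler:
    # Platform-agnostic base class: dispatches a buttonClicked event
    # to one method per click type
    def __init__(self, event, table):
        self.click = event["deviceEvent"]["buttonClicked"]
        self.table = table  # injected; on AWS, a boto3 DynamoDB Table

    def run(self):
        return {
            "SINGLE": self.on_single_click,
            "DOUBLE": self.on_double_click,
            "LONG": self.on_long_click,
        }[self.click["clickType"]]()

    def on_single_click(self):
        pass

    def on_double_click(self):
        pass

    def on_long_click(self):
        pass

class DailyClickHandler(ClickHandler):
    @property
    def today(self):
        return datetime.date.today().isoformat()

    def on_single_click(self):
        # Upsert today's item, keeping the most recent reported time
        self.table.put_item(Item={
            "date": self.today,
            "reported_time": self.click["reportedTime"],
        })

    def on_double_click(self):
        # Delete today's item
        self.table.delete_item(Key={"date": self.today})

def lambda_handler(event, context, table=None):
    # Minimal adapter: ignore health checks, run the handler on clicks
    if "buttonClicked" not in event.get("deviceEvent", {}):
        return "Health check, nothing to do"
    DailyClickHandler(event, table).run()
    return "Done"
```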

&lt;p&gt;The next step before we can deploy everything is to define our CloudFormation template (&lt;a href="https://en.wikipedia.org/wiki/Infrastructure_as_code" rel="noopener noreferrer"&gt;IaC&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;We will need to define a new AWS::IoT1Click::Project resource and map its onClickCallback attribute to our Lambda function (&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iot1click-project.html" rel="noopener noreferrer"&gt;full CloudFormation reference here&lt;/a&gt;):&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
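&lt;p&gt;An abbreviated sketch of that resource (names are illustrative, and ClickFunction is assumed to be defined elsewhere in the template):&lt;/p&gt;

```yaml
Resources:
  IoTProject:
    Type: AWS::IoT1Click::Project
    Properties:
      ProjectName: my-daily-tracker
      PlacementTemplate:
        DefaultAttributes: {}
        DeviceTemplates:
          button:
            DeviceType: button
            CallbackOverrides:
              onClickCallback: !GetAtt ClickFunction.Arn
```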


&lt;p&gt;Please note that the CloudFormation template above will create a new IoT 1-Click project and its configuration, but you’ll still need to add your IoT devices to the project either manually (on the AWS Console) or via the UpdateProject API.&lt;/p&gt;

&lt;p&gt;If you want to take this sample code as a starting point for your own project, maybe you could keep track of hourly or weekly tasks (instead of daily) by &lt;em&gt;storing an hourly/weekly item on DynamoDB&lt;/em&gt;. Or you could extend the Lambda function to &lt;em&gt;start a new CodePipeline deployment&lt;/em&gt; (haven’t you always wanted a physical “&lt;strong&gt;&lt;em&gt;deploy button&lt;/em&gt;&lt;/strong&gt;” on your desk?).&lt;/p&gt;

&lt;p&gt;Let me know what you’ll build with AWS IoT 1-Click!&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Amazon Lex (fulfillment activity)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/lex/" rel="noopener noreferrer"&gt;Amazon Lex&lt;/a&gt; allows you to build chatbots and conversational interfaces, powered by the same tech as Alexa.&lt;/p&gt;

&lt;p&gt;Lex supports both voice and text I/O, and it comes with advanced natural language understanding (NLU) capabilities. These capabilities help you &lt;strong&gt;extract and store the right information from the conversation&lt;/strong&gt; so that you can focus your time on improving the interface itself rather than wasting time and energy on edge cases, input parsing, and error handling.&lt;/p&gt;

&lt;p&gt;Once Lex has collected all the information you need from the conversation, you can &lt;strong&gt;configure your bot to invoke a Lambda function to fulfil the user’s intentions&lt;/strong&gt; , which could be something like creating a hotel reservation, rescheduling an appointment, requesting assistance on a given topic, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z211smozukymcac3sd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z211smozukymcac3sd6.png" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To understand how we can integrate Lambda with Lex we need to understand a few important concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intents&lt;/strong&gt;  — the different actions/goals that your bot can perform (for example, “&lt;em&gt;Book a hotel&lt;/em&gt;”, “&lt;em&gt;Rent a car&lt;/em&gt;”, “&lt;em&gt;Reschedule an appointment&lt;/em&gt;”, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slots&lt;/strong&gt;  — the individual pieces of information/fields that your bot will collect during the conversation (for example, “&lt;em&gt;Location&lt;/em&gt;”, “&lt;em&gt;Arrival Date&lt;/em&gt;”, “&lt;em&gt;Car type&lt;/em&gt;”, etc.) — Some slots have built-in types such as cities, dates, phone numbers, sports, job roles, etc. And you can also define your own custom slot types&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sample utterances&lt;/strong&gt;  — typical sentences and hints about how a user might convey the intent, potentially by providing slot values all together (for example, “&lt;em&gt;Book a hotel room in {Location}&lt;/em&gt;” or “&lt;em&gt;Book a hotel room for {N} nights in {Location}&lt;/em&gt;”)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Channels&lt;/strong&gt;  — the messaging platforms where you can integrate Lex with just a few clicks, such as &lt;em&gt;Facebook Messenger&lt;/em&gt;, &lt;em&gt;Slack&lt;/em&gt;, &lt;em&gt;Kik&lt;/em&gt;, and &lt;em&gt;Twilio SMS&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are two main ways to integrate Lambda with Lex:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input initialization &amp;amp; validation&lt;/strong&gt;  — it allows you to validate each slot value as soon as it is collected by Lex and, if needed, prompt an “&lt;em&gt;invalid value message&lt;/em&gt;” to request a different value&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fulfillment activity&lt;/strong&gt;  — it lets you process the collected values and proceed with the actual business logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since you often want to re-validate your inputs before proceeding with the fulfillment, many developers like to implement a single Lambda function to take care of both validation and fulfillment. In some specific scenarios — for example if you have optional slots or very heavy validation logic — you may want to implement two independent Lambda functions.&lt;/p&gt;

&lt;p&gt;Let’s now assume that we are implementing a &lt;strong&gt;&lt;em&gt;BookHotel&lt;/em&gt;&lt;/strong&gt; intent and we want to implement two independent Lambda functions for data validation and fulfillment. Here are the slots we’ll be collecting during the conversation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Location — the city where we need a hotel&lt;/li&gt;
&lt;li&gt;CheckInDate — the date when we’ll check-in at the hotel&lt;/li&gt;
&lt;li&gt;Nights — the number of nights we’ll stay at the hotel&lt;/li&gt;
&lt;li&gt;RoomType — a custom slot with values such as &lt;em&gt;queen&lt;/em&gt;, &lt;em&gt;king&lt;/em&gt;, &lt;em&gt;deluxe&lt;/em&gt;, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a real-world use case, we’ll need to make sure that the four collected slots are semantically valid. For example, the Location needs to be a city supported by our booking system; the CheckInDate must be in the future; the number of Nights must be greater than zero (and maybe lower than a maximum allowed number?); RoomType needs to be a valid type supported by our booking system; and so on.&lt;/p&gt;

&lt;p&gt;In the code snippet below I am implementing the BookHotel intent in Python. Because I’d like you to focus on the core business logic, I’ve moved most of the “boring” validation logic and utilities to reusable external modules (&lt;a href="https://gist.github.com/alexcasalboni/3ea2d8dda11c6b73bbf98adf2dd6a214" rel="noopener noreferrer"&gt;you can find the three files here&lt;/a&gt;).&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;As you can see in the snippet above, the Lambda handler itself is only a simple wrapper/adapter for the book_hotel business logic. In this case, we are handling both single-slot validation and final fulfillment with one function.&lt;/p&gt;

&lt;p&gt;The main logic looks like this: load session data (given in the input event), validate individual slots, elicit slots when data is missing or invalid, and delegate the next step to Lex until we reach the final fulfillment. Then, we can finally book the hotel through our backend or 3rd-party API.&lt;/p&gt;
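&lt;p&gt;A simplified, self-contained sketch of that flow (Lex V1 event/response shapes; the supported cities and room types are made up):&lt;/p&gt;

```python
import datetime

SUPPORTED_CITIES = {"rome", "milan", "seattle"}  # illustrative
ROOM_TYPES = {"queen", "king", "deluxe"}         # illustrative

def elicit_slot(slots, slot_name, message):
    # Lex (V1) dialog action asking the user for a different value
    return {"dialogAction": {
        "type": "ElicitSlot", "intentName": "BookHotel", "slots": slots,
        "slotToElicit": slot_name,
        "message": {"contentType": "PlainText", "content": message}}}

def validate(slots):
    # Return (invalid_slot_name, message), or (None, None) if all valid
    if slots.get("Location") and slots["Location"].lower() not in SUPPORTED_CITIES:
        return "Location", "We don't support that city yet."
    if slots.get("CheckInDate") and slots["CheckInDate"] <= datetime.date.today().isoformat():
        return "CheckInDate", "The check-in date must be in the future."
    if slots.get("Nights") and int(slots["Nights"]) <= 0:
        return "Nights", "Please stay at least one night :)"
    if slots.get("RoomType") and slots["RoomType"].lower() not in ROOM_TYPES:
        return "RoomType", "Please choose queen, king, or deluxe."
    return None, None

def lambda_handler(event, context):
    slots = event["currentIntent"]["slots"]
    if event["invocationSource"] == "DialogCodeHook":
        bad_slot, message = validate(slots)
        if bad_slot:
            slots[bad_slot] = None
            return elicit_slot(slots, bad_slot, message)
        # All collected slots are valid so far: let Lex pick the next step
        return {"dialogAction": {"type": "Delegate", "slots": slots}}
    # FulfillmentCodeHook: book the hotel via your backend/3rd-party API here
    return {"dialogAction": {"type": "Close", "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": "Your reservation is confirmed!"}}}
```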

&lt;p&gt;The full code snippet is available &lt;a href="https://gist.github.com/alexcasalboni/3ea2d8dda11c6b73bbf98adf2dd6a214" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and it is actually a refactor of the lex-book-trip-python Lambda blueprint that you can find in the AWS Console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7bs120r1ofh9v4yb1x6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7bs120r1ofh9v4yb1x6.png" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you’ve deployed this Lambda function, you can use it as your bot’s validation code hook and fulfillment directly in the Lex console, as shown in the next screenshots:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4hjzdr8hr1y7v3q9pbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4hjzdr8hr1y7v3q9pbc.png" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faul5rhzx95hlc41bweg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faul5rhzx95hlc41bweg1.png" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, Amazon Lex is not supported by CloudFormation yet, but &lt;a href="https://github.com/aws-samples/aws-lex-web-ui/tree/master/templates" rel="noopener noreferrer"&gt;here you can find a set of CloudFormation templates that will deploy a Lex bot using custom resources&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Amazon CloudWatch Logs (subscriptions)
&lt;/h3&gt;

&lt;p&gt;Whenever your functions print or console.log something, you will find the corresponding logs on CloudWatch Logs. And the same happens for over 30 services that can natively publish logs into CloudWatch, including &lt;em&gt;Amazon Route 53&lt;/em&gt;, &lt;em&gt;Amazon VPC&lt;/em&gt;, &lt;em&gt;Amazon API Gateway&lt;/em&gt;, &lt;em&gt;AWS CloudTrail&lt;/em&gt;, etc.&lt;br&gt;&lt;br&gt;
Not to mention all those on-premises servers that publish logs into CloudWatch using the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html" rel="noopener noreferrer"&gt;CloudWatch Agent&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But that’s *NOT* the reason why CloudWatch is on this list.&lt;/p&gt;

&lt;p&gt;You can also use CloudWatch Logs as an event source for Lambda. In fact, CloudWatch allows you to define filtered subscriptions on log groups and implement your own Lambda function to process those logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqfcy6eb7azyo6oeiv57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqfcy6eb7azyo6oeiv57.png" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, you may want to process all the (filtered) logs generated by an Amazon EC2 instance and correlate those logs with the corresponding trace from AWS X-Ray. Finally, you could store the processed information on Amazon S3, maybe send an email report, or even open a new issue on GitHub with all the information required for debugging the problem.&lt;/p&gt;

&lt;p&gt;Let’s look at the structure of CloudWatch Logs events:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Well, we can’t see much until we base64-decode and unzip the incoming data. The good news is that you can achieve that with built-in libraries for most runtimes, including Node.js and Python.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
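&lt;p&gt;A minimal sketch of such a decoding helper in Python, using only built-in libraries:&lt;/p&gt;

```python
import base64
import gzip
import json

def decode(cw_data):
    # CloudWatch Logs delivers {"awslogs": {"data": "<base64 gzip json>"}};
    # reverse the base64 encoding, then the gzip compression
    compressed = base64.b64decode(cw_data["awslogs"]["data"])
    return json.loads(gzip.decompress(compressed))
```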


&lt;p&gt;Once decoded, the CloudWatch Logs payload will look like the following object:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
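&lt;p&gt;For reference, a sketch of the decoded shape (all values here are made up):&lt;/p&gt;

```python
# Illustrative decoded payload (all values are made up)
decoded_payload = {
    "messageType": "DATA_MESSAGE",
    "owner": "123456789012",
    "logGroup": "/aws/lambda/my-function",
    "logStream": "2019/07/01/[$LATEST]abcdef0123456789",
    "subscriptionFilters": ["MyFilterName"],
    "logEvents": [
        {
            "id": "34950205585716099021",
            "timestamp": 1561993000000,
            "message": "[ERROR] something went wrong",
        }
    ],
}
```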


&lt;p&gt;The payload contains some meta-data about the event itself, such as the name of the corresponding logStream, logGroup, and subscriptionFilters.&lt;/p&gt;

&lt;p&gt;The actual payload you want to process is the list of logEvents, each one with its id, timestamp, and message. Please note that, depending on the subscription filter you define, you will likely receive only a subset of the logs corresponding to a given process/task/function execution. That’s why you may want to fetch additional information from the same log stream, especially if you are filtering errors or exceptions in order to debug them later.&lt;/p&gt;

&lt;p&gt;The following code snippet is a sample implementation in Python:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;A few notes on the code snippet above:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It’s assuming that you’ve created a decode.py file with the decode function we’ve seen earlier in this article&lt;/li&gt;
&lt;li&gt;The code is sleeping for 5 seconds, waiting for all the logs to be collected in the corresponding stream; this way, we can collect a few more lines of logs before and after this match&lt;/li&gt;
&lt;li&gt;We could implement a fetch_traces function to fetch X-Ray traces based on some sort of Request Id (which is automatically added for Lambda function execution logs, but you may have a different format in your own custom logs)&lt;/li&gt;
&lt;li&gt;Ideally, we would like to avoid time.sleep altogether and instead define a proper state machine with &lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions&lt;/a&gt;; this way, we wouldn’t pay for the 5-10 seconds of idle execution because Step Functions allows us to define Wait states (up to a whole year) while charging only for state transitions&lt;/li&gt;
&lt;/ol&gt;
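&lt;p&gt;As an illustration of point 2, here is a hypothetical helper that computes a padded time window around the matched events; the handler would then query the same log stream with this window (the function name and 5-second padding are illustrative):&lt;/p&gt;

```python
def context_window(log_events, margin_ms=5000):
    """Compute a padded [start, end] window (in epoch milliseconds)
    around the matched logEvents; the handler can then fetch the
    surrounding log lines from the same stream, e.g. with
    logs.get_log_events(..., startTime=start, endTime=end)."""
    timestamps = [event['timestamp'] for event in log_events]
    return min(timestamps) - margin_ms, max(timestamps) + margin_ms
```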

&lt;p&gt;You can also find a &lt;a href="https://github.com/awsdocs/aws-lambda-developer-guide/blob/master/sample-apps/error-processor/processor/index.js" rel="noopener noreferrer"&gt;similar implementation in Node.js here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ok, now that we have a better understanding of the moving parts and a sample implementation, it’s time to define a CloudFormation template for our logs processing application.&lt;/p&gt;

&lt;p&gt;The best part is that we don’t have to define any special CloudFormation resource because &lt;a href="https://aws.amazon.com/serverless/sam/" rel="noopener noreferrer"&gt;AWS SAM&lt;/a&gt; will do most of the work for us. All we need to do is define a CloudWatchLogs event for our processing function.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
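&lt;p&gt;The gist isn’t shown in this feed; a minimal SAM sketch of such a CloudWatchLogs event might look like this (resource names, log group, and filter pattern are illustrative):&lt;/p&gt;

```yaml
Resources:
  LogsProcessor:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.lambda_handler
      Runtime: python3.7
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: logs:GetLogEvents
              Resource: !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/monitored-function:*
      Events:
        ErrorsSubscription:
          Type: CloudWatchLogs
          Properties:
            LogGroupName: /aws/lambda/monitored-function
            FilterPattern: ERROR
```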


&lt;p&gt;Without AWS SAM, we’d need to manually create an AWS::Logs::SubscriptionFilter resource, as well as an additional AWS::Lambda::Permission resource to grant CloudWatch permissions to invoke our function. AWS SAM will transform our CloudWatchLogs event into those resources, allowing us to use a much simpler syntax.&lt;/p&gt;

&lt;p&gt;You can learn more about the &lt;a href="https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#event-source-types" rel="noopener noreferrer"&gt;built-in event sources supported by AWS SAM on GitHub&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Don’t forget to grant your processing function the correct permissions as well. In the YAML template above, I’m granting fine-grained permissions to invoke only logs:GetLogEvents on one log group. Alternatively, I could have used a managed IAM policy such as CloudWatchLogsReadOnlyAccess.&lt;/p&gt;

&lt;p&gt;You can find a &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/sample-errorprocessor.html" rel="noopener noreferrer"&gt;full reference architecture for error processing here&lt;/a&gt;, which also includes AWS X-Ray traces.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Amazon Aurora (triggers &amp;amp; external data)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/rds/aurora/" rel="noopener noreferrer"&gt;Aurora&lt;/a&gt; is a cloud-native relational database engineered from the ground up, with a MySQL and PostgreSQL-compatible interface. It comes with up to 15 read-replicas and different flavours based on your application needs, such as &lt;a href="https://aws.amazon.com/rds/aurora/global-database/" rel="noopener noreferrer"&gt;Aurora Global Database&lt;/a&gt;for multi-region apps requiring high resiliency and data replication, or &lt;a href="https://aws.amazon.com/rds/aurora/serverless/" rel="noopener noreferrer"&gt;Aurora Serverless&lt;/a&gt; for infrequent, intermittent, or unpredictable workloads.&lt;/p&gt;

&lt;p&gt;We can integrate Aurora MySQL with Lambda in two different ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;synchronously&lt;/strong&gt;  — useful to fetch data from other AWS services in our MySQL queries;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;asynchronously&lt;/strong&gt;  — useful to perform tasks when something happens, for example via triggers&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Synchronous invocation — Example: external data or API
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5kwapv6g2sky863krnw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5kwapv6g2sky863krnw.png" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By invoking Lambda functions synchronously you can retrieve data stored in other services such as S3, Elasticsearch, Redshift, Athena, or even third-party APIs.&lt;/p&gt;

&lt;p&gt;For example, we could fetch today’s weather to make some of our queries dynamic.&lt;/p&gt;

&lt;p&gt;First of all, we’ll need to &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html#AuroraMySQL.Integrating.LambdaAccess" rel="noopener noreferrer"&gt;give the Aurora cluster access to Lambda&lt;/a&gt; by setting the aws_default_lambda_role cluster parameter to a proper IAM role. In case your cluster isn’t publicly accessible, &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.Network.html" rel="noopener noreferrer"&gt;you’ll also need to enable network communication&lt;/a&gt;. Then we can grant invoke permissions to the database user:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
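&lt;p&gt;The grant itself (not shown in this feed) is a one-liner in Aurora MySQL; the user name is illustrative:&lt;/p&gt;

```sql
GRANT INVOKE LAMBDA ON *.* TO 'app_user'@'%';
```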


&lt;p&gt;Now we can finally invoke our Lambda functions using lambda_sync:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
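&lt;p&gt;The gist isn’t embedded here; a sketch of such a query might look like the following (the ARN, payload, and column names are illustrative):&lt;/p&gt;

```sql
SELECT *
FROM weather_mapping
WHERE weather = lambda_sync(
    'arn:aws:lambda:eu-west-1:123456789012:function:FetchWeather',
    '{"location": "Rome"}'
);
```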


&lt;p&gt;The code snippet above selects all fields from a weather_mapping table, assuming that we are storing the mapping between some parameter of our system and the current weather in a given location (which could be parametrized). For example, our application could use different images, welcome messages, or even prices based on the current weather.&lt;/p&gt;

&lt;p&gt;Please also note that the Lambda function FetchWeather should return an atomic value — in this case a string — since Aurora MySQL doesn’t support JSON parsing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Disclaimer 1&lt;/em&gt;&lt;/strong&gt;: lambda_sync and lambda_async are available in Aurora MySQL version 1.6 and above. For older versions, you can use the stored procedure mysql.lambda_async.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Disclaimer 2&lt;/em&gt;&lt;/strong&gt; : the functionality above could be implemented at the application layer as well, and I’m pretty sure you will come up with more creative use cases for synchronous invocations :)&lt;/p&gt;

&lt;h4&gt;
  
  
  Asynchronous invocation — Example: triggers
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cawemk7pad0ieqlfa6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cawemk7pad0ieqlfa6x.png" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By invoking Lambda functions asynchronously you can implement something very similar to &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html" rel="noopener noreferrer"&gt;Amazon DynamoDB Streams&lt;/a&gt;, for those scenarios where you need to react to specific queries or events happening in the database.&lt;/p&gt;

&lt;p&gt;For example, you may want to send an email every time a new record is inserted into a DemoContacts table. In this case you could achieve the same by sending the email from your application code. But in some other scenarios you might need to add too much logic to your application code (or even just modify too many files/modules), so it would be simpler to use a database trigger to extend the behaviour of the application.&lt;/p&gt;

&lt;p&gt;In practice, you can define a MySQL trigger that will invoke your Lambda function asynchronously.&lt;/p&gt;

&lt;p&gt;As in the previous example, we’ll need to &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html#AuroraMySQL.Integrating.LambdaAccess" rel="noopener noreferrer"&gt;give the Aurora cluster access to Lambda&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.Network.html" rel="noopener noreferrer"&gt;enable network communication&lt;/a&gt; if the cluster isn’t publicly accessible, and grant invoke permissions to the database user:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Now we can define a MySQL trigger:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
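&lt;p&gt;The gist isn’t embedded here; a sketch of such a trigger might look like this (the function ARN is illustrative):&lt;/p&gt;

```sql
DELIMITER ;;
CREATE TRIGGER TR_contacts_on_insert
  AFTER INSERT ON DemoContacts
  FOR EACH ROW
BEGIN
  -- invoke the function asynchronously, building the JSON payload
  -- by concatenating the inserted values
  SELECT lambda_async(
    'arn:aws:lambda:eu-west-1:123456789012:function:SendWelcomeEmail',
    CONCAT('{"email": "', NEW.email, '", "fullname": "', NEW.fullname, '"}')
  ) INTO @ignored;
END
;;
DELIMITER ;
```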


&lt;p&gt;The code snippet above defines a trigger named TR_contacts_on_insert that collects the inserted values of email and fullname, and then invokes a Lambda function asynchronously. The built-in function lambda_async requires a function ARN and a JSON payload, here built by concatenating strings.&lt;/p&gt;

&lt;p&gt;In case you want to reuse the invoke logic above for other similar triggers, you may want to create a reusable stored procedure as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;That’s all for part 2!&lt;/p&gt;

&lt;p&gt;I hope you’ve been inspired to build something new with &lt;a href="https://aws.amazon.com/iot-1-click/" rel="noopener noreferrer"&gt;AWS IoT 1-Click&lt;/a&gt;, &lt;a href="https://aws.amazon.com/lex/" rel="noopener noreferrer"&gt;Amazon Lex&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudwatch/" rel="noopener noreferrer"&gt;Amazon CloudWatch Logs&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/rds/aurora/" rel="noopener noreferrer"&gt;Amazon Aurora&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now you can serverlessly handle IoT button clicks, implement the fulfillment logic of your chatbots, process logs in real time, and implement MySQL triggers or fetch data from external services/databases into Aurora.&lt;/p&gt;

&lt;p&gt;In the 3rd (and last) part of this series I will discuss the last four less common ways to invoke your Lambda functions, including AWS CodeDeploy, AWS CodePipeline, Amazon Pinpoint, and more! Stay tuned and let me know if you’d like to read about other Lambda integrations.&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read such a long article.&lt;br&gt;&lt;br&gt;
Feel free to share and/or drop a comment below :)&lt;/p&gt;




&lt;p&gt;Originally published on &lt;a href="https://medium.com/hackernoon/how-to-faas-like-a-pro-12-uncommon-ways-to-invoke-your-serverless-functions-on-aws-part-2-78a5f09a773d" rel="noopener noreferrer"&gt;HackerNoon&lt;/a&gt; on Jul 1, 2019.&lt;/p&gt;




</description>
      <category>aws</category>
      <category>serverless</category>
      <category>node</category>
      <category>python</category>
    </item>
    <item>
      <title>How to FaaS like a pro: 12 less common ways to invoke your serverless functions on Amazon Web Services [Part 1]</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Tue, 02 Apr 2019 16:15:20 +0000</pubDate>
      <link>https://dev.to/aws/how-to-faas-like-a-pro-12-less-common-ways-to-invoke-your-serverless-functions-on-aws-part-1-4nbb</link>
      <guid>https://dev.to/aws/how-to-faas-like-a-pro-12-less-common-ways-to-invoke-your-serverless-functions-on-aws-part-1-4nbb</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk10k4rfyr5l4ljjvz01.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk10k4rfyr5l4ljjvz01.jpeg" width="800" height="533"&gt;&lt;/a&gt;Yes, this is you at the end of this article, contemplating new possibilities!&lt;br&gt;
[Photo by &lt;a href="https://unsplash.com/photos/etsVKbvxhCc" rel="noopener noreferrer"&gt;Joshua Earle&lt;/a&gt; on &lt;a href="https://unsplash.com" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;If you feel like skipping the brief introduction below, you can jump straight to the first four triggers with these shortlinks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
Amazon Cognito User Pools — Users management &amp;amp; custom workflows&lt;/li&gt;
&lt;li&gt;
AWS Config — Event-driven configuration checks&lt;/li&gt;
&lt;li&gt;
Amazon Kinesis Data Firehose — Data ingestion &amp;amp; validation&lt;/li&gt;
&lt;li&gt;
AWS CloudFormation — IaC, Macros &amp;amp; custom transforms&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  A bit of history first
&lt;/h4&gt;

&lt;p&gt;When AWS Lambda &lt;a href="https://aws.amazon.com/it/blogs/compute/aws-lambda-is-generally-available/" rel="noopener noreferrer"&gt;became generally available on April 9th, 2015&lt;/a&gt;, it was the first Function-as-a-Service out there, and there were only a few ways to trigger your functions besides direct invocation: Amazon S3, Amazon Kinesis, and Amazon SNS. Three months later we got Amazon API Gateway support, which opened a whole new world of possibilities for web and REST-compatible clients.&lt;/p&gt;

&lt;p&gt;By the end of 2015, you could already trigger functions via Amazon DynamoDB Streams, Kinesis Streams, S3 objects, SNS topics, and CloudWatch Events (scheduled invocations).&lt;/p&gt;

&lt;p&gt;Personally, I started experimenting with AWS Lambda &lt;a href="https://aws.amazon.com/it/blogs/aws/machine-learning-recommendation-systems-and-data-analysis-at-cloud-academy/" rel="noopener noreferrer"&gt;around early 2016 for a simple machine learning use case&lt;/a&gt;. A few months later I published the very first video about my experience with Lambda, which covered all the triggers and configurations available at the time; well, the video is still available &lt;a href="https://www.youtube.com/watch?v=NhGEik26324" rel="noopener noreferrer"&gt;here&lt;/a&gt;, but the AWS Console is pretty different now so I’d recommend you watch it only if you are feeling nostalgic =)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Back to history…&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the following months, AWS Lambda became very popular and many other AWS services started integrating it and allowing you to trigger functions in many new ways. These integrations are fantastic for processing/validating data, as well as for customizing and extending the behavior of these services.&lt;/p&gt;

&lt;p&gt;You may be already aware of (or intuitively guess) how AWS Lambda integrates with services such as &lt;a href="https://docs.aws.amazon.com/en_us/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-s3" rel="noopener noreferrer"&gt;S3&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/en_us/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-dynamo-db" rel="noopener noreferrer"&gt;DynamoDB&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/en_us/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-kinesis-streams" rel="noopener noreferrer"&gt;Kinesis Data Streams&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/en_us/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-ses" rel="noopener noreferrer"&gt;SES&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/en_us/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-sqs" rel="noopener noreferrer"&gt;SQS&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/en_us/iot/latest/developerguide/iot-lambda-rule.html" rel="noopener noreferrer"&gt;IoT Core&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/en_us/step-functions/latest/dg/connectors-lambda.html" rel="noopener noreferrer"&gt;Step Functions&lt;/a&gt;, and &lt;a href="https://docs.aws.amazon.com/en_us/lambda/latest/dg/services-alb.html" rel="noopener noreferrer"&gt;ALB&lt;/a&gt;. And there are plenty of articles and getting-started guides out there using these integrations as a good starting point for your serverless journey.&lt;/p&gt;

&lt;p&gt;In this article, I’d like to share with you some of the many other less common, less well-known, or even just newer ways to invoke your Lambda functions on AWS. Some of these integrations do not even appear on the official &lt;a href="https://docs.aws.amazon.com/en_us/lambda/latest/dg/invoking-lambda-function.html" rel="noopener noreferrer"&gt;Supported Event Sources&lt;/a&gt; documentation page yet and I believe they are worth mentioning and experimenting with.&lt;/p&gt;

&lt;p&gt;For each service/integration, I will share useful links, code snippets, and CloudFormation templates &amp;amp; references. Please feel free to add a comment below if you think something’s missing or if you need more resources/details. Even if you don’t know Python or JavaScript, the code will be pretty self-explanatory and with useful comments. Please drop a comment on Gist or at the bottom of this article if you have questions or doubts.&lt;/p&gt;

&lt;p&gt;Let’s get started with the first 4 triggers for AWS Lambda.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Amazon Cognito User Pools (custom workflows)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cognito/" rel="noopener noreferrer"&gt;Cognito User Pools&lt;/a&gt; allow you to add &lt;strong&gt;authentication and user management&lt;/strong&gt; to your applications. With AWS Lambda, you can &lt;a href="https://docs.aws.amazon.com/en_us/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html" rel="noopener noreferrer"&gt;customize your User Pool Workflows&lt;/a&gt; and trigger your functions during Cognito’s operations in order to customize your User Pool behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F602ut8wjx2hbx8no1vzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F602ut8wjx2hbx8no1vzo.png" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the list of available triggers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pre Sign-up&lt;/strong&gt;  — triggered just before Cognito signs up a new user (or admin) and allows you to perform custom validation to accept/deny it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post Confirmation&lt;/strong&gt;  — triggered after a new user (or admin) signs up and allows you to send custom messages or to add custom logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre Authentication&lt;/strong&gt;  — triggered when a user attempts to sign in and allows custom validation to accept/deny it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post Authentication&lt;/strong&gt;  — triggered after signing in a user and allows you to add custom logic after authentication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Authentication&lt;/strong&gt;  — triggered to define, create, and verify custom challenges when you use the &lt;a href="https://docs.aws.amazon.com/en_us/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html#amazon-cognito-user-pools-custom-authentication-flow" rel="noopener noreferrer"&gt;custom authentication flow&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre Token Generation&lt;/strong&gt;  — triggered before every token generation and allows you to customize identity token claims (for example, new passwords and refresh tokens)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Migrate User&lt;/strong&gt;  — triggered when a user does not exist in the user pool at the time of sign-in with a password or in the forgot-password flow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Message&lt;/strong&gt;  — triggered before sending an email, phone verification message, or a MFA code and allows you to customize the message&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All these triggers allow you to implement stateless logic and personalize how Cognito User Pools work using your favorite programming language. Keep in mind that your functions are invoked synchronously and will need to complete within 5 seconds, responding simply by returning the incoming &lt;em&gt;event&lt;/em&gt; object with an additional &lt;em&gt;response&lt;/em&gt; attribute.&lt;/p&gt;

&lt;p&gt;It might be convenient to handle multiple events with the same Lambda function, as Cognito will always provide an attribute named &lt;em&gt;event.triggerSource&lt;/em&gt; to help you implement the right logic for each event.&lt;/p&gt;

&lt;p&gt;For example, here’s how you’d implement the Lambda function code for a &lt;em&gt;Custom Message&lt;/em&gt; in Node.js:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
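&lt;p&gt;The gist above is in Node.js and isn’t embedded in this feed; the same triggerSource-dispatching pattern can be sketched in Python (the subject and message are illustrative):&lt;/p&gt;

```python
def lambda_handler(event, context):
    # Cognito invokes this function synchronously and expects the same
    # event back, with a populated 'response' attribute
    if event['triggerSource'] == 'CustomMessage_SignUp':
        code = event['request']['codeParameter']
        event['response']['emailSubject'] = 'Welcome to the demo app'
        event['response']['emailMessage'] = (
            'Thanks for signing up! Your verification code is ' + code)
    # unhandled triggerSource values fall through untouched; you may
    # prefer to log a warning here instead
    return event
```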


&lt;p&gt;As you can see, the logic is completely stateless, and it’s considered best practice to always check the &lt;em&gt;triggerSource&lt;/em&gt; value to make sure you are processing the correct event — and to raise an error/warning in case of unhandled sources.&lt;/p&gt;

&lt;p&gt;The following code snippet shows how you can define the Lambda function and Cognito User Pool in a CloudFormation template (here I’m using &lt;a href="https://github.com/awslabs/serverless-application-model" rel="noopener noreferrer"&gt;AWS SAM&lt;/a&gt; syntax, but you could also use plain CloudFormation):&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
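&lt;p&gt;As a minimal sketch (resource names and runtime are illustrative), the relevant part of such a template looks like:&lt;/p&gt;

```yaml
Resources:
  CustomMessageFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs10.x
  MyUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UserPoolName: my-user-pool
      LambdaConfig:
        CustomMessage: !GetAtt CustomMessageFunction.Arn
```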


&lt;p&gt;All you need to do is add a &lt;em&gt;LambdaConfig&lt;/em&gt; property to your User Pool definition and reference a Lambda function.&lt;/p&gt;

&lt;p&gt;You can find all the attributes of &lt;em&gt;LambdaConfig&lt;/em&gt; on the &lt;a href="https://docs.aws.amazon.com/en_us/AWSCloudFormation/latest/UserGuide/aws-properties-cognito-userpool-lambdaconfig.html" rel="noopener noreferrer"&gt;documentation page&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. AWS Config (event-driven configuration checks)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/config/" rel="noopener noreferrer"&gt;AWS Config&lt;/a&gt; allows you to keep track of how the configurations of your AWS resources change over time. It’s particularly useful for recording historical values and it also allows you to compare historical configurations with desired configurations. For example, you could use AWS Config to make sure all the EC2 instances launched in your account are &lt;em&gt;t2.micro&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;As a developer, the interesting part is that you can implement this kind of compliance checks with AWS Lambda. In other words, you can define a custom rule and associate it with &lt;strong&gt;a Lambda function that will be invoked in response to each and every configuration change&lt;/strong&gt; (or periodically).&lt;/p&gt;

&lt;p&gt;Also, your code can decide whether the new configuration is valid or not :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd91jfo50yaykdf6e5arn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd91jfo50yaykdf6e5arn.png" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, you don’t have to listen to every possible configuration change of all your resources. Indeed, &lt;strong&gt;you can listen to specific resources&lt;/strong&gt; based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tags&lt;/strong&gt; (for example, resources with an environment or project-specific tag)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Type&lt;/strong&gt; (for example, only &lt;em&gt;AWS::EC2::Instance&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Type + Identifier&lt;/strong&gt; (for example, a specific EC2 Instance ARN)&lt;/li&gt;
&lt;li&gt;All &lt;strong&gt;changes&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many AWS Lambda blueprints that allow you to get started quickly without coding everything yourself (for example, &lt;em&gt;config-rule-change-triggered&lt;/em&gt;). But I think it’s important to understand the overall logic and moving parts, so in the next few paragraphs we will dive deep and learn how to write a new Lambda function from scratch.&lt;/p&gt;

&lt;p&gt;Practically speaking, your function will receive four very important pieces of information as part of the input &lt;em&gt;event&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;invokingEvent&lt;/em&gt; represents the configuration change that triggered this Lambda invocation; it contains a field named &lt;em&gt;messageType&lt;/em&gt; which tells you if the current payload is related to a periodic scheduled invocation (&lt;em&gt;ScheduledNotification&lt;/em&gt;), if it’s a regular configuration change (&lt;em&gt;ConfigurationItemChangeNotification&lt;/em&gt;) or if the change content was too large to be included in the Lambda event payload (&lt;em&gt;OversizedConfigurationItemChangeNotification&lt;/em&gt;); in the first case, &lt;em&gt;invokingEvent&lt;/em&gt; will also contain a field named &lt;em&gt;configurationItem&lt;/em&gt; with the current configuration, while in the other cases we will need to fetch the current configuration via the AWS Config History API&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;ruleParameters&lt;/em&gt; is the set of key/value pairs that you optionally define when you create a custom rule; they represent the (un)desired status of your configurations (for example, &lt;em&gt;desiredInstanceType=t2.small&lt;/em&gt;) and you can use its values however you want; let’s say this is a smart way to parametrize your Lambda function code and reuse it with multiple rules&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;resultToken&lt;/em&gt; is the token we will use to notify AWS Config about the evaluation results (see the three possible outcomes below)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;eventLeftScope&lt;/em&gt; tells you whether the AWS resource to be evaluated has been removed from the rule’s scope, in which case we will just skip the evaluation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on the inputs above, our Lambda function will evaluate the configuration compliance and invoke the &lt;em&gt;PutEvaluations&lt;/em&gt; API with one of three possible results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;COMPLIANT&lt;/em&gt; if the current configuration is &lt;em&gt;OK&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;NON_COMPLIANT&lt;/em&gt; if the current configuration is &lt;em&gt;NOT OK&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;NOT_APPLICABLE&lt;/em&gt; if this configuration change can be ignored&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ok, enough theory :)&lt;/p&gt;

&lt;p&gt;Let’s write some code and see AWS Config in action.&lt;/p&gt;

&lt;p&gt;For example, let’s implement a custom rule to check that all EC2 instances launched in our account are &lt;em&gt;t2.small&lt;/em&gt; using Node.js:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;In the code snippet above, I am importing a simple utility module (&lt;a href="https://gist.github.com/alexcasalboni/60a3b45017ad3d44f052c2dd3c1661e4" rel="noopener noreferrer"&gt;that you can find here&lt;/a&gt;) to make the overall logic more readable.&lt;/p&gt;

&lt;p&gt;Most of the magic happens in the JavaScript function named &lt;em&gt;evaluateChangeNotificationCompliance&lt;/em&gt;. Its logic is parametrized based on &lt;em&gt;ruleParameters&lt;/em&gt; and the value of &lt;em&gt;desiredInstanceType&lt;/em&gt; — that we will define in a CloudFormation template below — so that we can reuse the same Lambda function for different rules.&lt;/p&gt;
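&lt;p&gt;Since the gist isn’t embedded in this feed, here is a rough Python equivalent of that core evaluation logic (field access follows the configurationItem shape described above; the dict keys are the ones AWS Config provides):&lt;/p&gt;

```python
COMPLIANT = 'COMPLIANT'
NON_COMPLIANT = 'NON_COMPLIANT'
NOT_APPLICABLE = 'NOT_APPLICABLE'

def evaluate_change_notification_compliance(configuration_item, rule_parameters):
    """Parametrized via ruleParameters (desiredInstanceType), so the
    same function can back multiple AWS Config rules."""
    # ignore resources outside the rule's intent
    if configuration_item['resourceType'] != 'AWS::EC2::Instance':
        return NOT_APPLICABLE
    desired_type = rule_parameters['desiredInstanceType']
    actual_type = configuration_item['configuration']['instanceType']
    return COMPLIANT if actual_type == desired_type else NON_COMPLIANT
```

&lt;p&gt;The real handler would then pass the returned value to the PutEvaluations API, together with the resultToken received in the event.&lt;/p&gt;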

&lt;p&gt;Now, let’s define our AWS Config custom rule and Lambda function in CloudFormation:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Defining a custom rule is fairly intuitive. In the &lt;em&gt;Scope&lt;/em&gt; property I am selecting only &lt;em&gt;AWS::EC2::Instance&lt;/em&gt; resources and I am passing &lt;em&gt;t2.small&lt;/em&gt; as an input parameter of the custom rule. Then, I define the &lt;em&gt;Source&lt;/em&gt; property and reference my Lambda function.&lt;/p&gt;

&lt;p&gt;You can find the &lt;a href="https://docs.aws.amazon.com/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-config-configrule.html" rel="noopener noreferrer"&gt;full documentation about AWS Config custom rules here&lt;/a&gt;, with good references for scheduled rules, tags filtering, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Amazon Kinesis Data Firehose (data validation)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/kinesis/data-firehose/" rel="noopener noreferrer"&gt;Kinesis Data Firehose&lt;/a&gt; allows you to ingest streaming data into standard destinations for analytics purposes such as Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk.&lt;/p&gt;

&lt;p&gt;You can have multiple data producers that will PutRecords into your delivery stream. Kinesis Firehose &lt;strong&gt;will take care of buffering, compressing, encrypting, and optionally even reshaping and optimizing your data&lt;/strong&gt; for query performance (for example, in Parquet columnar format).&lt;/p&gt;

&lt;p&gt;Additionally, you can attach a Lambda function to the delivery stream. This function will be able to &lt;a href="https://docs.aws.amazon.com/en_us/firehose/latest/dev/data-transformation.html" rel="noopener noreferrer"&gt;validate, manipulate, or enrich incoming records&lt;/a&gt; before Kinesis Firehose proceeds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q1pdt26f2im8oddoxkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q1pdt26f2im8oddoxkj.png" width="800" height="353"&gt;&lt;/a&gt;(Optionally, you might have API Gateway or CloudFront in front of Kinesis Firehose for RESTful data ingestion)&lt;/p&gt;

&lt;p&gt;Your Lambda function will receive &lt;strong&gt;a batch of records&lt;/strong&gt; and will need to return the same list of records with an additional &lt;em&gt;result&lt;/em&gt; field, whose value can be one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Ok&lt;/em&gt; if the record was successfully processed/validated&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Dropped&lt;/em&gt; if the record doesn’t need to be stored (Firehose will just skip it)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;ProcessingFailed&lt;/em&gt; if the record is not valid or something went wrong during its processing/manipulation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s now implement a generic and reusable validation &amp;amp; manipulation logic in Python:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The code snippet above is structured so that you only need to implement your own &lt;em&gt;transform_data&lt;/em&gt; logic. There you can add new fields, manipulate existing ones, or decide to skip/drop the current record by raising a &lt;em&gt;DroppedRecordException&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A few implementation details worth mentioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both incoming and outgoing records must be &lt;strong&gt;base64-encoded&lt;/strong&gt; (the snippet above already takes care of it)&lt;/li&gt;
&lt;li&gt;I am assuming the incoming records are in JSON format, but you may as well ingest CSV data or even your own custom format; just make sure you (de)serialize records properly, as Kinesis Firehose always expects to work with plain strings&lt;/li&gt;
&lt;li&gt;I am adding a trailing \n character after each encoded record so that Kinesis Firehose will serialize one JSON object per line in the delivery destination (this is required for Amazon S3 and Athena to work correctly)&lt;/li&gt;
&lt;/ul&gt;
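&lt;p&gt;Putting those details together, a minimal Python sketch of such a handler might look like this (the transform_data body is purely illustrative):&lt;/p&gt;

```python
import base64
import json

class DroppedRecordException(Exception):
    """Raised by transform_data to skip the current record."""

def transform_data(data):
    # illustrative transformation: require a 'value' field and enrich
    if 'value' not in data:
        raise DroppedRecordException()
    data['processed'] = True
    return data

def lambda_handler(event, context):
    output = []
    for record in event['records']:
        try:
            # incoming records are base64-encoded plain strings (JSON here)
            payload = json.loads(base64.b64decode(record['data']))
            transformed = transform_data(payload)
            # trailing newline -> one JSON object per line at the destination
            data = json.dumps(transformed) + '\n'
            result = {'result': 'Ok',
                      'data': base64.b64encode(data.encode()).decode()}
        except DroppedRecordException:
            result = {'result': 'Dropped', 'data': record['data']}
        except Exception:
            result = {'result': 'ProcessingFailed', 'data': record['data']}
        result['recordId'] = record['recordId']
        output.append(result)
    return {'records': output}
```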

&lt;p&gt;Of course, you can implement your own data manipulation logic in any programming language supported by AWS Lambda and — in some more advanced use cases —  &lt;strong&gt;you may need to fetch additional data from Amazon DynamoDB&lt;/strong&gt; or other data sources.&lt;/p&gt;

&lt;p&gt;Let’s now define our data ingestion application in CloudFormation.&lt;/p&gt;

&lt;p&gt;You can attach a Lambda function to a Kinesis Firehose delivery stream by defining the &lt;em&gt;ProcessingConfiguration&lt;/em&gt; attribute.&lt;/p&gt;

&lt;p&gt;In addition to that, let’s set up Firehose to deliver the incoming records to Amazon S3 &lt;strong&gt;every 60 seconds&lt;/strong&gt; (or as soon as &lt;strong&gt;10MB are collected&lt;/strong&gt;), &lt;strong&gt;compressed with GZIP&lt;/strong&gt;. We’ll also need an ad-hoc IAM Role to define fine-grained permissions for Firehose to invoke our Lambda and write into S3.&lt;/p&gt;

&lt;p&gt;Here is the full CloudFormation template for your reference:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
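&lt;p&gt;As a rough sketch (not the full template), the relevant delivery-stream fragment could look like this; &lt;em&gt;DestinationBucket&lt;/em&gt;, &lt;em&gt;FirehoseRole&lt;/em&gt;, and &lt;em&gt;ProcessingFunction&lt;/em&gt; are placeholder resources assumed to be defined elsewhere in the same template:&lt;/p&gt;

```yaml
Resources:
  DeliveryStream:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      ExtendedS3DestinationConfiguration:
        BucketARN: !GetAtt DestinationBucket.Arn
        RoleARN: !GetAtt FirehoseRole.Arn
        BufferingHints:
          IntervalInSeconds: 60   # deliver every 60 seconds...
          SizeInMBs: 10           # ...or as soon as 10MB are collected
        CompressionFormat: GZIP
        ProcessingConfiguration:
          Enabled: true
          Processors:
            - Type: Lambda
              Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: !GetAtt ProcessingFunction.Arn
```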


&lt;p&gt;The best part of this architecture, in my opinion, is that it’s 100% serverless: you won’t be charged if no data is being ingested. This allows you to keep multiple environments available 24x7 for development and testing at virtually no cost.&lt;/p&gt;

&lt;p&gt;You can find the &lt;a href="https://docs.aws.amazon.com/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-kinesisfirehose-deliverystream.html" rel="noopener noreferrer"&gt;complete CloudFormation documentation here&lt;/a&gt;. Plus, you’ll also find &lt;a href="https://github.com/alexcasalboni/serverless-data-pipeline-sam" rel="noopener noreferrer"&gt;an end-to-end pipeline including Amazon API Gateway and Amazon Athena here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. AWS CloudFormation (Macros)
&lt;/h3&gt;

&lt;p&gt;We have already seen many &lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt; templates so far in this article. That’s how you define your applications and resources in a JSON or YAML template. CloudFormation allows you to deploy the same stack to multiple AWS accounts, regions, or environments such as dev and prod.&lt;/p&gt;

&lt;p&gt;A few months ago — &lt;a href="https://aws.amazon.com/about-aws/whats-new/2018/09/introducing-aws-cloudformation-macros/" rel="noopener noreferrer"&gt;in September 2018&lt;/a&gt; — AWS announced a new CloudFormation feature called Macros.&lt;/p&gt;

&lt;p&gt;CloudFormation comes with built-in transforms such as &lt;em&gt;AWS::Include&lt;/em&gt; and &lt;em&gt;AWS::Serverless&lt;/em&gt; that simplify template authoring by condensing resource definition expressions and enabling component reuse. These transforms are applied to your CloudFormation templates at deployment time.&lt;/p&gt;

&lt;p&gt;Similarly, a &lt;a href="https://docs.aws.amazon.com/en_us/AWSCloudFormation/latest/UserGuide/template-macros.html" rel="noopener noreferrer"&gt;CloudFormation Macro&lt;/a&gt; is a &lt;strong&gt;custom transform&lt;/strong&gt; backed by your own Lambda Function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4ysbk9ic8oharqlje4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4ysbk9ic8oharqlje4i.png" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three main steps to create and use a macro:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Lambda function that will process the raw template&lt;/li&gt;
&lt;li&gt;Define a resource of type &lt;em&gt;AWS::CloudFormation::Macro&lt;/em&gt; (&lt;a href="https://docs.aws.amazon.com/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-cloudformation-macro.html" rel="noopener noreferrer"&gt;resource reference here&lt;/a&gt;), map it to the Lambda function above, and deploy the stack&lt;/li&gt;
&lt;li&gt;Use the Macro in a CloudFormation template&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Macros are particularly powerful because you can apply them either to the whole CloudFormation template — using the &lt;em&gt;Transform&lt;/em&gt; property — or only to a sub-section — using the intrinsic &lt;a href="https://docs.aws.amazon.com/en_us/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-transform.html" rel="noopener noreferrer"&gt;Fn::Transform function&lt;/a&gt;, optionally with parameters.&lt;/p&gt;

&lt;p&gt;For example, you may define a macro that will expand a simple resource &lt;em&gt;MyCompany::StaticWebsite&lt;/em&gt; into a proper set of resources and corresponding defaults, including S3 buckets, CloudFront distributions, IAM roles, CloudWatch alarms, etc.&lt;/p&gt;

&lt;p&gt;It’s also useful to remember that you can use macros only in the account in which they were created and that macro names must be unique within a given account. If you enable cross-account access to your processing function, you can define the same macro in multiple accounts for easier reuse.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to implement a CloudFormation Macro
&lt;/h4&gt;

&lt;p&gt;Let’s now focus on the implementation details of the Lambda function performing the template processing.&lt;/p&gt;

&lt;p&gt;When your function is invoked, it’ll receive the following as input:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;region&lt;/em&gt; is the region in which the macro resides&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;accountId&lt;/em&gt; is the ID of the AWS account invoking this function&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;fragment&lt;/em&gt; is the portion of the template available for processing (could be the whole template or only a sub-section of it) in JSON format, including siblings&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;params&lt;/em&gt; is available only if you are processing a sub-section of the template and it contains the custom parameters provided by the target stack (not evaluated)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;templateParameterValues&lt;/em&gt; contains the template parameters of the target stack (already evaluated)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;requestId&lt;/em&gt; is the ID of the current function invocation (used only to match the response)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the processing logic is completed, the Lambda function will need to return the following three attributes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;requestId&lt;/em&gt; must match the same request ID provided as input&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;status&lt;/em&gt; should be set to the string "success" (anything else will be treated as a processing failure)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;fragment&lt;/em&gt; is the processed template, including siblings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s interesting to note that in some cases the processed &lt;em&gt;fragment&lt;/em&gt; will be identical to the &lt;em&gt;fragment&lt;/em&gt; you receive as input.&lt;/p&gt;

&lt;p&gt;I can think of four possible manipulation/processing scenarios:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your function processes some resources and &lt;strong&gt;customizes their properties&lt;/strong&gt; (without adding or removing other resources)&lt;/li&gt;
&lt;li&gt;Your function &lt;strong&gt;extends the input fragment by creating new resources&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Your function &lt;strong&gt;replaces some of the resources&lt;/strong&gt;  — potentially your own custom types — with other real CloudFormation resources (note: this is what AWS SAM does too!)&lt;/li&gt;
&lt;li&gt;Your function does not alter the input fragment, but &lt;strong&gt;intentionally fails if something is wrong or missing&lt;/strong&gt; (for example, if encryption is disabled or if granted permissions are too open)&lt;/li&gt;
&lt;/ol&gt;
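&lt;p&gt;As a concrete illustration of &lt;em&gt;scenario (1)&lt;/em&gt;, a minimal processing function might walk the fragment and enforce default encryption on every S3 bucket; the resource walk and default values below are illustrative, while the &lt;em&gt;requestId&lt;/em&gt;/&lt;em&gt;status&lt;/em&gt;/&lt;em&gt;fragment&lt;/em&gt; response shape matches the contract described above:&lt;/p&gt;

```python
def handler(event, context):
    """CloudFormation Macro processing function (scenario 1: property tweaks).

    Adds default server-side encryption to every S3 bucket in the fragment.
    """
    fragment = event['fragment']
    for logical_id, resource in fragment.get('Resources', {}).items():
        if resource.get('Type') == 'AWS::S3::Bucket':
            props = resource.setdefault('Properties', {})
            # Only set a default if the template author didn't configure it
            props.setdefault('BucketEncryption', {
                'ServerSideEncryptionConfiguration': [
                    {'ServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}}
                ]
            })
    return {
        'requestId': event['requestId'],  # must match the input request ID
        'status': 'success',              # anything else is a processing failure
        'fragment': fragment,             # the processed template fragment
    }
```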

&lt;p&gt;Of course, your macros could implement a mix of the four scenarios above.&lt;/p&gt;

&lt;p&gt;In my opinion, &lt;em&gt;scenario (4)&lt;/em&gt; is particularly powerful because it allows you to &lt;strong&gt;implement custom configuration checks before the resources are actually deployed and provisioned&lt;/strong&gt;, in contrast with the AWS Config solution we’ve discussed at the beginning of this article, which evaluates resources only after they have been created.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scenario (3)&lt;/em&gt; is probably the most commonly used, as it allows you to define your own personalized resources such as &lt;em&gt;MyCompany::StaticWebsite&lt;/em&gt; (with S3 buckets, CloudFront distributions, or Amplify Console apps) or &lt;em&gt;MyCompany::DynamoDB::Table&lt;/em&gt; (with enabled autoscaling, on-demand capacity, or even a complex shared configuration for primary key and indexes), etc.&lt;/p&gt;

&lt;p&gt;Some of the more complex macros make use of a mix of stateless processing and &lt;a href="https://docs.aws.amazon.com/en_us/AWSCloudFormation/latest/UserGuide/template-custom-resources.html" rel="noopener noreferrer"&gt;CloudFormation Custom Resources&lt;/a&gt; backed by an additional Lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/CloudFormation/MacrosExamples" rel="noopener noreferrer"&gt;Here you can find real-world implementation examples of CloudFormation Macros&lt;/a&gt;, the corresponding macro templates, and a few sample templates too. I am quite sure you will enjoy the following macros in particular: &lt;em&gt;AWS::S3::Object&lt;/em&gt;, &lt;em&gt;Count&lt;/em&gt;, &lt;em&gt;StackMetrics&lt;/em&gt;, &lt;em&gt;StringFunctions&lt;/em&gt;, and more!&lt;/p&gt;

&lt;h4&gt;
  
  
  How to deploy a CloudFormation Macro
&lt;/h4&gt;

&lt;p&gt;Once you’ve implemented the processing function, you can use it to deploy a new macro.&lt;/p&gt;

&lt;p&gt;Here is how you define a new macro resource:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
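&lt;p&gt;In its simplest form, the macro resource could be sketched as follows; &lt;em&gt;MacroFunction&lt;/em&gt; is a placeholder for the processing Lambda function, assumed to be defined in the same template:&lt;/p&gt;

```yaml
Resources:
  Macro:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: MyUniqueMacroName                    # must be unique within the account
      FunctionName: !GetAtt MacroFunction.Arn    # the processing Lambda function
```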


&lt;p&gt;That’s it!&lt;/p&gt;

&lt;p&gt;AWS CloudFormation will invoke the processing function every time we reference the macro named &lt;em&gt;MyUniqueMacroName&lt;/em&gt; in a CloudFormation template.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to use a CloudFormation Macro
&lt;/h4&gt;

&lt;p&gt;For most developers, using an existing macro is the most likely scenario.&lt;/p&gt;

&lt;p&gt;It’s quite common that macros are owned and managed by your organization or by another team, and that you’ll just use/reference a macro in your CloudFormation templates.&lt;/p&gt;

&lt;p&gt;Here is how you can use the macro defined above and apply it to the whole template:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
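&lt;p&gt;A minimal sketch of a template that applies the macro globally via the &lt;em&gt;Transform&lt;/em&gt; property (the &lt;em&gt;WebsiteBucket&lt;/em&gt; resource is just a placeholder):&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: MyUniqueMacroName   # the whole template is sent to the macro's function
Resources:
  WebsiteBucket:
    Type: AWS::S3::Bucket
```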


&lt;p&gt;In case you’d like to apply the same macro only to a sub-section of your template, you can do so by using the &lt;em&gt;Fn::Transform&lt;/em&gt; intrinsic function:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
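&lt;p&gt;Sketched with &lt;em&gt;Fn::Transform&lt;/em&gt;, only the enclosing section is sent to the macro; the &lt;em&gt;Stage&lt;/em&gt; parameter below is a hypothetical example of the custom parameters passed to the function as &lt;em&gt;params&lt;/em&gt;:&lt;/p&gt;

```yaml
Resources:
  WebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      Fn::Transform:
        Name: MyUniqueMacroName
        Parameters:      # optional, delivered unevaluated as "params"
          Stage: dev
```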


&lt;p&gt;Let me know what CloudFormation Macros you’ll build and what challenges they solve for your team!&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;That’s all for Part 1 :)&lt;/p&gt;

&lt;p&gt;I hope you have learned something new about &lt;a href="https://aws.amazon.com/cognito/" rel="noopener noreferrer"&gt;Amazon Cognito&lt;/a&gt;, &lt;a href="https://aws.amazon.com/config/" rel="noopener noreferrer"&gt;AWS Config&lt;/a&gt;, &lt;a href="https://aws.amazon.com/kinesis/data-firehose/" rel="noopener noreferrer"&gt;Amazon Kinesis Data Firehose&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;AWS CloudFormation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can now customize your Cognito User Pools workflow, validate your configurations in real-time, manipulate and validate data before Kinesis delivers it to the destination, and implement macros to enrich your CloudFormation templates.&lt;/p&gt;

&lt;p&gt;In the next two parts of this series, we will learn more about other less common Lambda integrations for services such as AWS IoT 1-Click, Amazon Lex, Amazon CloudWatch Logs, AWS CodeDeploy, and Amazon Aurora.&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read such a long article.&lt;br&gt;&lt;br&gt;
Feel free to share and/or drop a comment below.&lt;/p&gt;




&lt;p&gt;Originally published on &lt;a href="https://hackernoon.com/how-to-faas-like-a-pro-12-uncommon-ways-to-invoke-your-serverless-functions-on-aws-part-1-dca1078f0c80" rel="noopener noreferrer"&gt;HackerNoon&lt;/a&gt; on Apr 2, 2019.&lt;/p&gt;




</description>
      <category>aws</category>
      <category>serverless</category>
      <category>node</category>
      <category>python</category>
    </item>
    <item>
      <title>Welcome to Cloud-Native Development with AWS Cloud9 &amp; AWS CodeStar ⚡</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Mon, 15 Jan 2018 08:48:43 +0000</pubDate>
      <link>https://dev.to/alexcasalboni/welcome-to-cloud-native-development-with-aws-cloud9--aws-codestar--3ell</link>
      <guid>https://dev.to/alexcasalboni/welcome-to-cloud-native-development-with-aws-cloud9--aws-codestar--3ell</guid>
      <description>&lt;p&gt;I have been experimenting with &lt;a href="https://aws.amazon.com/cloud9/" rel="noopener noreferrer"&gt;AWS Cloud9&lt;/a&gt; since Werner Vogels announced it a few weeks ago at AWS re:Invent 2017 (&lt;a href="https://www.youtube.com/watch?v=fwFoU_Wb-fU" rel="noopener noreferrer"&gt;keynote video here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;This article is the paraphrased version of my talk &lt;a href="https://clda.co/aws-cloud9-codestar" rel="noopener noreferrer"&gt;AWS Cloud9 &amp;amp; CodeStar for Serverless Apps&lt;/a&gt;, given at the &lt;a href="https://www.meetup.com/AWSusergroupItaly" rel="noopener noreferrer"&gt;AWS User Group in Milan&lt;/a&gt; on Jan 10th.&lt;/p&gt;




&lt;p&gt;I am going to skip the “&lt;em&gt;Serverless Disclaimer&lt;/em&gt;” section of my deck.&lt;br&gt;&lt;br&gt;
If you are not familiar with Serverless, please have a look &lt;a href="https://medium.com/@PaulDJohnston/a-simple-definition-of-serverless-8492adfb175a" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or &lt;a href="https://martinfowler.com/articles/serverless.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or &lt;a href="https://auth0.com/blog/what-is-serverless/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or &lt;a href="https://en.wikipedia.org/wiki/Serverless_computing" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or &lt;a href="https://devops.stackexchange.com/questions/61/what-is-serverless" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or &lt;a href="https://dev.to/adnanrahic/a-crash-course-on-serverless-with-nodejs-5jp"&gt;here&lt;/a&gt;, or &lt;a href="https://hackernoon.com/how-serverless-computing-will-change-the-world-in-2018-7818fc06b447" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are familiar with Serverless and you don’t like it, you may still enjoy this article and the benefits of Cloud9 and CodeStar. Just make sure you mentally replace “&lt;em&gt;FaaS&lt;/em&gt;” with “&lt;em&gt;Container&lt;/em&gt;”, and “&lt;em&gt;SAM&lt;/em&gt;” with “&lt;em&gt;CloudFormation&lt;/em&gt;” :)&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-951172504743268353-902" src="https://platform.twitter.com/embed/Tweet.html?id=951172504743268353"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-951172504743268353-902');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=951172504743268353&amp;amp;theme=dark"
  }



&lt;/p&gt;




&lt;h3&gt;
  
  
  What is AWS Cloud9?
&lt;/h3&gt;

&lt;p&gt;AWS Cloud9 is a “&lt;em&gt;cloud IDE&lt;/em&gt;” for writing, running, and debugging code.&lt;/p&gt;

&lt;p&gt;I’d start by saying that most IDEs are fantastic tools to boost your productivity and the quality of your code, provided you’ve invested a few months (or years) in learning them properly.&lt;/p&gt;

&lt;p&gt;That being said, some IDEs offer more advanced and useful features than others (&lt;em&gt;please don’t take it personally, unless you code in Word or Notepad&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0eqv8sy27ou4xegnknpr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0eqv8sy27ou4xegnknpr.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://techcrunch.com/2016/07/14/amazons-aws-buys-cloud9-to-add-more-development-tools-to-its-web-services-stack/" rel="noopener noreferrer"&gt;AWS acquired Cloud9&lt;/a&gt; in July 2016, which has now been rebranded as AWS Cloud9. Even though it looked brilliant on Werner’s browser, my first impression during the keynote was something along the lines of “ &lt;strong&gt;&lt;em&gt;Why should I pay to write code?&lt;/em&gt;&lt;/strong&gt; ”, immediately followed by “ &lt;strong&gt;&lt;em&gt;Does that mean I cannot code when I’m offline?&lt;/em&gt;&lt;/strong&gt; ”. Like most software engineers, I’ve used many IDEs in the last ten years, for free, and I’m used to coding a lot while I’m traveling.&lt;/p&gt;

&lt;p&gt;Apparently, I was not alone, and many developers asked the same questions during my presentation. So let me briefly recap my arguments.&lt;/p&gt;

&lt;p&gt;Regarding the cost, I believe it’s pretty much negligible for most organizations that already use AWS heavily: less than &lt;em&gt;$2 per month&lt;/em&gt; for a &lt;em&gt;t2.micro&lt;/em&gt; environment used 8 hours a day, 20 days a month. And that’s without even considering the AWS Free Tier and automatic cost-saving settings (hibernation after 30 minutes of inactivity).&lt;/p&gt;

&lt;p&gt;The “&lt;em&gt;no coding offline&lt;/em&gt;” drawback is much harder to defend, but let me try.&lt;/p&gt;

&lt;p&gt;Unless you are going for a hardcore debugging session over a well-established project, can you really code for more than 30min when you are offline? Can you git clone or git pull anything useful? Can you npm install or pip install the modules you need? Can you mock or ignore all the third-party services and APIs your application consumes? Can you avoid googling around your favorite framework’s documentation?&lt;/p&gt;

&lt;p&gt;Sure, you could prepare for a 12h flight and download/install everything you need in advance. But how often does that happen? Simply put, I’ve seen the best developers and engineers give up and take a break when the network goes down.&lt;/p&gt;

&lt;p&gt;On the other hand, AWS Cloud9 offers you a better alternative for when your machine gives up :) I could throw my dev machine out of the window right now, switch to my colleague’s notebook, log in to my AWS account, and keep working on the very same Cloud9 session (which is saved and stored server-side). That means you could just as well use a much cheaper machine such as a Chromebook or a tablet (or a phone?). You could even be 100% operational on a random machine in an internet cafe, or on your grandmother’s computer :)&lt;/p&gt;

&lt;p&gt;Of course, there are always exceptions, and I’ll make sure I’m ready to use my local IDE when Cloud9 is not an option. In the meantime, I hope AWS will work on some sort of client-side offline support (and maybe turn Cloud9 into a progressive web app?).&lt;/p&gt;

&lt;h4&gt;
  
  
  … and why does it matter for developers?
&lt;/h4&gt;

&lt;p&gt;I think AWS Cloud9 solves a bunch of problems for the many organizations currently trying to set up elaborate stacks of tools on every developer’s machine, especially if the team is heterogeneous and/or distributed.&lt;/p&gt;

&lt;p&gt;Let’s recap some of its features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s a &lt;strong&gt;full-fledged IDE&lt;/strong&gt; (based on the &lt;a href="https://github.com/ajaxorg/ace/" rel="noopener noreferrer"&gt;Ace open-source editor&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;It comes with an &lt;strong&gt;integrated web terminal&lt;/strong&gt;.
It is a real ssh session directly in your browser, without the need to manage and store ssh keys or IAM credentials on your local machine.&lt;/li&gt;
&lt;li&gt;This web terminal can run on a new EC2 instance managed by AWS Cloud9 (EC2 env.), or you can bring your own instance (SSH env.)&lt;/li&gt;
&lt;li&gt;EC2 environments offer quite a handy &lt;strong&gt;cost-saving functionality&lt;/strong&gt; , which means you can optionally configure them to &lt;strong&gt;hibernate&lt;/strong&gt; after 30 minutes of inactivity (or more)&lt;/li&gt;
&lt;li&gt;EC2 environments are based on &lt;a href="http://docs.aws.amazon.com/cloud9/latest/user-guide/ami-contents.html" rel="noopener noreferrer"&gt;this Amazon Machine Image&lt;/a&gt;, which includes &lt;strong&gt;at least 90% of the dev tooling you need&lt;/strong&gt;.
For example, the &lt;em&gt;AWS CLI, sam-local, git, gcc, c++, Docker, node.js, npm, nvm, CoffeeScript, Python, virtualenv, pip, pylint, boto3, PHP, MySQL, Apache, Ruby, Rails, Go, Java&lt;/em&gt;, etc.
In case anything is missing, you can always install it :)&lt;/li&gt;
&lt;li&gt;AWS Cloud9 comes with the &lt;strong&gt;live debugging capabilities&lt;/strong&gt; you’d expect from a modern IDE (currently only for Node.js, though)&lt;/li&gt;
&lt;li&gt;It enables &lt;strong&gt;collaborative coding and debugging sessions&lt;/strong&gt;.
You can share a Cloud9 environment with other &lt;em&gt;IAM users&lt;/em&gt; and invite them with &lt;em&gt;read-only&lt;/em&gt; or &lt;em&gt;read-write&lt;/em&gt; permissions.&lt;/li&gt;
&lt;li&gt;It comes with built-in support for &lt;strong&gt;AWS Lambda&lt;/strong&gt;.
This greatly simplifies the process of creating new Lambda Functions, updating and testing their code locally, deploying new versions, etc.&lt;/li&gt;
&lt;li&gt;Plus, &lt;strong&gt;AWS SAM&lt;/strong&gt; is part of the team too.
&lt;a href="https://github.com/awslabs/serverless-application-model" rel="noopener noreferrer"&gt;AWS SAM&lt;/a&gt; — or Serverless Application Model — is an open specification that allows you to define serverless applications and related resources with a simplified CloudFormation syntax. Cloud9 natively integrates some of the functionalities offered by &lt;a href="https://github.com/awslabs/aws-sam-local" rel="noopener noreferrer"&gt;SAM Local&lt;/a&gt;, an open-source CLI tool written in Go by AWS to simplify local development and testing of Serverless applications. For example, you can invoke Functions locally and emulate API Gateway endpoints too.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I took the following screenshot during a live-debugging session of a very simple Lambda Function ( &lt;strong&gt;&lt;em&gt;note&lt;/em&gt;&lt;/strong&gt; : I also spent 3 minutes customising theme &amp;amp; layout, according to my taste and needs).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cloud9/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwnghi5amavlok68q58t.png"&gt;&lt;/a&gt;AWS Cloud9 in action during a live-debugging session with Node.js&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Cloud9 Limitations and my personal “wishes”
&lt;/h4&gt;

&lt;p&gt;I do have a few wishes for AWS Cloud9, and I’ve shared a few of them on Twitter (tweets below).&lt;/p&gt;

&lt;p&gt;Let me discuss a few of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The “&lt;em&gt;numeric limitations&lt;/em&gt;” are pretty reasonable, in my opinion. You can create up to 20 environments per user (max 10 open concurrently), 100 per account, and you can invite up to 8 members into each environment.
These are reasonable numbers, since &lt;strong&gt;you can’t share environments across AWS accounts&lt;/strong&gt; (yet?)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live debugging&lt;/strong&gt; is terrific, but &lt;strong&gt;only available for Node.js&lt;/strong&gt;. I’m looking forward to built-in support for Python (and all the others, of course).&lt;/li&gt;
&lt;li&gt;The same holds for some &lt;strong&gt;runtime customizations&lt;/strong&gt; currently not supported. For example, I couldn’t find a way to change the &lt;strong&gt;default linting and code completion&lt;/strong&gt; behavior for Python (only &lt;em&gt;pylint&lt;/em&gt; arguments are supported). Without custom linting, Python3 developers cannot benefit from &lt;a href="https://twitter.com/alex_casalboni/status/949685627456573440" rel="noopener noreferrer"&gt;Python’s type hints capabilities&lt;/a&gt; (&lt;a href="http://mypy-lang.org/" rel="noopener noreferrer"&gt;&lt;em&gt;mypy&lt;/em&gt;&lt;/a&gt; is required).&lt;/li&gt;
&lt;li&gt;As discussed above, there is &lt;strong&gt;no support for “offline” development&lt;/strong&gt;.
Working offline may become a critical need in some scenarios/teams, although you can always download the whole environment to your local machine with one click (&lt;em&gt;File &amp;gt; Download Project&lt;/em&gt;) and keep working locally.&lt;/li&gt;
&lt;li&gt;I think Cloud9 could be much &lt;strong&gt;better integrated with the AWS Console&lt;/strong&gt; , especially with services such as Lambda and API Gateway. For example, there is no easy way to jump to the Lambda Console of a given Function (or API Gateway).&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;built-in Lambda integration&lt;/strong&gt; is excellent, but it’s still &lt;strong&gt;not as productive as the native Lambda Console&lt;/strong&gt;. For example, you can test a Function locally, but you can’t quickly pick the test event from a list of templates. For now, you can work around this by running &lt;a href="https://github.com/awslabs/aws-sam-local#generate-sample-event-source-payloads" rel="noopener noreferrer"&gt;sam local generate-event&lt;/a&gt; in the terminal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-948514630472790017-51" src="https://platform.twitter.com/embed/Tweet.html?id=948514630472790017"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-948514630472790017-51');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=948514630472790017&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-948513525420486656-602" src="https://platform.twitter.com/embed/Tweet.html?id=948513525420486656"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-948513525420486656-602');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=948513525420486656&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-950315058604793856-316" src="https://platform.twitter.com/embed/Tweet.html?id=950315058604793856"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-950315058604793856-316');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=950315058604793856&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-949685627456573440-848" src="https://platform.twitter.com/embed/Tweet.html?id=949685627456573440"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-949685627456573440-848');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=949685627456573440&amp;amp;theme=dark"
  }



&lt;/p&gt;




&lt;h3&gt;
  
  
  What is AWS CodeStar?
&lt;/h3&gt;

&lt;p&gt;AWS CodeStar (aka &lt;strong&gt;Code-*&lt;/strong&gt; ) is a sort of “&lt;em&gt;catch-all service&lt;/em&gt;” for the ever-expanding suite of tools for developers.&lt;br&gt;&lt;br&gt;
It is a free service that lets you manage and link together services such as &lt;em&gt;CodeCommit&lt;/em&gt;, &lt;em&gt;CodeBuild&lt;/em&gt;, &lt;em&gt;CodePipeline&lt;/em&gt;, &lt;em&gt;CodeDeploy&lt;/em&gt;, &lt;em&gt;Lambda&lt;/em&gt;, &lt;em&gt;EC2&lt;/em&gt;, &lt;em&gt;Elastic Beanstalk&lt;/em&gt;, &lt;em&gt;CloudFormation&lt;/em&gt;, &lt;em&gt;Cloud9&lt;/em&gt;, etc.&lt;/p&gt;

&lt;p&gt;One of my 2018 new year resolutions is to use more memes in my decks (until someone decides to stop me, for some reason), so here’s how I presented some of the pain points that CodeStar can solve.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/CI/CD" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiiccdbcu7mfvn4pibilu.jpeg"&gt;&lt;/a&gt;Too many projects tend to &lt;strong&gt;postpone &lt;/strong&gt;&lt;a href="https://en.wikipedia.org/wiki/CI/CD" rel="noopener noreferrer"&gt;&lt;strong&gt;CI/CD&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt; until it’s way too late&lt;/strong&gt; and their &lt;strong&gt;productivity level is too low&lt;/strong&gt;, just because it sounds hard. But it doesn’t have to be that hard, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskx5yrwq4111ek1psnj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskx5yrwq4111ek1psnj0.png"&gt;&lt;/a&gt;Nobody likes &lt;strong&gt;building and maintaining real-time dashboards&lt;/strong&gt;. But they are vital from day one, for a manager to assess the status of a project, as well as for a developer to monitor how the system is behaving.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://git-scm.com/docs/git-push" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2oe4jkuk2smha6w8ckn.jpeg"&gt;&lt;/a&gt;This is what most developers want to do all day. &lt;strong&gt;Just write some code and push to master&lt;/strong&gt;. How can you provide such a simple and frictionless experience without impacting quality and ownership?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.atlassian.com/software/jira" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0n0kx79toyngz25lith.jpeg"&gt;&lt;/a&gt;Issue tracking can be a very &lt;strong&gt;frustrating experience&lt;/strong&gt;, especially if not well integrated with source control, access control, team management, monitoring, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Data-driven parenthesis&lt;/em&gt;&lt;/strong&gt; : I can statistically confirm that the JIRA meme generated 42% more laughs than all others combined.&lt;/p&gt;
&lt;h4&gt;
  
  
  … and why does it matter for organizations?
&lt;/h4&gt;

&lt;p&gt;CodeStar may not be the best fit for every project/organization, especially the most experienced and advanced ones, but it definitely provides some very good defaults to get started with. It’s worth noting that CodeStar is 100% free, and you only pay for the resources it will spin up on your behalf.&lt;/p&gt;

&lt;p&gt;Let’s recap its features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CodeStar offers the concept of “ &lt;strong&gt;&lt;em&gt;project templates&lt;/em&gt;&lt;/strong&gt; ”.
Each template represents a complete stack and includes a sample app, with a given backend, programming language, and framework.&lt;/li&gt;
&lt;li&gt;It supports three compute layers: &lt;strong&gt;&lt;em&gt;EC2&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Elastic Beanstalk&lt;/em&gt;&lt;/strong&gt; , and  &lt;strong&gt;&lt;em&gt;Lambda&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;It supports six programming languages: &lt;strong&gt;&lt;em&gt;C#&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Java&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Node.js&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Python&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;PHP&lt;/em&gt;&lt;/strong&gt; , and &lt;strong&gt;&lt;em&gt;Ruby&lt;/em&gt;&lt;/strong&gt; (plus plain HTML apps).&lt;/li&gt;
&lt;li&gt;It supports plenty of frameworks: &lt;strong&gt;&lt;em&gt;Express&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Spring&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Django&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Flask&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;ASP.NET Core&lt;/em&gt;&lt;/strong&gt; , &lt;strong&gt;&lt;em&gt;Laravel&lt;/em&gt;&lt;/strong&gt; , etc.
&lt;strong&gt;&lt;em&gt;Note&lt;/em&gt;&lt;/strong&gt; : AWS Lambda projects only support Express (Node.js) and Spring (Java), plus a few sample projects for &lt;strong&gt;&lt;em&gt;Alexa Skills&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Depending on the project template of your choice, CodeStar will spin up a &lt;strong&gt;&lt;em&gt;CI/CD pipeline&lt;/em&gt;&lt;/strong&gt; by linking together CodePipeline, CodeBuild, and CodeDeploy.&lt;/li&gt;
&lt;li&gt;Every CodeStar project starts with &lt;strong&gt;source control&lt;/strong&gt; ( &lt;strong&gt;&lt;em&gt;git&lt;/em&gt;&lt;/strong&gt; ), either on &lt;strong&gt;&lt;em&gt;AWS CodeCommit or GitHub&lt;/em&gt;&lt;/strong&gt;. CodeStar will create the git repository for you (via OAuth in case of GitHub) and take care of triggers/hooks for CI/CD.&lt;/li&gt;
&lt;li&gt;You can optionally &lt;strong&gt;configure your own coding tools&lt;/strong&gt; to work with CodeStar. Currently, only Cloud9, Eclipse, and VSCode are natively supported (plus the regular AWS CLI). You’ll read more on the Cloud9 integration later in this article.&lt;/li&gt;
&lt;li&gt;Even though many projects start as a one-person effort over the weekend, good projects tend to evolve and &lt;strong&gt;onboard more people quickly&lt;/strong&gt;. CodeStar allows you to invite IAM users to your project with one of these roles: &lt;strong&gt;Owner&lt;/strong&gt; (“God Mode”), &lt;strong&gt;Contributor&lt;/strong&gt; (everything but team management), or &lt;strong&gt;Viewer&lt;/strong&gt; (read-only dashboard access). More technical info &lt;a href="https://docs.aws.amazon.com/codestar/latest/userguide/access-permissions.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;As mentioned above, &lt;strong&gt;issue tracking&lt;/strong&gt; is probably the most frustrating part of many projects, as it brings context switching, visibility, and misunderstanding issues into the equation. CodeStar allows you to &lt;strong&gt;integrate JIRA or GitHub Issues&lt;/strong&gt; into your project dashboard, which might help in reducing the context switch and centralizing all the information.&lt;/li&gt;
&lt;li&gt;CodeStar provides a &lt;strong&gt;customisable app dashboard&lt;/strong&gt; (screenshot below), which includes a &lt;strong&gt;&lt;em&gt;project wiki section&lt;/em&gt;&lt;/strong&gt; , a &lt;strong&gt;&lt;em&gt;CloudWatch Metrics&lt;/em&gt;&lt;/strong&gt; section, your project’s git history, API endpoints, open issues, Cloud9 environments, etc.
You can &lt;strong&gt;drag-and-drop sections around&lt;/strong&gt; as you wish and use this dashboard to check the overall status of your project quickly and effectively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codestar/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2r4gypwz11pyuf5um0i.png"&gt;&lt;/a&gt;A typical CI/CD pipeline managed by CodeStar (AWS Lambda project)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/codestar/latest/userguide/how-to-customize.html" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgy20fbjvf4yyj0y5pdm.png"&gt;&lt;/a&gt;A brand-new CodeStar dashboard (AWS Lambda project)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/codestar/latest/userguide/how-to-manage-team-permissions.html" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oz76i3b61a1rocixebg.jpeg"&gt;&lt;/a&gt;CodeStar’s team management and user roles&lt;/p&gt;
&lt;h4&gt;
  
  
  AWS CodeStar Limitations and a few “&lt;em&gt;gotchas&lt;/em&gt;”
&lt;/h4&gt;

&lt;p&gt;CodeStar can look like magic if you’ve never played with CodePipeline and CodeBuild, but unfortunately it’s not perfect yet. I’ve shared a few “wishes” on Twitter too (tweets below), and here’s a quick recap of what I’ve found.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can create up to &lt;strong&gt;&lt;em&gt;333 projects per account&lt;/em&gt;&lt;/strong&gt; (it looks like the number came out of some kind of thoughtful calculation, right?), but you can only have &lt;strong&gt;&lt;em&gt;10 projects per user&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;100 users per project&lt;/strong&gt;.
As with Cloud9, these seem like reasonable numbers, especially since CodeStar &lt;strong&gt;does not support federated users&lt;/strong&gt; or temporary access credentials, which means you’ll need to create a lot of IAM users if a few developers start collaborating across AWS Accounts.&lt;/li&gt;
&lt;li&gt;Although the default user roles seem to cover most use cases, you are stuck with the owner/contributor/viewer permissions as &lt;strong&gt;you can’t create custom roles&lt;/strong&gt;. For example, you may want to have a “&lt;em&gt;project manager&lt;/em&gt;” role with view-only and team-management permissions.&lt;/li&gt;
&lt;li&gt;Remember that &lt;strong&gt;CodeStar permissions are not related to Cloud9 permissions&lt;/strong&gt;. If you want other owners or contributors to join your Cloud9 environment, you’ll have to invite them explicitly. Which makes sense, but I believe it could be improved/automated.&lt;/li&gt;
&lt;li&gt;The most critical limitation I’ve found is that &lt;strong&gt;there is no way to customise project templates&lt;/strong&gt;. As I’ve mentioned already, CodeStar provides pretty good defaults for a lot of things, but you’ll probably need to change a few details here and there. For example, you may want to add a testing step to CodePipeline, add custom permissions to the default IAM role, edit the default build file, change the default API Gateway stage, etc. And once you’ve done that, &lt;strong&gt;there is no way to save your edits into a new custom project template&lt;/strong&gt; so that your team will start from there. Which means you’ll have to apply your customizations to each new project. And you only have two options: 1) &lt;em&gt;applying changes manually&lt;/em&gt;, or 2) &lt;em&gt;automatically applying Change Sets to your CloudFormation Stacks&lt;/em&gt; (please note that every project will create one or more CloudFormation Stack and that the original templates are not versioned anywhere, so good luck with that!). Plus, see the next bullet point :)&lt;/li&gt;
&lt;li&gt;Some of the CodeStar functionalities are based on &lt;strong&gt;mysterious and undocumented CloudFormation magic&lt;/strong&gt;, such as the AWS::CodeStar::SyncResources resource and the AWS::CodeStar transform. They both sound pretty powerful, but there is no easy way to know what they are used for (or how we could use them).
My current understanding is that AWS::CodeStar::SyncResources will wait for all the other resources to be deployed (i.e. &lt;em&gt;DependsOn&lt;/em&gt;) and then make sure everything is okay (e.g. IAM permissions, project id, etc.). The AWS::CodeStar transform seems to simply inject SyncResources into the processed template so that we don’t have to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-948942798929227781-790" src="https://platform.twitter.com/embed/Tweet.html?id=948942798929227781"&gt;
&lt;/iframe&gt;




&lt;/p&gt;




&lt;h3&gt;
  
  
  What about AWS CodeStar + AWS Cloud9?
&lt;/h3&gt;

&lt;p&gt;Cloud9 and CodeStar are pretty cool services on their own, and I was excited to see how they’ve been integrated. Or, better, how Cloud9 has been integrated into CodeStar.&lt;/p&gt;

&lt;p&gt;You can associate AWS Cloud9 Environments with your CodeStar project natively. If multiple developers are working on the same project, you can create and assign a Cloud9 Environment to each developer (they can then invite each other to collaborate, if needed).&lt;/p&gt;

&lt;p&gt;Once you open Cloud9, you’ll find your IAM credentials integrated with git (&lt;a href="https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html" rel="noopener noreferrer"&gt;which does require some work&lt;/a&gt;) and your CodeCommit repository already cloned for you.&lt;/p&gt;
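&lt;p&gt;For reference, this is roughly the manual setup that CodeStar spares you (as described in the CodeCommit documentation linked above): pointing git at the AWS CLI credential helper.&lt;/p&gt;

```shell
# Use the AWS CLI as a git credential helper for CodeCommit over HTTPS,
# so git reuses your IAM credentials instead of separate git credentials.
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
```

&lt;p&gt;In a CodeStar-provisioned Cloud9 environment, this configuration is already in place for you.&lt;/p&gt;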

&lt;p&gt;Unfortunately, this &lt;strong&gt;magic doesn’t happen if you choose GitHub&lt;/strong&gt; (for now?).&lt;/p&gt;

&lt;p&gt;As a couple of friends and colleagues pointed out, it’s not such a critical or technically complex integration, in the sense that you could have taken care of it yourself (as you’d do on your local machine). But I think it’s a great way to &lt;strong&gt;streamline the development experience&lt;/strong&gt; and &lt;strong&gt;reduce the margin for error&lt;/strong&gt;, especially when you work on &lt;strong&gt;&lt;em&gt;multiple projects&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;multiple accounts&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, most developers make great use of &lt;strong&gt;&lt;em&gt;AWS profiles&lt;/em&gt;&lt;/strong&gt; when working on their local machine, and some of them also manage to remember which profile can do what, in which account, etc. With CodeStar+Cloud9 &lt;strong&gt;you won’t care anymore about profiles or local credentials&lt;/strong&gt; since every Cloud9 environment is bound to a specific project and account. Also, since CI/CD is enabled by default, most of the time you will just &lt;strong&gt;&lt;em&gt;write code, test with sam-local&lt;/em&gt;&lt;/strong&gt; and git push 💛&lt;/p&gt;

&lt;p&gt;Of course, you may also have a generic Cloud9 Environment (i.e. not related to a specific project) and use it with multiple profiles to manage unique resources or prototype new stuff.&lt;/p&gt;




&lt;h3&gt;
  
  
  Let me open a parenthesis: AWS SAM
&lt;/h3&gt;

&lt;p&gt;I decided to conclude my presentation with a brief parenthesis about AWS SAM, which got a few mentions and therefore deserves some context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Serverless alert&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5p67e5n6rstn81axfdq.jpeg"&gt;&lt;/a&gt;Meet SAM!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;SAM&lt;/em&gt; stands for &lt;strong&gt;Serverless Application Model&lt;/strong&gt; , and it’s an &lt;a href="https://github.com/awslabs/serverless-application-model" rel="noopener noreferrer"&gt;open specification&lt;/a&gt; whose goal is to offer a standard way to define serverless applications.&lt;/p&gt;

&lt;p&gt;Technically speaking, it’s a &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html" rel="noopener noreferrer"&gt;CloudFormation Transform&lt;/a&gt; named AWS::Serverless that will convert special Serverless resources such as AWS::Serverless::Function into standard CloudFormation syntax.&lt;/p&gt;

&lt;p&gt;You can think of Transforms as a way to &lt;strong&gt;augment the expressiveness of CloudFormation templates&lt;/strong&gt; so that you can define complex resources and their relationships in a much more concise way.&lt;/p&gt;
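&lt;p&gt;To make this concrete, here is a minimal (hypothetical) SAM template; the resource and handler names are illustrative, but the shape follows the 2016-10-31 specification linked above:&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # expanded by CloudFormation at deploy time
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function     # expands to a Lambda Function + IAM role + permissions
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./src
      Events:
        HelloApi:
          Type: Api                     # implicitly creates an API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

&lt;p&gt;Behind the scenes, the transform expands this short definition into the equivalent AWS::Lambda::Function, AWS::IAM::Role, and API Gateway resources you would otherwise have to write by hand.&lt;/p&gt;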

&lt;p&gt;If you are familiar with other tools such as the Serverless Framework, you’ll notice that the syntax is quite similar (there is even a &lt;a href="https://github.com/SAPessi/serverless-sam" rel="noopener noreferrer"&gt;plugin to convert your templates to SAM&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;You can deploy SAM templates with &lt;a href="https://github.com/awslabs/aws-sam-local" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS SAM Local&lt;/strong&gt;&lt;/a&gt;, a CLI tool for local development written in Go and officially released by AWS.&lt;/p&gt;

&lt;p&gt;You can use AWS SAM Local to &lt;strong&gt;test your Lambda Functions locally&lt;/strong&gt; and &lt;strong&gt;emulate API Gateway endpoints&lt;/strong&gt; too. The CLI tool is available by default on every Cloud9 EC2 Environment, and the UI already supports some of its functionalities.&lt;/p&gt;
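&lt;p&gt;For example, here is a hypothetical Python handler you could exercise locally before pushing (the file and function names are illustrative; CodeStar’s Lambda templates use Node.js or Java, but the idea is the same):&lt;/p&gt;

```python
# handler.py -- a minimal Lambda handler for an API Gateway proxy integration
import json

def handler(event, context):
    # API Gateway proxy integrations expect a dict with statusCode/headers/body
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }
```

&lt;p&gt;With sam-local installed, something like sam local invoke (with a sample event) or sam local start-api lets you iterate on this function without deploying anything.&lt;/p&gt;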

&lt;h4&gt;
  
  
  A few SAM examples
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwxybym9r4wj4vsqb55b.png"&gt;&lt;/a&gt;A simple serverless Function (additional properties are available for IAM policies, VCP config, DLQ, tracing, etc.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/serverless-application-model/blob/master/docs/safe_lambda_deployments.rst#safe-lambda-deployments" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fift6rnc858h5lm5r9fc0.png"&gt;&lt;/a&gt;The same Function defined above, plus CodeDeploy Traffic Shifting (10% every 10min)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#event-source-object" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnfotpnr9gg049n4lyro.png"&gt;&lt;/a&gt;The same Function defined above, plus an API Gateway endpoint (nothing else needs to be defined)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlesssimpletable" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsddvqkhmr5mld55503n.png"&gt;&lt;/a&gt;A simplified DynamoDB definition (only primary key and throughput)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessapi" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwngiqwmjd5r2fixi782b.png"&gt;&lt;/a&gt;An API Gateway defined in swagger format (sam-local will package it to S3 before deploying!)&lt;/p&gt;

&lt;h4&gt;
  
  
  My personal “wishes” for AWS SAM
&lt;/h4&gt;

&lt;p&gt;I have only one wish for AWS SAM: I would love to see more transparency and documentation related to the AWS::Serverless Transform.&lt;/p&gt;

&lt;p&gt;Having an open specification and local CLI tool on GitHub is great, but as long as CloudFormation Transforms behave like an undebuggable black box, the community won’t be able to contribute much, in my opinion.&lt;/p&gt;

&lt;p&gt;And since I like dreaming, why not allow &lt;strong&gt;custom CloudFormation Transforms too&lt;/strong&gt;? I am almost ready to bet they are implemented with some kind of Lambda hook, and I can’t even start to imagine how many great things the community might be able to develop and share that way.&lt;/p&gt;




&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-951219516134100997-216" src="https://platform.twitter.com/embed/Tweet.html?id=951219516134100997"&gt;
&lt;/iframe&gt;




&lt;/p&gt;

&lt;p&gt;I hope you learned something new about AWS Cloud9 and CodeStar (please don’t confuse them and create weird hybrids such as “ &lt;strong&gt;CloudStar&lt;/strong&gt; ” as I did a few times). I would recommend building a simple prototype or a sample project on CodeStar asap. You can get started &lt;a href="https://console.aws.amazon.com/codestar" rel="noopener noreferrer"&gt;here&lt;/a&gt;!&lt;/p&gt;




&lt;p&gt;If you got this far, you probably enjoyed the article or feel like sharing your thoughts. Either way, don’t forget to recommend &amp;amp; share, and please do not hesitate to give feedback &amp;amp; share your ideas =)&lt;/p&gt;




</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>serverless</category>
      <category>coding</category>
    </item>
    <item>
      <title>Database News: 7 Updates from AWS re:Invent 2017</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Tue, 19 Dec 2017 08:00:49 +0000</pubDate>
      <link>https://dev.to/alexcasalboni/database-news-7-updates-from-aws-reinvent-2017-2fa</link>
      <guid>https://dev.to/alexcasalboni/database-news-7-updates-from-aws-reinvent-2017-2fa</guid>
      <description>&lt;p&gt;Following AWS re:Invent 2017, we’ve counted more than 40 announcements of new or improved AWS services.  Today, we’ll be talking about our picks for the new database and storage services that should be on your radar for 2018.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s New in the Database world?
&lt;/h2&gt;

&lt;p&gt;If you’re into Magic Quadrants, &lt;a href="https://www.gartner.com/doc/reprints?id=1-4J1NAHG&amp;amp;ct=171023&amp;amp;st=sb"&gt;here is what Gartner had to say about Operational Database Management Systems&lt;/a&gt; just one month before the announcements at AWS re:invent. It’s funny to see how quickly these documents become obsolete! :)&lt;/p&gt;

&lt;p&gt;Although the vast majority of new services were related to AI and IoT, AWS also announced several new and exciting ways to manage and interact with existing databases, entirely new managed services, and even new optimized ways to fetch data from more traditional storage services such as S3 and Glacier. The bad news is that four out of seven of the following announcements are still in preview. However, I’d gamble that they’ll be GA within a few months, and ideally before the end of Q1 2018. Let’s dig into the new database announcements.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Amazon Aurora Multi-Master (Preview)
&lt;/h3&gt;

&lt;p&gt;Amazon Aurora is a &lt;strong&gt;cloud-native DBMS&lt;/strong&gt; that is simultaneously compatible with well-known open-source databases (only MySQL and PostgreSQL for now), promises the same level of &lt;strong&gt;performance&lt;/strong&gt; as most commercial databases, and is 10 times &lt;strong&gt;cheaper&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Amazon Aurora already comes with the ability to scale up to 15 read replicas across different Availability Zones and even multiple regions, default auto-scaling, and seamless recovery from replica failures. You can find out &lt;a href="https://www.vividcortex.com/blog/three-things-that-differentiate-amazon-aurora-from-mysql"&gt;how it’s different from a regular MySQL cluster here&lt;/a&gt;, and you can see a series of &lt;a href="https://www.percona.com/blog/2016/05/26/aws-aurora-benchmarking-part-2/"&gt;interesting benchmarks here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uyNqt1g3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-11.59.53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uyNqt1g3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-11.59.53.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=1IxDLeFQKPk"&gt;In Andy Jassy's re:Invent keynote&lt;/a&gt;, he announced the Aurora Multi-Master, which allows Aurora clusters to &lt;strong&gt;scale out for both read and write operations&lt;/strong&gt; by providing multiple master nodes across Availability Zones. This update enables both &lt;strong&gt;higher throughput&lt;/strong&gt; and &lt;strong&gt;higher availability&lt;/strong&gt; for Aurora clusters. Interestingly, Jassy pre-announced that Aurora will go multi-region as well, sometime in 2018.&lt;/p&gt;

&lt;p&gt;Although the preview is only compatible with MySQL, I hope it will also be available for PostgreSQL once it becomes generally available.&lt;/p&gt;

&lt;p&gt;In the meantime, &lt;a href="https://pages.awscloud.com/amazon-aurora-multimaster-preview.html"&gt;you can apply for the preview here&lt;/a&gt; and get started with this &lt;a href="https://cloudacademy.com/amazon-web-services/labs/getting-started-with-amazon-aurora-database-engine-86/"&gt;Amazon Aurora Hands-on Lab on Cloud Academy&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Amazon Aurora Serverless (Preview)
&lt;/h3&gt;

&lt;p&gt;Less than 60 seconds after announcing Aurora Multi-Master, Andy Jassy announced Aurora Serverless, which transforms how you think of databases for infrequent, intermittent, or unpredictable workloads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--USnI1Jg5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-12.01.19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--USnI1Jg5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-12.01.19.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Aurora Serverless can be enabled or disabled any time based on the current load of queries, in practice it is the very first on-demand relational database with a &lt;strong&gt;pay-per-second model&lt;/strong&gt;. In fact, you are charged independently for &lt;strong&gt;compute capacity&lt;/strong&gt; , &lt;strong&gt;storage&lt;/strong&gt; , and &lt;strong&gt;I/O&lt;/strong&gt;. In other words, Aurora Serverless is charged based on the total number of requests ($0.20 per million), the total size of the database ($0.10 per GB per month), and the total query execution time in seconds based on the concurrent Aurora Capacity Units ($0.06 per hour per ACU). &lt;/p&gt;
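&lt;p&gt;As a back-of-the-envelope sketch (using the preview prices quoted above, which may well change at GA), the three billing dimensions combine like this:&lt;/p&gt;

```python
def aurora_serverless_monthly_cost(million_requests, storage_gb, acu_hours):
    """Rough monthly bill from the three dimensions: I/O, storage, compute."""
    io_cost = million_requests * 0.20   # $0.20 per million requests
    storage_cost = storage_gb * 0.10    # $0.10 per GB-month
    compute_cost = acu_hours * 0.06     # $0.06 per ACU-hour
    return io_cost + storage_cost + compute_cost

# e.g. 5M requests, a 10 GB database, and 200 ACU-hours of query time
# come to 1.00 + 1.00 + 12.00 = 14.00 dollars for the month
```

&lt;p&gt;The key property is the last term: with no queries running, ACU-hours stay at zero and you pay only for storage.&lt;/p&gt;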

&lt;p&gt;The main point here is that the number of &lt;strong&gt;ACUs will drop to zero&lt;/strong&gt; as soon as you stop running queries. As long as Aurora Serverless is disabled, you are charged only for the database storage. If you are familiar with AWS Lambda, this is not too different from how Lambda Functions are charged (and storage tends to be pretty cheap).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pages.awscloud.com/amazon-aurora-serverless-preview.html"&gt;Aurora Serverless is still in preview&lt;/a&gt;, and only for MySQL, but it is really promising for plenty of serverless use cases. First of all, it aligns with &lt;a href="https://medium.com/@PaulDJohnston/a-simple-definition-of-serverless-8492adfb175a"&gt;my favorite definition of serverless&lt;/a&gt; (i.e. it costs you “nothing” when no one is using it). Secondly, it will solve many problems related to serverless applications and relational databases, such as connection pooling, networking, autoscaling, IAM-based credentials, etc. When combined with Aurora Multi-Master and Multi-Region, it will finally enable highly available serverless applications backed by a horizontally scalable relational database.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) Amazon DynamoDB Global Tables (GA)
&lt;/h3&gt;

&lt;p&gt;DynamoDB quickly became one of the most used NoSQL databases when it was launched back in 2012. It’s not the easiest database to work with when it comes to data design (&lt;a href="https://cloudacademy.com/amazon-web-services/working-with-amazon-dynamodb-course/"&gt;learn more here&lt;/a&gt;), but it definitely solves most of the problems related to scalability, performance, security, extensibility, etc.&lt;/p&gt;

&lt;p&gt;AWS announced and released the first tools and building blocks to enable DynamoDB cross-region replication back in 2015 (i.e. &lt;a href="https://github.com/awslabs/dynamodb-cross-region-library"&gt;cross-region replication Java library&lt;/a&gt; and DynamoDB Streams). However, setting everything up manually was &lt;a href="https://dzone.com/articles/real-time-cross-region-dynamodb-replication-using"&gt;still kind of cumbersome&lt;/a&gt;, especially when considering more than two or three regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--osrjL2Gl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-16.50.52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--osrjL2Gl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-16.50.52.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The newly announced DynamoDB Global Tables will take care of that complexity for you by enabling &lt;strong&gt;multi-master and multi-region automatic replication&lt;/strong&gt;. This is incredibly powerful, especially if you are designing a single-region serverless application backed by API Gateway, Lambda, and DynamoDB. With just a few clicks, you can easily enable Global Tables, deploy your stack into multiple regions, &lt;a href="https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/"&gt;perform some ACM and Route53 tricks&lt;/a&gt;, and finally obtain a multi-region serverless application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jL47f3s8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-13.34.57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jL47f3s8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-13.34.57.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most interesting part is that you won’t need to rewrite your application code since DynamoDB will take care of IAM Roles, Streams, Functions, etc. Also, you will have access to both local and global table metrics.&lt;/p&gt;

&lt;p&gt;On the other hand, &lt;strong&gt;only empty tables can become part of a global table&lt;/strong&gt; , which means you can’t easily go multi-region with your existing DynamoDB tables.&lt;/p&gt;

&lt;p&gt;Also, remember that removing a region from your Global Table will not delete the local DynamoDB table, which will be available until you actually delete it, but you won’t be able to re-add it to the Global Table either (unless it’s still empty). Basically, you have to carefully choose the regions of your Global Table before you start writing any data into it. We hope that someone will come up with a clever workaround soon.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) Amazon DynamoDB On-Demand Backup (GA)
&lt;/h3&gt;

&lt;p&gt;The ability to back up and restore DynamoDB tables used to require &lt;a href="https://github.com/bchew/dynamodump"&gt;ad-hoc tools and workarounds&lt;/a&gt;, which often took a long time and impacted production database performance. AWS has finally announced a built-in mechanism to create &lt;strong&gt;full backups&lt;/strong&gt; of DynamoDB tables and &lt;strong&gt;point-in-time restore&lt;/strong&gt; capabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f8-0udZT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-13.43.37-1-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f8-0udZT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-13.43.37-1-2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Backups will be incredibly useful for handling both regulatory requirements and application errors. You can create backups on the web console or via API. Although there is no built-in way to automatically create periodic backups, our friends at Serverless Inc. have already developed a &lt;a href="https://serverless.com/blog/automatic-dynamodb-backups-serverless/"&gt;Serverless Framework Plugin&lt;/a&gt; to automate this task for you.&lt;/p&gt;

&lt;p&gt;My favorite part is that on-demand backups will have absolutely &lt;strong&gt;no impact on the production database&lt;/strong&gt;. I also like that they are &lt;strong&gt;instant operations&lt;/strong&gt; thanks to how DynamoDB handles snapshots and changelogs. It’s worth noting that each backup will save your &lt;strong&gt;table’s data and your capacity settings and indexes&lt;/strong&gt;. Also, both backups and restores are charged based on the amount of data ($0.10 per GB per month for backups, $0.15 per GB for restores).&lt;/p&gt;
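&lt;p&gt;Here is a quick sketch of how those two per-GB prices add up (prices as quoted above; they may vary by region):&lt;/p&gt;

```python
def dynamodb_backup_cost(backup_gb, months_retained, restore_gb=0):
    """Backups bill per GB-month; restores are a one-off per-GB charge."""
    backup_cost = backup_gb * 0.10 * months_retained   # $0.10 per GB-month
    restore_cost = restore_gb * 0.15                   # $0.15 per GB restored
    return backup_cost + restore_cost

# e.g. keeping a 25 GB backup around for 2 months, then restoring it once:
# 25 * 0.10 * 2 + 25 * 0.15 = 5.00 + 3.75 = 8.75 dollars
```

&lt;p&gt;Since backups are instant and have no impact on the live table, the only real cost of taking them frequently is this storage charge.&lt;/p&gt;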

&lt;p&gt;The only bad news is that the point-in-time restore functionality will come later next year, while on-demand backups are available today.&lt;/p&gt;

&lt;h3&gt;
  
  
  5) Amazon Neptune (Preview)
&lt;/h3&gt;

&lt;p&gt;Amazon Neptune is the only new service on this list. It is a fully managed graph database and &lt;a href="https://pages.awscloud.com/NeptunePreview.html"&gt;you can sign up for the limited preview here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KSnPVCg0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-12.04.46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KSnPVCg0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-12.04.46.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Neptune is intended for use cases where you need &lt;strong&gt;highly connected datasets&lt;/strong&gt; that do not fit well into a relational model. If you already have a graph database, you’ll be able to migrate it into Neptune because it already comes with Property Graph and W3C's RDF support. This means that you can query it with Gremlin (TinkerPop models) and SPARQL (RDF models).&lt;/p&gt;
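
&lt;p&gt;To give a flavor of the two models, here is the same hypothetical question (“who follows Alice?”) phrased in both query languages. The vertices, edges, and RDF predicates below are made up for illustration:&lt;/p&gt;

```python
# Gremlin (Apache TinkerPop, Property Graph model)
GREMLIN = "g.V().has('name', 'Alice').in('follows').values('name')"

# SPARQL (W3C RDF model)
SPARQL = """\
SELECT ?name WHERE {
  ?alice  <http://example.org/name>    "Alice" .
  ?person <http://example.org/follows> ?alice .
  ?person <http://example.org/name>    ?name .
}"""
```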

&lt;p&gt;Technically, Neptune can store billions of nodes and edges with millisecond query latency, and it achieves high availability and throughput thanks to up to 15 read replicas across multiple AZs. It also comes with VPC security, automated backups into S3, point-in-time restore capabilities, automatic failover, and encryption at rest.&lt;/p&gt;

&lt;p&gt;In terms of pricing, Neptune is &lt;strong&gt;charged by the hour&lt;/strong&gt; based on the instance type (db.t2.medium or db.r4.*), the total &lt;strong&gt;number of requests&lt;/strong&gt; ($0.20 per million), the total &lt;strong&gt;amount of storage&lt;/strong&gt; ($0.10 per GB per month), and the data transfer (the transfer to CloudFront is free!).&lt;/p&gt;

&lt;h3&gt;
  
  
  6) Amazon S3 Select (Preview)
&lt;/h3&gt;

&lt;p&gt;Amazon S3 is the grandfather of every AWS service, and it keeps innovating the way we think about data storage.&lt;/p&gt;

&lt;p&gt;Many teams have adopted S3 as their preferred solution for building a &lt;strong&gt;data lake&lt;/strong&gt;, and in many of those use cases the ability to fetch only the data you actually need can drastically improve both performance and cost. There is an entire family of services that can automatically write into S3 (e.g. &lt;a href="https://cloudacademy.com/blog/everything-you-ever-wanted-to-know-about-amazon-kinesis-firehose/"&gt;Amazon Kinesis Firehose&lt;/a&gt;) or query S3 objects without managing servers (e.g. &lt;a href="https://cloudacademy.com/blog/amazon-athena-advantages-for-security-and-speed/"&gt;Amazon Athena&lt;/a&gt;). However, most of the time you only need a small portion of each object, and the only way to retrieve it has been to fetch the entire object.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EKDJG_hM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-15.09.07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EKDJG_hM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-15.09.07.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;S3 Select will &lt;strong&gt;optimize data retrieval performance and costs&lt;/strong&gt; by enabling standard SQL expressions over your S3 objects. With S3 Select, you can write simple queries to extract only a subset of each object. For example, you could fetch only specific fields of every record, or only the records that satisfy a given condition. &lt;/p&gt;
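
&lt;p&gt;As a rough illustration of what that server-side filtering does, here is a stdlib-only Python sketch that applies an equivalent projection and predicate to a CSV body locally. The sample data and the helper function are hypothetical; the real thing goes through the S3 Select API with an SQL expression instead:&lt;/p&gt;

```python
import csv
import io

SAMPLE = "name,country,amount\nAda,UK,10\nBice,IT,25\nCarlo,IT,5\n"

# Local equivalent of:
#   SELECT s.name, s.amount FROM S3Object s WHERE s.country = 'IT'
def select_csv(body, fields, predicate):
    """Project `fields` from the CSV rows that satisfy `predicate`,
    mimicking the filtering S3 Select performs before sending data back."""
    reader = csv.DictReader(io.StringIO(body))
    return [{f: row[f] for f in fields} for row in reader if predicate(row)]

rows = select_csv(SAMPLE, ["name", "amount"], lambda r: r["country"] == "IT")
# rows == [{"name": "Bice", "amount": "25"}, {"name": "Carlo", "amount": "5"}]
```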

&lt;p&gt;This way, all of the filtering logic is performed directly by S3, and the client receives a binary-encoded response. The advantage is that less data needs to be transferred, reducing both latency and cost for most applications. For example, Lambda functions that read S3 data will be able to reduce their overall execution time. Similarly, EMR applications will be able to use the corresponding &lt;strong&gt;Presto connector&lt;/strong&gt; to drastically speed up queries to S3 without changing the query code at all.&lt;/p&gt;

&lt;p&gt;S3 Select is still in limited preview and you can &lt;a href="https://pages.awscloud.com/amazon-s3-select-preview.html"&gt;sign up for the preview here&lt;/a&gt;. It already supports CSV and JSON files, whether gzipped or not. For now, encrypted objects are not supported, although I’m sure they will be once S3 Select becomes generally available.&lt;/p&gt;

&lt;p&gt;My favorite part of S3 Select is that &lt;strong&gt;it will be supported natively by other services such as Amazon Athena and Redshift Spectrum&lt;/strong&gt;. This means that it will automatically make your queries faster and cheaper.&lt;/p&gt;

&lt;h3&gt;
  
  
  7) Glacier Select (GA)
&lt;/h3&gt;

&lt;p&gt;The same S3 limitations related to query efficiency hold for Amazon Glacier, with the additional constraint of waiting a few minutes (or hours, depending on the retrieval type).&lt;/p&gt;

&lt;p&gt;Glacier Select is generally available and provides the same standard SQL interface as S3 Select over Glacier objects, effectively &lt;strong&gt;extending your data lake to cold storage&lt;/strong&gt; as well. The best part is that Glacier Select will also be integrated with Amazon Athena and Redshift Spectrum. It sounds great, but I’m not sure how an Athena query is supposed to run for three hours while waiting for Glacier data. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--osrjL2Gl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-16.50.52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--osrjL2Gl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/12/Screenshot-2017-12-14-16.50.52.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Glacier Select comes with the very same SQL interface, which allows you to reuse S3 Select queries once the objects have been archived into Glacier. Since Glacier won’t have to retrieve the entire archive, fetching archived data will become cheaper, and you still have the ability to choose the retrieval type (standard, expedited, or bulk).&lt;/p&gt;

&lt;p&gt;Depending on the retrieval type, Glacier Select is charged based on the total number of Select requests, how much data is scanned, and how much data is returned. Of course, returned data is more expensive than scanned data (between 25% and 150% more, depending on the retrieval type), while requests cost on the order of $0.01 each for expedited retrieval, $0.00005 for standard retrieval, and $0.000025 for bulk retrieval (prices also vary by region).&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s next?
&lt;/h3&gt;

&lt;p&gt;I’m looking forward to experimenting with these new services and features myself, as I’m still on the waitlist for the limited previews. I’m quite excited by the possibility of painlessly creating a global data layer, either with SQL or NoSQL. &lt;/p&gt;

&lt;p&gt;As Amazon Aurora is becoming a sort of super cloud-native database, I’m especially curious about the many new features that will be announced. For example, as of re:Invent, it now supports &lt;a href="https://aws.amazon.com/about-aws/whats-new/2017/12/amazon-aurora-with-mysql-compatibility-natively-supports-synchronous-invocation-of-aws-lambda-functions/"&gt;synchronous invocations of AWS Lambda Functions&lt;/a&gt; in your MySQL queries.&lt;/p&gt;

&lt;p&gt;In the meantime, I will play with DynamoDB Global Tables and build &lt;a href="https://www.youtube.com/watch?v=6uijFRFURPQ"&gt;my first globally available serverless application backed by DynamoDB, Route53, Lambda, and API Gateway&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Of course, my favorite update in this list is Aurora Serverless, and I can’t wait to check it out and finally remove every fixed cost from my architecture (while still using our beloved SQL).&lt;/p&gt;

&lt;p&gt;Catch up with more of our re:Invent 2017 recaps for &lt;a href="https://cloudacademy.com/blog/amazon-rekognition-video-feature-announcement/"&gt;Amazon Rekognition Video&lt;/a&gt;, &lt;a href="https://cloudacademy.com/blog/aws-reinvent-amazon-guardduty/"&gt;Amazon GuardDuty&lt;/a&gt;, and &lt;a href="https://dev.to/alexcasalboni/aws-reinvent-2017-day-2-aws-appsync--graphql-as-a-service-2m38-temp-slug-2274054"&gt;AppSync&lt;/a&gt; on the Cloud Academy blog.&lt;/p&gt;

&lt;p&gt;We’d love to hear about your favorite AWS re:Invent announcement and what you are going to build with it!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>dynamodb</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>AWS AppSync – GraphQL as a Service</title>
      <dc:creator>Alex Casalboni</dc:creator>
      <pubDate>Wed, 29 Nov 2017 10:20:36 +0000</pubDate>
      <link>https://dev.to/alexcasalboni/aws-appsync--graphql-as-a-service-4j3o</link>
      <guid>https://dev.to/alexcasalboni/aws-appsync--graphql-as-a-service-4j3o</guid>
      <description>&lt;p&gt;Day two at re:Invent 2017 was incredibly packed, crowded, and exciting. My favorite announcement so far is &lt;strong&gt;the new AWS AppSync&lt;/strong&gt;, as it aligns with one of the most promising (yet somehow controversial) design principles adopted by the serverless community: &lt;strong&gt;GraphQL&lt;/strong&gt;. If you are not familiar with GraphQL, we recently explained &lt;a href="https://cloudacademy.com/blog/how-to-write-graphql-apps-using-aws-lambda/"&gt;how to write GraphQL Apps using AWS Lambda&lt;/a&gt;, and hosted a &lt;a href="https://cloudacademy.com/webinars/serverless-graphql-love-story-46/"&gt;webinar about the Love Story between Serverless and GraphQL&lt;/a&gt;. Here’s a quick look at what you need to know about AWS AppSync.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mobile and Web App Challenges
&lt;/h2&gt;

&lt;p&gt;I attended the first deep dive session on &lt;a href="https://aws.amazon.com/appsync/"&gt;AWS AppSync&lt;/a&gt;, which brilliantly summarized the main technical challenges faced by most mobile and web applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication &amp;amp; user management&lt;/li&gt;
&lt;li&gt;Efficient network usage&lt;/li&gt;
&lt;li&gt;Multi-device support&lt;/li&gt;
&lt;li&gt;Data synchronization between devices&lt;/li&gt;
&lt;li&gt;Offline data access&lt;/li&gt;
&lt;li&gt;Real-time data streams&lt;/li&gt;
&lt;li&gt;Cloud data conflict detection &amp;amp; resolution&lt;/li&gt;
&lt;li&gt;Running server-side code (without managing servers)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these challenges could merit its own article, but I am assuming that everyone has experienced a really bad mobile UX at least once from the user’s perspective (any implicit reference to the AWS re:Invent app is clearly unintentional). Moreover, most of these challenges are a direct consequence of how we’ve been designing &lt;strong&gt;RESTful interfaces&lt;/strong&gt; for web and mobile.&lt;/p&gt;

&lt;p&gt;If you are a web or mobile developer, GraphQL can help you solve many of these challenges, especially the ones related to network optimization, thanks to dynamic queries, and real-time data streams, thanks to GraphQL subscriptions.&lt;/p&gt;

&lt;h3&gt;
  
  
  How is GraphQL better than REST?
&lt;/h3&gt;

&lt;p&gt;GraphQL can alleviate the pain caused by many complex problems that are pretty much unsolvable with REST alone. This includes &lt;strong&gt;API resource relationships&lt;/strong&gt;, &lt;strong&gt;reduced or customized information in API responses&lt;/strong&gt;, &lt;strong&gt;dynamic query support&lt;/strong&gt;, &lt;strong&gt;advanced ordering and paging&lt;/strong&gt;, &lt;strong&gt;push notifications&lt;/strong&gt;, etc.&lt;/p&gt;

&lt;p&gt;GraphQL is basically a &lt;strong&gt;query language for APIs&lt;/strong&gt;, and it makes it very easy to add an abstraction layer on top of already existing data and APIs. Plus, it's &lt;strong&gt;strongly typed&lt;/strong&gt; and can act as a &lt;strong&gt;self-documenting contract&lt;/strong&gt; between client and server.&lt;/p&gt;

&lt;p&gt;Again, if you are not familiar with the concept of GraphQL Queries, Mutations, and Subscriptions, &lt;a href="https://cloudacademy.com/blog/how-to-write-graphql-apps-using-aws-lambda/"&gt;have a read here&lt;/a&gt; (it gets more technical from now on!).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N2W9oahb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/11/aws-appsync.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N2W9oahb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d2f9gqwlnfnjcb.cloudfront.net/blog/wp-content/uploads/2017/11/aws-appsync.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS AppSync Features &amp;amp; Gotchas
&lt;/h3&gt;

&lt;p&gt;AWS AppSync allows you to focus on building apps instead of managing all the infrastructure needed to run GraphQL (either with or without servers).&lt;/p&gt;

&lt;p&gt;AppSync will connect queries to AWS resources, and it provides &lt;strong&gt;built-in offline and real-time stream support via client-side libraries&lt;/strong&gt;, which also come with different strategies for &lt;strong&gt;data conflict resolution&lt;/strong&gt; (even custom Lambda-based implementations!). The service offers enterprise-level &lt;strong&gt;security features&lt;/strong&gt; such as API keys, IAM, and Cognito User Pools support.&lt;/p&gt;

&lt;p&gt;As with any GraphQL application, your development workflow would look like the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define your schema (queries, mutations, and subscriptions)&lt;/li&gt;
&lt;li&gt;Define resolvers (data sources)&lt;/li&gt;
&lt;li&gt;Use client tooling to fetch data via the GraphQL endpoint&lt;/li&gt;
&lt;/ol&gt;
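
&lt;p&gt;As a sketch of step 3, any plain HTTP client can talk to a GraphQL endpoint, since it is just an HTTPS POST. The endpoint URL, API key, and query below are placeholders, not a real AppSync API:&lt;/p&gt;

```python
import json
from urllib import request

def graphql_request(endpoint, api_key, query, variables=None):
    """Build an HTTPS POST for a GraphQL endpoint; AppSync accepts an
    x-api-key header when API-key authentication is enabled."""
    payload = json.dumps({"query": query, "variables": variables or {}})
    return request.Request(
        endpoint,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
    )

# Placeholder endpoint and key, for illustration only.
req = graphql_request(
    "https://example.appsync-api.us-east-1.amazonaws.com/graphql",
    "da2-example-key",
    "query { allInvoices { id paid } }",
)
# urllib.request.urlopen(req) would execute the query against a real endpoint.
```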

&lt;p&gt;AWS AppSync provides built-in support for three data sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DynamoDB&lt;/li&gt;
&lt;li&gt;ElasticSearch&lt;/li&gt;
&lt;li&gt;Lambda (for custom or generated fields)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AppSync client library is available both for &lt;strong&gt;mobile and web clients&lt;/strong&gt; (iOS, Android, JavaScript, and React Native). Offline support is pretty straightforward, and the client will automatically sync data when a network is available. Since GraphQL is an &lt;strong&gt;open standard&lt;/strong&gt;, you are not forced to use the AppSync client. In fact, &lt;strong&gt;any open-source GraphQL client will work&lt;/strong&gt;. Of course, AppSync endpoints can be used by servers, too.&lt;/p&gt;

&lt;p&gt;Mutation Resolvers can be mapped into data sources via &lt;strong&gt;mapping templates (Velocity)&lt;/strong&gt;. As with API Gateway templates, they come with a few utilities such as JSON conversion, unique ID generation, etc., and ready-to-use sample templates. Based on the specific data source, templates will allow for &lt;strong&gt;complete customization of the backend query&lt;/strong&gt; (e.g. DynamoDB PutItem, ElasticSearch geolocation queries, etc.).&lt;/p&gt;

&lt;p&gt;Since you’ll spend most of your time working with the Schema Editor, I was happy to notice that it is quite user-friendly, and it comes with &lt;strong&gt;search and auto-complete capabilities&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS AppSync and AWS Lambda
&lt;/h3&gt;

&lt;p&gt;You can use AWS Lambda to compute dynamic fields, optionally in a batch fashion. For example, imagine that your schema defines a “&lt;em&gt;paid&lt;/em&gt;” field that needs to be fetched from a third-party payment system for each invoice. You would probably define a &lt;em&gt;ComputePaid&lt;/em&gt; Function and a &lt;em&gt;ComputePaidBatch&lt;/em&gt; Function.&lt;/p&gt;

&lt;p&gt;The batch function will be automatically invoked to process the value of multiple records without incurring the N+1 problem. Of course, you could use the same Lambda Function for individual and batch processing.&lt;/p&gt;
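
&lt;p&gt;A minimal sketch of the two resolvers, with a stubbed-in dictionary standing in for the third-party payment system (all names and payloads here are hypothetical):&lt;/p&gt;

```python
# Hypothetical payment lookup, standing in for the third-party payment API.
PAYMENTS = {"inv-1": True, "inv-2": False, "inv-3": True}

def compute_paid(event, context=None):
    """Single-field resolver: one invocation per invoice (the N+1 pattern)."""
    return PAYMENTS.get(event["invoiceId"], False)

def compute_paid_batch(event, context=None):
    """Batch resolver: all pending records arrive in one invocation,
    so a single Lambda call resolves the 'paid' field for every invoice."""
    return [PAYMENTS.get(record["invoiceId"], False) for record in event]

flags = compute_paid_batch([{"invoiceId": "inv-1"}, {"invoiceId": "inv-2"}])
```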

&lt;h3&gt;
  
  
  AWS AppSync Pricing
&lt;/h3&gt;

&lt;p&gt;AppSync will be charged based on the &lt;strong&gt;total number of operations and real-time updates&lt;/strong&gt;. More specifically, real-time updates are charged based on the number of updates and the &lt;strong&gt;minutes of connection&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here are the numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$4 per million Query and Data Modification Operations&lt;/li&gt;
&lt;li&gt;$2 per million Real-time Updates&lt;/li&gt;
&lt;li&gt;$0.08 per million minutes of connection to the AWS AppSync service&lt;/li&gt;
&lt;/ul&gt;
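
&lt;p&gt;Plugging those numbers into a quick back-of-the-envelope estimate (data-transfer charges excluded; the traffic figures are made up):&lt;/p&gt;

```python
def monthly_cost(operations, realtime_updates, connection_minutes):
    """Estimate AppSync charges from the posted prices (data transfer excluded)."""
    return (
        operations / 1e6 * 4.00            # $4 per million query/mutation ops
        + realtime_updates / 1e6 * 2.00    # $2 per million real-time updates
        + connection_minutes / 1e6 * 0.08  # $0.08 per million connection-minutes
    )

# e.g. 10M operations, 5M updates, 50M connection-minutes:
cost = monthly_cost(10e6, 5e6, 50e6)  # 40 + 10 + 4 = 54.0
```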

&lt;p&gt;Please note that data transfer is charged at the EC2 data transfer rate and that real-time updates are priced per 5 KB of payload delivered (prorated). While the service is completely free during the preview phase, it will come with a &lt;strong&gt;free tier&lt;/strong&gt; once it is generally available.&lt;/p&gt;

&lt;p&gt;Also note that AppSync has &lt;strong&gt;no minimum fees or mandatory service usage&lt;/strong&gt;, unlike other similar offerings such as Graphcool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;I expect that AWS AppSync will be well received by the serverless community, and it will make GraphQL much easier to adopt for many use cases.&lt;/p&gt;

&lt;p&gt;It will be a great option for applications that need to drastically optimize data delivery, or to transparently manage many data sources under the hood. Moreover, it will also allow developers to design self-documenting APIs powered by a strong type system and a powerful set of developer tools that will simplify API evolution.&lt;/p&gt;

&lt;p&gt;Keep in mind that AWS AppSync is still in preview, but you can &lt;a href="https://pages.awscloud.com/awsappsyncpreview.html"&gt;start playing with it by signing up here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’m looking forward to playing with it myself next week, and I can’t wait to share my first results with the community. If you already use GraphQL in production, let us know what you think of AWS AppSync in the comments below!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>graphql</category>
      <category>cloudcomputing</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
