<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Relay</title>
    <description>The latest articles on DEV Community by Relay (@relay).</description>
    <link>https://dev.to/relay</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2155%2F6c7e5b64-15bd-470f-976d-c448c4cdd4c0.jpg</url>
      <title>DEV Community: Relay</title>
      <link>https://dev.to/relay</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/relay"/>
    <language>en</language>
    <item>
      <title>Announcing Relay's General Availability Launch</title>
      <dc:creator>Melissa Sussmann</dc:creator>
      <pubDate>Tue, 06 Apr 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/announcing-relay-s-general-availability-launch-2gpm</link>
      <guid>https://dev.to/relay/announcing-relay-s-general-availability-launch-2gpm</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3-Bb2V7F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/30d518dc78236712275ec4d056b91538/6050d/relay-ga-cover.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3-Bb2V7F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/30d518dc78236712275ec4d056b91538/6050d/relay-ga-cover.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today we’re proud to announce the general availability of Relay, a cloud-native workflow automation platform. We launched our &lt;a href="https://relay.sh/blog/relay-public-beta/"&gt;public beta of Relay&lt;/a&gt; last June, and we’re now officially out of beta and open for business! We’ve been pretty busy during the beta period - early users have executed thousands of workflows, processed tons of events, and given us incredibly helpful feedback.&lt;/p&gt;

&lt;p&gt;We believe there is tremendous demand for a new kind of low-code, responsive automation product because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The cloud has radically changed how we build and operate systems.&lt;/li&gt;
&lt;li&gt;Lower-level infrastructure components are, for the most part, good enough at the problems they are intended to solve.&lt;/li&gt;
&lt;li&gt;Complexity has moved up the stack, beyond configuring operating systems and into how we tie services, APIs, and distributed systems together.&lt;/li&gt;
&lt;li&gt;DevOps teams encompass a wide range of skills, and it’s important to the business that their few automation specialists can democratize that knowledge across the organization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How we automate across our infrastructure must therefore change as well.&lt;/p&gt;

&lt;p&gt;We’ve heard over and over from our users how their quotidian tasks are deceptively tricky and involve sequencing lots of actions across all manner of different services. Going through these tasks manually introduces room for error, even when they’re properly documented. Between responding to service-down incidents, rolling back failed deployments, and securing cloud resources… the struggle is real.&lt;/p&gt;

&lt;p&gt;Solving these problems involves gluing together a patchwork of existing scripts, bespoke in-house APIs, and 3rd-party services to get anything done. This emerging need for better orchestration is why Relay is built around event-driven workflows. What does that mean, exactly?&lt;/p&gt;

&lt;p&gt;Modern service architectures generate all kinds of events — mostly noise, but with important signals intermixed — and the ability to understand and respond to those events automatically is key. YAML-based workflows provide a readable, reusable abstraction that the whole team can comprehend and iterate on. They’re well-suited for assembling individual automation “steps” into an end-to-end solution. So, combining workflows and events leads to truly responsive automation that can cover the full continuum of scenarios ops folks are regularly faced with, at the velocity they need.&lt;/p&gt;
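&lt;p&gt;As a rough illustration of what such a workflow looks like on the page (the step names and container images below are invented, and the schema is only approximated; consult the Relay documentation for the real one):&lt;/p&gt;

```yaml
# Hypothetical sketch of an event-driven workflow in the style Relay
# uses; step names and images are invented for illustration.
description: Roll back a failed deployment when an alert fires

steps:
  - name: fetch-alert-details
    image: example/alert-enricher      # hypothetical step image
  - name: approve-rollback
    description: Pause for a human yes/no before acting
    type: approval
  - name: roll-back-deployment
    image: example/deploy-rollback     # hypothetical step image
    dependsOn: approve-rollback
```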

&lt;p&gt;Powerful abstractions like events and workflows are great, but not if they come at the expense of accessibility or if they present users with more of a learning cliff than a gentle curve. One thing we learned during the beta was that users wanted the best of both worlds: a simple workflow authoring experience that doesn’t require much coding, but that is also harmonious with their overall infrastructure-as-code approach. This is why Relay takes a low-code approach to workflow authoring. We’ve spent a lot of time making that experience quick and easy, but all the while changes are bi-directionally synced to human-friendly code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6CRn6wEx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/1a87d06c3f626b8d498ed0aff4319e90/relay-ga-low-code.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6CRn6wEx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/1a87d06c3f626b8d498ed0aff4319e90/relay-ga-low-code.gif" alt="Low-code graphical editor for workflow steps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s been wonderful seeing the automation problems that our users have solved with Relay. The major themes that have arisen are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing infrastructure&lt;/strong&gt; - using Relay to add intelligence to existing automation tools, driving them in response to high-signal events, and integrating them into higher-level, auto-remediation workflows, e.g. &lt;a href="https://relay.sh/blog/puppet-integration/"&gt;combining Relay with Puppet Enterprise&lt;/a&gt; or doing automated rollback of a complex deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt; - using Relay to continually verify the security posture of key cloud resources by receiving infrastructure events and then applying the right compliance policies. This is such an important issue for users that we plan to do a lot more on this front. Stay tuned for more on this soon!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident response&lt;/strong&gt; - using Relay to enrich alert data, automate incident communications, and trigger auto-remediation workflows, e.g. our partnerships with &lt;a href="https://relay.sh/blog/pagerduty-and-relay/"&gt;PagerDuty&lt;/a&gt;, &lt;a href="https://relay.sh/blog/ddog-relay/"&gt;DataDog&lt;/a&gt;, and &lt;a href="https://relay.sh/blog/victorops-incident-response/"&gt;Splunk&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vNy4tPLd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/dac654734e63f314105bcd19c4ccde5f/relay-ga-self-healing.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vNy4tPLd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/dac654734e63f314105bcd19c4ccde5f/relay-ga-self-healing.gif" alt="Self-healing Puppet Enterprise infrastructure through Relay"&gt;&lt;/a&gt;​&lt;/p&gt;

&lt;h2&gt;
  
  
  Case Study
&lt;/h2&gt;

&lt;p&gt;One of our current customers, &lt;a href="https://bryxx.eu"&gt;Bryxx NV&lt;/a&gt;, is a Belgium-based managed service provider that is consolidating its cloud automation stack onto Relay. Bryxx manages its customers’ multi-cloud infrastructure and collects telemetry in Grafana. We worked together to build a Grafana integration for Relay with two parts: Relay receives threshold alerts from Grafana when additional capacity is needed, and after running the workflows that handle the scale-up, it posts an annotation back into Grafana. This leaves a record of the workflow run overlaid on the dashboard, providing an audit trail and a visual record of the changes.&lt;/p&gt;
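&lt;p&gt;The annotation half of that integration maps onto Grafana’s HTTP annotations API. A minimal sketch of building such a payload (the dashboard ID and message here are made up, and the exact fields accepted depend on your Grafana version):&lt;/p&gt;

```python
import json
import time

def build_annotation(dashboard_id, text, tags):
    """Build a payload for Grafana's POST /api/annotations endpoint."""
    return {
        "dashboardId": dashboard_id,      # hypothetical dashboard ID
        "time": int(time.time() * 1000),  # annotation time, epoch milliseconds
        "tags": tags,
        "text": text,
    }

payload = build_annotation(42, "Relay workflow run: scale-up complete",
                           ["relay", "autoscale"])
print(json.dumps(payload))
# An HTTP client would POST this JSON to GRAFANA_URL/api/annotations
# with an API-token Authorization header.
```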

&lt;p&gt;Before using Relay, Bryxx had parts of these operations automated but still had to manually coordinate and orchestrate the changes. Now, Bryxx’s DevOps Architect Dries Dams says, “Relay helps us connect all the dots, achieving true self-healing systems on cloud-native platforms. The ease with which we can create new workflows saves us countless hours of developing custom scripts, leaving more time for our engineers to help our customers grow their business.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;The next step, and the best step, is to &lt;a href="https://relay.sh/"&gt;try it out&lt;/a&gt; (for free)! There are two &lt;a href="https://relay.sh/pricing/"&gt;additional tiers&lt;/a&gt; that complement the free Community tier:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Team&lt;/strong&gt; ($20 per user/month): For small to mid-sized teams, this tier provides access for up to 30 users, 500 active workflows, Role-Based Access Control (RBAC), and Single Sign-On (SSO).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise&lt;/strong&gt;: For large organizations with on-prem needs, this tier provides up to 5,000 active workflows and up to 5,000 users, plus RBAC, SSO, and on-prem connectivity with Puppet Enterprise, Puppet’s flagship product. Contact sales for pricing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you’d like to learn more about Relay:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://relay.sh/blog/relay-and-open-source/"&gt;How to get involved&lt;/a&gt; to extend Relay to better meet your needs and become part of the Relay community.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://relay.sh/docs/"&gt;The documentation site&lt;/a&gt; introduces Relay, its usage, core concepts, and extension points.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppetcommunity.slack.com/archives/CMKBMAW2K"&gt;Join the Puppet community Slack&lt;/a&gt; - come linger in the #relay channel! The more the merrier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks, and happy automating!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Remediate Unencrypted S3 buckets</title>
      <dc:creator>Relay</dc:creator>
      <pubDate>Tue, 26 Jan 2021 19:29:05 +0000</pubDate>
      <link>https://dev.to/relay/how-to-remediate-unencrypted-s3-buckets-242n</link>
      <guid>https://dev.to/relay/how-to-remediate-unencrypted-s3-buckets-242n</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-8R_0UF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/328d480136efa5627b162d0c718063c4/6050d/remediate-unencrypted-s3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-8R_0UF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/328d480136efa5627b162d0c718063c4/6050d/remediate-unencrypted-s3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Cloud environments are always susceptible to security issues. A significant contributor to this problem is misconfigured resources.&lt;/p&gt;

&lt;p&gt;Traditional IT infrastructure was somewhat static; server hardware only changed every few years. With few changes occurring, security was also more static. The modern cloud environment presents a very different challenge: servers, services, and storage are created through automation, resulting in a dynamic and potentially ever-changing environment.&lt;/p&gt;

&lt;p&gt;Standardized policies and regular enforcement of best practices are key to reducing security risks. New automation can be created to enforce these policies on an ongoing basis. Even if the configuration drifts, automation can pull systems back into compliance.&lt;/p&gt;

&lt;p&gt;Unencrypted S3 buckets are an example of a configuration setting that could expose enormous quantities of sensitive data. In this article, we will look at using Relay to enforce bucket encryption in a way that is easy to set up and monitor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Relay for AWS
&lt;/h2&gt;

&lt;p&gt;Relay is an online platform with a graphical user interface (GUI) and a command-line interface (CLI) for all your cloud automation use cases. In our example, we will be using the GUI web interface.&lt;/p&gt;

&lt;p&gt;You will need to set up a Relay account at &lt;a href="https://app.relay.sh/signup"&gt;https://app.relay.sh/signup&lt;/a&gt; to create and run a workflow. The workflow will force encryption on specified S3 buckets in your Amazon Web Services (AWS) account.&lt;/p&gt;

&lt;p&gt;The Relay AWS connection requires creating an Identity and Access Management (IAM) user with permissions to edit S3 buckets. In your AWS console, go to the IAM dashboard and set up an IAM user. For example, under “Access management,” I set up a group called “BucketGroup” and a user named “relaydemouser”. Make sure you keep a copy of the “Access key ID” and “Secret access key”, as you will need these later; the “Secret access key” is only shown once.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/2bc6134609d714155b9e1d1dbc376fb6/ca2ce/Picture1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8tTNijfr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/2bc6134609d714155b9e1d1dbc376fb6/ca2ce/Picture1.png" alt="AWS IAM user" title="AWS IAM user"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The user/group will need permissions to access the S3 buckets. For the example above, I added the policy “AmazonS3FullAccess” to the permissions of the BucketGroup. Do the same with the user/group you create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/c3583929cce2aad40bce682d44994f49/3fee3/Picture2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QnAEVsk7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/c3583929cce2aad40bce682d44994f49/3fee3/Picture2.png" alt="AWS permissions and managed policies" title="AWS permissions and managed policies"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This policy gives the IAM user, or group of users, full access to the S3 buckets, including checking configuration, listing buckets, and encrypting S3 buckets.&lt;/p&gt;

&lt;p&gt;Once you have a user with the required permissions, you can set up the Relay connection configuration. Open the &lt;strong&gt;Connections&lt;/strong&gt; section of Relay and add a connection to AWS with this IAM user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/43ea011b5c7ae7f7c1123f7d827e82bf/44463/Picture3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E7_v6tYf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/43ea011b5c7ae7f7c1123f7d827e82bf/44463/Picture3.png" alt="Choose Relay connection" title="Choose Relay connection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can choose any name for the connection but will need to use the AWS “Access key ID” and “Secret access key” for your IAM user as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/7a19f7cea988b51277796b841fa6b124/52621/Picture4.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m13j8J39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/7a19f7cea988b51277796b841fa6b124/52621/Picture4.png" alt="Add AWS connection form" title="Add AWS connection form"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Workflow
&lt;/h2&gt;

&lt;p&gt;Relay has many sample workflows, including one to remediate unencrypted S3 buckets. &lt;a href="https://relay.sh/workflows/s3-remediate-unencrypted-buckets/"&gt;Use this link&lt;/a&gt; to find it, or, from inside the GUI, click on the workflow icon and then &lt;strong&gt;Explore workflows.&lt;/strong&gt; Under the &lt;strong&gt;Security&lt;/strong&gt; heading, you will see “Remediate unencrypted S3 buckets.”&lt;/p&gt;

&lt;p&gt;Looking at a new AWS S3 Storage bucket’s properties page, you will see that encryption is disabled by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/d59e3a82fbfc3f2ea7c39386e7a1b9c1/6a170/Picture5.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2ZTPCk7z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/d59e3a82fbfc3f2ea7c39386e7a1b9c1/6a170/Picture5.png" alt="Default encryption disabled" title="Default encryption disabled"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow will enable server-side encryption. After running the workflow, your default encryption settings will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/49280fc4f35c4adb1e8067a1492f3c4e/11b02/Picture6.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jdKQ4ZOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/49280fc4f35c4adb1e8067a1492f3c4e/11b02/Picture6.png" alt="Default encryption enabled" title="Default encryption enabled"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can review the workflow contents (code, graph, and so on) then click on the “Use this workflow” button.&lt;/p&gt;

&lt;p&gt;Clicking &lt;strong&gt;Try this workflow&lt;/strong&gt; will bring you to a new workflow dialog with a suggested name. Note that workflow names must be unique within your account. Click on &lt;strong&gt;Create workflow&lt;/strong&gt; and Relay will display the workflow graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/beda670392d785bf4da69655ae3e4e2c/047b0/Picture7.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wnh_5vXf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/beda670392d785bf4da69655ae3e4e2c/047b0/Picture7.png" alt="Relay workflow graph" title="Relay workflow graph"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run the workflow, click &lt;strong&gt;Run&lt;/strong&gt; in the top right corner of the workflow page. Running the workflow will bring up a dialog showing &lt;code&gt;dryRun = true&lt;/code&gt;. Doing a dry run will test the logic but will not make any changes. Go ahead and run the workflow in dryRun mode. Visit the &lt;a href="https://relay.sh/docs/using-workflows/"&gt;Using Workflows&lt;/a&gt; section of the documentation for more information on workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/bdb10f38969bedd89cfb70716b58b034/08d47/Picture8.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gzN7jJNd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/bdb10f38969bedd89cfb70716b58b034/08d47/Picture8.png" alt="Dry run parameter" title="Dry run parameter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow will stop at the approval step and will not execute the action step to encrypt the S3 bucket. In dryRun mode, the Yes and No approval buttons are disabled. If you rerun the workflow and set the dryRun dialog to &lt;code&gt;false&lt;/code&gt;, Relay will stop the workflow at the approval step and allow you to click on “Yes” or “No.” If you click on “Yes,” the flow will continue and encrypt the S3 buckets.&lt;/p&gt;

&lt;p&gt;After the workflow is complete, go back to AWS and check the encryption status on your S3 bucket. The buckets that had previously been unencrypted should now be encrypted.&lt;/p&gt;
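&lt;p&gt;Under the hood, default bucket encryption is a small configuration document. A sketch of the rule body that S3’s PutBucketEncryption API expects, here requesting AES-256 server-side encryption with Amazon S3 managed keys:&lt;/p&gt;

```python
import json

# Server-side-encryption rule for a bucket; "AES256" selects
# Amazon S3 managed keys (SSE-S3).
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256"
            }
        }
    ]
}

print(json.dumps(encryption_config, indent=2))
```

&lt;p&gt;This is the same document that tools such as the AWS CLI’s &lt;code&gt;s3api put-bucket-encryption&lt;/code&gt; command send on your behalf.&lt;/p&gt;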

&lt;h2&gt;
  
  
  Setting up a Trigger
&lt;/h2&gt;

&lt;p&gt;Relay supports multiple ways to trigger a workflow. We have already walked through running a workflow manually. You can also schedule a trigger to run the workflow automatically, much like a Linux cron job. Relay also supports &lt;a href="https://relay.sh/docs/using-workflows/using-triggers/"&gt;webhook and REST API triggers&lt;/a&gt;. Services that support webhooks can post a JSON payload to Relay when an event happens, while the Relay REST API receives JWT-authenticated requests from a remote system and runs a workflow in response. Read all about triggers in the &lt;a href="https://relay.sh/docs/using-workflows/using-triggers"&gt;Using Triggers&lt;/a&gt; section of the documentation.&lt;/p&gt;

&lt;p&gt;For this example, we’ll use a simple schedule trigger to run the workflow at midnight every day. Click on the “Add trigger” button in the first block of the graph to display the Add trigger dialog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/4450f9717a63d2c180858dae94ce99d5/187ee/Picture9.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--duhKbmeJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/4450f9717a63d2c180858dae94ce99d5/187ee/Picture9.png" alt="Add a trigger" title="Add a trigger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/9925e462bb367ef072e7a5deccabb86c/78ab6/Picture10.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ggqcyGZ8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/9925e462bb367ef072e7a5deccabb86c/78ab6/Picture10.png" alt="Choose the scheduled trigger" title="Choose the scheduled trigger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on “Run a trigger every day” will bring up a code snippet that will be added to your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/6d41bc3fbf94eda235173fad38a892ce/11b02/Picture11.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yz10-pYK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/6d41bc3fbf94eda235173fad38a892ce/11b02/Picture11.png" alt="Run a trigger every day" title="Run a trigger every day"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you save the modified code, you will see that the trigger block in the graph now says it will run every day. The five fields of the cron schedule ‘* * * * *’ are summarized below; an asterisk (*) matches any value.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Position&lt;/th&gt;
&lt;th&gt;Time Unit&lt;/th&gt;
&lt;th&gt;Possible Values&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Minute&lt;/td&gt;
&lt;td&gt;0 to 59 or *&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Hour&lt;/td&gt;
&lt;td&gt;0 to 23 or *&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Day of Month&lt;/td&gt;
&lt;td&gt;1 to 31 or *&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Month&lt;/td&gt;
&lt;td&gt;1 to 12 or *&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Day of Week&lt;/td&gt;
&lt;td&gt;0 to 7 or * (both 0 and 7 represent Sunday)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
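&lt;p&gt;The ranges in the table can be checked mechanically. A small sketch that validates a five-field cron expression against them (only plain numbers and asterisks are handled; real cron also allows ranges, lists, and steps):&lt;/p&gt;

```python
# Allowed values for each of the five cron fields, in order.
FIELD_RANGES = [
    ("minute", range(0, 60)),
    ("hour", range(0, 24)),
    ("day of month", range(1, 32)),
    ("month", range(1, 13)),
    ("day of week", range(0, 8)),  # 0 and 7 both mean Sunday
]

def validate_cron(expr):
    """Return True if expr is a valid five-field cron expression."""
    fields = expr.split()
    if len(fields) != 5:
        return False
    for value, (name, allowed) in zip(fields, FIELD_RANGES):
        if value == "*":
            continue  # asterisk matches any value
        if not (value.isdigit() and int(value) in allowed):
            return False
    return True

print(validate_cron("0 0 * * *"))   # prints True: midnight every day
print(validate_cron("61 0 * * *"))  # prints False: minute out of range
```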

&lt;p&gt;When you run the workflow, it runs immediately and then automatically every day thereafter. If you want the workflow to run unattended, you will need to remove or modify the approval step.&lt;/p&gt;

&lt;p&gt;You can remove the schedule trigger by deleting the triggers section in the code and saving changes. The graph button will change back to “Add trigger.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;You have configured a workflow that adds encryption to AWS S3 storage buckets, and learned how to set up a scheduled trigger. As this article shows, the many preconfigured workflows make it easy to get started with Relay, and the code is readily available for modification and extension outside the GUI.&lt;/p&gt;

&lt;p&gt;Other example Relay workflows are available for many Azure, AWS, and GCP tasks.&lt;/p&gt;

&lt;p&gt;Check out &lt;a href="https://relay.sh"&gt;Relay&lt;/a&gt; and start automating your DevOps maintenance today.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Save Time and Money by Automatically Deleting Unused Azure Load Balancers</title>
      <dc:creator>Relay</dc:creator>
      <pubDate>Tue, 12 Jan 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/save-time-and-money-by-automatically-deleting-unused-azure-load-balancers-236j</link>
      <guid>https://dev.to/relay/save-time-and-money-by-automatically-deleting-unused-azure-load-balancers-236j</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6oNsjt10--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/8b4ceff77a8c32b96fb20578ddb20f06/0ff54/dollar-4492709_1280.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6oNsjt10--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/8b4ceff77a8c32b96fb20578ddb20f06/0ff54/dollar-4492709_1280.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the cloud is supposed to reduce on-premises infrastructure costs and related maintenance: instead of deploying more servers, storage, and networking components to your own datacenter, you deploy them as cloud resources. However, cloud resources bring their own risks: over-provisioning, under-use, and resources left running that are not always needed or, even worse, no longer in use.&lt;/p&gt;

&lt;p&gt;To help you avoid paying for unused resources, Puppet created Relay. This tool enables you to automate DevOps cloud maintenance, including automatically cleaning up resources you no longer need. This reduces waste while saving DevOps time, helping your team focus on delivering exciting new product features.&lt;/p&gt;

&lt;h1&gt;
  
  
  Cleaning up Azure
&lt;/h1&gt;

&lt;p&gt;In this article, we walk you through a common scenario. You may be using Azure infrastructure components, like virtual machines and related virtual networking resources, together with Azure Load Balancers. When you no longer need a virtual machine and delete it, you may forget about its Azure Load Balancer. The load balancer then continues to incur charges without doing useful work. A Relay workflow helps you clean this up.&lt;/p&gt;

&lt;p&gt;To use Relay:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Activate your Relay account at &lt;a href="https://relay.sh"&gt;relay.sh&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Define a connection between Relay and Azure, using an Azure service principal.&lt;/li&gt;
&lt;li&gt;Create your Relay workflow, which then performs the cleanup.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s examine each of these steps with step-by-step guidance. If you already have an Azure subscription with administrative access, you can follow these steps in your own environment.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting Up Relay
&lt;/h1&gt;

&lt;p&gt;Setting up a Relay account is straightforward. It is a hosted cloud service with nothing to download, install, update, or maintain.&lt;/p&gt;

&lt;p&gt;First, create a new Relay account by completing the required fields on the &lt;a href="https://app.relay.sh/signup"&gt;signup page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/016c2de45ba4f756051ff3ddf4bfb395/0342e/image1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0SYcSjzW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/016c2de45ba4f756051ff3ddf4bfb395/0342e/image1.png" alt="Relay sign up screen" title="Relay sign up screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking the activation link in your email, create a complex password, confirm it, and that’s all it takes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/7a54529869b1bc8659e60a44bf14d771/347c0/image2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o4hCKBdl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/7a54529869b1bc8659e60a44bf14d771/347c0/image2.png" alt="Welcome to Relay screen" title="Welcome to Relay screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, after successfully logging on to the Relay platform, you are ready to start. From the Relay portal, browse sample workflows or create a new one from scratch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/5d528dc9fdc1bcf90ba78665a7272684/4c61e/image3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9CbpDC8Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/5d528dc9fdc1bcf90ba78665a7272684/4c61e/image3.png" alt="Relay Sidebar" title="Relay Sidebar"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is also an option to define &lt;em&gt;connections&lt;/em&gt;, where you specify the service account of your cloud platform. Relay supports many different public and private cloud environments, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Kubernetes. In our example, we will use Azure, but the process is similar in all environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/c886bf087c40d9386c590062247e1338/7ecec/image4.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3KJMN_kj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/c886bf087c40d9386c590062247e1338/7ecec/image4.png" alt="Add connection" title="Add connection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Configuring Access to Azure
&lt;/h1&gt;

&lt;p&gt;Before creating a workflow, let’s start by defining a connection for Azure. This relies on an Azure Active Directory service principal. Think of this object as a user account for an application, comparable to an Azure administrative account.&lt;/p&gt;

&lt;p&gt;Once you create the service principal, apply Azure Role Based Access Control (RBAC) permissions, limiting this account’s administrative capabilities to keep your production environment secure. &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals"&gt;Optionally, you can read more about service principal objects.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are several ways to create a service principal: the Azure Portal, Azure Cloud Shell, PowerShell, an ARM template, or the REST API. We will show you how to create the service principal using Azure Cloud Shell.&lt;/p&gt;

&lt;p&gt;First, navigate to the &lt;a href="https://portal.azure.com"&gt;Azure Portal&lt;/a&gt;, and open Azure Cloud Shell from the top right menu. Note: if this is the first time you use Azure Cloud Shell, it will ask you to create an Azure Storage Account and Azure FileShare – complete this step to continue. Select Bash as the interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/e1d8057a552568aaf91a06a420d83559/a66f5/image5.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kl0BN5it--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/e1d8057a552568aaf91a06a420d83559/64756/image5.png" alt="Azure Cloud Shell" title="Azure Cloud Shell"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Type the following Azure command-line interface (CLI) command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az ad sp create-for-rbac -n "Relaysp"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the service principal in your Azure Active Directory and displays the account credentials as shell output. Copy this information for later use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/434fc493443fc467098289a782aeac40/f27d7/image6.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--swhSzc8I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/434fc493443fc467098289a782aeac40/64756/image6.png" alt="Azure account credentials" title="Azure account credentials"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the &lt;a href="https://app.relay.sh"&gt;Relay portal&lt;/a&gt; and select “Connections”. Next, click the “Add Connection” button and choose Azure from the list. This opens the “Set up your Azure Connection” window. Complete the fields, copying the information from the Cloud Shell output, as shown in the below example:&lt;/p&gt;

&lt;p&gt;The Relay field names map to the Azure service principal output as follows (Relay name – Azure CLI field):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subscription ID - Subscription&lt;/li&gt;
&lt;li&gt;Client ID - appId&lt;/li&gt;
&lt;li&gt;Tenant ID - tenant&lt;/li&gt;
&lt;li&gt;Secret - password&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/62ed4eccbfed4efb68b2520630ffa716/65dc2/image7.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RLmrjP0d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/62ed4eccbfed4efb68b2520630ffa716/65dc2/image7.png" alt="Set up your Azure connection" title="Set up your Azure connection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save the information. This creates your connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/6546b3fcbaaac2a87b3bfaa62f78e8a6/37cfc/image8.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zwaId-qj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/6546b3fcbaaac2a87b3bfaa62f78e8a6/64756/image8.png" alt="Azure connection created" title="Azure connection created"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are now ready to create our workflow.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating a Workflow
&lt;/h1&gt;

&lt;p&gt;There are a couple of different ways to create the workflow. Remember, workflows are written in a YAML dialect, so we can author the file in any capable text editor, such as Visual Studio Code.&lt;/p&gt;

&lt;p&gt;You could write the YAML from scratch, but Relay provides an extensive open source library of sample workflows on GitHub, integrated into the Relay portal. So, let’s have a look.&lt;/p&gt;

&lt;p&gt;First, from the Relay portal, navigate to Workflows. Select “Explore our workflow library”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/740980c01291e7eee5389ff071ec9511/35d52/image9.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pGXdPSLE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/740980c01291e7eee5389ff071ec9511/64756/image9.png" alt="Explore our workflow Library" title="Explore our workflow Library"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the category list, select “Cost optimization”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/69b1f4313eda30ea1ff1d999aa2e2a15/2bea0/image10.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jJuxTlJH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/69b1f4313eda30ea1ff1d999aa2e2a15/64756/image10.png" alt="Select cost optimization" title="Select cost optimization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Locate the “Delete empty Azure Load Balancers” workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/fc1f715ed700241ec0ae57b980abfb2a/c807c/image11.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nQe9k0cL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/fc1f715ed700241ec0ae57b980abfb2a/64756/image11.png" alt="Delete empty Azure Load Balancers workflow" title="Delete empty Azure Load Balancers workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click “Use this workflow”, provide a unique name, and confirm by pressing “Create workflow”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/04a1e88780a15712ecb46af4fb1949d3/bb7f4/image12.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K6lH5YdT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/04a1e88780a15712ecb46af4fb1949d3/bb7f4/image12.png" alt="Create Relay workflow" title="Create Relay workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This imports the workflow into your dashboard. From here, you need to complete some minimal configuration settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/e348ff81f4f58fcf47fa1c99713a0a57/7bd25/image13.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R-oRp22f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/e348ff81f4f58fcf47fa1c99713a0a57/64756/image13.png" alt="Workflow created" title="Workflow created"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first warning we get is “missing required connection”. You need to specify which Azure Connection the workflow should use, namely, the one you created in the previous step. To fix this error, click the “Fill in missing connections” link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/dcf7f795aa729f37831501c7061d99f1/25327/image14.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nrebHWBh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/dcf7f795aa729f37831501c7061d99f1/25327/image14.png" alt="Ensure youre Azure connection is added" title="Ensure youre Azure connection is added"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, it shows a default connection of “my-azure-account”. You could add a new connection here; however, we already created one. To use it, switch back to the Code view of the workflow and make a change in the YAML.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/cb8881aa0932d72e1e992439dc7d8656/a1898/image15.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gc9w5FIP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/cb8881aa0932d72e1e992439dc7d8656/64756/image15.png" alt="Update your connection in the code view" title="Update your connection in the code view"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the code editor, search for the keyword “connection” (line 27 in the sample file), and replace the name “my-azure-account” with the name of the Azure Connection you created earlier (Azure-Relay in our setup). Make sure you save the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/0ffea7fb9e65808cd1818423e5fc6d96/55954/image16.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--63iqIQTh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/0ffea7fb9e65808cd1818423e5fc6d96/64756/image16.png" alt="Updated connection" title="Updated connection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how the connection is now recognized in the settings pane on the right.&lt;/p&gt;

&lt;p&gt;From here, let’s test the workflow to ensure it can successfully connect to our Azure subscription and detect Load Balancers. Click “Run” and confirm by pressing “run workflow” in the popup window. Also, notice this flow can run in dryRun mode, which means it won’t actually change anything in our environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/1d9e02f9b3c92bf6ddbb85f7fb2bffb0/684d5/image17.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EO1j-4Qo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/1d9e02f9b3c92bf6ddbb85f7fb2bffb0/64756/image17.png" alt="Run the workflow manually" title="Run the workflow manually"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow kicks off and displays the step-by-step sequence:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/4bea8381799a8493fa8f8b280edcfd17/66712/image18.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IYluKEh_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/4bea8381799a8493fa8f8b280edcfd17/66712/image18.png" alt="Queued worflow" title="Queued worflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/632c67936671167e9824f776c7c7d03d/afe45/image19.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kFwdNz1I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/632c67936671167e9824f776c7c7d03d/afe45/image19.png" alt="Running workflow" title="Running workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dry-run workflow worked fine. To look at a step in a bit more detail, hover over it, for example “list-azure-load-balancers”, and select “View logs”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/784815dcc328853a7d054ba6a705373b/8c7b4/image20.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NEgwCl0G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/784815dcc328853a7d054ba6a705373b/8c7b4/image20.png" alt="View logs" title="View logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following output is displayed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/5fbcae8d05c8783f7f81facc17b27751/e69c5/image21.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VL1FSQaC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/5fbcae8d05c8783f7f81facc17b27751/64756/image21.png" alt="Final output" title="Final output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the workflow lists the valid load balancers. Note: we haven’t deployed a Load Balancer yet, so the empty result is expected; the important part is that the workflow ran successfully.&lt;/p&gt;

&lt;p&gt;Remember, this workflow is checking for “empty load balancers”, which means it looks for Azure Load Balancers without any endpoint connection parameters. To make this a more viable test, let’s deploy a Load Balancer in Azure. If you need some assistance on how to do this, you might use this &lt;a href="https://azure.microsoft.com/en-us/resources/templates/101-load-balancer-standard-create/"&gt;sample from Azure QuickStart Templates&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This deploys a Standard SKU Azure Load Balancer together with three Virtual Machines (VMs) as a back-end pool. If you prefer, you can update the deployment template to deploy only a single VM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/410cbf0644244710c3553432fab25359/f8041/image22.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3emIfXNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/410cbf0644244710c3553432fab25359/64756/image22.png" alt="Create a load balancer in Azure" title="Create a load balancer in Azure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After deploying this template, the setup looks similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/b7b5c36d3d881c83643164a2757ed011/cdade/image23.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E50GuoNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/b7b5c36d3d881c83643164a2757ed011/64756/image23.png" alt="Load balancer list page" title="Load balancer list page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s go back to Relay and run our workflow once more. Notice that, this time, the Load Balancer is actually detected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/8a1d091bc11e1046be96eada99301852/dc4d2/image24.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vSfMVe89--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/8a1d091bc11e1046be96eada99301852/64756/image24.png" alt="Logs for list load balancers step" title="Logs for list load balancers step"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, select the next step in the flow, “filter-loadbalancers”, and view its logs. This reveals that the detected Load Balancer is not empty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/ce142c63d2c822e86a83394bd5afa774/27773/image25.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5ovSjge3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/ce142c63d2c822e86a83394bd5afa774/64756/image25.png" alt="Logs for filter step" title="Logs for filter step"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is interesting to see how Relay determines whether a Load Balancer configuration is empty. Let’s have a look at the actual YAML code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/4e6349911687b195bc8613caf8265f00/93b57/image26.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3p4bX0tU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/4e6349911687b195bc8613caf8265f00/64756/image26.png" alt="Input file for filtering" title="Input file for filtering"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The step definition includes a pointer to a Python script, which looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/1f65d7c5593185e6ef235632c2b18f7a/c8502/image27.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CXOvyGsl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/1f65d7c5593185e6ef235632c2b18f7a/64756/image27.png" alt="Logic within filtering input file" title="Logic within filtering input file"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see it is checking for any Load Balancers with empty “backend_address_pools”.&lt;/p&gt;
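&lt;p&gt;The filtering logic can be approximated in a few lines of Python. This is a simplified sketch rather than the actual workflow script; it assumes each load balancer is a dict whose &lt;code&gt;backend_address_pools&lt;/code&gt; entries carry a &lt;code&gt;backend_ip_configurations&lt;/code&gt; list, mirroring the Azure SDK field names:&lt;/p&gt;

```python
def find_empty_load_balancers(load_balancers):
    """Return the names of load balancers whose backend pools are all empty."""
    empty = []
    for lb in load_balancers:
        pools = lb.get("backend_address_pools") or []
        # A load balancer counts as empty when no pool has any backend attached.
        if all(len(p.get("backend_ip_configurations") or []) == 0 for p in pools):
            empty.append(lb["name"])
    return empty

lbs = [
    {"name": "MyLB-lb", "backend_address_pools": [
        {"backend_ip_configurations": []}]},               # empty pool
    {"name": "busy-lb", "backend_address_pools": [
        {"backend_ip_configurations": [{"id": "vm1"}]}]},  # has a VM attached
]
print(find_empty_load_balancers(lbs))  # ['MyLB-lb']
```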

&lt;p&gt;To run a more valid test, let’s go back to the Azure Load Balancer, and clear the backend pool configuration. The easiest way to remove this is by using the following PowerShell cmdlet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Remove-AzureRMLoadBalancerBackendAddressPoolConfig

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more information on doing this, refer to the following Microsoft Doc:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/powershell/module/azurerm.network/remove-azurermloadbalancerbackendaddresspoolconfig?view=azurermps-6.13.0"&gt;Remove-AzureRmLoadBalancerBackendAddressPoolConfig&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: if you used the default settings from the Azure Quickstart Template sample deployment earlier, use the following PowerShell script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Get-AzureRmLoadBalancer -Name "MyLB-lb" -ResourceGroupName "MyLBRG" | Remove-AzureRmLoadBalancerBackendAddressPool -Name "LoadBalancerBackendPool" | Set-AzureRmLoadBalancer

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://relay.sh/static/ff831ee5e138edc00c42401bd88949ed/ef7d6/image28.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B3TgGel2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/ff831ee5e138edc00c42401bd88949ed/64756/image28.png" alt="Running the Azure PowerShell script" title="Running the Azure PowerShell script"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Load Balancer configuration shows an empty BackEndPool now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/5cd702b001e1577bf894ea2ee56f76de/2c0f3/image29.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9VMiNkCT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/5cd702b001e1577bf894ea2ee56f76de/64756/image29.png" alt="Azure load balancer configuration" title="Azure load balancer configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the Relay workflow once more to see what happens with the Azure resource. For this test, make sure the DryRun value is still set to True. The result is the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/a1e2d312819aa065c06b57191d1b6ecc/cb670/image30.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OImGwi79--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/a1e2d312819aa065c06b57191d1b6ecc/64756/image30.png" alt="Logs for filter step" title="Logs for filter step"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time, the empty Load Balancer is detected for removal.&lt;/p&gt;
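&lt;p&gt;The DryRun pattern the workflow uses can be sketched in Python. The function and parameter names here are illustrative, not the actual workflow step code; in the real workflow the delete branch calls the Azure API:&lt;/p&gt;

```python
def delete_load_balancers(names, dry_run=True):
    """Delete the given load balancers, or only report them when dry_run is set."""
    actions = []
    for name in names:
        if dry_run:
            # Dry run: record what would happen without touching Azure.
            actions.append("would delete " + name)
        else:
            # Real run: this is where the Azure SDK delete call would go.
            actions.append("deleted " + name)
    return actions

print(delete_load_balancers(["MyLB-lb"], dry_run=True))
print(delete_load_balancers(["MyLB-lb"], dry_run=False))
```

Running with &lt;code&gt;dry_run=True&lt;/code&gt; first, exactly as in the walkthrough, lets you inspect the planned removals before committing to them.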

&lt;p&gt;Next, trigger the workflow once more, this time setting DryRun to False, which means it will actually remove the Azure Load Balancer resource. You will also need to confirm “Yes” on the approval step in the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/72944be0e621febc5752e0f263e2f960/fc2b1/image31.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GBUHmMop--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/72944be0e621febc5752e0f263e2f960/64756/image31.png" alt="Logs for delete Azure load balancers step" title="Logs for delete Azure load balancers step"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After about a minute, we get confirmation that all specified load balancers have been deleted. Validating from the Azure Portal confirms this.&lt;/p&gt;

&lt;p&gt;As a last step in this process, I want to highlight the trigger option at the beginning of the workflow. Until now, we kicked off the workflow manually. This works fine for testing, but not in production. You may want to schedule this cleanup to validate your environment every week, or maybe every night.&lt;/p&gt;

&lt;p&gt;To schedule your load balancer cleanup, first, select the first step in the workflow, “Add trigger”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/7f7aba1f807bea31ac782ecdf198f2cb/cec09/image32.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QRZ4YG9t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/7f7aba1f807bea31ac782ecdf198f2cb/cec09/image32.png" alt="Add a trigger" title="Add a trigger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This opens a list of triggers to choose from, such as running a job every day or triggering the workflow through an HTTP request. The latter is interesting if you want to automate the process from your systems management tool: whenever you call the trigger’s HTTP URL, the workflow executes. Let’s go back to the daily-trigger scenario for now.&lt;/p&gt;
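&lt;p&gt;Calling an HTTP trigger is a plain POST request. Here is a minimal Python sketch using only the standard library; the trigger URL below is a placeholder – copy the real one from your workflow’s Settings sidebar. We build the request without sending it, since the final &lt;code&gt;urlopen&lt;/code&gt; call is what would actually fire the workflow:&lt;/p&gt;

```python
import json
import urllib.request

# Hypothetical trigger URL; use the one shown in your workflow's Settings sidebar.
trigger_url = "https://api.relay.sh/api/events/example-token"

payload = json.dumps({"source": "monitoring", "reason": "nightly cleanup"}).encode("utf-8")
req = urllib.request.Request(
    trigger_url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would fire the workflow; we stop short of sending here.
print(req.method, req.full_url)
```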

&lt;p&gt;Select the “Run a trigger every day” option, which inserts the corresponding code. The most important setting here is “schedule”, which uses standard cron notation to define when the workflow runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/6be10daed8202f532eee30570a4d190f/b38af/image33.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TnDg8v4a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/6be10daed8202f532eee30570a4d190f/b38af/image33.png" alt="Add a scheduled trigger" title="Add a scheduled trigger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information on cron syntax, I can recommend this &lt;a href="https://cron.help/examples"&gt;helpful link&lt;/a&gt;. You now have one less thing to monitor, and you can check it off your busy DevOps team’s to-do list.&lt;/p&gt;

&lt;h1&gt;
  
  
  Next Steps
&lt;/h1&gt;

&lt;p&gt;In this article, we introduced Relay, a product Puppet created to automate cloud management tasks. After configuring a cloud connection, we explored an example scenario, cleaning up empty Azure Load Balancers, and then learned how to trigger the workflow as a daily scheduled task.&lt;/p&gt;

&lt;p&gt;Other &lt;a href="https://github.com/puppetlabs/relay-workflows"&gt;example workflows&lt;/a&gt; enable you to easily set up maintenance of VMs, network interface controllers (NICs), and Disks.&lt;/p&gt;

&lt;p&gt;While this example detailed how to clean up Azure, Relay also works with other cloud environments. You can adjust the example above to develop workflows for any Azure, AWS, or GCP resources.&lt;/p&gt;

&lt;p&gt;Automating your load balancer cleanup and other DevOps tasks saves time and money, freeing your team to focus on creating great new features for your software applications. Check out &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt; to automate your DevOps maintenance today.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deployment Rollbacks via FireHydrant Runbook</title>
      <dc:creator>🌈 eric sorenson 🎹🔈🎚</dc:creator>
      <pubDate>Tue, 24 Nov 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/deployment-rollbacks-via-firehydrant-runbook-2jp1</link>
      <guid>https://dev.to/relay/deployment-rollbacks-via-firehydrant-runbook-2jp1</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mHPdactU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/2af27cbed92142e9574be5446392ff04/6e670/firehydrant-hero.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mHPdactU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/2af27cbed92142e9574be5446392ff04/6e670/firehydrant-hero.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://firehydrant.io"&gt;FireHydrant&lt;/a&gt; has a sophisticated set of response actions for coordinating communications, activities, and retrospectives for incidents that affect your services. &lt;a href="https://relay.sh"&gt;Relay&lt;/a&gt; helps by automating remediations that involve orchestrating actions across your infrastructure. In this example workflow, an incident that affects an application deployed on Kubernetes can trigger a rollback to a previous version automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/0577e8560ade4fff3973e25173d5b283/56e1b/firehydrant-graph.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rHxJGmaF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/0577e8560ade4fff3973e25173d5b283/56e1b/firehydrant-graph.png" alt="Graph from relay showing the workflow steps" title="Graph from relay showing the workflow steps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow makes a couple of assumptions about your infrastructure that must hold for it to work out of the box; we’d &lt;a href="https://github.com/puppetlabs/relay/issues/new"&gt;love to work with you&lt;/a&gt; if you need additional flexibility! Specifically, it maps FireHydrant “Services” to Kubernetes &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/deployment"&gt;deployments&lt;/a&gt;, and FireHydrant “Environments” to Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/"&gt;namespaces&lt;/a&gt;. Your deployment and rollback process is likely different from the one modelled here, but this should provide a good starting point for automating incident response activity with Relay.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting the Services
&lt;/h2&gt;

&lt;p&gt;In order to enable two-way communication between FireHydrant and Relay, we’ll need to do some work on both sides: Relay posting to FireHydrant requires a FireHydrant API key, and FireHydrant triggering Relay workflows uses a dynamically-generated Relay webhook URL.&lt;/p&gt;

&lt;p&gt;First, add the Relay workflow to your account &lt;a href="https://app.relay.sh/create-workflow?workflowName=firehydrant-rollback&amp;amp;initialContentURL=https%3A%2F%2Fraw.githubusercontent.com%2Fpuppetlabs%2Frelay-workflows%2Fmaster%2Ffirehydrant-rollback%2Ffirehydrant-rollback.yaml"&gt;using this link&lt;/a&gt;. When you click “Save”, Relay will both create the webhook URL you’ll need and prompt that you’re missing a Secret and a Connection - we’ll get to those in a moment.&lt;/p&gt;

&lt;p&gt;In FireHydrant, we’ll create a Runbook that will trigger the workflow by sending a webhook to Relay. Create a new Runbook and add a &lt;strong&gt;Send a Webhook&lt;/strong&gt; step. For the &lt;strong&gt;Endpoint URL&lt;/strong&gt;, paste the webhook address from the Relay workflow’s &lt;strong&gt;Settings&lt;/strong&gt; sidebar. The &lt;strong&gt;HMAC Secret&lt;/strong&gt; field is an arbitrary string (not currently used). For the &lt;strong&gt;JSON Payload&lt;/strong&gt; field, paste the following template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "incident_id": "{{ incident.id }}",
  "name": "{{ incident.name }}",
  "summary": "{{ incident.summary }}",
  "service": "{{ incident.services[0].name | downcase }}",
  "environment": "{{ incident.environments[0].name | downcase }}",
  "channel_id": "{{ incident.channel_id }}",
  "channel_name": "{{ incident.channel_name }}"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
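&lt;p&gt;The &lt;code&gt;{{ … }}&lt;/code&gt; placeholders above are rendered by FireHydrant’s Liquid templating before the webhook is sent. To preview what Relay receives, here is a toy Python renderer – it only handles plain lookups and the &lt;code&gt;downcase&lt;/code&gt; filter, and the incident values are made up for illustration:&lt;/p&gt;

```python
import re

template = '{"incident_id": "{{ incident.id }}", "service": "{{ incident.services[0].name | downcase }}"}'

# Hypothetical incident values, keyed by the Liquid expression they replace.
values = {
    "incident.id": "inc-123",
    "incident.services[0].name": "Sockshop-Frontend",
}

def render(tmpl, ctx):
    """Substitute {{ expr }} placeholders, applying the downcase filter if present."""
    def repl(match):
        parts = [p.strip() for p in match.group(1).split("|")]
        value = ctx[parts[0]]
        if "downcase" in parts[1:]:
            value = value.lower()
        return value
    return re.sub(r"\{\{(.*?)\}\}", repl, tmpl)

print(render(template, values))
```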



&lt;p&gt;Next, create a FireHydrant API key for Relay to post information back into the incident timeline. Under &lt;strong&gt;Integrations&lt;/strong&gt; - &lt;strong&gt;Bot users&lt;/strong&gt; in FireHydrant, create a new &lt;strong&gt;Bot user&lt;/strong&gt; with a memorable name and description. Save the resulting API token into a Relay secret on the Relay workflow’s &lt;strong&gt;Settings&lt;/strong&gt; sidebar named &lt;code&gt;apiKey&lt;/code&gt; (case-sensitive).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/a7f2f46b01ba758ce4bf728fc30bcd24/765bd/firehydrant-bot-user.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HilTDhJN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/a7f2f46b01ba758ce4bf728fc30bcd24/765bd/firehydrant-bot-user.png" alt="The Integrations - Bot Users screen in FireHydrant" title="The Integrations - Bot Users screen in FireHydrant"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  GCP Authentication Setup
&lt;/h2&gt;

&lt;p&gt;This workflow uses a GCP Connection type on Relay’s end, which requires a &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens"&gt;service account&lt;/a&gt; configured on your cluster. Follow the GCP guide to API Server Authentication’s &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication#service_in_other_environments"&gt;“Service in other environments”&lt;/a&gt; section to set one up. This workflow requires the service account to have the role &lt;code&gt;roles/container.developer&lt;/code&gt; attached to it; if you re-use the connection for other workflows it may require additional permissions. Once you’ve downloaded the service account JSON file, add a GCP Connection in Relay, name it &lt;code&gt;relay-service-account&lt;/code&gt;, and paste the contents of the JSON file into the dialog. Under the hood, Relay stores this securely in our Vault service and makes the contents available to workflow containers through the &lt;a href="https://relay.sh/docs/using-workflows/managing-connections/"&gt;!Connection custom type&lt;/a&gt; in the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/eac574df6085b89b4c8297f1f450c736/21062/firehydrant-new-connection.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g0QP-0ah--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/eac574df6085b89b4c8297f1f450c736/21062/firehydrant-new-connection.png" alt="The New Connection dialog in Relay for GCP connections" title="The New Connection dialog in Relay for GCP connections"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For non-GCP clusters, you can use Relay’s Kubernetes connection type, which requires less setup. (On GKE, raw access tokens rotate every hour, making them unsuitable for automated use; that’s why GKE clusters use the GCP connection type instead.) The Kubernetes connection type needs an access token, the cluster URL, and the CA certificate for the cluster; there are more detailed instructions accompanying &lt;a href="https://github.com/puppetlabs/relay-workflows/tree/master/kubectl-apply-on-dockerhub-push"&gt;this deployment workflow example&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Services and Environments
&lt;/h2&gt;

&lt;p&gt;One of FireHydrant’s big benefits is its awareness of your infrastructure. It takes a little bit of up-front work, but if you invest the time to map out your services and environments, you can dramatically streamline your incident response. In this example, we’ve enumerated the microservices that make up our Sock Shop application and associated them with different runbooks. Fortunately for the demo, the &lt;code&gt;sockshop-frontend&lt;/code&gt; service is a simple stateless Deployment in GKE, which makes new releases easy to manage with the &lt;code&gt;kubectl rollout&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/4e55b2fd3f9325a310d6e96ec059acab/44e31/firehydrant-sockshop-service.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KKznQqAV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/4e55b2fd3f9325a310d6e96ec059acab/44e31/firehydrant-sockshop-service.png" alt="The Infrastructure - Services screen in FireHydrant" title="The Infrastructure - Services screen in FireHydrant"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, the Environments section lets you enumerate the instances of your service, to better characterize the impact of an incident, help assign owners for remediation actions, and message outage information to the appropriate audiences. Check out this FireHydrant &lt;a href="https://help.firehydrant.io/en/articles/4192249-inventory-management-functionalities-services-and-environments"&gt;helpdesk article on inventory management&lt;/a&gt; for more details on infrastructure organization. For our purposes, the goal of defining environments is to map them onto Kubernetes namespaces where our application is running. (For production workloads, it’s more likely that your environments map to distinct clusters; that’s totally possible to handle in Relay but is beyond the scope of this introduction!)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/112131d19063b6d0ea71998e484b28b4/38095/firehydrant-environments.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sa1Gopry--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/112131d19063b6d0ea71998e484b28b4/38095/firehydrant-environments.png" alt="The Infrastructure - Environments screen in FireHydrant" title="The Infrastructure - Environments screen in FireHydrant"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Incident Creation and Response
&lt;/h2&gt;

&lt;p&gt;Now for the exciting part. Let’s say an update bumped the image on the frontend pods from a pinned version to &lt;code&gt;latest&lt;/code&gt; and everything broke.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% kubectl set image deployment/sockshop-frontend nginx-1=nginx:latest \
   --record --namespace production deployment.apps/sockshop-frontend
image updated
% kubectl rollout history deployment sockshop-frontend --namespace production
deployment.apps/sockshop-frontend
REVISION CHANGE-CAUSE
1 kubectl set image deployment/sockshop-frontend nginx:1.18.0=nginx:latest --record=true --namespace=production
2 kubectl set image deployment/sockshop-frontend nginx-1=nginx:latest --record=true --namespace=production

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But unbeknownst to the deployer, all was not well. After some troubleshooting, we determined that the application was degraded and the rollout was to blame. In FireHydrant, the on-call person declares an incident and indicates the affected service and environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/d8a03ae55caf835fe15319c9c7a666c6/f6909/firehydrant-new-incident.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7eAbw8RR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/d8a03ae55caf835fe15319c9c7a666c6/f6909/firehydrant-new-incident.png" alt="Create a new Incident in FireHydrant, with the sockshop-frontend service affected in the production environment" title="Create a new Incident in FireHydrant, with the sockshop-frontend service affected in the production environment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the incident is created, we can attach the &lt;code&gt;rollback-via-relay&lt;/code&gt; Runbook that contains our webhook to the incident. The benefit of doing it this way is that the credentials are stored in Relay rather than needing a command-line &lt;code&gt;kubectl&lt;/code&gt; setup as above, and you don’t have to remember the exact syntax to type if something’s broken at 4AM! The correct steps are stored in the workflow, reducing the possibility of error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/0c281766fe6e25263be45e97b5c00be8/cfc60/firehydrant-attach.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IleJItqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/0c281766fe6e25263be45e97b5c00be8/cfc60/firehydrant-attach.png" alt="In the Remediation tab, attach the rollback-via-relay runbook" title="In the Remediation tab, attach the rollback-via-relay runbook"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The incident timeline and associated Slack channel show these actions taking place, and in Relay we can see the webhook come in, the workflow kick off, and ultimately post back into the timeline with the output of the rollback command. Thanks to FireHydrant’s &lt;a href="https://help.firehydrant.io/en/articles/2862753-integrating-with-slack"&gt;awesome Slack integration&lt;/a&gt;, the updates roll into the channel in real time and chat messages are mirrored back into the incident so teams can coordinate their activities and keep a record of what happened. In this case, the rollback worked and we can resolve the issue quickly!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/676b42348ee67865222436c3d02c6599/f843c/firehydrant-slack-resolved.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mjIE_uTW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/676b42348ee67865222436c3d02c6599/f843c/firehydrant-slack-resolved.png" alt="Slack channel showing incident updates from Relay and a successful rollback" title="Slack channel showing incident updates from Relay and a successful rollback"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion and Next Steps
&lt;/h2&gt;

&lt;p&gt;FireHydrant’s Runbook-based system for coordinating actions in response to incidents is extremely powerful. The more you teach it about your infrastructure, the faster you’ll be able to respond when something goes wrong. And linking it to Relay via Runbooks enables another level of automated response and remediation. In this example, we were able to roll back a deployment without someone needing to run manual commands and potentially make things worse!&lt;/p&gt;

&lt;p&gt;There are lots of &lt;a href="https://relay.sh/workflows/"&gt;existing Relay workflows&lt;/a&gt; that can act as building blocks or examples to construct your own incident response workflow. By combining them with clear processes codified in FireHydrant, responders can solve issues more quickly, reduce downtime, and get back to higher-value work.&lt;/p&gt;

&lt;p&gt;To try this out for yourself, sign up for free &lt;a href="https://firehydrant.io"&gt;FireHydrant&lt;/a&gt; and &lt;a href="https://relay.sh"&gt;Relay&lt;/a&gt; accounts, and get started automating!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Happy Hacktoberfest</title>
      <dc:creator>Melissa Sussmann</dc:creator>
      <pubDate>Tue, 27 Oct 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/happy-hacktoberfest-28h</link>
      <guid>https://dev.to/relay/happy-hacktoberfest-28h</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2mH30b1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/8eec6bb8ea6610675dd542eff8dce071/6050d/happy-hacktoberfest-cover.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2mH30b1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/8eec6bb8ea6610675dd542eff8dce071/6050d/happy-hacktoberfest-cover.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Halloween! ‘Tis the spooky season and this year, we are kicking off a Hacktoberfest challenge just in time for Halloween. All you have to do for this Hacktoberfest challenge is run a workflow and fill out a brief survey about Relay.&lt;/p&gt;

&lt;p&gt;In return, we’ll send you a great Relay-themed t-shirt, inspired by the upcoming game Cyberpunk 2077. The first 75 respondents will get a free t-shirt, and we can confidently promise it won’t be as delayed as Cyberpunk. Sadly, we can only offer the shirt to people living in the continental USA.&lt;/p&gt;

&lt;p&gt;As you may already know, Relay (by Puppet) is an event-driven automation platform that pulls together all the tools and technologies DevOps engineers need to effectively manage their environment. It works by listening to signals from DevOps tools and apps people already use and then triggers workflows to orchestrate any required downstream service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Here’s what you need to do to get your free t-shirt:
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1. Sign up or log in to Relay
&lt;/h4&gt;

&lt;p&gt;If you don’t already have an account you can &lt;a href="https://app.relay.sh/signup"&gt;create one for free&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/4507895c48543c9eea88580d958dad51/ee2da/happy-hacktoberfest-1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JM5UvywI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/4507895c48543c9eea88580d958dad51/ee2da/happy-hacktoberfest-1.png" alt="Sign up for a Relay account" title="Sign up for a Relay account"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Run &lt;a href="https://app.relay.sh/create-workflow?workflowName=get-free-t-shirt&amp;amp;initialContentURL=https://gist.githubusercontent.com/kenazk/9a55d33e54b3dd5bd05376f853d49ac2/raw/b7d6ee66eb2ac5a8d24621fc7a900a4cb38d0358/workflow.yaml"&gt;this workflow&lt;/a&gt;!
&lt;/h4&gt;

&lt;p&gt;Install the workflow and then click the run button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/e45ce00bb2fd7748b93d57d7018c230f/ee2da/happy-hacktoberfest-2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jbaWKtNR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/e45ce00bb2fd7748b93d57d7018c230f/ee2da/happy-hacktoberfest-2.png" alt="Run the newly installed workflow" title="Run the newly installed workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Click the “View Logs” button
&lt;/h4&gt;

&lt;p&gt;In the logs you will find the URL for the survey. Head there to get started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/e389d59779d51cacd243504b2fb71007/ee2da/happy-hacktoberfest-3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y6HGPAEj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/e389d59779d51cacd243504b2fb71007/ee2da/happy-hacktoberfest-3.png" alt="View the logs to get the url for the survey" title="View the logs to get the url for the survey"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Fill out the survey
&lt;/h4&gt;

&lt;p&gt;This should only take a few minutes.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. We’ll send you a t-shirt!
&lt;/h4&gt;

&lt;p&gt;Thanks for helping make our product special, and please consider this a thank-you for your continued support.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Reduce MTTR with PagerDuty and Relay</title>
      <dc:creator>Melissa Sussmann</dc:creator>
      <pubDate>Mon, 21 Sep 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/how-to-reduce-mttr-with-pagerduty-and-relay-13nh</link>
      <guid>https://dev.to/relay/how-to-reduce-mttr-with-pagerduty-and-relay-13nh</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ho0UY1Eu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/111c65012010433985fec299e8157094/6e670/relay-and-pagerduty-blog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ho0UY1Eu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/111c65012010433985fec299e8157094/6e670/relay-and-pagerduty-blog.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DevOps and SRE teams are under intense pressure to reduce the Mean Time to Recovery (MTTR) in resolving incidents. With the proliferation of cloud services and the increasing complexity of DevOps toolchains, engineers today need to not only learn how to use these services but also troubleshoot them when an incident is raised at 2 AM. Incident response is still manual today – cobbling together runbooks and ad hoc scripts and orchestrating people to respond. This “digital duct tape” approach results in what we call the “&lt;a href="https://relay.sh/blog/fix-your-devops-dumping-ground/"&gt;DevOps Dumping Ground&lt;/a&gt;”, which ultimately extends MTTR.&lt;/p&gt;

&lt;h2&gt;
  
  
  How PagerDuty &amp;amp; Relay Work Together
&lt;/h2&gt;

&lt;p&gt;PagerDuty is the industry-leading incident management platform that provides reliable notifications, automatic escalations, on-call scheduling, and other functionality to help teams detect and fix infrastructure problems quickly.&lt;/p&gt;

&lt;p&gt;Relay by Puppet is an event-driven automation platform that pulls together all the tools and technologies DevOps engineers need to effectively manage a cloud environment. Unlike many existing workflow automation tools, Relay can intelligently respond to external signals by combining event-based triggers with a powerful workflow engine in a single platform.&lt;/p&gt;

&lt;p&gt;The latest integration between Relay and PagerDuty eliminates the “digital duct tape” by creating reusable, event-driven workflows to close the loop on incidents faster through Relay’s event-based automation approach. PagerDuty users can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enrich alert data&lt;/strong&gt; : Using the new &lt;a href="https://support.pagerduty.com/docs/change-events"&gt;Change Events&lt;/a&gt; launched at PagerDuty Summit, Relay enhances alerts with diagnostic information to speed time-to-resolution by presenting more context around the alert.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate incident communication&lt;/strong&gt; : Whether it’s creating a Slack room, updating a Jira ticket, or notifying team members, Relay ensures that communication is timely and updated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trigger Auto-Remediation Workflows&lt;/strong&gt; : Raising PagerDuty incidents can initiate Relay workflow runs to troubleshoot &amp;amp; remediate common problems securely and quickly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example: How to Automate Incident Communication Plans
&lt;/h2&gt;

&lt;p&gt;A key way to reduce MTTR is to formalize an incident communication plan. Making sure that teams have a robust plan for understanding roles and opening communication channels is key to reducing incident response time. Relay can automate this workflow for you by contacting the on-call engineer with a message detailing content from the incident.&lt;/p&gt;

&lt;p&gt;Relay uses “triggers” and “steps” to automate a set of actions. Steps are reusable, modular, and composable – things like getting a user’s info, sending Slack and Twilio messages, and using the PagerDuty Event API to provide more information on an incident. “Triggers” are based on cloud events, git events, monitoring alerts, tickets, and incidents. In the example below, we see how a PagerDuty incident triggers the following incident response workflow utilizing the steps mentioned.&lt;/p&gt;

&lt;p&gt;When a new PagerDuty incident is raised, Relay looks up the on-call person’s email address, identifies that user in Jira and Slack, and creates a Jira ticket for the production incident. Relay then creates a Slack room as a production incident command center, invites the on-call in, along with the pertinent engineering manager, and sets the topic of the room with a link to the Jira ticket that has been created. Finally, it sends a message to the Slack room and posts a note with the expectations of how a production incident policy should be followed.&lt;/p&gt;
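&lt;p&gt;Sketched as Relay YAML, the shape of that plan might look like the following. The trigger binding, step image names, and data paths here are illustrative assumptions, not the exact workflow linked below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch; image names and data bindings are assumptions.
triggers:
  - name: pagerduty-incident
    source:
      type: webhook
    binding:
      parameters:
        incidentTitle: !Data incident.title
        onCallEmail: !Data assignee.email

steps:
  - name: create-jira-issue
    image: relaysh/jira-step-issue-create      # hypothetical step image
    spec:
      summary: !Parameter incidentTitle

  - name: create-slack-channel
    image: relaysh/slack-step-channel-create   # hypothetical step image
    dependsOn: create-jira-issue
    spec:
      channel: production-incident

  - name: post-policy-message
    image: relaysh/slack-step-message-send     # hypothetical step image
    dependsOn: create-slack-channel
    spec:
      channel: production-incident
      message: Follow the production incident policy; ticket linked in the topic.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Each step is modular, so swapping Jira for another ticketing system, or adding a Twilio SMS step for paging, only changes one block of the workflow.&lt;/p&gt;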

&lt;p&gt;Using PagerDuty’s exciting new Change Events, Relay elaborates on content from the incident with enriched alert data. This enables the individual on call to respond to the incident quickly, with less toil required for ticket creation and communication on what triggered the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/workflows/pagerduty-production-incident-policy"&gt;Try out this workflow here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MZHzhWnx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/517ed5dec8c7b265eb7947f5e3bc80c8/pagerduty-workflow-scroll.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MZHzhWnx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/517ed5dec8c7b265eb7947f5e3bc80c8/pagerduty-workflow-scroll.gif" alt="Relay PagerDuty Production Incident Policy Workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Customize your Incident Response
&lt;/h2&gt;

&lt;p&gt;There are several starter workflows available for PagerDuty users, which you can find on their &lt;a href="https://relay.sh/integrations/pagerduty/"&gt;integration page&lt;/a&gt;. You can use these workflows to create an issue in Jira, send a message to Slack, and send a Twilio SMS automatically when a PagerDuty incident is triggered.&lt;/p&gt;

&lt;p&gt;Everyone’s workflow is a little different, so Relay workflows are customizable to fit your use case. Relay provides contextual help within its sidebar, which lets you browse the library of integrations and steps, making it easy to customize your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--47W2ENy4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/91343396345a48b1bb50e869bac967c2/relay-library.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--47W2ENy4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/91343396345a48b1bb50e869bac967c2/relay-library.gif" alt="Relay Workflow Authoring Library"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Sign Up for Relay!
&lt;/h2&gt;

&lt;p&gt;Use Relay with PagerDuty to reduce your incident response time and improve observability. Reducing MTTR is key to successful DevOps management, and enabling event-driven automation makes your incident response time much shorter. Relay makes this easier with workflows that fix the common, well-understood problems your teams have already identified. To learn more about Relay, visit our site at &lt;a href="//relay.sh"&gt;relay.sh&lt;/a&gt; and sign up for our free beta!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Event-Driven Web is Not the Future</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Wed, 19 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/the-event-driven-web-is-not-the-future-5f1l</link>
      <guid>https://dev.to/relay/the-event-driven-web-is-not-the-future-5f1l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YRINiD9q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/83c5851ae381a2fe5bcdbc750b93dcb8/6050d/the-event-driven-web-is-not-the-future.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YRINiD9q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/83c5851ae381a2fe5bcdbc750b93dcb8/6050d/the-event-driven-web-is-not-the-future.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you see a notification on your smartphone, your brain processes the request quickly and determines how to react. It’s an efficient process and your nervous system is built for this use case. By contrast, most Internet-connected systems are far less event-driven. If there’s a change in one service, you won’t know about it until you check. It’s the equivalent of reloading an app to see if there’s something new—it works eventually, but it’s not efficient.&lt;/p&gt;

&lt;p&gt;You might expect that the event-driven web should be the future. If systems knew about updates immediately, they could seamlessly make changes in reaction to the new information. New servers could be provisioned, unneeded resources could be turned off, and your microwave clock could always be accurate (ok, that might be asking too much).&lt;/p&gt;

&lt;p&gt;The truth is: real-time patterns have been around for years. The evented web is not the future, because the present is already fully capable of what it offers. Yet most developers aren’t taking advantage of event-driven development. The pieces are there, but not every service supports events. Perhaps most importantly, there are few tools that make events easy to consume, because development is stuck in a client-server mentality.&lt;/p&gt;

&lt;p&gt;As developers, it’s time to embrace this entirely un-new, but useful, approach to building Internet-connected systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Real-time Patterns Can Accomplish
&lt;/h2&gt;

&lt;p&gt;Whenever a change occurs in one system or new data is available in another, all of that context should be shared with systems that have declared an interest. In the same way that we expect smartphone notifications, developers can design for events. However, rather than causing more distractions for us to triage, they can save us time. Real-time patterns provide &lt;a href="https://relay.sh/blog/building-the-future-of-devops-automation/"&gt;immediate updates without the manual button-pushing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Consider the many tools required to build, deploy, monitor, and improve applications today. AWS itself has over 200 services and that’s just one cloud provider. Once you consider other cloud services and the ecosystem around all of it, you’ll be working with more tools than you’ll typically count, each with its own handful of API-driven knobs.&lt;/p&gt;

&lt;p&gt;When those knobs are turned via thoughtful automation, you start to see what’s possible. You can streamline your deploy processes toward the promise of continuous delivery. You can trigger events based on pull request activity or system monitoring and scale up and down cloud needs in response to incidents.&lt;/p&gt;

&lt;p&gt;Too often companies take event-based operations halfway. More instrumentation without automation is not the goal. Your team could very well spend all of its time dousing cloud-borne fires. Each alert becomes a new task on a never-ending list. Even though we have the technology to reach the real-time opportunity, the momentum of how we’ve done things for decades holds us back.&lt;/p&gt;

&lt;h2&gt;
  
  
  We Are Stuck in a Client-Server Mentality
&lt;/h2&gt;

&lt;p&gt;Since the early days of the Web, tools have operated on a simple model: a browser requests a resource and the server responds in kind. Front-end advances have given us interfaces that emulate real-time, but behind the scenes, these technologies often look a lot like the client-server model. It’s from that mindset that many of our tools and development processes are created.&lt;/p&gt;

&lt;p&gt;If you’ve said “try reloading it” in recent memory, you recognize the issue. Servers respond to events, clients don’t. To move into the real-time present, servers must also &lt;em&gt;send&lt;/em&gt; events, which means a client must be able to &lt;em&gt;receive&lt;/em&gt; events.&lt;/p&gt;

&lt;p&gt;There are current solutions to implement real-time patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Polling&lt;/strong&gt;, where you repeatedly check for new data on a set interval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Webhooks&lt;/strong&gt;, where you subscribe to receive updates as they become available&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSockets&lt;/strong&gt;, a two-way protocol and proposed standard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each requires changes in how you architect your applications. You need to organize how you receive events, store the data, react to their contents, and chain the results to other services. Despite being an API-driven process, it’s unlikely to fit the model of your existing API integrations, which reside in the client-server mentality.&lt;/p&gt;

&lt;p&gt;To break into the real-time paradigm requires tooling that supports the shift in thinking, without putting the additional architectural burden on your team.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Best Teams Will Use Real-time Tools
&lt;/h2&gt;

&lt;p&gt;The evented web, which allows for real-time patterns, is very much available now. You can bring its efficiency to your team if you organize the right tools. It is unlikely you’ll want to build the infrastructure yourself unless you have unique needs or a team of engineers waiting for their next project.&lt;/p&gt;

&lt;p&gt;Some important features to look for when implementing real-time patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support for webhooks and polling&lt;/li&gt;
&lt;li&gt;Integrations with the DevOps tools you already use&lt;/li&gt;
&lt;li&gt;Audit trails of each run of the workflow&lt;/li&gt;
&lt;li&gt;API secret management support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we didn’t find anything to meet those needs, we built &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt;. You can create automation with support for a growing number of DevOps and business tools. Write workflows in a familiar YAML syntax and run them in our secure environment. &lt;a href="https://relay.sh/"&gt;Try Relay for free now&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Provision Cloud Infrastructure</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Wed, 12 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/how-to-provision-cloud-infrastructure-b6i</link>
      <guid>https://dev.to/relay/how-to-provision-cloud-infrastructure-b6i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0P5oI6Jn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/94d91478fc80ef998897557913794e8d/af370/how-to-provision-cloud-infrastructure.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0P5oI6Jn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/94d91478fc80ef998897557913794e8d/af370/how-to-provision-cloud-infrastructure.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning Cloud Infrastructure
&lt;/h2&gt;

&lt;p&gt;One of the best things about cloud computing is how it converts technical efficiencies into cost-savings. Some of those efficiencies are just part of the tool kit, like pay-per-use Lambda jobs. Good DevOps brings a lot of savings to the cloud, as well. It can smooth out high-friction state management challenges. Sprucing up how you provision cloud services, for example, speeds up deployments. That’s where treating infrastructure like the rest of your codebase comes in.&lt;/p&gt;

&lt;p&gt;Treating infrastructure as code opens the doors to tons of optimization opportunities. One standout approach is standardization, which can simplify operational challenges. When you deploy from a configuration document, you decrease risk and speed up development. You also can employ those configuration files in automated DevOps workflows. In this post, we’ll give some examples of how you can leverage these benefits using Terraform for the deployment of cloud resources and Bolt for configuring them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy From Documentation
&lt;/h2&gt;

&lt;p&gt;Terraform is great for building and destroying temporary resources. It can simplify an ad-hoc data processing workflow, for example. Let’s say you’re doing on-demand data processing in AWS. You need to spin up an EMR cluster, transform your data, and destroy the cluster immediately. This transient cluster workflow pattern saves you a ton. But manually deploying the cluster for each job slows down development time. With Terraform, you can write that cluster’s specifications once and check it into git to ensure you deploy the same version each time.&lt;/p&gt;

&lt;p&gt;Terraform configurations are incredibly easy to write and read. They can also be easily modularized for reuse. Rather than plugging all of the configurations into one file, templatize the resource and the value for each argument from a &lt;code&gt;tfvars&lt;/code&gt; file, which acts as a config.&lt;/p&gt;

&lt;p&gt;Here is a truncated example of a templatized EMR resource that you might put in your &lt;code&gt;main&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_emr_cluster" "cluster" {
  # required args:
  name = "${var.name}"
  release_label = "${var.release_label}"
  applications = "${var.applications}"
  service_role = "${var.service_role}"

  master_instance_group {
    instance_type = "${var.master_instance_type}"
  }

  core_instance_group {
    instance_type = "${var.core_instance_type}"
    instance_count = "${var.core_instance_count}"
  }
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;var&lt;/code&gt;s are referenced from a &lt;code&gt;terraform.tfvars&lt;/code&gt; file that inherits variable declarations from a &lt;code&gt;variables.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform.tfvars&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name = "spark-app"
release_label = "emr-5.30.0"
applications = ["Hadoop", "Spark"]
master_instance_type = "m3.xlarge"
core_instance_type = "m3.xlarge"
core_instance_count = 1

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "name" {}
variable "release_label" {}
variable "applications" {
  type = "list"
}
variable "master_instance_type" {}
variable "core_instance_type" {}
variable "core_instance_count" {}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice how easy it is to modify an instance type. The arguments are all well-documented and centrally managed in code, so no one has to dig up a wiki page or a previous version of the application. Just check the configuration out of git and refer to a single, deployable config. Note that this is an incomplete list of arguments; for the full list of required and optional arguments, see Terraform’s &lt;a href="https://www.terraform.io/docs/providers/aws/r/emr_cluster.html"&gt;&lt;code&gt;aws_emr_cluster&lt;/code&gt; documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Furthermore, by storing your Terraform repo in git, you can leverage event-driven automation workflows, such as redeploying the resource on merges into your master branch.&lt;/p&gt;
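&lt;p&gt;As a sketch, such a redeploy-on-merge workflow could be expressed in Relay YAML like this. The trigger binding, step image name, and condition syntax below are illustrative assumptions rather than a definitive implementation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch; image names and bindings are assumptions.
triggers:
  - name: github-push
    source:
      type: webhook
    binding:
      parameters:
        branch: !Data ref

steps:
  - name: terraform-apply
    image: relaysh/terraform-step-apply    # hypothetical step image
    when:
      - !Fn.equals [!Parameter branch, refs/heads/master]
    spec:
      directory: infra/emr
      workspace: production
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Gating the apply step on the branch name means feature-branch pushes trigger nothing, while merges to master redeploy the resource from the checked-in configuration.&lt;/p&gt;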

&lt;h2&gt;
  
  
  Automate Config Management
&lt;/h2&gt;

&lt;p&gt;Now let’s look at how to conveniently update persistent infrastructure such as a fleet of always-on EC2 instances. Applying new provisioning actions to each one can be time-consuming. Bolt by Puppet helps you manage multiple remote resources at once. You can use it to perform scheduled uptime monitoring or you can run one-off patching tasks. In either case, Bolt tools can be captured within your projects and maintained in git. That allows you to apply the benefits of infrastructure as code to your configuration and maintenance programs.&lt;/p&gt;

&lt;p&gt;Bolt actions are either tasks or plans. Tasks are on-demand actions. Plans are orchestration scripts. Let’s start with a simple task. Suppose your development team needs a Docker engine installed on a suite of EC2 instances. It would look like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bolt task run package action=install name=docker --targets my-ec2-fleet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The installation will be applied to all of the resources declared as targets in the project’s &lt;code&gt;inventory&lt;/code&gt; file.&lt;/p&gt;
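&lt;p&gt;A minimal &lt;code&gt;inventory.yaml&lt;/code&gt; defining that target group might look like this; the hostnames and transport settings are placeholders for your environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder hosts; adjust the transport settings for your environment.
groups:
  - name: my-ec2-fleet
    targets:
      - ec2-host-1.example.com
      - ec2-host-2.example.com
    config:
      transport: ssh
      ssh:
        user: ec2-user
        run-as: root
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;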

&lt;p&gt;Plans are declarative workflows written in YAML that run one or more tasks. That makes them easy to read and modify. A simple plan to provision newly deployed web servers with nginx would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parameters:
  targets:
    type: TargetSpec

steps:
  - resources:
    - package: nginx
      parameters:
        ensure: latest
    - type: service
      title: nginx
      parameters:
        ensure: running
    targets: $targets
    description: "Set up nginx on the web servers"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice that &lt;code&gt;targets&lt;/code&gt; is parameterized. That allows you to dynamically apply a list of resources when the plan is executed. You can leverage that further by integrating Bolt with other DevOps workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Consolidate Into a Workflow
&lt;/h2&gt;

&lt;p&gt;Now we’ve covered provisioning with both Terraform and Bolt. Both are great tools that help you standardize infrastructure and configuration processes as code. You can even string them together in a modular event-driven workflow to reliably reuse and modify. Relay, a workflow automation tool from Puppet, provides integrations with Terraform, Bolt, and AWS. For example, declaratively map successful Terraform deployment as triggers that pass AWS resource IDs to Bolt for further configuration.&lt;/p&gt;
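&lt;p&gt;A chained provision-then-configure workflow could be sketched like this; the step images and output names here are assumptions for illustration, not a definitive implementation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch; step images and output names are assumptions.
steps:
  - name: provision
    image: relaysh/terraform-step-apply    # hypothetical step image
    spec:
      directory: infra/

  - name: configure
    image: relaysh/bolt-step-plan-run      # hypothetical step image
    dependsOn: provision
    spec:
      plan: webserver::provision
      parameters:
        targets: !Output [provision, instance_ids]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Because the Bolt plan’s &lt;code&gt;targets&lt;/code&gt; parameter is filled from the previous step’s output, newly created instances are configured without anyone copying resource IDs by hand.&lt;/p&gt;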

&lt;p&gt;Check out other &lt;a href="https://relay.sh/integrations"&gt;integrations&lt;/a&gt; and see how &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt; can streamline your cloud provisioning workflow.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Take Control of your DevOps Dumping Ground with Relay!</title>
      <dc:creator>Melissa Sussmann</dc:creator>
      <pubDate>Thu, 30 Jul 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/take-control-of-your-devops-dumping-ground-with-relay-3a0e</link>
      <guid>https://dev.to/relay/take-control-of-your-devops-dumping-ground-with-relay-3a0e</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0YxqswAG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/c34642cf047d5f88893b21aae1cc75d8/6050d/fix-your-devops-dumping-ground.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0YxqswAG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/c34642cf047d5f88893b21aae1cc75d8/6050d/fix-your-devops-dumping-ground.png" alt=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=e3v4Rw-rSHM&amp;amp;feature=youtu.be"&gt;click here for the webinar&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn How to Use Relay to Clean up Your DevOps Dumping Ground
&lt;/h2&gt;

&lt;p&gt;As the automation surface area grows to accommodate hundreds of interconnected APIs on the cloud, developers are using their own, home-grown “digital duct tape” to manage a growing “DevOps dumping ground”. For many organizations, this home-grown glue logic is inconsistent, not repeatable, and expensive to maintain across hundreds of event-based workflows and thousands of combinations. We believe the answer lies in automation workflows: in particular, workflows-as-code that can be triggered by events. We want to replace engineers’ home-grown digital duct tape with reusable, event-driven workflows.&lt;/p&gt;

&lt;p&gt;In an effort to deal with ad-hoc deployments and DevOps infrastructure CI/CD management, many devs try to create their own one-off automation tools or integration hubs, usually per team or per project. Examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using Lambda and writing functions for EC2 management tasks&lt;/li&gt;
&lt;li&gt;Running scheduled jobs for EBS cleanup&lt;/li&gt;
&lt;li&gt;Repurposing a CI/CD tool like Jenkins for incident response workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But this “dumping ground” approach is inefficient, expensive, and risky:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inefficient, because work is done as one-offs that live forever, with no reusability or repeatability&lt;/li&gt;
&lt;li&gt;Expensive, because time spent building tools and integrations isn’t directly delivering customer value&lt;/li&gt;
&lt;li&gt;Risky, because sidestepping governance to get stuff done can lead to exposure and failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This webinar presentation from Melissa Sussmann and Kenaz Kwa at Puppet gives viewers a peek into the beta version of Relay, an event-driven workflow automation platform. The presentation goes over what the team has learned in the process of working on Relay, covers the underpinnings of the product, and demonstrates a few example workflows to help you save time and money.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why “No-Code” Tools are a Non-Starter for Developers</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Thu, 16 Jul 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/why-no-code-tools-are-a-non-starter-for-developers-56gd</link>
      <guid>https://dev.to/relay/why-no-code-tools-are-a-non-starter-for-developers-56gd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Fh6PJYe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/cd33f2b131c8001722794f0627e117bb/6050d/no-code-cover-image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Fh6PJYe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/cd33f2b131c8001722794f0627e117bb/6050d/no-code-cover-image.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Will Developers Use “No Code” Tools?
&lt;/h1&gt;

&lt;p&gt;Many attempts to simplify programming lead to visual interfaces that provide approachable settings for common tasks. These simplifications may appeal to non-developers, but they send experienced coders running for the command line. Yet no-code tools are rapidly expanding. Zapier, Integromat, and Workato are becoming more popular options, and some believe developers won’t be needed for most integrations in the future. However, it seems unlikely that coders will adopt these tools in their current forms, as they are not looking to fully automate away their creative autonomy. This raises the question: which parts of currently available no-code tools are useful for the future developer? Furthermore, how can developers influence the production of these integrations so that they’re compatible with the true coder’s workflow?&lt;/p&gt;

&lt;p&gt;Developers may be able to strike a balance between automating appropriate responses while still leaving space for them to impart creative nuance upon the end product. Certain patterns can be brought into a typical developer’s workflow which can streamline the way apps are built, deployed, and tuned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developers Don’t Want Drag-and-Drop Interfaces
&lt;/h2&gt;

&lt;p&gt;Slick user interfaces with plug-and-play features do wonders to bring new lay users into the fold. However, they aren’t necessarily good for app developers. Modern coders want to be able to control their code down to the line, while still automating away repetitive maintenance processes. When drag-and-drop is a requirement, developers lose both control and efficiency. As a result, the creative development process is interrupted and you’re unlikely to end up with the most innovative applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/6b44ec308ef311c6d91cb203c97589ca/9490d/scratch.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jTibcYxU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/6b44ec308ef311c6d91cb203c97589ca/9490d/scratch.png" alt="Scratch application" title="Scratch application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools like Scratch (and similar, more professionally-oriented, tools) are great for learning and simple projects. They are typically not granular enough in their controls to create meaningful business applications. And they usually slow down professional developers.&lt;/p&gt;

&lt;p&gt;Developers thrive on the command line. They want to be efficient and automate away the stuff that keeps them from moving quickly. Part of what makes that possible is they can “see inside” and tweak things at a low level. Developers are likely to always want to design the components of their systems themselves, rather than dragging them out of a panel in a UI.&lt;/p&gt;

&lt;p&gt;However, this is not to say there is nothing to learn from these no-code tools. For example, repeatable workflows are useful if a dev can plug in their real code. This permits developers to write the individual components of a stack themselves, with assistance from a system that automates the redundant parts.&lt;/p&gt;

&lt;p&gt;Since much of the coding process is devoted to editing and retesting, there are plenty of pieces that can be streamlined. This becomes especially clear when you look at everything involved with backend maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developers Don’t Want to Babysit Servers
&lt;/h2&gt;

&lt;p&gt;No-code integrations aren’t all bad. They can be extremely useful for developers when properly employed. They are a boon to company efficiency in two key cases: when they can save labor hours and when they can save server capacity. Developers of varying skill levels would surely benefit from automating much of their backend maintenance. Periodic functions could happen in the background without taking up valuable time coders could spend building.&lt;/p&gt;

&lt;p&gt;Some unnecessary developer time stealers include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost optimizing servers&lt;/li&gt;
&lt;li&gt;Wiring up continuous integration&lt;/li&gt;
&lt;li&gt;Connecting tools around incident response&lt;/li&gt;
&lt;li&gt;Auditing cloud security permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cost optimization is the elephant in the room for cloud computing. How companies acquire, build, utilize, and adapt their server space can hugely impact the efficiency of their spend. Currently, companies spend a boatload on labor hours for developers to monitor their systems.&lt;/p&gt;

&lt;p&gt;It’s important to keep in mind that there are two kinds of jobs for developers, and indeed anyone selling a product: &lt;em&gt;those that make your product more unique&lt;/em&gt; and &lt;em&gt;routine functions which must be kept up in order to be a responsible administrator&lt;/em&gt;. Clearly, developers should focus as much of their energy and resources as possible on advancing the company’s core value proposition. The less time developers spend getting distracted with mundane maintenance tasks, the better. This brings us to our next efficiency factor, minimizing interruptions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developers Do Want to Automate Their Interruptions
&lt;/h2&gt;

&lt;p&gt;A tap on the shoulder. Expanding Slack notifications. Unnecessary pages for non-incidents. These are some of the things that keep developers from doing their best work. These can’t all be avoided, but automation can help limit them.&lt;/p&gt;

&lt;p&gt;For example, a developer might get a notification when their cloud development servers have been running idle for too long. Those messages are well-meaning, but they are a distraction from focused work. The actions taken during these times may only take minutes, but then it takes time to get back into a productive flow. Perhaps most maddeningly, the actions needed here are always likely to follow the same considerations. It’s a perfect opportunity for automation.&lt;/p&gt;
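&lt;p&gt;As a sketch of what automating that interruption could look like, the snippet below decides which idle dev servers to stop instead of paging a developer. The two-hour threshold and the &lt;code&gt;stop_server&lt;/code&gt; callback are our own illustrative choices, not part of any particular product; in practice the callback would be a cloud API call made from a workflow step.&lt;/p&gt;

```python
# Illustrative sketch: stop idle dev servers automatically instead of
# notifying a developer. Threshold and callback are hypothetical choices.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(hours=2)  # hypothetical idle policy

def should_stop(last_activity, now=None):
    """True when a server has been idle at least as long as the threshold."""
    now = now or datetime.utcnow()
    return now - last_activity >= IDLE_THRESHOLD

def reap_idle(servers, stop_server, now=None):
    """Stop idle servers via the provided callback; return their names."""
    stopped = []
    for name, last_activity in sorted(servers.items()):
        if should_stop(last_activity, now):
            stop_server(name)  # e.g. a cloud API call in a workflow step
            stopped.append(name)
    return stopped
```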

&lt;p&gt;Or consider code review, an important part of working on a dev team. The reviewer should not need to manually create a staging server with running code. Nor should they need to worry about shutting it down when they’re finished. Between continuous integration tools, code repositories, and a way to describe the ideal flow, you can limit the time a developer is interrupted.&lt;/p&gt;

&lt;p&gt;The rise of “no code” gives development teams an opportunity to look where they’re wasting developer time. The answer is not to give drag-and-drop interfaces to developers. Let them use the tools they know best and connect those pieces with real code.&lt;/p&gt;

&lt;p&gt;Using a combination of event-based triggers and automated protocols, &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt; listens to signals from your existing DevOps tools and then triggers workflows to orchestrate actions on downstream services. Developers can get a taste of no code without giving up their code. Be efficient with the things that can be automated and let your team get back to building out your organization’s core value.&lt;/p&gt;

&lt;p&gt;Get started today with our &lt;a href="https://relay.sh/"&gt;single platform for all your cloud automation use cases&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Relay Public Beta</title>
      <dc:creator>Deepak Giridharagopal</dc:creator>
      <pubDate>Wed, 24 Jun 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/relay-public-beta-3d9c</link>
      <guid>https://dev.to/relay/relay-public-beta-3d9c</guid>
      <description>&lt;p&gt;Today we announce &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt;, an event-driven automation platform. &lt;a href="https://app.relay.sh/signup"&gt;Sign up now&lt;/a&gt; and try it out! Relay connects infrastructure and operations platforms, APIs, and tools together into a cohesive, easy-to-automate whole. Relay is simple enough for you to start automating common, &lt;em&gt;if-this-then-that&lt;/em&gt; (IFTTT) style DevOps tasks in minutes and powerful enough to model multi-step, branching, parallelized DevOps processes when the need arises.&lt;/p&gt;

&lt;p&gt;Why bother? Because for all the progress we’ve made as builders and operators, &lt;a href="https://landscape.cncf.io/"&gt;things are more complicated than ever&lt;/a&gt;. Modern applications comprise a growing variety of runtimes, clouds, infrastructure platforms, 3rd party services, and APIs. Mounting sophistication (&lt;a href="https://www.youtube.com/watch?v=dtI5dMpBmQo"&gt;and complexity&lt;/a&gt;) of how applications are constructed complicates how we operate and manage them. As a result, accomplishing many basic operational tasks can involve touching many different components, with different APIs, different semantics, from different upstreams, vendors and dev teams. Connecting all of these components together is tough, and automating anything across them all can range from tedious to nightmare fuel.&lt;/p&gt;

&lt;p&gt;At layers above the plumbing, &lt;a href="https://relay.sh/blog/rise-of-the-apis/"&gt;managing infrastructure stops looking like classic configuration management and starts looking like orchestrating workflows&lt;/a&gt;. However, workflows can be tricky. Connectivity, secrets handling, event listening, ordering, parallelism, error handling, and control flow all conspire to make writing workflows from scratch pretty gnarly. The complexity adds up fast. We can do better!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Automated workflows are the bedrock of all software organizations.”&lt;br&gt;&lt;br&gt;
— Jason Warner, CTO @ GitHub&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Relay lets you represent any DevOps workflow as code, composed of triggers that listen for incoming events, and steps that define the task you’re automating. Relay does not limit what you can talk to. A single workflow could listen for alerts from PagerDuty, query metrics from DataDog, reconfigure infrastructure with Terraform, and send a notification via Slack. It’s easy to leverage pre-existing triggers, steps, and workflows, and it’s simple to make your own if the need arises.&lt;/p&gt;

&lt;p&gt;As a hosted service, Relay supervises things on your behalf. It will automatically trigger your workflow based on incoming events, execute your workflow’s steps in parallel, notify you if you need to intervene, and keep meticulous records of everything done. Relay does this all automatically, so you don’t have to.&lt;/p&gt;

&lt;p&gt;Today, we’re proud to announce &lt;a href="https://relay.sh/"&gt;beta availability&lt;/a&gt; for Relay. Read on to see how it works!&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflows
&lt;/h2&gt;

&lt;p&gt;Relay’s core method of automation is &lt;a href="https://relay.sh/docs/using-workflows/"&gt;&lt;em&gt;the workflow&lt;/em&gt;&lt;/a&gt;. Workflows combine useful activities together to accomplish a particular task:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When we detect an unused Azure Disk, delete it &lt;em&gt;(so we can save money)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;When they go unused, nuke any AWS authentication keypairs &lt;em&gt;(so we can reduce our attack surface)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;When a PagerDuty alert fires with a certain severity, create tickets in Jira and a room in Slack &lt;em&gt;(so we can more quickly troubleshoot issues)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Relay lets you succinctly express these types of workflows, and beyond, &lt;a href="https://relay.sh/docs/reference/relay-workflows/"&gt;as code&lt;/a&gt;. And like code, workflows can be versioned, reviewed, refactored, and reused. We’re Puppet; &lt;a href="https://www.google.com/search?hl=en&amp;amp;q=puppet%20infrastructure%20as%20code"&gt;we wouldn’t have it any other way&lt;/a&gt;.&lt;/p&gt;
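&lt;p&gt;As a rough sketch, the first workflow above could be expressed along these lines (the step and image names here are illustrative, not our published ones; the workflow reference linked above has the exact syntax):&lt;/p&gt;

```yaml
description: Delete unused Azure Disks so we can save money

steps:
  - name: find-unused-disks
    image: example/azure-disk-lister     # illustrative image name
  - name: delete-disks
    image: example/azure-disk-deleter    # illustrative image name
    dependsOn: find-unused-disks
```

&lt;p&gt;Because it’s just code, a change like adding a notification step at the end is a normal pull request.&lt;/p&gt;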

&lt;p&gt;Running your first workflow is easy, and should only take you about a minute. As tradition demands, here’s “Hello, world” (&lt;a href="https://app.relay.sh/login"&gt;log in&lt;/a&gt; and follow along!):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TLqKnZNq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/debec11953b60317076234251dc8c4f0/hello-world.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TLqKnZNq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/debec11953b60317076234251dc8c4f0/hello-world.gif" alt="Hello, world!"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because workflows are code, you can treat them like code. Modifying a workflow is straightforward. Let’s change the workflow, adding a step to emit the current date:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gPQXTQx5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/2105fe13ad95afadea7d0e9dc45e3976/cli.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gPQXTQx5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/2105fe13ad95afadea7d0e9dc45e3976/cli.gif" alt="Change the workflow using the CLI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That covered &lt;a href="https://relay.sh/docs/getting-started/#install-the-cli"&gt;getting the CLI installed&lt;/a&gt;, authenticating against the service, downloading your workflow, modifying the logic, and then letting Relay know the code is updated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Triggers and steps
&lt;/h2&gt;

&lt;p&gt;Workflows contain &lt;em&gt;triggers&lt;/em&gt; and &lt;em&gt;steps&lt;/em&gt;: Triggers determine when Relay should execute your workflow: manually, on a schedule, or when pinged by an external source. Steps represent the set of actions and activities necessary to make your workflow accomplish its goals. Steps are just &lt;a href="https://www.docker.com/resources/what-container"&gt;containers&lt;/a&gt;, so you’re pretty unconstrained when it comes to what a step can do. Both triggers and steps &lt;a href="https://relay.sh/docs/integrating-with-relay/"&gt;are easy to create, remix, and share&lt;/a&gt;. With these building blocks, Relay is capable of modeling a huge variety of workflows, and executing them on your behalf. There are a bunch already written, and it’s straightforward to &lt;a href="https://relay.sh/docs/getting-started/"&gt;make your own&lt;/a&gt;.&lt;/p&gt;
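&lt;p&gt;To make that concrete, the trigger portion of a workflow might look roughly like this (the names and the trigger image are illustrative; see the workflow reference for the authoritative syntax):&lt;/p&gt;

```yaml
triggers:
  - name: nightly
    source:
      type: schedule
      schedule: '0 2 * * *'    # cron-style: run every night at 02:00
  - name: incident-fired
    source:
      type: webhook
      image: example/incident-webhook-trigger   # illustrative container image
```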

&lt;p&gt;Here’s a more interesting example workflow that &lt;a href="https://relay.sh/workflows/ec2-reaper/"&gt;cleans up some unneeded EC2 instances&lt;/a&gt;. It has more steps, including some that consume AWS credentials, and one which represents a &lt;em&gt;manual approval&lt;/em&gt; gate:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--22fWE56c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/18f91058f6a39345a357bc6d72855ed8/ec2-reaper.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--22fWE56c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://relay.sh/18f91058f6a39345a357bc6d72855ed8/ec2-reaper.gif" alt="Cleaning up some EC2 instances"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s easy to add, remove, or replace triggers and steps to suit your liking. Some modifications for the preceding example could include adding a webhook-based trigger for the workflow, adding a notification step at the end, or better integrating it into your GitOps setup. Perhaps a step that takes the list of terminated instances, computes the money you just saved, then buys an equivalent amount of stuff from your Amazon wish list? Ops is hard work - treat yourself!&lt;/p&gt;

&lt;p&gt;Our examples thus far have shown short, linear sequences of steps, but you can also express some pretty elaborate processes just as easily. Here’s a picture of the execution graph of one of the workflows we use to manage Relay itself:&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/ed787c9f007e7856c5846c98c4c7f6fa/065ce/relay-graph.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R983QG2s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/ed787c9f007e7856c5846c98c4c7f6fa/64756/relay-graph.png" alt="A more complex workflow graph" title="A more complex workflow graph"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Relay service
&lt;/h2&gt;

&lt;p&gt;Listening for events and running workflows might appear conceptually simple, but there are a lot of practical details that need to be worked out. Relay’s execution environment (and &lt;a href="https://github.com/puppetlabs/relay-core"&gt;underlying engine&lt;/a&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manages &lt;a href="https://relay.sh/docs/using-workflows/adding-connections/"&gt;connections&lt;/a&gt; to upstream/downstream APIs and services, making them securely available to the workflows that need them&lt;/li&gt;
&lt;li&gt;Automatically creates &lt;a href="https://relay.sh/docs/reference/relay-workflows/#push"&gt;&lt;em&gt;push triggers&lt;/em&gt;&lt;/a&gt; for your workflows, complete with workflow-specific security tokens, so you can easily kick it off from all kinds of other tools&lt;/li&gt;
&lt;li&gt;Automatically constructs an environment for running webhooks, so your workflows can respond to events from webhook-only services&lt;/li&gt;
&lt;li&gt;Sandboxes workflow and step execution, for fault isolation&lt;/li&gt;
&lt;li&gt;Manages your workflows with all the necessary &lt;em&gt;ops accoutrements&lt;/em&gt; (e.g. monitoring, logging, error handling)&lt;/li&gt;
&lt;li&gt;Supervises the execution of your workflows, invoking steps in the right order (with automatic parallelization)&lt;/li&gt;
&lt;li&gt;Standardizes the interfaces between all these pieces so steps, triggers, and connections are reusable and remixable across workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Relay takes care of this stuff so you don’t have to. Instead, you can focus on the logic of your workflow, the core of what you’re trying to automate.&lt;/p&gt;

&lt;p&gt;After all, isn’t that the point?&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation for everyone
&lt;/h2&gt;

&lt;p&gt;How many unique applications are running out there across the planet (&lt;a href="https://twitter.com/lkanies/status/1182350689529298944"&gt;or above it&lt;/a&gt;)? Thousands? Millions? &lt;a href="https://www.merriam-webster.com/dictionary/bajillion"&gt;Bajillions&lt;/a&gt;? How many of them are running on identical infrastructure stacks, built with identical technology stacks, managed in identical ways at an identical scale? There’s a truly staggering variety of approaches and constraints.&lt;/p&gt;

&lt;p&gt;If there’s no &lt;em&gt;One True Stack&lt;/em&gt;, then there’s no &lt;em&gt;One True Way To Manage It&lt;/em&gt;. The tools you employ should thrive in this sort of world because that’s the world we’ve got.&lt;/p&gt;

&lt;p&gt;Relay’s core value lies in letting you tie a &lt;a href="https://relay.sh/integrations/"&gt;wide variety of services, APIs, and platforms&lt;/a&gt; together. It’s constructed in a deliberately pluggable way. Users can readily extend the system to talk to new technologies, respond to new kinds of events, and take action in new ways…no CS degree required. Those extensions should be easy to share, so the entire user community can benefit. The ecosystems around the tools we use are every bit as important as the tools themselves.&lt;/p&gt;

&lt;p&gt;Even though it’s early days, Relay can already do quite a lot. The future holds many possibilities: new workflows, more integrations with more tools and platforms, higher-level workflow syntax, a more streamlined authoring experience, simplified input/output from steps, and more. Early users have already given us a ton of great suggestions, and we’d love to hear yours!&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;The next step (and best step) is to &lt;a href="https://relay.sh/"&gt;try it out&lt;/a&gt;! And if you’d like to learn more about Relay, you can check out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://relay.sh/blog/relay-and-open-source/"&gt;How to get involved&lt;/a&gt;, extend Relay to better meet your needs, and become part of Relay community&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://relay.sh/docs/"&gt;The documentation&lt;/a&gt; does a great job of introducing Relay, its usage, core concepts, and extension points&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://relay.sh/workflows/"&gt;Peruse some workflows&lt;/a&gt; to see what they can do. The code and its graphical execution plan are available on every workflow’s page.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppetcommunity.slack.com/archives/CMKBMAW2K"&gt;Slack&lt;/a&gt; - come linger in the #relay channel! The more the merrier&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks, and let a thousand workflows bloom!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>User-defined Webhooks in Puppet Relay with Knative and Ambassador API Gateway</title>
      <dc:creator>Noah Fontes</dc:creator>
      <pubDate>Tue, 23 Jun 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/user-defined-webhooks-in-puppet-relay-with-knative-and-ambassador-api-gateway-2m7m</link>
      <guid>https://dev.to/relay/user-defined-webhooks-in-puppet-relay-with-knative-and-ambassador-api-gateway-2m7m</guid>
      <description>&lt;h2&gt;
  
  
  User-defined Webhooks in Puppet Relay with Knative and Ambassador API Gateway
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This article is a technical deep dive originally published on the &lt;a href="https://blog.getambassador.io/"&gt;Ambassador blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Puppet Relay?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://relay.sh"&gt;Relay&lt;/a&gt; is an event-driven automation platform designed to make wrangling diverse operational environments easy. Relay executes workflows, which consist of multiple related steps, to perform actions like opening Jira tickets, merging pull requests, or even deploying an application to a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Relay is built on containerization and leverages &lt;a href="https://tekton.dev"&gt;Tekton&lt;/a&gt; to execute workflows. Each step in a Relay workflow runs an &lt;a href="https://opencontainers.org"&gt;OCI&lt;/a&gt;-compatible container image. Unlike conventional workflow automation tools, this gives you the ability to make your own completely custom steps; you’re not restricted to our curated steps or even to a particular programming language.&lt;/p&gt;

&lt;p&gt;When we set out to implement triggers to automatically run workflows from event data, we wanted to make sure you’d have the same flexibility to receive external data that you have within a workflow. We decided to provide three initial options: schedule triggers, which run your workflows on a specified interval; push triggers, which allow your services to directly send data to Relay; and webhook triggers, which integrate with the many external services that can push event data to arbitrary HTTP endpoints.&lt;/p&gt;

&lt;p&gt;Webhook triggers presented the biggest technical challenge to implement, as every service provides slightly different payloads representing their events. Keeping with our container-based approach, we let you define webhook triggers by running your own web server in a container you provide to us. In this post, we’ll walk through how we implemented our webhook trigger handling using &lt;a href="https://knative.dev/docs/serving/"&gt;Knative Serving&lt;/a&gt; and the &lt;a href="https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/"&gt;Ambassador API Gateway&lt;/a&gt;.&lt;/p&gt;
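&lt;p&gt;In essence, a webhook trigger container is just a small web server that accepts a service’s POSTed event payload and maps it to workflow data. The sketch below shows the general shape using only the Python standard library; the payload fields and the 202 hand-off are illustrative rather than Relay’s actual trigger contract.&lt;/p&gt;

```python
# Generic sketch of a webhook trigger handler: a tiny HTTP server that
# accepts POSTed event JSON and maps it to flat workflow parameters.
# Field names and the response are illustrative, not Relay's API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_parameters(event):
    """Map a service-specific payload to workflow parameters."""
    incident = event.get("incident", {})
    return {
        "incidentId": incident.get("id"),
        "severity": incident.get("urgency", "unknown"),
    }

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        params = extract_parameters(event)
        # A real trigger would hand params to the Relay runtime here.
        self.send_response(202)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(params).encode())

# To try it locally:
#   HTTPServer(("", 8080), WebhookHandler).serve_forever()
```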

&lt;p&gt;You can see &lt;a href="https://github.com/relay-integrations/relay-pagerduty/blob/master/triggers/pagerduty-trigger-incident-triggered/handler.py"&gt;an example of a complete webhook trigger implementation&lt;/a&gt; in our PagerDuty integration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Knative Serving
&lt;/h3&gt;

&lt;p&gt;We chose Knative Serving as a reverse proxy for webhook triggers. We could have configured a Kubernetes deployment for each webhook trigger, but we felt that keeping every customer’s pods running all the time would put too high a resource burden on our cluster.&lt;/p&gt;

&lt;p&gt;Knative Serving’s unique model uses an activator and autoscaler to dynamically provision pods when it receives requests. A rough state machine for a Knative service looks like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When a service is created or updated, a revision is created initially in an inactive state. This corresponds to a Kubernetes deployment scaled to zero pods. In this inactive state, HTTP requests are routed to the Knative activator.&lt;/li&gt;
&lt;li&gt;When an HTTP request is received for the service, the Knative activator queues the request and switches the current revision to an active state. Knative then scales the deployment up and switches the service to route to the deployment’s pods. Once the service is pointing at the deployment, the queued HTTP request is dispatched.&lt;/li&gt;
&lt;li&gt;The Knative autoscaler monitors inbound request rate and scales the deployment as needed.&lt;/li&gt;
&lt;li&gt;If the service doesn’t receive any HTTP requests after a configured timeout, the deployment is scaled back to zero, the revision is marked as inactive, and the service is switched back to the activator.&lt;/li&gt;
&lt;li&gt;Repeat from the beginning!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For a cached container image, the whole activation process generally takes less than 2 seconds, quickly enough for our webhook handling use case. And for webhook triggers that only receive events every few minutes or less frequently, it saves us considerable cluster resources.&lt;/p&gt;

&lt;p&gt;Installing Knative Serving is straightforward. You need their &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;CRDs&lt;/a&gt; and core components like the activator. Here we’ll use version 0.13.0, but check their &lt;a href="https://knative.dev/docs/install/any-kubernetes-cluster/"&gt;installation instructions&lt;/a&gt; for the latest version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.13.0/serving-crds.yaml
$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.13.0/serving-core.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you need to pick a networking layer, or gateway, to route requests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing a Knative Serving Gateway
&lt;/h3&gt;

&lt;p&gt;Like most Knative Serving users, we started by evaluating &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt;, a popular service mesh and ingress gateway offering for Kubernetes. However, Istio’s focus on connecting microservices didn’t really support our use case:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We didn’t need the service mesh component of Istio at all, only its Envoy gateway. A complete Istio installation on our cluster consumed a lot of resources, most of which were ultimately going to waste.&lt;/li&gt;
&lt;li&gt;Because we’re configuring webhook triggers dynamically from our database, we put our own reverse proxy in front of Knative Serving. It is difficult (although not impossible) to change the behavior of Istio to run as an internal-facing service instead of a public gateway.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most other networking layer options for Knative Serving were positioned more strongly as ingress gateways, focusing mainly on exposing services directly to the internet.&lt;/p&gt;

&lt;p&gt;Ultimately we settled on Ambassador because its lightweight single-container model made deploying it for our internal use case easy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Ambassador and Configuring Knative to Use it
&lt;/h3&gt;

&lt;p&gt;For Relay, we use a custom Helm chart to set up the Ambassador API Gateway. We have a single deployment, an optional &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/"&gt;horizontal pod autoscaler&lt;/a&gt;, a service account with corresponding role bindings, and finally a ClusterIP service to use as the target networking layer for Knative.&lt;/p&gt;

&lt;p&gt;Our deployment YAML is largely the same as the one from the &lt;a href="https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/#1-deploying-the-ambassador-api-gateway"&gt;Ambassador installation instructions&lt;/a&gt; in &lt;code&gt;ambassador-rbac.yaml&lt;/code&gt;. However, we must explicitly enable Knative support:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ...
template:
  spec:
    containers:
    - image: datawire/ambassador:1.5.2
      env:
      - name: AMBASSADOR_KNATIVE_SUPPORT
        value: 'true'
      # ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Likewise, our service account and role bindings are similar to those in &lt;code&gt;ambassador-rbac.yaml&lt;/code&gt;, but use &lt;code&gt;{{ .Release.Namespace }}&lt;/code&gt; instead of &lt;code&gt;default&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For the gateway service, note especially:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;app.kubernetes.io/component&lt;/code&gt; label must be present and set exactly to the value &lt;code&gt;ambassador-service&lt;/code&gt; or Ambassador won’t pick it up to set the Knative service target correctly.&lt;/li&gt;
&lt;li&gt;We use &lt;code&gt;type: ClusterIP&lt;/code&gt; to make the service cluster-local. This instance of Ambassador won’t be reachable from the internet.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- $name := include "ambassador.name" . -}}
{{- $fullname := include "ambassador.fullname" . -}}
{{- $namespace := .Release.Namespace -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ $fullname }}
  namespace: {{ $namespace }}
  labels:
    app.kubernetes.io/name: {{ $name }}
    # Per the Ambassador source code, this must be specified explicitly exactly
    # like this.
    app.kubernetes.io/component: ambassador-service
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: "{{ .Values.image.tag }}"
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "ambassador.chart" . }}
spec:
  type: ClusterIP
  ports:
   - port: 80
     targetPort: 8080
  selector:
    app.kubernetes.io/name: {{ $name }}
    app.kubernetes.io/instance: {{ .Release.Name }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a reference, you can find our &lt;a href="https://github.com/puppetlabs/relay-helm-ambassador-knative"&gt;entire Ambassador Helm chart on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, we need to configure Knative to use cluster-local routing and Ambassador as its default gateway. Simply apply this manifest to your cluster using &lt;code&gt;kubectl apply&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  svc.cluster.local: ''
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  ingress.class: ambassador.ingress.networking.knative.dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
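&lt;p&gt;To sanity-check that the configuration took effect, you can read the value back with &lt;code&gt;kubectl&lt;/code&gt; (note the escaped dot in the ConfigMap key):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get configmap config-network -n knative-serving \
    -o jsonpath='{.data.ingress\.class}'
ambassador.ingress.networking.knative.dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;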



&lt;h3&gt;
  
  
  Deploying, Testing, and Managing Internal-Only Knative Services
&lt;/h3&gt;

&lt;p&gt;Now you can create a cluster-local Knative service. Use &lt;code&gt;kubectl apply&lt;/code&gt; on a manifest like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-test-service
  namespace: default
  labels:
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        name: helloworld-go

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within a few seconds, Ambassador will process the service and set up a mapping for it. Assuming you installed Ambassador into the &lt;code&gt;ambassador&lt;/code&gt; namespace, you’ll see this entry when you run &lt;code&gt;kubectl get svc&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-test-service ExternalName &amp;lt;none&amp;gt; ambassador.ambassador.svc.cluster.local &amp;lt;none&amp;gt; 112s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don’t see the service pointed at your Ambassador deployment, inspect the Knative service and ingress using &lt;code&gt;kubectl describe ksvc my-test-service&lt;/code&gt; and &lt;code&gt;kubectl describe king my-test-service&lt;/code&gt; (&lt;code&gt;king&lt;/code&gt; is the short name for Knative’s internal ingress resource). The status conditions and events should provide useful hints for remediating any problems.&lt;/p&gt;

&lt;p&gt;Now we can try sending a request to the service by running a one-off pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl run \
    --generator=run-pod/v1 internal-requester \
    --image=alpine --rm -ti --attach --restart=Never \
    -- wget -q -O - http://my-test-service.default.svc.cluster.local
Hello World!
pod "internal-requester" deleted

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yay! Your service works. You’ll also see a pod running to handle the request you just made. If you don’t make any more requests, that pod will be automatically terminated within a few minutes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME READY STATUS RESTARTS AGE
my-test-service-cw9v2-deployment-59c889f74b-74rwk 2/2 Running 0 8s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can view all the mappings Ambassador has configured for your Knative services by forwarding the admin endpoint of your deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward -n ambassador deployment/ambassador 8877
Forwarding from 127.0.0.1:8877 -&amp;gt; 8877
Forwarding from [::1]:8877 -&amp;gt; 8877

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then navigate to &lt;a href="http://localhost:8877/ambassador/v0/diag/"&gt;http://localhost:8877/ambassador/v0/diag/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For Relay, we manage our Knative services by writing out a higher-level CRD that &lt;a href="https://github.com/puppetlabs/relay-core"&gt;our operator processes&lt;/a&gt;. This lets us perform lifecycle management operations more efficiently. For example, we create and clean up webhook triggers and workflow runs in batches we call tenants. We get a ton of value from the combination of Tekton, Knative, Ambassador, and our own operator, with relatively little cluster resource overhead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Scenarios for Ambassador and Knative
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://blog.getambassador.io/self-serverless-why-run-knative-functions-on-your-kubernetes-cluster-4914c706a083"&gt;developer use cases for Knative&lt;/a&gt; fall into three categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Replace glue/aggregation functions with Knative&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Function as a Service (FaaS) offerings have become popular as a way to deploy and run services that “glue” functionality together. The main challenge for development teams is that the workflow for deploying cloud-based FaaS is different from the one for Kubernetes. If you’ve already invested in training engineers to work with Kubernetes, the extra time and money spent retraining them on a separate FaaS workflow is hard to justify.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Build smaller microservices as functions&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some functions are simple and event-driven, and provisioning (or running) an entire microservice/app framework for them seems unnecessary. Knative provides “just enough” framework to deploy and manage the lifecycle of a very simple microservice or “nanoservice” using the primitives provided by modern language stacks.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Deploy high-volume functions, cost-effectively&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Pay-as-you-go serverless offerings can be very cost-effective for certain use cases, but for longer-running or high-volume functions, pay-as-you-go isn’t as practical. Running Knative on your own hardware, or on Kubernetes deployed to cloud VMs, makes execution costs much more predictable when you know a service will handle high-volume traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Running on-demand microservices in your infrastructure is easier now than ever before. With the help of Knative and Ambassador, you can drastically reduce costs and resource utilization while maintaining clean separations of APIs across your environment.&lt;/p&gt;

&lt;p&gt;At the frontier of cloud-native experience, Knative also unlocks some very exciting opportunities that haven’t been practical in conventional deployment environments. In this post, we explored low-trust user-defined workloads using custom containers as one example, but there are many more!&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install the &lt;a href="https://www.getambassador.io/docs/latest/tutorials/getting-started/"&gt;Ambassador Edge Stack&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://knative.dev/docs/install/any-kubernetes-cluster/"&gt;Knative with Ambassador&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Read the &lt;a href="https://www.getambassador.io/docs/latest/howtos/knative/#using-knative-and-ambassador"&gt;documentation for using Knative and Ambassador&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.getambassador.io/contact/"&gt;Contact the Ambassador team&lt;/a&gt; to learn more about using Ambassador with Knative.&lt;/li&gt;
&lt;li&gt;Sign up for &lt;a href="https://relay.sh"&gt;Relay&lt;/a&gt; to try webhook triggers yourself.&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
  </channel>
</rss>
