<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adam DuVander</title>
    <description>The latest articles on DEV Community by Adam DuVander (@adamd).</description>
    <link>https://dev.to/adamd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F351974%2Fc770d4f4-ca5b-4b31-a212-9d41da83caeb.jpeg</url>
      <title>DEV Community: Adam DuVander</title>
      <link>https://dev.to/adamd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adamd"/>
    <language>en</language>
    <item>
      <title>The Event-Driven Web is Not the Future</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Wed, 19 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/the-event-driven-web-is-not-the-future-5f1l</link>
      <guid>https://dev.to/relay/the-event-driven-web-is-not-the-future-5f1l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YRINiD9q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/83c5851ae381a2fe5bcdbc750b93dcb8/6050d/the-event-driven-web-is-not-the-future.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YRINiD9q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/83c5851ae381a2fe5bcdbc750b93dcb8/6050d/the-event-driven-web-is-not-the-future.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you see a notification on your smartphone, your brain processes it quickly and determines how to react. It’s an efficient process, and your nervous system is built for this use case. By contrast, most Internet-connected systems are built on less event-driven architectures. If there’s a change in one service, you won’t know about it until you check. It’s the equivalent of reloading an app to see if there’s something new: it works eventually, but it’s not efficient.&lt;/p&gt;

&lt;p&gt;You might expect that the event-driven web should be the future. If systems knew about updates immediately, they could seamlessly make changes in reaction to the new information. New servers could be provisioned, unneeded resources could be turned off, and your microwave clock could always be accurate (ok, that might be asking too much).&lt;/p&gt;

&lt;p&gt;The truth is: real-time patterns have been around for years. The evented web is not the future because the present is fully capable of delivering what it offers. Yet most developers aren’t taking advantage of event-driven development. While the pieces are there, not every service supports events. Perhaps most importantly, there are few tools that make events easy to consume, because development is stuck in a client-server mentality.&lt;/p&gt;

&lt;p&gt;As developers, it’s time to embrace this entirely un-new, but useful, approach to building Internet-connected systems.&lt;/p&gt;

&lt;h2&gt;What Real-time Patterns Can Accomplish&lt;/h2&gt;

&lt;p&gt;Whenever a change occurs in one system or new data is available in another, all of that context should be shared with systems that have declared an interest. In the same way that we expect smartphone notifications, developers can design for events. However, rather than causing more distractions for us to triage, they can save us time. Real-time patterns provide &lt;a href="https://relay.sh/blog/building-the-future-of-devops-automation/"&gt;immediate updates without the manual button-pushing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Consider the many tools required to build, deploy, monitor, and improve applications today. AWS alone has over 200 services, and that’s just one cloud provider. Once you factor in other cloud services and the ecosystem around all of it, you’ll be working with more tools than you can easily count, each with its own handful of API-driven knobs.&lt;/p&gt;

&lt;p&gt;When those knobs are turned via thoughtful automation, you start to see what’s possible. You can streamline your deploy processes toward the promise of continuous delivery. You can trigger workflows from pull request activity or system monitoring, and scale cloud resources up and down in response to incidents.&lt;/p&gt;

&lt;p&gt;Too often, companies take event-based operations only halfway. More instrumentation without automation is not the goal: your team could very well spend all of its time dousing cloud-borne fires, with each alert becoming a new task on a never-ending list. Even though we have the technology to seize the real-time opportunity, the momentum of how we’ve done things for decades holds us back.&lt;/p&gt;

&lt;h2&gt;We Are Stuck in a Client-Server Mentality&lt;/h2&gt;

&lt;p&gt;Since the early days of the Web, tools have operated on a simple model: a browser requests a resource and the server responds in kind. Front-end advances have given us interfaces that emulate real-time, but behind the scenes, these technologies often look a lot like the client-server model. It’s from that mindset that many of our tools and development processes are created.&lt;/p&gt;

&lt;p&gt;If you’ve said “try reloading it” in recent memory, you recognize the issue. Servers respond to requests; they don’t announce changes. To move into the real-time present, servers must also &lt;em&gt;send&lt;/em&gt; events, which means a client must be able to &lt;em&gt;receive&lt;/em&gt; events.&lt;/p&gt;

&lt;p&gt;There are current solutions to implement real-time patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Polling&lt;/strong&gt;, where you repeatedly check for new data at a set interval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Webhooks&lt;/strong&gt;, where you subscribe to receive updates as they become available&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSockets&lt;/strong&gt;, a standardized two-way protocol for persistent connections between client and server&lt;/li&gt;
&lt;/ul&gt;
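&lt;p&gt;To make the pull-versus-push contrast concrete, here is a minimal Python sketch. It is illustrative only: &lt;code&gt;fetch_state&lt;/code&gt; stands in for a real API call, and the &lt;code&gt;EventBus&lt;/code&gt; plays the role a webhook or WebSocket subscription would fill.&lt;/p&gt;

```python
import time

def poll(fetch_state, interval_s, max_checks):
    """Pull model: the client repeatedly asks whether anything changed."""
    last = fetch_state()
    for _ in range(max_checks):
        time.sleep(interval_s)  # a change can sit unnoticed until the next check
        current = fetch_state()
        if current != last:
            return current
    return None

class EventBus:
    """Push model: interested systems subscribe once and hear about changes immediately."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        for callback in self.subscribers:
            callback(event)  # no waiting for the next poll cycle
```

&lt;p&gt;With polling, a change waits up to a full interval before anyone notices; with the push model, every subscriber reacts the moment &lt;code&gt;publish&lt;/code&gt; runs.&lt;/p&gt;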

&lt;p&gt;Each requires changes in how you architect your applications. You need to organize how you receive events, store the data, react to their contents, and chain the results to other services. Despite being an API-driven process, it’s unlikely to fit the model of your existing API integrations, which reside in the client-server mentality.&lt;/p&gt;

&lt;p&gt;To break into the real-time paradigm requires tooling that supports the shift in thinking, without putting the additional architectural burden on your team.&lt;/p&gt;

&lt;h2&gt;The Best Teams Will Use Real-time Tools&lt;/h2&gt;

&lt;p&gt;The evented web, which allows for real-time patterns, is very much available now. You can bring its efficiency to your team if you organize the right tools. It is unlikely you’ll want to build the infrastructure yourself unless you have unique needs or a team of engineers waiting for their next project.&lt;/p&gt;

&lt;p&gt;Some important features to look for when implementing real-time patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support for webhooks and polling&lt;/li&gt;
&lt;li&gt;Integrations with the DevOps tools you already use&lt;/li&gt;
&lt;li&gt;Audit trails of each run of the workflow&lt;/li&gt;
&lt;li&gt;API secret management support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we didn’t find anything to meet those needs, we built &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt;. You can create automation with support for a growing number of DevOps and business tools. Write workflows in a familiar YAML syntax and run them in our secure environment. &lt;a href="https://relay.sh/"&gt;Try Relay for free now&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Provision Cloud Infrastructure</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Wed, 12 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/how-to-provision-cloud-infrastructure-b6i</link>
      <guid>https://dev.to/relay/how-to-provision-cloud-infrastructure-b6i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0P5oI6Jn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/94d91478fc80ef998897557913794e8d/af370/how-to-provision-cloud-infrastructure.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0P5oI6Jn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/94d91478fc80ef998897557913794e8d/af370/how-to-provision-cloud-infrastructure.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Provisioning Cloud Infrastructure&lt;/h2&gt;

&lt;p&gt;One of the best things about cloud computing is how it converts technical efficiencies into cost savings. Some of those efficiencies are just part of the tool kit, like pay-per-use Lambda jobs. Good DevOps brings a lot of savings to the cloud as well, and it can smooth out high-friction state management challenges. Sprucing up how you provision cloud services, for example, speeds up deployments. That’s where treating infrastructure the same as the rest of your codebase comes in.&lt;/p&gt;

&lt;p&gt;Treating infrastructure as code opens the door to tons of optimization opportunities. One standout approach is standardization, which can simplify operational challenges. When you deploy from a configuration document, you decrease risk and speed up development. You can also employ those configuration files in automated DevOps workflows. In this post, we’ll give some examples of how you can leverage these benefits, using Terraform to deploy cloud resources and Bolt to configure them.&lt;/p&gt;

&lt;h2&gt;Deploy From Documentation&lt;/h2&gt;

&lt;p&gt;Terraform is great for building and destroying temporary resources. It can simplify an ad-hoc data processing workflow, for example. Let’s say you’re doing on-demand data processing in AWS. You need to spin up an EMR cluster, transform your data, and destroy the cluster immediately. This transient cluster workflow pattern saves you a ton. But manually deploying the cluster for each job slows down development time. With Terraform, you can write that cluster’s specifications once and check it into git to ensure you deploy the same version each time.&lt;/p&gt;

&lt;p&gt;Terraform configurations are incredibly easy to write and read. They can also be easily modularized for reuse. Rather than plugging all of the configurations into one file, templatize the resource and the value for each argument from a &lt;code&gt;tfvars&lt;/code&gt; file, which acts as a config.&lt;/p&gt;

&lt;p&gt;Here is a truncated example of a templatized EMR resource that you might put in your &lt;code&gt;main&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_emr_cluster" "cluster" {
  # required args:
  name          = var.name
  release_label = var.release_label
  applications  = var.applications
  service_role  = var.service_role

  master_instance_group {
    instance_type = var.master_instance_type
  }

  core_instance_group {
    instance_type  = var.core_instance_type
    instance_count = var.core_instance_count
  }
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;var&lt;/code&gt;s are declared in a &lt;code&gt;variables.tf&lt;/code&gt; file and assigned their values in a &lt;code&gt;terraform.tfvars&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform.tfvars&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name = "spark-app"
release_label = "emr-5.30.0"
applications = ["Hadoop", "Spark"]
service_role = "EMR_DefaultRole"
master_instance_type = "m3.xlarge"
core_instance_type = "m3.xlarge"
core_instance_count = 1

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "name" {}
variable "release_label" {}
variable "applications" {
  type = list(string)
}
variable "service_role" {}
variable "master_instance_type" {}
variable "core_instance_type" {}
variable "core_instance_count" {}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice how easy it is to modify an instance type. The values are all well-documented and centrally managed in the code. No one has to dig through a wiki or a previous version of the application; just check it out of git and refer to a single, deployable config. Note that this is an incomplete list of arguments. For a full list of optional and required arguments, see Terraform’s &lt;a href="https://www.terraform.io/docs/providers/aws/r/emr_cluster.html"&gt;&lt;code&gt;aws_emr_cluster&lt;/code&gt; documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Furthermore, by storing your Terraform repo in git, you can leverage event-driven automation workflows, such as redeploying the resource on merges into your master branch.&lt;/p&gt;

&lt;h2&gt;Automate Config Management&lt;/h2&gt;

&lt;p&gt;Now let’s look at how to conveniently update persistent infrastructure such as a fleet of always-on EC2 instances. Applying new provisioning actions to each one can be time-consuming. Bolt by Puppet helps you manage multiple remote resources at once. You can use it to perform scheduled uptime monitoring or you can run one-off patching tasks. In either case, Bolt tools can be captured within your projects and maintained in git. That allows you to apply the benefits of infrastructure as code to your configuration and maintenance programs.&lt;/p&gt;

&lt;p&gt;Bolt actions are either tasks or plans. Tasks are on-demand actions. Plans are orchestration scripts. Let’s start with a simple task. Suppose your development team needs a Docker engine installed on a suite of EC2 instances. It would look like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bolt task run package action=install name=docker --targets my-ec2-fleet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The installation will be applied to all of the resources declared as targets in the project’s &lt;code&gt;inventory&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Plans are declarative workflows written in YAML that run one or more tasks. That makes them easy to read and modify. A simple plan to provision newly deployed web servers with nginx would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parameters:
  targets:
    type: TargetSpec

steps:
  - resources:
    - package: nginx
      parameters:
        ensure: latest
    - type: service
      title: nginx
      parameters:
        ensure: running
    targets: $targets
    description: "Set up nginx on the web servers"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice that &lt;code&gt;targets&lt;/code&gt; is parameterized. That allows you to dynamically apply a list of resources when the plan is executed. You can leverage that further by integrating Bolt with other DevOps workflows.&lt;/p&gt;

&lt;h2&gt;Consolidate Into a Workflow&lt;/h2&gt;

&lt;p&gt;Now we’ve covered provisioning with both Terraform and Bolt. Both are great tools that help you standardize infrastructure and configuration processes as code. You can even string them together in a modular, event-driven workflow that is easy to reuse and modify. Relay, a workflow automation tool from Puppet, provides integrations with Terraform, Bolt, and AWS. For example, you can use successful Terraform deployments as triggers that pass AWS resource IDs to Bolt for further configuration.&lt;/p&gt;

&lt;p&gt;Check out other &lt;a href="https://relay.sh/integrations"&gt;integrations&lt;/a&gt; and see how &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt; can streamline your cloud provisioning workflow.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why “No-Code” Tools are a Non-Starter for Developers</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Thu, 16 Jul 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/why-no-code-tools-are-a-non-starter-for-developers-56gd</link>
      <guid>https://dev.to/relay/why-no-code-tools-are-a-non-starter-for-developers-56gd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Fh6PJYe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/cd33f2b131c8001722794f0627e117bb/6050d/no-code-cover-image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Fh6PJYe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/cd33f2b131c8001722794f0627e117bb/6050d/no-code-cover-image.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Will Developers Use “No Code” Tools?&lt;/h1&gt;

&lt;p&gt;Many attempts to simplify programming lead to visual interfaces that provide approachable settings for common tasks. These simplifications may appeal to non-developers, but they send experienced coders running for the command line. Yet no-code tools are rapidly expanding. Zapier, Integromat, and Workato are becoming more popular options, and some believe developers won’t be needed for most integrations in the future. However, it seems unlikely that coders will adopt these tools in their current forms, as they are not looking to automate away their creative autonomy. This raises the question: which parts of currently available no-code tools are useful for the future developer? Furthermore, how can developers influence the production of these integrations so that they’re compatible with a coder’s workflow?&lt;/p&gt;

&lt;p&gt;Developers may be able to strike a balance between automating appropriate responses while still leaving space for them to impart creative nuance upon the end product. Certain patterns can be brought into a typical developer’s workflow which can streamline the way apps are built, deployed, and tuned.&lt;/p&gt;

&lt;h2&gt;Developers Don’t Want Drag-and-Drop Interfaces&lt;/h2&gt;

&lt;p&gt;Slick user interfaces with plug-and-play features do wonders to bring new lay users into the fold. However, these aren’t necessarily good for app developers. Modern coders want to be able to control their code down to the line, while still automating away repetitive maintenance processes. When drag-and-drop is a requirement, developers can’t have as much control or efficiency. As a result, the creative development process is interrupted, and you’re unlikely to end up with the most innovative applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/static/6b44ec308ef311c6d91cb203c97589ca/9490d/scratch.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jTibcYxU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/6b44ec308ef311c6d91cb203c97589ca/9490d/scratch.png" alt="Scratch application" title="Scratch application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools like Scratch (and similar, more professionally-oriented, tools) are great for learning and simple projects. They are typically not granular enough in their controls to create meaningful business applications. And they usually slow down professional developers.&lt;/p&gt;

&lt;p&gt;Developers thrive on the command line. They want to be efficient and automate away the stuff that keeps them from moving quickly. Part of what makes that possible is they can “see inside” and tweak things at a low level. Developers are likely to always want to design the components of their systems themselves, rather than dragging them out of a panel in a UI.&lt;/p&gt;

&lt;p&gt;However, this is not to say there is nothing to learn from these no-code tools. For example, repeatable workflows are useful if a dev can plug in their real code. This permits developers to write individual components of a stack, but with assistance from a system that automates the redundant parts.&lt;/p&gt;

&lt;p&gt;Since much of the coding process is devoted to editing and retesting, there are plenty of pieces that can be streamlined. This becomes especially clear when you look at everything involved with backend maintenance.&lt;/p&gt;

&lt;h2&gt;Developers Don’t Want to Babysit Servers&lt;/h2&gt;

&lt;p&gt;No-code integrations aren’t all bad. They can be extremely useful for developers when properly employed. They are a boon to company efficiency in two key cases: when they can save labor hours and when they can save server capacity. Developers of varying skill levels would surely benefit from automating much of their backend maintenance. Periodic functions could happen in the background without taking up valuable time coders could spend building.&lt;/p&gt;

&lt;p&gt;Some unnecessary developer time stealers include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost optimizing servers&lt;/li&gt;
&lt;li&gt;Wiring up continuous integration&lt;/li&gt;
&lt;li&gt;Connecting tools around incident response&lt;/li&gt;
&lt;li&gt;Auditing cloud security permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cost optimization is the elephant in the room for cloud computing. How companies acquire, build, utilize, and adapt their server space can hugely impact the efficiency of their spend. Currently, companies spend a boatload on labor hours for developers to monitor their systems.&lt;/p&gt;

&lt;p&gt;It’s important to keep in mind that there are two kinds of jobs for developers, and indeed anyone selling a product: &lt;em&gt;those that make your product more unique&lt;/em&gt; and &lt;em&gt;routine functions which must be kept up in order to be a responsible administrator&lt;/em&gt;. Clearly, developers should focus as much of their energy and resources as possible on advancing the company’s core value proposition. The less time developers spend getting distracted by mundane maintenance tasks, the better. This brings us to our next efficiency factor: minimizing interruptions.&lt;/p&gt;

&lt;h2&gt;Developers Do Want to Automate Their Interruptions&lt;/h2&gt;

&lt;p&gt;A tap on the shoulder. Expanding Slack notifications. Unnecessary pages for non-incidents. These are some of the things that keep developers from doing their best work. These can’t all be avoided, but automation can help limit them.&lt;/p&gt;

&lt;p&gt;For example, a developer might get a notification when their cloud development servers have been running idle for too long. Those messages are well-meaning, but they are a distraction from focused work. The actions they require may only take minutes, but then it takes time to get back into a productive flow. Perhaps most maddeningly, those actions almost always follow the same steps. It’s a perfect opportunity for automation.&lt;/p&gt;

&lt;p&gt;Or consider code review, an important part of working on a dev team. The reviewer should not need to manually create a staging server with running code. Nor should they need to worry about shutting it down when they’re finished. Between continuous integration tools, code repositories, and a way to describe the ideal flow, you can limit the time a developer is interrupted.&lt;/p&gt;

&lt;p&gt;The rise of “no code” gives development teams an opportunity to look where they’re wasting developer time. The answer is not to give drag-and-drop interfaces to developers. Let them use the tools they know best and connect those pieces with real code.&lt;/p&gt;

&lt;p&gt;Using a combination of event-based triggers and automated protocols, &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt; listens to signals from your existing DevOps tools and then triggers workflows to orchestrate actions on downstream services. Developers can get a taste of no code without giving up their code. Be efficient with the things that can be automated, and let your team get back to building out your organization’s core value.&lt;/p&gt;

&lt;p&gt;Get started today with our &lt;a href="https://relay.sh/"&gt;single platform for all your cloud automation use cases&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>API Design Patterns for REST Web Services</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Mon, 08 Jun 2020 20:57:12 +0000</pubDate>
      <link>https://dev.to/stoplight/api-design-patterns-for-rest-web-services-43i0</link>
      <guid>https://dev.to/stoplight/api-design-patterns-for-rest-web-services-43i0</guid>
      <description>&lt;p&gt;REST turns 20 years old this year. In addition to the architecture and recommendations outlined in Roy Fielding’s dissertation, we now have two decades of practical application. When designing APIs, it makes sense to build upon the best practices already implemented by countless others.&lt;/p&gt;

&lt;p&gt;This post identifies the most common REST API design patterns across several categories. Rather than start anew, build upon this foundation of API guidelines from thousands of successful API companies.&lt;/p&gt;

&lt;h2&gt;HTTP Methods and Status Codes&lt;/h2&gt;

&lt;p&gt;By the strict definition of REST, you don’t need to use the HTTP protocol. However, the two developed alongside each other, and almost every RESTful API relies upon HTTP. For that reason, it makes sense to structure your API around the built-in methods and status codes that are already well-defined in HTTP.&lt;/p&gt;

&lt;p&gt;Each HTTP request includes a method (sometimes called an “HTTP verb”) that provides a lot of context for the call. Here’s a look at the most common HTTP methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GET&lt;/strong&gt;: read data from your API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST&lt;/strong&gt;: add &lt;em&gt;new data&lt;/em&gt; to your API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PUT&lt;/strong&gt;: update &lt;em&gt;existing data&lt;/em&gt; with your API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PATCH&lt;/strong&gt;: update a &lt;em&gt;subset of existing data&lt;/em&gt; with your API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DELETE&lt;/strong&gt;: remove data (usually a single resource) from your API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you design your API, you’ll want to rely on the methods to express the primary purpose of a call. For that reason, you don’t want to use a POST to simply retrieve data. Nor would you want a GET to create or remove data.&lt;/p&gt;
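&lt;p&gt;As a sketch of what that looks like in practice, here is a hypothetical in-memory &lt;code&gt;/recipes/&lt;/code&gt; resource (the names and storage are illustrative, not a real framework) where each method expresses the intent of the call:&lt;/p&gt;

```python
recipes = {}
next_id = [1]  # boxed counter so dispatch can update it

def dispatch(method, recipe_id=None, body=None):
    """Route an HTTP-style call to the matching CRUD action on /recipes/."""
    if method == "GET" and recipe_id is None:
        return 200, list(recipes.values())   # read the collection
    if method == "GET":
        if recipe_id in recipes:
            return 200, recipes[recipe_id]   # read one resource
        return 404, None
    if method == "POST":
        new_id = next_id[0]
        next_id[0] += 1
        recipes[new_id] = dict(body, id=new_id)
        return 201, recipes[new_id]          # created
    if method in ("PUT", "PATCH") and recipe_id in recipes:
        recipes[recipe_id].update(body)      # update existing data
        return 204, None                     # no content returned
    if method == "DELETE" and recipe_id in recipes:
        del recipes[recipe_id]
        return 204, None
    return 404, None
```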

&lt;p&gt;Much as these methods provide the request context from client to server, HTTP status codes help describe the response in the reverse direction.&lt;/p&gt;

&lt;p&gt;Some common HTTP status codes include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;200&lt;/strong&gt;: Successful request, often a GET&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;201&lt;/strong&gt;: Successful request after a create, usually a POST&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;204&lt;/strong&gt;: Successful request with no content returned, usually a PUT or PATCH&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;301&lt;/strong&gt;: Permanently redirect to another endpoint&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;400&lt;/strong&gt;: Bad request (client should modify the request)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;401&lt;/strong&gt;: Unauthorized, credentials not recognized
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;403&lt;/strong&gt;: Forbidden, credentials accepted but don’t have permission&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;404&lt;/strong&gt;: Not found, the resource does not exist&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;410&lt;/strong&gt;: Gone, the resource previously existed but does not now&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;429&lt;/strong&gt;: Too many requests, used for rate limiting and should include retry headers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;500&lt;/strong&gt;: Server error, a generic code; a more specific 500-level error is usually more helpful&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;503&lt;/strong&gt;: Service unavailable, another where retry headers are useful&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many more HTTP status codes and methods to consider, but the above lists should get you well on your way for most APIs.&lt;/p&gt;
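&lt;p&gt;One practical payoff of using the standard codes: clients can decide how to react without parsing error messages. Here is a rough sketch of a retry check (the exact policy is a judgment call, not part of the HTTP spec):&lt;/p&gt;

```python
def should_retry(status_code):
    """Decide whether an HTTP response is worth retrying later."""
    # 429 (rate limited) and 503 (unavailable) explicitly invite a retry,
    # ideally after honoring any Retry-After header the server sent.
    if status_code in (429, 503):
        return True
    # Other 500-level errors are server-side and often transient.
    return status_code // 100 == 5

def is_client_error(status_code):
    """400-level codes generally mean the client should change the request
    before trying again (429 being the retryable exception)."""
    return status_code // 100 == 4
```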

&lt;h2&gt;Use Friendly Endpoint Names&lt;/h2&gt;

&lt;p&gt;A typical design pattern with REST APIs is to build your endpoints around resources. These are the “nouns” to HTTP method verbs. Your API design will be much easier to understand if these names are descriptive.&lt;/p&gt;

&lt;p&gt;For example, if you’re working on a cookbook API, you might include the following endpoint:&lt;br&gt;
&lt;code&gt;/recipes/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As you add new recipes, you would POST them to the endpoint. To get a list, you use the GET method on the same endpoint. To retrieve a specific recipe, you could call it by its identifier in the URL:&lt;br&gt;
&lt;code&gt;/recipes/42&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;One thing to specifically avoid with friendly REST endpoint names is describing actions. For example, a verb within the endpoint (e.g., &lt;code&gt;/getRecipes/&lt;/code&gt;) would run counter to relying on HTTP to provide that context.&lt;/p&gt;

&lt;p&gt;Our guide to &lt;a href="https://stoplight.io/blog/crud-api-design/"&gt;CRUD API Design Recommendations&lt;/a&gt; goes into more detail, including popular topics like plurals and versioning.&lt;/p&gt;

&lt;h2&gt;Support Use Cases with API Parameters&lt;/h2&gt;

&lt;p&gt;Naive or simplistic API design can follow all the guidelines above and still not support the use cases that developers will need. It’s important to thoroughly understand how an API will be used and get feedback from collaborators, such as with &lt;a href="https://stoplight.io/mocking"&gt;mock API servers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Often, when use cases are discovered after an API is built, engineers will create new endpoints to support these unearthed requirements. For example, your cookbook API may need to return only recipes from a specific category, or you want to show the recipes with the least prep time. Rather than create redundant endpoints, plan for smart parameters from the start.&lt;/p&gt;

&lt;p&gt;There are three common types of parameters to consider for your API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Filtering&lt;/strong&gt;: Return only results that match a filter by using field names as parameters. For example: &lt;code&gt;/recipes/?category=Cookies&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pagination&lt;/strong&gt;: Don’t overload clients and servers by providing everything. Instead, set a limit and provide &lt;code&gt;prev&lt;/code&gt; and &lt;code&gt;next&lt;/code&gt; links in your response. Example: &lt;code&gt;/recipes/?limit=100&amp;amp;page=3&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sorting&lt;/strong&gt;: Provide a way to sort or some use cases will still require paging through all results to find what’s needed. Example: &lt;code&gt;/recipes/?sort=prep_time&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These three approaches can be used together to support very specific queries. For example, this API request would retrieve one cookie recipe with the shortest preparation time: &lt;code&gt;/recipes/?category=Cookies&amp;amp;sort=prep_time&amp;amp;limit=1&lt;/code&gt;&lt;/p&gt;
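&lt;p&gt;Query strings like the one above are easy to assemble with standard URL encoding. A small sketch (the helper name is illustrative):&lt;/p&gt;

```python
from urllib.parse import urlencode

def recipes_url(base="/recipes/", **params):
    """Combine filtering, sorting, and pagination params into one request URL."""
    query = urlencode(params)  # handles escaping of names and values
    if query:
        return base + "?" + query
    return base
```

&lt;p&gt;Calling it with the category, sort, and limit values shown above yields exactly that combined query, and unused parameters simply drop out.&lt;/p&gt;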

&lt;p&gt;In some cases, you’ll need additional parameters or a special syntax to fully support API consumer expectations. You will likely want to provide a sort direction (e.g., &lt;code&gt;order=desc&lt;/code&gt; or &lt;code&gt;sort=prep_time:asc&lt;/code&gt;), and may have times when you want to filter or sort by multiple fields. Understanding your use cases will help determine the complexity of your parameters.&lt;/p&gt;

&lt;h2&gt;Borrow From Existing Conventions&lt;/h2&gt;

&lt;p&gt;While this post does its best to cover overall API design patterns, you’ll want to look at standards and conventions specific to your industry or a specific feature. Very few of us are building completely unique APIs, so there is a lot to learn from others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stoplight.io/blog/rest-api-standards-do-they-even-exist/"&gt;Many API standards&lt;/a&gt; are built around REST APIs. When you implement authentication for your API, for example, don’t blaze a new trail. There are many options, including the well-trod OAuth path, when providing user-associated data. You’ll find standards for API headers and a handful around data formats like JSON and XML, among others.&lt;/p&gt;

&lt;p&gt;You may be &lt;a href="https://stoplight.io/blog/designing-apis-for-microservices/"&gt;designing microservices APIs&lt;/a&gt;, which have their own set of considerations. Everything covered in this post likely still applies, but you’ll want to pay extra close attention when designing microservices. Each service’s API will need to make sense on its own, yet remain loosely coupled enough that the services work well in combination.&lt;/p&gt;

&lt;p&gt;On the other hand, &lt;a href="https://stoplight.io/blog/open-banking-guide/"&gt;open banking APIs&lt;/a&gt; require their own treatment. European standards are the most mature and have a set of design patterns based around those regulations.&lt;/p&gt;

&lt;p&gt;Your industry may have its own set of standards or conventions. Even if they aren’t as strict as banking regulations, it’s worth giving proper consideration to a pattern with which developers will already be familiar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Document with An OpenAPI Definition
&lt;/h2&gt;

&lt;p&gt;As you design your API, it will be extremely useful to maintain an OpenAPI definition as the source of truth. This format, the successor to the older Swagger file, describes endpoints, request data, responses, error codes, and more. In addition, it can be used with tooling to automate tasks across the API lifecycle.&lt;/p&gt;
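&lt;p&gt;For a sense of what the format looks like, here is a minimal, hypothetical OpenAPI fragment describing the cookbook API’s &lt;code&gt;/recipes&lt;/code&gt; endpoint and its smart parameters; the paths and fields are illustrative, not a complete definition.&lt;/p&gt;

```yaml
# Minimal OpenAPI 3.0 sketch for the cookbook API used in this post.
openapi: 3.0.3
info:
  title: Cookbook API
  version: 1.0.0
paths:
  /recipes:
    get:
      summary: List recipes
      parameters:
        - name: category
          in: query
          description: Filter results to one category
          schema:
            type: string
        - name: sort
          in: query
          description: Field to sort by, e.g. prep_time
          schema:
            type: string
        - name: limit
          in: query
          description: Maximum number of results per page
          schema:
            type: integer
      responses:
        '200':
          description: A list of recipes
        '400':
          description: Invalid parameter
```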

&lt;p&gt;Perhaps the most common use of an OpenAPI document is to &lt;a href="https://stoplight.io/documentation/"&gt;generate API documentation&lt;/a&gt;, especially an API reference. Since the format outlines the ways an API can be called, it contains all the information a developer needs to integrate with the API. Because it captures details like error codes that hand-written references often omit, OpenAPI also encourages accurate documentation. Further, you can generate new docs every time your API changes, so they’ll always be up-to-date.&lt;/p&gt;

&lt;p&gt;You can also use your OpenAPI definition to &lt;a href="https://stoplight.io/mocking/"&gt;create mock HTTP servers&lt;/a&gt;, which allows you to try out your API before you write any code. Circulate the interface amongst your team for early feedback, or validate the requests from your API client.&lt;/p&gt;

&lt;p&gt;Those are just two potential uses for your machine-readable API definition. You can write OpenAPI definition files by hand in YAML or JSON, or create them much faster with a visual OpenAPI editor. &lt;a href="https://stoplight.io/studio/"&gt;Stoplight Studio&lt;/a&gt; can read existing OpenAPI files from any git repo, and you can make edits—or start from scratch—within a beautiful editing environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Style Guides for Consistency
&lt;/h2&gt;

&lt;p&gt;Some design patterns are a matter of preference. Ideally, you can codify your organization’s approach once, rather than revisiting it each time you create an API. A style guide can keep your company on the same page with API design. In addition to being consistent between APIs, it’s even more important to maintain consistency within a single API.&lt;/p&gt;

&lt;p&gt;Some organizations will create a written API style guide. A document that is easily accessible within your intranet helps everyone understand the design patterns you’ve already adopted. However, you can go even further by enforcing your style guide programmatically. Using a tool like an &lt;a href="https://stoplight.io/open-source/spectral"&gt;open source linter&lt;/a&gt;, you can define rulesets for your OpenAPI documents.&lt;/p&gt;

&lt;p&gt;When you &lt;a href="https://stoplight.io/blog/api-style-guide/"&gt;automate your API style guide&lt;/a&gt;, you can look for any number of API characteristics: resource and field names, capitalization formats, how you use punctuation, and versioning, among others.&lt;/p&gt;
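&lt;p&gt;With Spectral, those rules live in a ruleset file checked into your repo alongside your OpenAPI documents. Here’s a small sketch; the custom rule name and description are illustrative.&lt;/p&gt;

```yaml
# Example .spectral.yaml ruleset: inherit the built-in OpenAPI rules,
# then add a house rule of our own.
extends: spectral:oas
rules:
  operation-summary-defined:
    description: Every operation should have a summary for the docs.
    given: $.paths[*][*]
    severity: warn
    then:
      field: summary
      function: truthy
```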

&lt;p&gt;Your style guide, whether written or programmatic, becomes your own guidelines for the design patterns covered here. Help ensure your organization uses HTTP methods correctly, returns appropriate status codes, implements friendly endpoint names, uses smart parameters, and borrows from the existing conventions you’ve already identified.&lt;/p&gt;

&lt;p&gt;Now you’re ready to create fantastic APIs, so join the world’s leading API-first companies on &lt;a href="https://stoplight.io/"&gt;Stoplight’s API design management platform&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>api</category>
      <category>apidesign</category>
      <category>rest</category>
    </item>
    <item>
      <title>Devops Automation Examples in Practice</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Thu, 14 May 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/relay/devops-automation-examples-in-practice-4ocf</link>
      <guid>https://dev.to/relay/devops-automation-examples-in-practice-4ocf</guid>
      <description>&lt;p&gt;&lt;a href="///static/45712450e7ba7d28e1dd8b143232dbd0/6a068/student-laptop.jpg"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ornD5Tnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://relay.sh/static/45712450e7ba7d28e1dd8b143232dbd0/4b190/student-laptop.jpg" alt="Student typing on a laptop at a desk" title="Student typing on a laptop at a desk "&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://relay.sh/blog/devops-automation-examples-in-practice/"&gt;Original post&lt;/a&gt; By: Adam Duvander&lt;/p&gt;

&lt;p&gt;Cloud native engineering is a flexible discipline. However, the many DevOps tool options and integration methods can be confusing, and more complexity often means more difficulty automating tasks. DevOps programs should evolve toward process improvement: improvements in code delivery will drive greater transparency in project communication, wiser adoption of shared resources, and more room for quality assurance.&lt;/p&gt;

&lt;p&gt;In this post, we’ll look at three examples of DevOps automation. They use Jira, Jenkins, and Docker—but the concepts of event-triggered workflows will translate to whatever tools you use.&lt;/p&gt;
&lt;h2&gt;
  
  
  Build Resources (and Transparency) From Work Tickets
&lt;/h2&gt;

&lt;p&gt;Transparency is key to open communication within DevOps teams. Imagine a world where any project manager can see build information in a business-friendly context. Taking some time to synchronize your Jenkins pipelines with Jira, for example, would give a PM real-time insight into EC2 deployment jobs and prevent hail storms of status pulse checks. The following steps will promote Jira from a 2D digital cork board into a mission control interface, giving orders and taking receipts. You could use a similar approach with your own project management and build tools.&lt;/p&gt;

&lt;p&gt;To set up the automation, prepare your Jira project to trade information with Jenkins through a webhook. Let’s say your project is an app migration and you need to repeatedly provision a suite of AWS services for testing. Instead of manually uploading CloudFormation scripts, add custom fields in the Issue configuration console that will tell Jenkins where to find those scripts and what parameters to provision them with. Then create a Build Status field that will be controlled by updates from Jenkins. Verify that those new fields now appear in your tickets. Next, under the Advanced configuration menu, create a connection with Jenkins by plugging its server URL into the Webhooks configuration form. Select “Created” and “Updated” as event triggers.&lt;/p&gt;

&lt;p&gt;Now it’s Jenkins’ turn. We’re going to set it up to receive build parameters and pass build statuses. We are assuming the pipelines are already built and we simply need to get them working with the Jira webhook. To do so, install the following plugins and configure them to connect to your Jira account, and more specifically, the project prefix for the app migration board:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jira&lt;/li&gt;
&lt;li&gt;Jira Pipeline Steps&lt;/li&gt;
&lt;li&gt;Jira Trigger Plugin&lt;/li&gt;
&lt;li&gt;jira-ext Plugin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These will allow Jenkins to listen for the appropriate updates from Jira, as well as expose functionality to communicate with the triggering ticket from the build.&lt;/p&gt;

&lt;p&gt;With both applications now locked into each other, update your pipeline to handle the Jira payload and hand it off to the build. Write a JQL filter in the pipeline configuration so it can only be triggered by “Create” and “Update” webhook events. Next, map the CloudFormation template location and the other parameters as Custom Fields, and make sure the Jira ticket key is also captured as an Issue Attribute Path field. The CloudFormation template and parameters are now available as environment variables to guide that build. The ticket key is also available to the new functions exposed by the Jira plugins, which can post status updates and log info directly into the ticket for all stakeholders to see.&lt;/p&gt;
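&lt;p&gt;As a sketch of the reporting side, a declarative pipeline might post status back to the triggering ticket like this. It assumes the Jira Trigger Plugin exposes the ticket key as &lt;code&gt;JIRA_ISSUE_KEY&lt;/code&gt;, that a Jira site named &lt;code&gt;MyJira&lt;/code&gt; is configured for the Jira Pipeline Steps plugin, and that &lt;code&gt;CF_TEMPLATE_URL&lt;/code&gt; is one of the custom fields mapped above—adjust these names to your own setup.&lt;/p&gt;

```groovy
// Hypothetical Jenkinsfile: provision from the ticket's custom field,
// then comment the result back onto the triggering Jira ticket.
pipeline {
    agent any
    stages {
        stage('Provision') {
            steps {
                // CF_TEMPLATE_URL arrives as a mapped custom field
                echo "Provisioning from ${env.CF_TEMPLATE_URL}"
            }
        }
    }
    post {
        success {
            jiraAddComment idOrKey: env.JIRA_ISSUE_KEY,
                           comment: 'Build succeeded', site: 'MyJira'
        }
        failure {
            jiraAddComment idOrKey: env.JIRA_ISSUE_KEY,
                           comment: 'Build failed', site: 'MyJira'
        }
    }
}
```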

&lt;p&gt;An automation like this can help you do less repetitive work. It does take some effort to set up and maintain the connections between your tools. To simplify tedious workflows and find example designs, be sure to check out the &lt;a href="https://relay.sh/"&gt;event-driven automation tool&lt;/a&gt; that Puppet is building.&lt;/p&gt;
&lt;h2&gt;
  
  
  Streamline Your Docker Builds with Dynamic Variables
&lt;/h2&gt;

&lt;p&gt;Docker, like CloudFormation, is a powerful tool for rebuilding apps and infrastructure on demand. Here we’re going to extend the power of parameterization into Docker-based pipeline builds. Ideally, you could transport a Docker container as-is into any environment. In practice, however, there will be differences. Maybe you have different database endpoints for development, testing, and production. If your application’s integration tests rely on hitting the right endpoint per environment, we can use build-specific parameters to keep your Docker containers environment-agnostic.&lt;/p&gt;

&lt;p&gt;Here’s how you can route parameters for a test environment build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    // ...
    agent {
        dockerfile {
            filename "my-dockerfile"
            label "my-docker-label"
            args "-v DB_URL= ${env.DB_URL}" // set to 'test_db.example.com'
        }
    }
    // ...
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let’s assume we’re passing a controlled set of parameters from the triggering action into Jenkins, like in the previous example. We’re running a test pipeline, so Jenkins received the database endpoint as &lt;code&gt;DB_URL=test_db.example.com&lt;/code&gt; from the triggering action. We can access this value in our Jenkinsfile from the &lt;code&gt;env&lt;/code&gt; object.&lt;/p&gt;

&lt;p&gt;Now we must funnel the &lt;code&gt;DB_URL&lt;/code&gt; value into the Docker build. We begin by declaring a build agent; to make the most of your team’s Docker work, declare your custom Dockerfile as that agent. Next, pass the environment variable as an interpolated string in the args: &lt;code&gt;args "-e DB_URL=${env.DB_URL}"&lt;/code&gt;. (Docker’s &lt;code&gt;-e&lt;/code&gt; flag sets an environment variable in the container, whereas &lt;code&gt;-v&lt;/code&gt; mounts a volume.) Because the variable lands directly in the container’s runtime environment, that URL is accessible to your containerized test scripts without modifying the Dockerfile at all.&lt;/p&gt;
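&lt;p&gt;Inside the container, the test scripts can then read the endpoint straight from the environment. A minimal sketch in Python—the fallback value is illustrative for running the tests outside the pipeline:&lt;/p&gt;

```python
import os

# DB_URL is injected by the pipeline via docker run's -e flag;
# fall back to a local default when running outside the pipeline.
db_url = os.environ.get("DB_URL", "localhost:27017")
print(f"Connecting integration tests to {db_url}")
```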

&lt;p&gt;Of course, these are the sorts of common cloud wiring that &lt;a href="https://relay.sh/"&gt;Relay&lt;/a&gt; is built to support. Anything that happens frequently based on events is a good match, which is why testing is another area ripe for automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make Testing Routine Inside of your Containerized Build
&lt;/h2&gt;

&lt;p&gt;Testing is a non-negotiable pillar of any software development program, but it can become a bottleneck to deployment. DevOps teams often mitigate the logjam with a less-than-comprehensive testing regime. For example, a combination of functional and unit testing on updates to an API may suffice to verify CRUD functionality against a locally spun-up data source. However, you can replicate all of the API’s dependencies and run an extensible suite of tests within a one-off docker-compose job with little overhead.&lt;/p&gt;

&lt;p&gt;For this example, let’s imagine you are building a small Node app that reads and writes to MongoDB. It is easy to represent the stack in a docker-compose file: a YAML document that tells Docker which services to build and run together. In this case, we’re going to build a Node.js image from the Dockerfile and declare two services in our docker-compose.yml: the API service and a MongoDB instance. Save the docker-compose file, along with the Node.js Dockerfile it builds, into an &lt;code&gt;integration-test&lt;/code&gt; subdirectory in the project repo.&lt;/p&gt;
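&lt;p&gt;A docker-compose file for this stack might look like the following sketch; the service names, ports, environment variable, and image tag are assumptions for illustration.&lt;/p&gt;

```yaml
# Illustrative integration-test/docker-compose.yml for a Node API
# plus a MongoDB instance.
version: "3.8"
services:
  api:
    build: .            # builds the Node.js Dockerfile in this directory
    ports:
      - "3000:3000"
    environment:
      # the service name "mongo" doubles as its hostname on the network
      - MONGO_URL=mongodb://mongo:27017/testdb
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4
    ports:
      - "27017:27017"   # exposed locally for sanity checks
```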

&lt;p&gt;You can now perform some sanity checks by running docker-compose in the &lt;code&gt;integration-test&lt;/code&gt; directory, issuing Postman requests against the local API, and checking MongoDB on whichever port you exposed it to. More importantly, we’re just a few short files away from automatically testing the full integrated build. Create an &lt;code&gt;index.js&lt;/code&gt; file with your test scripts. Point your API calls and database operations at the names you gave your services in the docker-compose file and return results within the container. Lastly, create a shell script to manage the docker containers and output the test results. Now you can automatically trigger the tests in other CI/CD pipelines, version control them, and add more services or tests as the business logic evolves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quickly Deploy Event-driven Workflows
&lt;/h2&gt;

&lt;p&gt;As you can see, your workflows are more efficient when you invest a little up-front effort to scale out event-triggered behavior. We’ve seen how dynamically orchestrating Docker builds from Jira can kick off extensible testing programs within Jenkins. A good optimization principle to consider now is iteration: some of these integrations will work for your needs and some will falter. Being able to swap out and remap your services without turning off the pipelines keeps your DevOps workflows flowing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://relay.sh/"&gt;Relay by Puppet&lt;/a&gt; seeks to manage that for you by automating modular event-driven workflows. If one integration stops meeting your needs, you can swap tools or reroute the workflow and automate your DevOps environment.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>API Development Demystified</title>
      <dc:creator>Adam DuVander</dc:creator>
      <pubDate>Mon, 23 Mar 2020 15:04:11 +0000</pubDate>
      <link>https://dev.to/stoplight/api-development-demystified-2nc1</link>
      <guid>https://dev.to/stoplight/api-development-demystified-2nc1</guid>
      <description>&lt;p&gt;It’s clear that APIs are an important part of modern software. Companies frequently develop APIs to share data, functionality, and business processes. However, it’s not always clear where the backend API development ends and work begins on the website, app, or other clients to use the API.&lt;/p&gt;

&lt;p&gt;In fact, the most efficient organizations don’t require such a distinct hand-off. When your APIs are built thoughtfully, teams can work in parallel. In this post, we’ll look at the benefits of a forward-thinking API approach and how it plays out across the full API lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Build APIs?
&lt;/h2&gt;

&lt;p&gt;It’s important to remember the benefits that APIs provide for enterprises. The alternative is large, all-knowing applications, which are slow to develop. In many cases, teams duplicate efforts because there is no way to communicate that a problem has already been solved elsewhere. This lack of internal visibility limits a company’s ability to work with external parties, as well.&lt;/p&gt;

&lt;p&gt;APIs enable collaboration at all levels. A simple example is the interaction between frontend and backend teams. With an interface defined, neither team needs to wait on the other’s work. Rather than sequential efforts, they can work simultaneously. And, in true collaboration, the process can be iterative, sharing what they’ve learned to inform the next version. The same approach works within departments, across business units, and with partner companies.&lt;/p&gt;

&lt;p&gt;You can also stop reinventing the wheel. When an API already exists, teams can build upon others’ work. For example, an API to access a customer’s account details could be useful for a website, a mobile app, and many other consumers. With appropriate permissions controls in place, the same API could even be used in a partner’s application.&lt;/p&gt;

&lt;p&gt;Indeed, empowering partnerships is a major benefit to public APIs. However, internal API development can even enable external collaboration. Once you make internal teams aware of an API, they may see opportunities with partner companies.&lt;/p&gt;

&lt;h2&gt;
  
  
  The API Lifecycle
&lt;/h2&gt;

&lt;p&gt;Depending on the terminology you use, API development may refer to the entire lifecycle of an API or one phase within it. Either way, you’ll want to understand each phase and how they work together. While APIs help you move faster as an organization, that doesn’t mean you should create APIs without a thoughtful process. Plus, even hurried APIs will go through the API lifecycle, possibly with more headaches along the way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstoplight.io%2Fimages%2Fapi-lifecycle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstoplight.io%2Fimages%2Fapi-lifecycle.png" alt="API Lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gartner defines the API development process as three major phases: Design, Build, and Run. Too often, teams will think of API development as only the build phase. It’s a natural misconception because that’s when the bulk of the code is written. However, that skips over the important design phase. There’s a lot more than code that goes into developing an API for the long term.&lt;/p&gt;

&lt;p&gt;As discussed in &lt;a href="https://stoplight.io/blog/api-integration-testing/" rel="noopener noreferrer"&gt;testing across your API lifecycle&lt;/a&gt;, there are three other important phases: Maintain, Support, and Update. While not necessarily sequential, every API will need these additional phases after it is pushed to production.&lt;/p&gt;

&lt;p&gt;As you look to update an API, you end up back where any API development should start: with API design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design-First API Development
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://stoplight.io/api-design-guide/basics/" rel="noopener noreferrer"&gt;API design guide&lt;/a&gt; we discuss the “design-second oxymoron.” It’s during the design phase that important decisions are made about how your API works and what it makes possible. Good API development practices will start with a collaborative design phase.&lt;/p&gt;

&lt;p&gt;When designing an API, you’ll need to keep teams on the same page about the decisions you make. The industry has rallied around the OpenAPI specification as a way to detail REST APIs. Sometimes referred to by the outdated term Swagger, OpenAPI is a document format to describe API endpoints and their related data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstoplight.io%2Fimages%2Fstudio%2Fhero.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstoplight.io%2Fimages%2Fstudio%2Fhero.png" alt="Stoplight Studio"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stoplight.io/studio/" rel="noopener noreferrer"&gt;Stoplight Studio&lt;/a&gt; is a visual API design editor, which helps you quickly produce OpenAPI documents without memorizing syntax or writing any code. By describing an API during the design phase, teams can make important decisions about reusable data models, which HTTP methods to support, and how to handle error conditions.&lt;/p&gt;

&lt;p&gt;OpenAPI is a machine-readable format that can help you in the later phases of your API development, as well. As you build your API, you can &lt;a href="https://stoplight.io/p/docs/gh/stoplightio/prism/docs/guides/01-mocking.md" rel="noopener noreferrer"&gt;generate mock servers&lt;/a&gt; using your OpenAPI document as the source of truth for decisions made during design. These definitions can also create documentation and serve as a central object when discussing updates to your APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stoplight.io/studio/" rel="noopener noreferrer"&gt;Get started with API design&lt;/a&gt; and create your first OpenAPI document or start with an existing repository. You’ll build and run better APIs as a result.&lt;/p&gt;

&lt;p&gt;♻️ This post was originally posted on the &lt;a href="https://stoplight.io/blog/" rel="noopener noreferrer"&gt;Stoplight Blog&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
