<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tatiana Barrios</title>
    <description>The latest articles on DEV Community by Tatiana Barrios (@tiannymonti).</description>
    <link>https://dev.to/tiannymonti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F142764%2F0c6a3502-c2b4-4032-ac12-290b6cdf7117.jpg</url>
      <title>DEV Community: Tatiana Barrios</title>
      <link>https://dev.to/tiannymonti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tiannymonti"/>
    <language>en</language>
    <item>
      <title>Managing secrets with SSM</title>
      <dc:creator>Tatiana Barrios</dc:creator>
      <pubDate>Mon, 03 Aug 2020 03:51:29 +0000</pubDate>
      <link>https://dev.to/tiannymonti/managing-secrets-with-ssm-3i79</link>
      <guid>https://dev.to/tiannymonti/managing-secrets-with-ssm-3i79</guid>
      <description>&lt;p&gt;There is an obvious rule we all know as software engineers: Don't let your sensitive data go into the repository. We have heard about those horror stories where an ill-intentioned developer goes through a repository, gets the credentials for production databases, and erases everything important there. Therefore, the correct management of secrets and sensitive variables is a crucial concern if we want to improve the security of our projects.&lt;/p&gt;

&lt;p&gt;In frontend and backend repositories, this means keeping anything sensitive out of the .env files; we can share a development or playground version of those variables through a secured S3 bucket, for example. But things get more complicated when we are developing infrastructure as code: a simple .env file is no longer enough, so we need to fetch the sensitive data from somewhere else.&lt;/p&gt;

&lt;p&gt;There are several ways to achieve this. HashiCorp Vault and GCP Secret Manager, for instance, are popular choices if you are working on the Google Cloud Platform or are willing to run a dedicated machine for the Vault server. However, if you are on AWS, there is another very simple resource for this purpose: &lt;strong&gt;AWS Systems Manager Parameter Store&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The Parameter Store behaves almost like an API and, as I said, it's very easy to use. It is not the only AWS resource for managing sensitive data, but it's the one I prefer because of its simplicity. There you can store parameters as plain strings (&lt;em&gt;String&lt;/em&gt;) or encrypted strings (&lt;em&gt;SecureString&lt;/em&gt;) and use them in your code. But the strongest feature of the Parameter Store is that it lets you tag variables and store them hierarchically 😎.&lt;/p&gt;

&lt;p&gt;On the pricing side, API interactions cost $0.05 per 10,000 requests once you enable higher throughput (standard throughput is free). There is a limit of 10,000 standard parameters (4 KB per parameter) per region and 100,000 advanced parameters (8 KB per parameter), but the latter have an extra cost of $0.05 per parameter per month. With boto3 or the Serverless Framework, the parameters are only retrieved when we deploy or execute the scripts, so unless you have a large number of parameters, the final bill for this service should be very low. &lt;/p&gt;

&lt;h3&gt;
  
  
  How do we use this?
&lt;/h3&gt;

&lt;p&gt;There are a lot of ways to upload a variable to the SSM parameter store, but my preferred one is just using this command from the AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm put-parameter --cli-input-json '{"Type": "String/SecureString", "Name": "/this/is/hierarchy/NAME_OF_VARIABLE", "Value": "****", "Overwrite": true}' 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To know more about the other options for this command, please visit the &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ssm/put-parameter.html"&gt;AWS CLI documentation&lt;/a&gt;.&lt;/p&gt;
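&lt;p&gt;If you would rather stay in Python, the same upload can be sketched with boto3's &lt;em&gt;put_parameter&lt;/em&gt; call. This is only a sketch: the hierarchy path, the variable name, and the &lt;em&gt;put_secret&lt;/em&gt; helper are placeholders of mine, and the usage assumes AWS credentials are configured locally.&lt;/p&gt;

```python
def parameter_name(*parts):
    """Build a hierarchical SSM parameter name, e.g. /this/is/hierarchy/NAME."""
    return "/" + "/".join(parts)

def put_secret(ssm_client, name, value):
    """Store `value` encrypted; `ssm_client` is a boto3 SSM client."""
    # SecureString encrypts the value with the account's default KMS key;
    # Overwrite=True updates the parameter if it already exists.
    ssm_client.put_parameter(Name=name, Value=value,
                             Type="SecureString", Overwrite=True)

# Usage (requires AWS credentials configured locally):
#   import boto3
#   ssm = boto3.Session().client("ssm")
#   put_secret(ssm, parameter_name("development", "TEST_VARIABLE"), "****")
```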

&lt;h3&gt;
  
  
  Use case #1: the Serverless Framework
&lt;/h3&gt;

&lt;p&gt;Retrieving and applying SSM parameters differs with every tool we use. First of all, in the case of the Serverless Framework, a plugin like &lt;a href="https://github.com/jeremydaly/serverless-stage-manager"&gt;serverless-stage-manager&lt;/a&gt; is very convenient if we want to split the environment variables by stage. Afterward, when we declare the variables in the &lt;em&gt;custom&lt;/em&gt; field or directly in a function's &lt;em&gt;environment&lt;/em&gt; field, we just reference the SSM parameter like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;custom:
  stages:
    - development
    - qa
  testVariable:
    development: ${ssm:/development/TEST_VARIABLE}
    qa: ${ssm:/qa/TEST_VARIABLE}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;functions:
  testFunction:
    ...
    environment:
      TEST_VARIABLE: ${ssm:/testFunction/TEST_VARIABLE}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Use case #2: Boto3 (Python)
&lt;/h3&gt;

&lt;p&gt;If you write the code for your infrastructure in Python, you can use Boto3 to your advantage by starting a session for SSM and retrieving the variables (one by one) with a single call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session = boto3.Session()
ssm = session.client('ssm')

TEST_VARIABLE = ssm.get_parameters(Names=["/stage/TEST_VARIABLE"], WithDecryption=True/False)['Parameters'][0]['Value']

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
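&lt;p&gt;The hierarchy mentioned earlier also pays off when reading: boto3's &lt;em&gt;get_parameters_by_path&lt;/em&gt; fetches every parameter under a prefix in one call. A minimal sketch, where the &lt;em&gt;/stage&lt;/em&gt; prefix and the helper names are placeholders of mine:&lt;/p&gt;

```python
def to_env_dict(parameters, prefix):
    # Map entries like {"Name": "/stage/DB_HOST", "Value": "..."} to {"DB_HOST": "..."}.
    return {p["Name"][len(prefix):].lstrip("/"): p["Value"] for p in parameters}

def fetch_by_path(ssm_client, prefix):
    """Fetch every parameter under `prefix`; `ssm_client` is a boto3 SSM client."""
    # The paginator transparently follows the NextToken pagination.
    pages = ssm_client.get_paginator("get_parameters_by_path").paginate(
        Path=prefix, Recursive=True, WithDecryption=True)
    return to_env_dict([p for page in pages for p in page["Parameters"]], prefix)

# Usage (requires AWS credentials configured locally):
#   import boto3
#   print(fetch_by_path(boto3.Session().client("ssm"), "/stage"))
```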



&lt;h3&gt;
  
  
  Use case #3: Terraform
&lt;/h3&gt;

&lt;p&gt;Last but not least, in the case of Terraform, you can use a data source to get a variable from SSM and later reference it in another resource. Just like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ssm_parameter" "test_variable" {
  name = "TEST_VARIABLE"
  with_decryption = true/false
}

resource "aws_lambda_function" "test_lambda" {
  function_name    = "test_lambda"
  ...

  environment {
    variables = {
      TEST_VARIABLE = data.aws_ssm_parameter.test_variable.value
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this case, we should be careful with the &lt;em&gt;TF State&lt;/em&gt; or the &lt;em&gt;TF Plan&lt;/em&gt; file, because the data source value might sit there in plain sight. Also, in the case of the Serverless Framework or anything related to CloudFormation (e.g. SAM, Troposphere), we should make sure the &lt;em&gt;.cf&lt;/em&gt; files are listed in the .gitignore.&lt;/p&gt;

&lt;p&gt;I really hope this post has been clear to all of you, and that you can use this for your projects. &lt;/p&gt;

&lt;p&gt;Happy week, everyone! 🌻&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ssm</category>
      <category>terraform</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Getting started with Terraform: Practical advice</title>
      <dc:creator>Tatiana Barrios</dc:creator>
      <pubDate>Mon, 29 Jun 2020 15:46:09 +0000</pubDate>
      <link>https://dev.to/tiannymonti/getting-started-with-terraform-practical-advice-2n8e</link>
      <guid>https://dev.to/tiannymonti/getting-started-with-terraform-practical-advice-2n8e</guid>
      <description>&lt;p&gt;I started using TF for AWS infrastructure in the summer of 2019, and it was a big deal for me. Going from the AWS console and the AWS CLI to just write code and see those lines reflected in AWS resources felt like magic. To my shock, Terraform is very simple and I got the grasp of the language just by reading documentation. However, there are some things that I would have liked to know back then and cannot go unnoticed when creating a new project with TF. For that reason, in this post, I will try to give you some advice so you can start your projects in Terraform as easy as pie 😎.&lt;/p&gt;

&lt;h3&gt;
  
  
  First things first: Folder structure.
&lt;/h3&gt;

&lt;p&gt;One of the best things I bought last year was the book &lt;em&gt;Terraform Up &amp;amp; Running&lt;/em&gt;, written by Yevgeniy Brikman. In this book, you will find everything you need to use Terraform in your infrastructure projects at its full power. Here is a piece of advice you will encounter in the fourth chapter of that book: Terraform will create a state file in every folder where you run &lt;strong&gt;terraform init&lt;/strong&gt;, so if you want to keep &lt;strong&gt;development&lt;/strong&gt; and &lt;strong&gt;production&lt;/strong&gt; infrastructure apart, you should create a folder for each environment. Also, you will very likely implement some resources that are used across all environments, so a global folder is nice to have. &lt;/p&gt;

&lt;p&gt;Summarizing what I already wrote, a nice folder structure to start in Terraform should look like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u9fi8QLs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kyjju6dakqdylw58io55.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u9fi8QLs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kyjju6dakqdylw58io55.jpeg" alt="Terraform Folder Structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Modules for the win!
&lt;/h3&gt;

&lt;p&gt;Modules are for TF what functions are for regular coding. Without them, you would repeat yourself ad infinitum. You can get modules from existing GitHub repositories, like this one that gives a static IP to a lambda: &lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ainestal"&gt;
        ainestal
      &lt;/a&gt; / &lt;a href="https://github.com/ainestal/terraform-lambda-fixed-ip"&gt;
        terraform-lambda-fixed-ip
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Provide a fixed IP (ElasticIP) to your AWS Lambdas
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Or you can create them yourself. Once you get a grasp of the common properties shared by a group of resources, creating modules is very simple.&lt;/p&gt;

&lt;p&gt;The idea of the Terraform modules is to provide you with a container for a certain architecture so you only configure what is necessary. In your project file system, there should be a &lt;em&gt;modules&lt;/em&gt; folder where you can have all the modules you need for your infrastructure. Then, inside the folder for a specific module, the following files should exist:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IGTfjA-3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/er9a9e09b8tif4c43uvy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IGTfjA-3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/er9a9e09b8tif4c43uvy.jpeg" alt="Module Files"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;main.tf&lt;/em&gt; file contains all the configuration for the resources; the &lt;em&gt;vars.tf&lt;/em&gt; file is where the input variables for the module are defined, and the &lt;em&gt;outs.tf&lt;/em&gt; file is a collection of parameters returned by the module that you can use on other modules or resources. Like everything on TF, those three files can actually just be one, but the recommendation is to have different files to apply separation of concerns.&lt;/p&gt;

&lt;p&gt;Last but not least, to use the modules in your infrastructure, instead of declaring resources you can declare the used module as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "name-of-module" {
    source               = "../path/of/module" (can also be a Git repo)
    var1                 = var.var1
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Terraform on Teamwork
&lt;/h3&gt;

&lt;p&gt;More often than not, you are going to be pushing your TF code to a repository, where your team will review it. To make the review process easier, and to let reviewers verify the plan of changes, you might think of using the command &lt;strong&gt;terraform plan -out=path/to/output&lt;/strong&gt;. However, that output is a binary file meant for &lt;strong&gt;terraform apply&lt;/strong&gt;, and it won't be easy for a human to understand. &lt;/p&gt;

&lt;p&gt;If you want to write the &lt;strong&gt;terraform plan&lt;/strong&gt; console output to a file, you should execute something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -no-color | tee output.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;-no-color&lt;/em&gt; flag is important because, without it, the command would write ANSI color codes all over the file. This output file will contain your TF plan to be approved before going through a CI/CD process. Speaking of which, here comes the second piece of advice on teamwork: centralize the Terraform state. &lt;/p&gt;

&lt;p&gt;As I previously said, TF creates a state file in every folder where you run &lt;strong&gt;terraform init&lt;/strong&gt;. Let's suppose you use something like Jenkins for CI/CD, and you change the code and do an emergency &lt;strong&gt;terraform apply&lt;/strong&gt;. If that code doesn't make it into the repository and someone else triggers Jenkins, your recently deployed infrastructure will be destroyed.&lt;/p&gt;

&lt;p&gt;One way to avoid this is by creating a &lt;em&gt;Terraform backend&lt;/em&gt;. A backend in TF stores the state and manages the locks remotely, so you won't have issues with state corruption or exposure of sensitive data. Terraform offers its own cloud where the state can be stored, but on AWS you can achieve the same with an S3 bucket and a DynamoDB table. Here is a code snippet you can use as a template for an AWS backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    profile        = "default"
    bucket         = "test-terraform-state"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"  
    dynamodb_table = "test-terraform-locks"
    encrypt        = true
  }
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should have this backend block written in every folder where you plan to run &lt;strong&gt;terraform init&lt;/strong&gt;, just like the profile. This way your team will rarely have issues with the Terraform state locks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Import to the rescue
&lt;/h3&gt;

&lt;p&gt;Sometimes you want to implement a resource but are not sure about its properties, because the documentation is not enough or you are not very clear on what you want; however, you know how to set up that exact same resource on the AWS console. It can also happen that you already have some existing resources in the cloud and want to start managing them with TF. Here is where &lt;strong&gt;terraform import&lt;/strong&gt; comes into play. This is a very powerful command, because you are giving TF permission to manage a resource, which means it could be accidentally deleted or changed later down the road. &lt;/p&gt;

&lt;p&gt;To import a resource you can run something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform import resource_name.resource_tf_name my_test_resource
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here &lt;em&gt;my_test_resource&lt;/em&gt; is the resource's actual identifier in the cloud, while &lt;em&gt;resource_name&lt;/em&gt; and &lt;em&gt;resource_tf_name&lt;/em&gt; are how you map it in the Terraform code, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "resource_name" "resource_tf_name" {

}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Before importing, you need to let Terraform know that there is an incoming resource. Therefore, you should map that resource in the code (empty, this is important), and later, with &lt;strong&gt;terraform plan&lt;/strong&gt;, you can adjust its properties until no changes are reported for it. According to the Terraform documentation, in the future Terraform will also import the resource configuration, so this step will no longer be necessary.&lt;/p&gt;

&lt;p&gt;If you want Terraform to stop managing this resource, you can always use the &lt;strong&gt;terraform state rm 'resource_name.resource_tf_name'&lt;/strong&gt; command. 😉&lt;/p&gt;

&lt;h3&gt;
  
  
  Divide resources and conquer
&lt;/h3&gt;

&lt;p&gt;Last but not least, this advice has more to do with code organization than with functionality. Besides modules, there are several ways to organize your code in Terraform, and you can choose the most convenient one given your project's characteristics. As far as I can see, there are two main ways.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Organize by resource type: useful when the number of resources is not that big, and the resources don't depend on other resources explicitly, i.e.: a group of resources for an ECS cluster. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Organize by resource group: useful when there are a lot of resources and the resources depend on each other, i.e.: a group of resources for lambdas. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are all the insights I would have liked to have when I was just starting projects with Terraform. TF is a wonderful tool, and it's even better when used correctly. I hope you can use everything I said in your favor.&lt;/p&gt;

&lt;p&gt;Happy terraforming! 😄&lt;/p&gt;

&lt;p&gt;Cheers 😎 🍺&lt;/p&gt;

</description>
      <category>aws</category>
      <category>gcp</category>
      <category>infrastructure</category>
      <category>terraform</category>
    </item>
    <item>
      <title>The Secrets of API Gateway Caching</title>
      <dc:creator>Tatiana Barrios</dc:creator>
      <pubDate>Mon, 25 May 2020 23:56:54 +0000</pubDate>
      <link>https://dev.to/tiannymonti/the-secrets-of-api-gateway-caching-3nfd</link>
      <guid>https://dev.to/tiannymonti/the-secrets-of-api-gateway-caching-3nfd</guid>
      <description>&lt;p&gt;*AWS pricing applies.&lt;/p&gt;

&lt;p&gt;Latency on REST APIs has always been an issue. Maybe your response body is growing longer and longer, therefore taking more time. Or maybe you want to serve users around the world more quickly. There is an obvious answer to these situations: implement caching. Caching, pronounced /kaʃ/, can be done in the backend, but a code solution is not always straightforward. However, if you use AWS API Gateway, it can make a big difference. &lt;/p&gt;

&lt;p&gt;Here we are going to discuss a couple of approaches to caching on AGW: one manual, because you need to understand its inner workings, and one via the Serverless Framework. &lt;/p&gt;

&lt;h2&gt;
  
  
  Manual Approach
&lt;/h2&gt;

&lt;p&gt;For the first approach, let's suppose you have a Rest API on AGW for a bookstore. A pretty basic one, which on the console looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xxNaM9qO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bdp1ug4l8jb5m6gp9nut.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xxNaM9qO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bdp1ug4l8jb5m6gp9nut.jpeg" alt="API Gateway Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To activate the cache, you would have to go to the stage settings and just enable it. Be careful when specifying the cache capacity, its TTL (Time To Live), whether it is encrypted, and whether it requires authorization, because all of that will influence the price 😉. For this post's purposes, we won't need encryption or authorization, and we will settle for the lowest cache capacity available (0.5GB). Also, we'll set the TTL to 60 seconds (TTL is always in seconds, I have a little anecdote about that 😅). Please keep in mind that caching is not covered by the AWS free tier. When enabled, it should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V4yoZhXL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9g582i1ihi01gifw2swe.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V4yoZhXL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9g582i1ihi01gifw2swe.jpeg" alt="Enabled Cache on AWS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before moving on to test, you have to &lt;em&gt;Save Changes&lt;/em&gt; but more important: Deploy the API stage. If you don't deploy the API, it will appear as if you never enabled the cache, so please remember that. To deploy the API, you just have to go back to &lt;em&gt;Resources&lt;/em&gt;, &lt;em&gt;Actions&lt;/em&gt;, and then &lt;em&gt;Deploy API&lt;/em&gt;. On the console, it should look like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KCDzHCyt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9emibry68ywdbgb5cwyu.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KCDzHCyt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9emibry68ywdbgb5cwyu.jpeg" alt="Deploy API"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now caching is enabled. Wait, is that it? 🤔 NO. First of all, let's keep in mind that caching applies to GET requests, not POST or other methods. So, let's suppose you have a GET request that depends on &lt;em&gt;Query parameters&lt;/em&gt;, &lt;em&gt;Path parameters&lt;/em&gt;, and even &lt;em&gt;Headers&lt;/em&gt;. You enable the caching, call it a day and suddenly: &lt;strong&gt;BAM!&lt;/strong&gt; all the responses come back the same, no matter how much you change the parameters. There is a reason for that: enabling the cache on the stage doesn't work alone; we need to override the caching settings on every request that needs them. For that, let's go again to the stage configuration, but this time we will expand the stage resources. You should see something like this on the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cdq0zT-X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xsqx8ozic8wk6h1vnxkw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cdq0zT-X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xsqx8ozic8wk6h1vnxkw.jpeg" alt="Stage resources"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will select the resource and the method we are interested in overriding. Let's pick the one that says &lt;em&gt;GET /books/{id}&lt;/em&gt;. It will give us two options, from which we will choose &lt;em&gt;Override for this method&lt;/em&gt;. The console should look like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OTwcRu-P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xnh0lrc250yqhnpmuvyr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTwcRu-P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xnh0lrc250yqhnpmuvyr.jpeg" alt="Individual caching settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can disable CloudWatch logging from there, and even disable throttling, but we are not focusing on that. In the caching settings, we can change the TTL, encrypt the data, and change the authorization parameters for that resource, but we are not interested in that either. I know it doesn't make sense now, but it is necessary to enable the override for the next step. So, we will leave it as it is and save the changes. &lt;/p&gt;

&lt;p&gt;Before deploying the stage, we will need to go back to the &lt;em&gt;Resources&lt;/em&gt; menu, go to the resource we picked in the last step, and then, to the method. The console should look like this, in part: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WsVgIRcw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nm6etg28c4kcy0fy1vzb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WsVgIRcw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nm6etg28c4kcy0fy1vzb.jpeg" alt="Method settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, we select &lt;em&gt;Method Request&lt;/em&gt;. This view will let us configure every parameter we need to make the caching work, from &lt;em&gt;Request Path&lt;/em&gt; to &lt;em&gt;Request Body&lt;/em&gt;. For now, we will register the path parameter &lt;strong&gt;id&lt;/strong&gt;, add query strings called &lt;strong&gt;name&lt;/strong&gt; and &lt;strong&gt;author&lt;/strong&gt;, and also the header &lt;strong&gt;Accept-Language&lt;/strong&gt;. What is important here is to check the &lt;em&gt;Caching&lt;/em&gt; box for each parameter:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DAgAjXPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mqwbw5dgiou33h1ddyr6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DAgAjXPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mqwbw5dgiou33h1ddyr6.jpeg" alt="Parameter caching"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will leave it like that and, finally, deploy the API 😎. After all of this, you should get a different response whenever you change a request parameter. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: There is a known AGW bug related to configuring the cache on header parameters. There is no straightforward solution for that unless you are using an Infrastructure as Code approach. In this case, reload the page and try again until AGW approves the request. &lt;/p&gt;
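&lt;p&gt;If you do go the Infrastructure as Code route, the stage-level part of this manual setup can also be sketched with boto3's &lt;em&gt;update_stage&lt;/em&gt; and its patch operations. A minimal sketch; the helper name, the API id &lt;em&gt;abc123&lt;/em&gt;, and the stage name &lt;em&gt;dev&lt;/em&gt; are placeholders of mine:&lt;/p&gt;

```python
def cache_patch_operations(cluster_size="0.5", ttl_seconds=60):
    # JSON-patch style operations understood by API Gateway's update_stage;
    # the /*/*/... path applies the TTL override to every resource and method.
    return [
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": cluster_size},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": str(ttl_seconds)},
    ]

# Usage (requires AWS credentials; "abc123" and "dev" are placeholders
# for your REST API id and stage name):
#   import boto3
#   boto3.client("apigateway").update_stage(
#       restApiId="abc123", stageName="dev",
#       patchOperations=cache_patch_operations())
```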

&lt;h2&gt;
  
  
  Serverless Approach
&lt;/h2&gt;

&lt;p&gt;If we use Serverless, there is an easier way to enable caching than what I previously described. First of all, you will need to have Serverless installed with everything that implies. Second, you will need to install the &lt;a href="https://www.npmjs.com/package/serverless-api-gateway-caching"&gt;Serverless API Gateway Caching plugin&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless plugin install --name serverless-api-gateway-caching
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the serverless YAML file, there is an optional &lt;strong&gt;custom&lt;/strong&gt; key used to apply all configuration from plugins. There we enable caching for the whole API, just like we did a few paragraphs ago, but in code. Later, under the &lt;strong&gt;functions&lt;/strong&gt; key, for the function linked to our desired resource and method, we set its specific caching parameters. And that would be it. Much simpler than the previous approach 😅. By now, the file should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: bookstore

provider:
  name: aws   
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  stackName: Bookstore-API-${self:provider.stage}
  apiName: Bookstore-API-${self:provider.stage}
  timeout: 30

custom:
  apiGatewayCaching:
    enabled: true
    clusterSize: '0.5' 
    ttlInSeconds: 60 
    dataEncrypted: false 
    perKeyInvalidation:
      requireAuthorization: false

functions:
  getBooks:
    name: getBooks-${self:provider.stage}
    handler: get/index.handler
    events:
      - http:
          path: books/{id}
          method: get
          cors: true
          caching:
            enabled: true
            cacheKeyParameters:
              - name: request.path.id
              - name: request.querystring.name
              - name: request.querystring.author
              - name: request.header.Accept-Language
  postBooks:
    name: postBooks-${self:provider.stage}
    handler: post/index.handler
    events:
      - http:
          path: books
          method: post
          cors: true
          caching:
            enabled: false

plugins:
  - serverless-api-gateway-caching

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Please don't forget to do &lt;em&gt;serverless deploy&lt;/em&gt; to apply all your changes on the cloud, otherwise, we did nothing 🙃. &lt;/p&gt;

&lt;p&gt;Thank you so much for reading me ☺️. If you like, you can follow me on &lt;a href="https://twitter.com/villaintianny"&gt;Twitter&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/tatiana-barrios-montenegro-b80415127"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Laters! ✌🏽 &lt;/p&gt;

</description>
      <category>aws</category>
      <category>apigateway</category>
      <category>restapis</category>
      <category>caching</category>
    </item>
  </channel>
</rss>
