<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adam Abdelaziz</title>
    <description>The latest articles on DEV Community by Adam Abdelaziz (@adamabdelaziz).</description>
    <link>https://dev.to/adamabdelaziz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1274878%2F5b3c8914-a920-472f-9e92-c93a35f77f9d.jpeg</url>
      <title>DEV Community: Adam Abdelaziz</title>
      <link>https://dev.to/adamabdelaziz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adamabdelaziz"/>
    <language>en</language>
    <item>
      <title>AWS SPA routing — The bad, the ugly, and the uglier</title>
      <dc:creator>Adam Abdelaziz</dc:creator>
      <pubDate>Wed, 07 Feb 2024 22:13:59 +0000</pubDate>
      <link>https://dev.to/adamabdelaziz/aws-spa-routing-the-bad-the-ugly-and-the-uglier-4b7n</link>
      <guid>https://dev.to/adamabdelaziz/aws-spa-routing-the-bad-the-ugly-and-the-uglier-4b7n</guid>
      <description>&lt;p&gt;TLDR - Routing with aws is hard &lt;/p&gt;

&lt;p&gt;There are so many kinds of web applications. Backend, frontend, full-stack, e-commerce, blogs, serverless, etc, the list goes on. While they are all special and unique in their own right, there is a lot of very common functionality that needs to be implemented for a large majority of these applications.&lt;/p&gt;

&lt;p&gt;To this day, accomplishing some of this common “boilerplate” functionality using major cloud providers remains shrouded in mystery…&lt;/p&gt;

&lt;p&gt;Let’s begin our journey to full-stack (API + SPA) application routing on AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We have a full-stack application, consisting of a backend API-only service and a single-page application (SPA) frontend.&lt;/li&gt;
&lt;li&gt;All services should be accessible from the same domain but on different paths (e.g. /api/).&lt;/li&gt;
&lt;li&gt;The frontend application will be served from an S3 bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Frontend 404 bucket errors should ultimately result in a 200 status and serve the app index.&lt;/li&gt;
&lt;li&gt;Backend 404 errors should not be modified.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’re starting with a CloudFront distribution that has two origins: one is the S3 bucket (the frontend SPA) and the other is the backend API service. In this state routing works, but we see 404 “key not found” errors from the frontend (at any route other than /index.html).&lt;/p&gt;
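
&lt;p&gt;For reference, the starting point can be sketched in Terraform. This is not a complete example (resource names and the API domain are hypothetical, and required arguments like certificates and restrictions are omitted); it's just meant to show the two-origin shape:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudfront_distribution" "app" {
  # Origin 1: the S3 bucket holding the SPA build
  origin {
    domain_name = aws_s3_bucket.frontend.bucket_regional_domain_name
    origin_id   = "frontend-s3"
  }

  # Origin 2: the backend API service (hypothetical domain)
  origin {
    domain_name = "api-internal.example.com"
    origin_id   = "backend-api"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # Everything defaults to the SPA...
  default_cache_behavior {
    target_origin_id       = "frontend-s3"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
  }

  # ...except /api/*, which routes to the backend origin
  ordered_cache_behavior {
    path_pattern           = "/api/*"
    target_origin_id       = "backend-api"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;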

&lt;h2&gt;
  
  
  Attempt #1
&lt;/h2&gt;

&lt;p&gt;CloudFront allows some custom error handling, and it seems pretty straightforward, so I tried something like this:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41b4f0nev76ny57of37y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41b4f0nev76ny57of37y.png" alt="Image description" width="800" height="681"&gt;&lt;/a&gt;&lt;br&gt;
This does the trick for the frontend, but now backend 404 errors are swallowed, and instead we get a 200 response along with the frontend app index.&lt;/p&gt;
&lt;h2&gt;
  
  
  Attempt #2
&lt;/h2&gt;

&lt;p&gt;Since CloudFront doesn’t allow that custom error behavior to be specified per origin, my next idea was to rely on the S3 bucket settings. I set an error page for the frontend:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyve1jjk4sad3uts9an5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyve1jjk4sad3uts9an5z.png" alt="Image description" width="800" height="635"&gt;&lt;/a&gt;&lt;br&gt;
This actually didn’t change any routing behavior, and I later learned that bucket-level rules are mostly ignored when using an S3 origin with CloudFront.&lt;/p&gt;

&lt;p&gt;To work around that limitation, I replaced the S3 origin with a custom origin pointing to the website endpoint of the S3 bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs46jswuu7rok3ykix51i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs46jswuu7rok3ykix51i.png" alt="Image description" width="800" height="319"&gt;&lt;/a&gt;&lt;br&gt;
This seems closer to what we want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend 404s ultimately return the frontend app index&lt;/li&gt;
&lt;li&gt;Backend 404s remain unmodified&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The issue here is that while the frontend returns the index correctly, it still returns a 404 status code.&lt;/p&gt;
&lt;h2&gt;
  
  
  Attempt #3
&lt;/h2&gt;

&lt;p&gt;One thing I was trying to avoid here was adding unnecessary complexity… &lt;/p&gt;

&lt;p&gt;After a decent amount of research, I came to the conclusion that I can’t avoid using a Lambda function for this. Lambda functions can be used as a sort of “middleware” for CloudFront requests/responses. Since this is fairly common functionality that we’re trying to accomplish, there were plenty of examples of what this Lambda function would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'use strict';

const http = require('https'); // note: this is the 'https' module, despite the variable name

const indexPage = 'index.html';

exports.handler = async (event, context, callback) =&amp;gt; {
    const cf = event.Records[0].cf;
    const request = cf.request;
    const response = cf.response;
    const statusCode = response.status;

    // Only replace 403 and 404 requests typically received
    // when loading a page for a SPA that uses client-side routing
    const doReplace = request.method === 'GET'
                    &amp;amp;&amp;amp; (statusCode == '403' || statusCode == '404');

    const result = doReplace 
        ? await generateResponseAndLog(cf, request, indexPage)
        : response;

    callback(null, result);
};

async function generateResponseAndLog(cf, request, indexPage){

    const domain = cf.config.distributionDomainName;
    const appPath = getAppPath(request.uri);
    const indexPath = `/${appPath}/${indexPage}`;

    const response = await generateResponse(domain, indexPath);

    console.log('response: ' + JSON.stringify(response));

    return response;
}

async function generateResponse(domain, path){
    try {
        // Load HTML index from the CloudFront cache
        const s3Response = await httpGet({ hostname: domain, path: path });

        const headers = s3Response.headers || 
            {
                'content-type': [{ value: 'text/html;charset=UTF-8' }]
            };

        return {
            status: '200',
            headers: wrapAndFilterHeaders(headers),
            body: s3Response.body
        };
    } catch (error) {
        return {
            status: '500',
            headers:{
                'content-type': [{ value: 'text/plain' }]
            },
            body: 'An error occurred loading the page'
        };
    }
}

function httpGet(params) {
    return new Promise((resolve, reject) =&amp;gt; {
        http.get(params, (resp) =&amp;gt; {
            console.log(`Fetching ${params.hostname}${params.path}, status code : ${resp.statusCode}`);
            let result = {
                headers: resp.headers,
                body: ''
            };
            resp.on('data', (chunk) =&amp;gt; { result.body += chunk; });
            resp.on('end', () =&amp;gt; { resolve(result); });
        }).on('error', (err) =&amp;gt; {
            console.log(`Couldn't fetch ${params.hostname}${params.path} : ${err.message}`);
            reject(err);
        });
    });
}

// Get the app path segment e.g. candidates.app, employers.client etc
function getAppPath(path){
    if(!path){
        return '';
    }

    if(path[0] === '/'){
        path = path.slice(1);
    }

    const segments = path.split('/');

    // will always have at least one segment (may be empty)
    return segments[0];
}

// Cloudfront requires header values to be wrapped in an array
function wrapAndFilterHeaders(headers){
    const allowedHeaders = [
        'content-type',
        'content-length',
        'last-modified',
        'date',
        'etag'
    ];

    const responseHeaders = {};

    if(!headers){
        return responseHeaders;
    }

    for(var propName in headers) {
        // only include allowed headers
        if(allowedHeaders.includes(propName.toLowerCase())){
            var header = headers[propName];

            if (Array.isArray(header)){
                // assume already 'wrapped' format
                responseHeaders[propName] = header;
            } else {
                // fix to required format
                responseHeaders[propName] = [{ value: header }];
            }    
        }

    }

    return responseHeaders;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wow. Just wow. So it turns out the body of the response is not exposed to the Lambda function.&lt;/p&gt;

&lt;p&gt;This means that we’ll need to replace the 404 status with a 200, AND make a request to fetch the frontend app index from the S3 bucket to use it to populate the response body.&lt;/p&gt;

&lt;p&gt;I’m sure this would have worked but it just seemed a bit much. Lots of moving pieces to accomplish what I felt should be a fairly simple thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attempt #4 (The solution)
&lt;/h2&gt;

&lt;p&gt;After some more research, I learned that although the body of the response is not exposed to the Lambda function, it will persist as long as the function doesn’t modify the body in any way.&lt;/p&gt;

&lt;p&gt;The final solution for me was a combination of attempt #2 and attempt #3. The main issue with #2 was that the frontend still returned a 404 status, so now the Lambda function can be simplified to handle just the status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'use strict';

exports.handler = async (event, context, callback) =&amp;gt; {
    const response = event.Records[0].cf.response;

    // Leave the body untouched so CloudFront preserves the origin's body
    // (the S3 website error document, i.e. the app index)
    if (response.status == '404') {
        response.status = '200';
    }

    console.log('response: ' + JSON.stringify(response));
    callback(null, response);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
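
&lt;p&gt;For completeness: assuming the function is deployed as Lambda@Edge, the CloudFront association is per cache behavior (the Terraform names below are hypothetical). That's what lets us rewrite statuses for the frontend behavior only, leaving the /api/* behavior, and therefore backend 404s, untouched:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Not a complete example; it just shows the association
resource "aws_cloudfront_distribution" "app" {
  default_cache_behavior {
    target_origin_id       = "frontend-s3-website"
    viewer_protocol_policy = "redirect-to-https"

    # Fires after CloudFront receives the origin's response, so the
    # function can flip the 404 status to 200 (body passes through)
    lambda_function_association {
      event_type   = "origin-response"
      lambda_arn   = aws_lambda_function.spa_status_rewrite.qualified_arn
      include_body = false
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;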



&lt;h2&gt;
  
  
  Recap
&lt;/h2&gt;

&lt;p&gt;This story is one of frustration and persistence. Don’t get me wrong, AWS is great and provides seemingly endless solutions to meet your infrastructure needs. These numerous solutions come with the caveat that the “best” or “correct” way isn’t always clear, even for problems that are far from unique (e.g. routing). &lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.withcoherence.com/"&gt;Coherence&lt;/a&gt;, we’re doing this kind of work across the development lifecycle so you can focus on what really matters, your actual application/business logic.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Multi client authentication with auth0 and oauth2-proxy</title>
      <dc:creator>Adam Abdelaziz</dc:creator>
      <pubDate>Wed, 07 Feb 2024 20:40:20 +0000</pubDate>
      <link>https://dev.to/adamabdelaziz/multi-client-authentication-with-auth0-and-oauth2-proxy-1m41</link>
      <guid>https://dev.to/adamabdelaziz/multi-client-authentication-with-auth0-and-oauth2-proxy-1m41</guid>
      <description>&lt;p&gt;Implementing auth can be difficult and time consuming, as well as being a critical part of most software systems. This holds especially true for applications that are public/customer facing.&lt;/p&gt;

&lt;p&gt;Authentication providers like &lt;a href="https://auth0.com/"&gt;Auth0&lt;/a&gt; and &lt;a href="https://www.okta.com/"&gt;Okta&lt;/a&gt; have become commonplace in software development. These providers help take this work off of your plate, and this can be made even easier by using a reverse proxy that provides authentication capabilities, like &lt;a href="https://github.com/oauth2-proxy/oauth2-proxy"&gt;oauth2-proxy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These solutions are fairly straightforward for most applications (API, SPA, etc.) but things start to get complicated when you want to use multiple authentication flows for the same software application/platform.&lt;/p&gt;

&lt;p&gt;We'll look at a specific use-case, with the hope that this can be adapted to fit most cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;we're using Auth0 as the authentication provider&lt;/li&gt;
&lt;li&gt;we have an application (backend API + SPA) that is already set up to use Auth0 along with oauth2-proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;we're looking to add a customer-facing cli application; the cli will utilize the same backend api that is already in place&lt;/li&gt;
&lt;li&gt;we want to allow users to authenticate with the cli using the device code flow&lt;/li&gt;
&lt;/ul&gt;
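
&lt;p&gt;As a quick refresher, the device code flow the cli will use looks roughly like this against Auth0 (the variable names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. The cli asks Auth0 for a device code
curl --request POST "https://${AUTH0_DOMAIN}/oauth/device/code" \
  --data "client_id=${AUTH0_CLI_CLIENT_ID}" \
  --data "scope=openid profile email"

# 2. The response contains a user_code and a verification_uri; the user
#    opens the URI in a browser and confirms the code

# 3. The cli polls the token endpoint until the user finishes step 2
curl --request POST "https://${AUTH0_DOMAIN}/oauth/token" \
  --data "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
  --data "device_code=${DEVICE_CODE}" \
  --data "client_id=${AUTH0_CLI_CLIENT_ID}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;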

&lt;h2&gt;
  
  
  Hurdles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;depending on your implementation, the Auth0 application/client type you use might be a single page web app, a regular web app, or a machine-to-machine application. None of these support the device code flow, and changing/replacing the current client is not a viable option (bad practice, unsafe, won't work anyway, etc.). This means we'll need a 2nd Auth0 application, and its type will need to be "native" to support the device code flow.&lt;/li&gt;
&lt;li&gt;the oauth2-proxy does not support using multiple clients with the same proxy instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of these hurdles, it seemed like we'd no longer be able to use the oauth2-proxy. A custom solution would need to be written. This was saddening, as the oauth2-proxy really did make implementing auth a lot easier, and it removed quite a bit of common boilerplate logic.&lt;/p&gt;

&lt;p&gt;I wasn't ready to throw in the towel just yet and this solution, like many of my all-time favorites, was born of a combination of stubbornness and laziness (and a bit of determination).&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;Using oauth2-proxy, the original setup looked something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Original proxy (API application)
oauth2-proxy --provider oidc --provider-display-name "Auth0" \
--client-id="${AUTH0_CLIENT_ID}" \
--cookie-secret="${AUTH0_COOKIE_SECRET}" \
--client-secret="${AUTH0_CLIENT_SECRET}" \
--cookie-expire="${COOKIE_EXPIRE_TIME}" \
--oidc-issuer-url="https://${AUTH0_DOMAIN}/" \
--redirect-url="/api/auth/callback" \
--http-address="http://0.0.0.0:${PORT}" \
--upstream="http://localhost:${API_PORT}" \
--email-domain=* \
--skip-provider-button \
--skip-auth-route="/public/*" \
--redis-connection-url="redis://${REDIS_IP}:${REDIS_PORT}" \
--redis-connection-idle-timeout="5" \
--proxy-prefix="/api/auth" \
--cookie-name="my_session" \
--session-store-type="redis" \
--whitelist-domain="${AUTH0_DOMAIN}" \
--insecure-oidc-allow-unverified-email \
--request-logging="false" \
--pass-authorization-header="true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can't add a 2nd client to the proxy, so my intention was to add a 2nd proxy. The first hurdle here was that I wasn't sure how to handle routing to the 2 proxies. I eventually settled on the idea of having the first proxy allow requests to pass through to the 2nd proxy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CLI_API_PREFIX=/api/cli

# Original proxy (API application)
oauth2-proxy --provider oidc --provider-display-name "Auth0" \
--client-id="${AUTH0_CLIENT_ID}" \
--cookie-secret="${AUTH0_COOKIE_SECRET}" \
--client-secret="${AUTH0_CLIENT_SECRET}" \
--cookie-expire="${COOKIE_EXPIRE_TIME}" \
--oidc-issuer-url="https://${AUTH0_DOMAIN}/" \
--redirect-url="/api/auth/callback" \
--http-address="http://0.0.0.0:${PORT}" \
# Here we add an upstream for the 2nd proxy
--upstream="http://localhost:$(expr $PORT + 1)${CLI_API_PREFIX}/" \
--upstream="http://localhost:${API_PORT}" \
--email-domain=* \
--skip-provider-button \
--skip-auth-route="/public/*" \
# Skip auth for the cli route, the 2nd proxy will handle this
--skip-auth-route="${CLI_API_PREFIX}/*" \
# Without the --skip-jwt.. and --oidc-extra-audience flags, the
# request is stripped of information needed to authenticate with the
# 2nd proxy (headers, token)
--skip-jwt-bearer-tokens="true" \
--oidc-extra-audience="${AUTH0_CLI_CLIENT_ID}" \
--redis-connection-url="redis://${REDIS_IP}:${REDIS_PORT}" \
--redis-connection-idle-timeout="5" \
--proxy-prefix="/api/auth" \
--cookie-name="my_session" \
--session-store-type="redis" \
--whitelist-domain="${AUTH0_DOMAIN}" \
--insecure-oidc-allow-unverified-email \
--request-logging="false" \
--pass-authorization-header="true" \
&amp;amp;

# Start the 2nd proxy (for the CLI application)
oauth2-proxy --provider oidc --provider-display-name "Auth0" \
--client-id="${AUTH0_CLI_CLIENT_ID}" \
# This --cookie-* attr doesn't really matter here (I believe it just couldn't be blank)
--cookie-secret="${SOME_AUTH0_COOKIE_SECRET}" \
--client-secret="${AUTH0_CLI_CLIENT_SECRET}" \
--oidc-issuer-url="https://${AUTH0_DOMAIN}/" \
--redirect-url="${CLI_API_PREFIX}/auth/callback" \
--http-address="http://0.0.0.0:$(expr $PORT + 1)" \
# Same upstream as the 1st proxy since we're using the same backend API
--upstream="http://localhost:${API_PORT}" \
--email-domain=* \
--skip-provider-button \
--skip-jwt-bearer-tokens="true" \
--redis-connection-url="redis://${REDIS_IP}:${REDIS_PORT}" \
--redis-connection-idle-timeout="5" \
--proxy-prefix="${CLI_API_PREFIX}/auth" \
--cookie-name="my_cli_session" \
--session-store-type="redis" \
--whitelist-domain="${AUTH0_DOMAIN}" \
--insecure-oidc-allow-unverified-email \
--request-logging="false" \
--pass-authorization-header="true" \
&amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;N.B. If the additional audiences are not added, then the 1st proxy will strip all auth info (headers, etc.) from the request before passing it to the 2nd proxy. This doesn't make the 1st proxy treat the request as if it's authenticated; it just allows the request to make it to the 2nd proxy without being stripped. It took an embarrassing amount of time for me to figure this out, don't be like me :).&lt;/p&gt;

&lt;p&gt;This solution worked perfectly for my use case, and I hope it helps with yours as well. If there's anything missing here, or if you have any feedback/questions, feel free to reach out to &lt;a href="mailto:adam.abdelaziz@withcoherence.com"&gt;adam.abdelaziz@withcoherence.com&lt;/a&gt;. Happy coding!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>webdev</category>
      <category>programming</category>
      <category>security</category>
    </item>
    <item>
      <title>5 AWS/GCP Terraform Gotchas</title>
      <dc:creator>Adam Abdelaziz</dc:creator>
      <pubDate>Wed, 07 Feb 2024 19:57:29 +0000</pubDate>
      <link>https://dev.to/adamabdelaziz/5-awsgcp-terraform-gotchas-2o42</link>
      <guid>https://dev.to/adamabdelaziz/5-awsgcp-terraform-gotchas-2o42</guid>
      <description>&lt;p&gt;Programming is full of ups and downs. While the victories always feel great, there are also lots of tricky little things that end up sapping your time and energy. I feel a sense of responsibility to share those experiences in the hopes that it helps even just one fellow programmer.&lt;/p&gt;

&lt;p&gt;Here's a list of things that make me go 🤦‍♂️&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Lock your provider versions!
&lt;/h2&gt;

&lt;p&gt;I know this might seem like an obvious one, but I'll reluctantly admit that I've been bitten by this on multiple occasions. The biggest lesson I've learned is to be fairly aggressive with version locking. I initially thought that preventing major version upgrades would be enough, but that may not always be the case.&lt;/p&gt;

&lt;p&gt;The biggest issue here is that you cannot always downgrade a Terraform provider, since it's possible that your Terraform state is no longer compatible with the previous version (this should only be possible w/ major version upgrades, but... 🤦‍♂️). I'd recommend going as far as locking the minor version as well (the '~&amp;gt;' notation allows only the rightmost version component to increment).&lt;/p&gt;

&lt;p&gt;e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.23.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. AWS Dynamic listener rules
&lt;/h2&gt;

&lt;p&gt;If you've ever tried to implement dynamic listener rules for an AWS application load balancer, you may have run into an error like &lt;em&gt;"Error creating LB Listener Rule: PriorityInUse: Priority X is currently in use"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This would typically happen when reordering rules, e.g. going from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ruleA(p0), ruleB(p1), ruleC(p2)]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ruleA(p0), ruleAA(p1), ruleB(p2), ruleC(p3)]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I've seen all kinds of "clever" solutions involving somehow tracking priorities and incrementing them to avoid the issue, but they're usually pretty complicated and error prone.&lt;/p&gt;

&lt;p&gt;The easiest way to handle this is to name your listener rule resources based on their priority, e.g. &lt;code&gt;resource "aws_lb_listener_rule" "rule_p0" {&lt;/code&gt;. This way the rules are updated in place (e.g. in the example above, p1 goes from ruleB =&amp;gt; ruleAA), and the priorities never change, so there's no priority conflict.&lt;/p&gt;
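
&lt;p&gt;As an illustration (the listener and target group names are hypothetical), the priority-named rules would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Not a complete example; the resource name tracks the priority, so a
# reorder is just an in-place update of the rule's action/condition
resource "aws_lb_listener_rule" "rule_p1" {
  listener_arn = aws_lb_listener.front_end.arn
  priority     = 1

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ruleA.arn
  }

  condition {
    path_pattern {
      values = ["/a/*"]
    }
  }
}

resource "aws_lb_listener_rule" "rule_p2" {
  listener_arn = aws_lb_listener.front_end.arn
  priority     = 2

  # After inserting ruleAA, this slot's target simply changes
  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ruleAA.arn
  }

  condition {
    path_pattern {
      values = ["/aa/*"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;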

&lt;p&gt;N.B. It's probably a good idea to remove any lifecycle arguments like &lt;code&gt;create_before_destroy&lt;/code&gt; from these resources as well, since creating the new rule before destroying the old one would be impossible without changing the priority.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. AWS ECS speedy healthchecks for non-prod envs
&lt;/h2&gt;

&lt;p&gt;This one is less of a gotcha and more of a quality of life tip. ECS deployments (with default healthcheck settings) typically take a long time, and if you're deploying a preview or staging environment it can be very time consuming. This is especially painful if you're deploying many times a day.&lt;/p&gt;

&lt;p&gt;You can make some minor changes to your health check configuration and enjoy faster deploys, without completely forfeiting stability, while still allowing for slow startups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_alb_target_group" "example_target_group" {
    name = "example-target-group"
    port = 80
    protocol = "HTTP"
    vpc_id = "${aws_vpc.my_vpc.id}"
    target_type = "ip"
    health_check {
        path = "/health"
        matcher = "200-399"
        interval = 10
        healthy_threshold = 2
        unhealthy_threshold = 9
    }
    lifecycle {
        create_before_destroy = true
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the above configuration, health checks can pass in as little as 20 seconds (2 checks x 10-second interval). If your application has a tendency to occasionally start slowly, you'd still get up to 90 seconds before health checks would be considered failed.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Reducing GCP URL map size
&lt;/h2&gt;

&lt;p&gt;This one is probably not super common, but you may have run into this if you have a lot of microservices or many applications/environments under the same domain:&lt;br&gt;
&lt;code&gt;Error creating UrlMap: googleapi: Error 409: 'URL_MAP' error Resource error code: 'SIZE_LIMIT_EXCEEDED' Resource error message: 'Size of the url map exceeds the maximum permitted size'.&lt;/code&gt;&lt;br&gt;
Using a 2nd url map might be an option, but I'm not sure it's possible without using a 2nd domain, and if there is a way, it's beyond me. I did find that I had a lot of rules that were redundant, though, and it wasn't as obvious as I'd have thought.&lt;/p&gt;

&lt;p&gt;Here's an example url map configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This is not a complete example, its just meant to show the path matchers/rules
resource "google_compute_url_map" "example-map" {
  name        = "exampleapp-urlmap"

  default_service = google_compute_backend_service.example_service.id

  host_rule {
    hosts        = ["public-assets.my-app-domain.com"]
    path_matcher = "public-assets"
  }

  path_matcher {
    name            = "public-assets"
    default_service = google_compute_backend_bucket.example_bucket.id

    path_rule {
      paths   = ["/*"]
      service = google_compute_backend_bucket.example_bucket.id
    }
  }

  path_matcher {
    name            = "example-backend-service"
    default_service = google_compute_backend_service.ex_backend_svc.id

    path_rule {
      paths   = ["/api/", "/api/*"]
      service = google_compute_backend_service.ex_backend_svc.id
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, the &lt;code&gt;path_rule&lt;/code&gt; for the "public-assets" &lt;code&gt;path_matcher&lt;/code&gt; is unnecessary and creates a duplicate rule. When you set a &lt;code&gt;default_service&lt;/code&gt; for the path_matcher, the result is the creation of a rule with &lt;code&gt;paths = ["/*"]&lt;/code&gt;. So that path matcher could have just been:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path_matcher {
    name            = "public-assets"
    default_service = google_compute_backend_bucket.example_bucket.id
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next is the &lt;code&gt;path_rule&lt;/code&gt; for "example-backend-service". That rule ultimately generates 2 path rules in GCP. This is unnecessary because "*" matches 0 or more characters. So one of the rules is redundant and the path matcher would accomplish the same if it was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path_matcher {
    name            = "example-backend-service"
    default_service = google_compute_backend_service.ex_backend_svc.id

    path_rule {
      paths   = ["/api/*"]
      service = google_compute_backend_service.ex_backend_svc.id
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. GKE NEGs (Network endpoint groups)
&lt;/h2&gt;

&lt;p&gt;To help handle routing with GKE services, Google has this awesome way of setting up &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg"&gt;standalone zonal NEGs&lt;/a&gt; along with Kubernetes services.&lt;/p&gt;

&lt;p&gt;Basically, you set some annotations on your kubernetes service, and GCP takes care of setting up and managing the NEGs for you.&lt;/p&gt;

&lt;p&gt;e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_service" "myexampleservice" {
    metadata {
        name = "myexampleservice-svc"
        namespace = "myexampleservice-deploy"
        annotations = {
            "cloud.google.com/neg": jsonencode({
                exposed_ports = {
                    80 = {
                        name = "myexampleservice-neg"
                    }
                }
            })
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, GCP automatically adds additional annotations to the k8s service that contain the names of the NEGs that are created as well as the zones that they were created in.&lt;/p&gt;
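
&lt;p&gt;The &lt;code&gt;cloud.google.com/neg-status&lt;/code&gt; annotation that GCP adds holds JSON along these lines (the zone and NEG names here are illustrative); this is the structure the &lt;code&gt;jsondecode&lt;/code&gt; calls below read:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "network_endpoint_groups": {
    "80": "myexampleservice-neg"
  },
  "zones": ["us-central1-a", "us-central1-b", "us-central1-c"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;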

&lt;p&gt;The issue here is that we're no longer managing the NEGs via terraform and there are resources that we are managing that depend on the NEGs. (In my case it was a "google_compute_backend_service".)&lt;/p&gt;

&lt;p&gt;Accessing the annotations via the "kubernetes_service" resource doesn't work because we're not able to see the additional annotations that were added by GCP.&lt;/p&gt;

&lt;p&gt;The solution is to add a data resource for the kubernetes_service we just created. Surprisingly, this allows us to see the added annotations:&lt;/p&gt;

&lt;p&gt;e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "kubernetes_service" "myexampleservice" {
    metadata {
        name = "myexampleservice-svc"
        namespace = "myexampleservice-deploy"
    }

    depends_on = [kubernetes_service.myexampleservice]
}

data "google_compute_network_endpoint_group" "myexampleservice-neg" {
    count = data.kubernetes_service.myexampleservice.metadata != null &amp;amp;&amp;amp; data.kubernetes_service.myexampleservice.metadata[0].annotations != null ? length(jsondecode(data.kubernetes_service.myexampleservice.metadata[0].annotations["cloud.google.com/neg-status"])["zones"]) : 0
    name = "myexampleservice-neg"
    zone = jsondecode(data.kubernetes_service.myexampleservice.metadata[0].annotations["cloud.google.com/neg-status"])["zones"][count.index]

    depends_on = [
        kubernetes_service.myexampleservice,
        data.kubernetes_service.myexampleservice
    ]
}

resource "google_compute_backend_service" "myexampleservice" {
    name      = "myexampleservice"
    protocol  = "HTTP"
    port_name = "http"
    timeout_sec = 30
    log_config {
      enable = true
      sample_rate = "1.0"
    }

    dynamic "backend" {
      for_each = data.kubernetes_service.myexampleservice.metadata[0].annotations != null ? range(length(jsondecode(data.kubernetes_service.myexampleservice.metadata[0].annotations["cloud.google.com/neg-status"])["zones"])) : []
      content {
        group = data.google_compute_network_endpoint_group.myexampleservice-neg[backend.value].id
        balancing_mode = "RATE"
        max_rate = 1000
      }
    }
    health_checks = [google_compute_health_check.myexampleservice.id]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Recap
&lt;/h2&gt;

&lt;p&gt;Thanks for reading! I hope this helps save you some wasted effort, coffee, and aspirin. And please excuse my shameless plug for what we're building over at &lt;a href="https://www.withcoherence.com/"&gt;Coherence&lt;/a&gt;. We’re doing this kind of work across the development lifecycle so you can focus on what really matters, your actual application/business logic.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>gcp</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
