<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SKisContent</title>
    <description>The latest articles on DEV Community by SKisContent (@skiscontent_46).</description>
    <link>https://dev.to/skiscontent_46</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F435922%2F2f320f6f-bb5a-48a9-a8bf-97db11f9a17e.jpeg</url>
      <title>DEV Community: SKisContent</title>
      <link>https://dev.to/skiscontent_46</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/skiscontent_46"/>
    <language>en</language>
    <item>
      <title>AWS Private Zones To The Max</title>
      <dc:creator>SKisContent</dc:creator>
      <pubDate>Mon, 25 Nov 2024 06:02:13 +0000</pubDate>
      <link>https://dev.to/skiscontent_46/aws-private-zone-to-the-max-1hf1</link>
      <guid>https://dev.to/skiscontent_46/aws-private-zone-to-the-max-1hf1</guid>
      <description>&lt;p&gt;Most of the time people think of DNS as a way for internet users to find your website. However, it is also useful for your applications and services to find each other. Let's say that within your system you have a separate server that provides an AI LLM service via an API. To invoke the service, the code in your main server would need to make that API call using a URL, such as &lt;a href="https://ollama.example.com/api/v1/query" rel="noopener noreferrer"&gt;https://ollama.example.com/api/v1/query&lt;/a&gt;. However, you do not want to expose this URL to the entire web, both to limit access and to limit usage. How would you do it?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm1qbrj6kwpl4l6h2ci4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm1qbrj6kwpl4l6h2ci4.jpg" alt="diagram of users connecting to a web site, a load balancer, a server and an internal server" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One way would be to add an entry to the web server's /etc/hosts file. As long as the LLM server's IP address doesn't change, this would work, and it is a reasonable option if you only need a handful of such entries. However, this approach won't scale well, and it is awkward to manage through infrastructure as code. A slight modification of this approach would be to maintain a lookup table of host names and IP addresses, a roll-your-own DNS. This starts to get inelegant very fast.&lt;/p&gt;
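&lt;p&gt;For reference, the hosts-file approach is just a one-line entry on the web server (the hostname and IP below are hypothetical placeholders):&lt;/p&gt;

```shell
# /etc/hosts on the web server -- maps the internal LLM host to its private IP
10.0.2.15    ollama.example.com
```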

&lt;p&gt;Fortunately, AWS Route 53 lets you create a private DNS hosted zone that can be attached to your VPC. A hosted zone is a container for DNS records. A public hosted zone contains records that are visible to the entire internet. The records in a private hosted zone are only visible to the VPC to which the zone is attached. Thus, the above problem of making the LLM server available by hostname can be solved using a private hosted zone and adding a simple type A record for the server. This can be done with straightforward infrastructure code using any platform, whether Terraform, Ansible, CloudFormation, Pulumi, or even the AWS CLI and bash scripts, without needing to manipulate config files within any servers.&lt;/p&gt;
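&lt;p&gt;As a rough sketch of that setup with the AWS CLI (the zone name, VPC ID, zone ID, and IP address below are placeholders, not values from the demo), two calls are enough: one to create the private zone attached to a VPC, and one to add the A record:&lt;/p&gt;

```shell
# Create a private hosted zone attached to a VPC (IDs are placeholders)
aws route53 create-hosted-zone \
  --name example.com \
  --caller-reference "$(date +%s)" \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
  --hosted-zone-config Comment="internal zone",PrivateZone=true

# Add a type A record for the internal LLM server
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "ollama.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "10.0.2.15"}]
      }
    }]
  }'
```

The same two steps translate directly into any of the IaC platforms mentioned above.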

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzjrel5v9how64gu9vzc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzjrel5v9how64gu9vzc.jpg" alt="diagram of users connecting to a web site, a load balancer, a server and an internal server, which are in a rectangle associated with a route 53 private zone" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  One is Good, Two is Better
&lt;/h2&gt;

&lt;p&gt;The real superpower of private hosted zones is that you can create many of them and attach them to separate VPCs. This is a huge benefit when building systems that need to be scalable, fault tolerant, distributed or available for disaster recovery. Each private zone lets you create a virtual DNS within your virtual network, and the workloads within these networks do not need to know that there are similar workloads in other networks. This isolation means that you could create one or a dozen clones with almost no additional effort.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu84jzupfdficxw4mw5tv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu84jzupfdficxw4mw5tv.jpg" alt="diagram of users connecting to one of three identical website subcomponents containing a load balancer, a server and an internal server, which are in a rectangle associated with a route 53 private zone" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a big advantage over using a single private zone. In that scenario, you would need to distinguish the DNS records for each new copy of a service, such as by appending a sequence number or a random string.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Global Solution
&lt;/h2&gt;

&lt;p&gt;The VPCs with their own dedicated private zone can be distributed within a single AWS region, across regions or even across accounts for security isolation. The only additional configuration that is required whenever a new zone is added is to add a record to the public DNS for the additional entry point server.&lt;/p&gt;

&lt;p&gt;Another use case would be a globally distributed system with location-based DNS routing to a regional endpoint. Additional regions can be added as usage grows or other requirements such as data protection laws require. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0mwcjmlv7zwn7nxtttc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0mwcjmlv7zwn7nxtttc.jpg" alt="picture of a world map with six identical website systems containing a load balancer, a server and an internal server, which are in a rectangle associated with a route 53 private zone" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to test this out for yourself, I have created a simple demo that can be scaled to two or more regions. The code is on GitHub here. I applied it to create two regions. As the following images show, the same DNS entry in each region resolves to a different host IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76ryyom2rtbdxufazrxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76ryyom2rtbdxufazrxe.png" alt="screenshot of a DNS lookup command " value="" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z1jytfjvbpyppavxbi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z1jytfjvbpyppavxbi4.png" alt="screenshot of a DNS lookup command " value="" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Route 53 private zones are served through the same default VPC resolver, but the response to a DNS query depends on which zone is attached to the querying VPC. If a hostname is not defined within the private zone, the query falls through to normal public DNS resolution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;AWS Route 53 private hosted zones round out the "virtual" aspect of virtual private clouds. They permit the isolation of DNS lookups within a VPC to values specific to that VPC. In this manner, each VPC can be a self-contained application system that can be cloned and deployed multiple times, with each deployment independent of each other.&lt;/p&gt;

</description>
      <category>privatezone</category>
      <category>route53</category>
      <category>vpc</category>
      <category>privatedns</category>
    </item>
    <item>
      <title>Today is the day you should purchase an AWS Savings Plan</title>
      <dc:creator>SKisContent</dc:creator>
      <pubDate>Tue, 27 Feb 2024 15:28:26 +0000</pubDate>
      <link>https://dev.to/skiscontent_46/today-is-the-day-you-should-purchase-an-aws-savings-plan-2nd2</link>
      <guid>https://dev.to/skiscontent_46/today-is-the-day-you-should-purchase-an-aws-savings-plan-2nd2</guid>
      <description>&lt;p&gt;The cloud computing market has transformed the way organizations procure computer servers. Amazon Web Services (AWS), in particular, offers convenience and adaptability for any organization that needs a server to host a web application or perform some computing workload. In less than five minutes, and for a dollar a day, you can provision a server with 2 CPUs and 4 GB of memory. That is enough to handle light to moderate web traffic. Moreover, these servers, or virtual instances as they are called, come in a variety of configurations of CPU type and count, memory and networking capacity, to suit different customer needs.&lt;/p&gt;

&lt;p&gt;However, there is a flip side to this ease of provisioning. Over time, or even in five minutes, a customer can launch expensive server configurations, or accumulate a large fleet of instances, and single dollars can become tens, hundreds or thousands of dollars per day. The monthly bill can significantly impact budgets or a business’s bottom line.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Has a Plan
&lt;/h2&gt;

&lt;p&gt;There is a way to reduce the server cost with AWS. To do so, a customer must commit to pay for a minimum amount of compute capacity for a fixed term of one or three years. The cost reduction comes in the form of a discounted price, which the customer can pay either in full at the time of entering into the agreement, or monthly, or in a combination of upfront and monthly payments. AWS calls this a Savings Plan, and it can lead to savings of around 28% to 72% on the compute cost, depending on the term and timing of the payments. To be clear, a savings plan only affects compute costs, which AWS defines as EC2, Fargate and Lambda. It does not reduce other managed service costs, such as for storage, data transfer, or databases.&lt;/p&gt;

&lt;p&gt;While this article discusses savings plans, AWS also offers another discount program called Reserved Instances (RI). The purchase terms of an RI are similar to a savings plan, but an RI purchase is limited to a specific instance type and region. The cost analysis is generally the same, but there may be specific situations where RIs make more sense.&lt;/p&gt;

&lt;p&gt;Knowing about savings plans, this is the key question: if you use AWS compute instances, should you purchase a savings plan, and if so, when is the correct time to buy? The short answer, if you want to stop reading, is “Today!”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Commitment Phobia
&lt;/h2&gt;

&lt;p&gt;One common reason customers hesitate to commit to a savings plan is that they know, or suspect, that they are over-provisioned. They reason that if they commit to their current spending level for a year, and at some later time they reduce their usage, they will pay for unnecessary capacity. However, this thinking is misguided. The best time to purchase a savings plan is right now (with a few exceptions). By not purchasing a savings plan today, the customer is leaving money on the table for Amazon to pick up. By purchasing one, the customer may come out ahead even if usage goes down in the future. &lt;/p&gt;

&lt;p&gt;Before diving into the math to understand why today is the right day, let’s consider what may cause compute usage to go down in the future. There can be two reasons. First, the business is facing a downturn, losing popularity or losing customers, and the compute load is decreasing. The other reason is that the business is planning to reduce over-provisioned capacity or to re-design its systems to use less compute-intensive processes.&lt;/p&gt;

&lt;p&gt;Considering the first scenario, if a business is expecting a downturn, it would be better to reduce costs now rather than later, and a savings plan would do just that. The business can park the savings for a future date when it may not have the cash flow to cover the committed cost. Depending on the savings rate, it may break even on the cost before the downturn happens.&lt;/p&gt;

&lt;p&gt;On the other hand, as an AWS customer, if you plan to reduce usage, and it is possible to do so today, just do it! Then you can purchase a savings plan for the usage that remains. However, most of the time, reducing usage is an objective that may be desired but is difficult to achieve. If it were easy, you would have done it already. &lt;/p&gt;

&lt;p&gt;The challenges to reducing costs can be numerous. If the business is focused on growth, like a startup, it does not have the time or resources for changes that do not add new features. In mature organizations, existing systems may have grown organically into a complex architecture that will require time to restructure. If the systems serve business customers or are mission critical, changing them will require planning, approval, testing and careful rollout. Those activities take time. Meanwhile, the business is paying AWS full price for its compute usage.&lt;/p&gt;

&lt;p&gt;As the analysis below shows, in most realistic cases, purchasing a savings plan today rather than waiting is the more cost-effective option.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pluses and Minuses
&lt;/h2&gt;

&lt;p&gt;In the remainder of the article, I will consider and compare the cost results of purchasing a savings plan today with the results of purchasing a savings plan after some months spent on reducing compute usage. The outcome that I will use for the comparison is the total cost of operation (TCO) of AWS compute for the next 12 months. To make sense of the analysis in actual dollars, let’s suppose a monthly cost of $83,333, which adds up to almost $1 million over 12 months ($83,333 x 12 = $999,996). I am going to assume a savings rate of 28% for the savings plan, which is typical minimum savings for most instance types with a 1-year and no up-front commitment in the us-east-1 region. This purchase would result in a 72% TCO compared to doing nothing (not purchasing a savings plan), which would result in a 100% TCO at the end of 12 months. In dollars, a discount of 28% means 12 monthly payments of $60,000 ($83,333 * (1-0.28) = $60,000) for a total of $720,000 ($60,000 * 12 = $720,000), as compared to $1,000,000 for the full price.&lt;/p&gt;
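&lt;p&gt;The baseline arithmetic can be sketched in a few lines of Python, using the article's round numbers:&lt;/p&gt;

```python
# Baseline assumptions from the article: ~$1M annual compute spend and a
# 28% savings rate (1-year, no-upfront savings plan).
ANNUAL = 1_000_000
MONTHLY = ANNUAL / 12              # about $83,333 per month
RATE = 0.28

tco_no_plan = 12 * MONTHLY                   # keep paying full price
tco_plan_today = 12 * MONTHLY * (1 - RATE)   # buy a 1-year plan today

print(round(tco_no_plan))      # 1000000
print(round(tco_plan_today))   # 720000
```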

&lt;p&gt;With these assumptions in mind, would it be worth waiting one month to purchase a savings plan, and using that month to reduce compute usage by 5%? The TCO 12 months from now under this plan would be $83,333 + 11 * 0.95 * 0.72 * $83,333 = $710,333. So the answer is yes: by the numbers, it would be worth waiting one month. In fact, waiting one month is more cost-efficient for any usage reduction greater than about 3.5%.&lt;/p&gt;
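&lt;p&gt;A small Python sketch of the wait-one-month scenario, including where the 3.5% break-even reduction comes from:&lt;/p&gt;

```python
# One month at full price, then 11 discounted months at the reduced
# usage level (28% savings rate assumed, as above).
MONTHLY = 1_000_000 / 12
RATE = 0.28

def tco_wait_one_month(reduction):
    return MONTHLY + 11 * (1 - reduction) * (1 - RATE) * MONTHLY

tco_today = 12 * (1 - RATE) * MONTHLY       # $720,000
print(round(tco_wait_one_month(0.05)))      # 710333 -- waiting wins

# Break-even: the reduction at which waiting one month costs the same
# as buying today. Solve MONTHLY + 11*(1-r)*0.72*MONTHLY = tco_today.
r = 1 - (tco_today - MONTHLY) / (11 * (1 - RATE) * MONTHLY)
print(round(100 * r, 1))                    # 3.5 (percent)
```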

&lt;p&gt;The difference between the TCO in the two scenarios (purchasing a savings plan today vs. after 1 month with a 5% reduction in usage) is under 1%. In dollars, it is just under $10,000 out of what was originally a $1 million expense. If a 5% reduction in usage in one month requires little effort to achieve, and the probability of achieving it is high, it would make economic sense to wait one month and purchase the savings plan after the reduction. If the effort would divert resources from other initiatives, or the chances of achieving the reduction are low, it is wiser to lock in the savings now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Waiting for Go-Low
&lt;/h2&gt;

&lt;p&gt;The more usage a customer thinks they can reduce, the longer they can afford to wait to purchase a savings plan. The chart below demonstrates this. If they can reduce usage by 10%, they can make the purchase after two months, and still end up with a lower TCO twelve months from today than if they purchase a savings plan today. For a reduction of 15% in compute usage, they can wait three months.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzcowq5flypaijfy7jf2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzcowq5flypaijfy7jf2.jpg" alt="Image description" width="800" height="499"&gt;&lt;/a&gt;&lt;br&gt;
The truth about software and IT projects is that estimates of effort are often inaccurate, and complex projects usually take longer than estimated. If a project to reduce usage by 25% is expected to take 4 months, there is a chance that it may take 5 months. If it does take longer, waiting to purchase a savings plan may turn out to be a bad decision because the TCO 12 months from today would be 73% of the full cost, which is higher than the TCO of purchasing a savings plan today. As before, 1% is not much compared to the overall spend, but there is also the risk that the reduction falls short of 25%. If usage is reduced by 20% instead, then waiting 5 months will result in a TCO of 75%. It is also true that the reduction in usage would have benefits further down the road as well, but you need to consider the opportunity cost of taking on this work now.&lt;/p&gt;
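&lt;p&gt;The chart values above can be reproduced with a small TCO function (a sketch using the 28% savings rate assumed earlier: full price while waiting, then a plan purchased at the reduced usage level):&lt;/p&gt;

```python
# TCO over the next 12 months as a fraction of the full (undiscounted)
# cost, after waiting m months and then buying a plan for the reduced
# usage level.
RATE = 0.28

def tco(months_waited, reduction):
    m = months_waited
    return (m + (12 - m) * (1 - reduction) * (1 - RATE)) / 12

print(round(tco(0, 0.00), 2))  # 0.72 buy today
print(round(tco(5, 0.25), 2))  # 0.73 wait 5 months for a 25% cut
print(round(tco(5, 0.20), 2))  # 0.75 wait 5 months, achieve only 20%
```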

&lt;p&gt;One strategy to mitigate the effect of delay is to purchase a savings plan for the targeted level of compute usage. This is illustrated in the last column of the chart above. If compute usage can be reduced by 50%, you can wait 6 months to purchase a savings plan and get a TCO of 68%. However, you could also purchase a savings plan today for 50% of your current compute and evaluate the situation in 6 months. If you did indeed cut compute usage by 50%, then there is no need for further action, and the TCO 12 months from now would be 61%. If you did not reduce usage by 50%, the TCO would be higher than 61%; if you did not reduce usage at all, it would be 79%, assuming you then purchase a second savings plan for the uncovered remainder. Making a purchase today for 50% of your current usage would still be better than not purchasing a savings plan today.&lt;/p&gt;
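&lt;p&gt;A sketch of the partial-coverage strategy in Python. Covered usage pays the discounted rate and anything above the covered level pays full price; the 79% case assumes a second savings plan is bought at month 6 for the remainder:&lt;/p&gt;

```python
# Buy a plan today covering 50% of current usage; usage may drop after
# month 6. Uses the article's 28% savings rate and month-6 checkpoint.
DISC = 1 - 0.28  # discounted rate, 0.72

def month_cost(usage, covered_level=0.5):
    covered = min(covered_level, usage)
    return covered * DISC + (usage - covered)  # rest at full price

half_then_cut = (6 * month_cost(1.0) + 6 * month_cost(0.5)) / 12
second_plan_at_6mo = (6 * month_cost(1.0) + 6 * DISC) / 12
half_no_cut = (6 * month_cost(1.0) + 6 * month_cost(1.0)) / 12

print(round(half_then_cut, 2))       # 0.61 reduction achieved
print(round(second_plan_at_6mo, 2))  # 0.79 no cut, second plan bought
print(round(half_no_cut, 2))         # 0.86 no cut, no second plan
```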

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article I showed why purchasing an AWS Savings Plan today is the best decision. The immediate savings make it worthwhile even if you think that you are currently over-provisioned. If you are sure that you can reduce your compute usage, it still makes sense to purchase a savings plan today for the usage that you expect will remain after you make reductions.&lt;/p&gt;

&lt;p&gt;Get in touch if you have a different view on this analysis, or you think this advice does not apply to your situation. I am eager to learn and refine my understanding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>cloudcost</category>
      <category>awscosts</category>
    </item>
    <item>
      <title>Never Use Credentials In A CI/CD Pipeline Again</title>
      <dc:creator>SKisContent</dc:creator>
      <pubDate>Tue, 30 May 2023 13:53:09 +0000</pubDate>
      <link>https://dev.to/skiscontent_46/never-use-credentials-in-a-cicd-pipeline-again-2nak</link>
      <guid>https://dev.to/skiscontent_46/never-use-credentials-in-a-cicd-pipeline-again-2nak</guid>
      <description>&lt;p&gt;As someone who builds and maintains cloud infrastructure, I have always been leery from a security perspective of giving 3rd party services, such as CI/CD platforms, access to the resources. All the service vendors claim to take stringent precautions and implement foolproof processes, but still, vulnerabilities &lt;a href="https://blog.lastpass.com/2023/03/security-incident-update-recommended-actions/"&gt;get&lt;/a&gt; exploited and errors &lt;a href="https://www.theregister.com/2023/03/24/github_changes_its_ssh_host/"&gt;happen&lt;/a&gt;. Therefore, my preference is to use tools that can be self-hosted. However, I may not always have a choice if the organization is already committed to an external partner, such as Bitbucket Pipelines or GitHub Actions. In that case, in order to apply some Terraform IaC or deploy to an autoscaling group, there is no choice but to furnish the external tool with an API secret key, right? Wrong! With the proliferation of OpenID Connect, it is possible to give 3rd party platforms token-based access that does not require secret keys.&lt;/p&gt;

&lt;p&gt;The problem with a secret is that there is always a chance of it leaking. The risk increases the more it is shared, which happens as employees leave and new ones join. One of them may disclose it intentionally, or they may be the victim of phishing or a breach. Storing a secret in an external system introduces an entirely new set of potential leak vectors. Mitigating the risk means rotating the credentials periodically, a chore that adds little perceptible value.&lt;/p&gt;

&lt;p&gt;The typical way to authorize a third-party system works like this: We create a user or service account in our cloud provider and generate a username and password or API credentials. Then we save the credentials in the third party platform’s secrets store. Whenever we need the third party platform to perform an action in our cloud provider, like in a build pipeline, we refer to the secrets. The following is a simplified sequence diagram for this configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://skiscontent.com/wp-content/uploads/2023/05/credentials-sequence-diagram.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aH2zecs7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://skiscontent.com/wp-content/uploads/2023/05/credentials-sequence-diagram.png" alt="" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenID Connect authorization works differently. The system that is the source of identities is the identity provider (IdP for short), and the system that needs access to the cloud is the client (also known as the audience). In this situation, the IdP will be the CI/CD platform, such as GitHub, and the client will be the workflow running on that same platform. The cloud provider receives a request from the client and validates the identity with the IdP. To set this up, we first tell the cloud provider where to validate identities; in AWS, this means configuring an OIDC provider. In order for AWS to validate incoming tokens, it needs the TLS certificate fingerprint of the IdP. Next, the client must be authorized: the token it presents carries claims (such as the repository and branch), which the cloud provider checks before granting access. In AWS, this authorization is granted through an IAM role. Finally, the job or pipeline in the CI/CD platform must be configured to use this authorization with the specific cloud provider. The cloud provider does not necessarily need to know about the CI/CD platform, but the platform must know how to work with the cloud provider. In addition to GitHub, OIDC authorization is supported by &lt;a href="https://docs.gitlab.com/ee/ci/cloud_services/"&gt;GitLab&lt;/a&gt;, &lt;a href="https://circleci.com/docs/openid-connect-tokens/"&gt;Circle CI&lt;/a&gt;, and &lt;a href="https://support.atlassian.com/bitbucket-cloud/docs/deploy-on-aws-using-bitbucket-pipelines-openid-connect/"&gt;Bitbucket&lt;/a&gt;. Beyond AWS, other cloud providers that support OIDC include Azure and GCP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://skiscontent.com/wp-content/uploads/2023/05/oidc-sequence-diagram.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V-YMhnMG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://skiscontent.com/wp-content/uploads/2023/05/oidc-sequence-diagram.png" alt="" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s work through an example of connecting GitHub Actions to AWS. In the rest of this post I will demonstrate the steps to setting up OpenID Connect authentication:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Configure an identity provider&lt;/li&gt;
&lt;li&gt; Create a role&lt;/li&gt;
&lt;li&gt; Configure the CI/CD pipeline&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Set up the Identity Provider
&lt;/h3&gt;

&lt;p&gt;In order for AWS to verify the legitimacy of an incoming request, it makes an HTTPS request to a URL that can vouch for the client. AWS verifies that URL by checking its TLS certificate. The full command to get the fingerprint (or thumbprint, as AWS calls it) is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export THUMBPRINT=$(openssl s_client -connect token.actions.githubusercontent.com:443 -servername token.actions.githubusercontent.com -showcerts 2&amp;gt;/dev/null &amp;lt;/dev/null |
 sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/{/BEGIN/{h;d;};H;};${x;p;}' |
 openssl x509 -inform pem -outform der |
 openssl dgst -sha1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s break this down. The first command in the sequence, &lt;strong&gt;openssl&lt;/strong&gt;, connects to the remote server as a client and fetches information about the SSL/TLS certificate.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;s_client&lt;/strong&gt; Tells openssl to act as a SSL/TLS client (similar to a web browser, but only for the SSL connection part)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;-connect&lt;/strong&gt; The remote host and port to which to connect&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;-servername&lt;/strong&gt; Not necessary if the connect option was a DNS name, otherwise needed for TLS server name indication (SNI)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;-showcerts&lt;/strong&gt; display all the certificates in the chain&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;2&amp;gt;/dev/null&lt;/strong&gt; send stderr output to /dev/null&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&amp;lt;/dev/null&lt;/strong&gt; send null as input to the command&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result of the command is as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONNECTED(00000005)
write W BLOCK
---
Certificate chain
 0 s:/C=US/ST=California/L=San Francisco/O=GitHub, Inc./CN=*.actions.githubusercontent.com
   i:/C=US/O=DigiCert Inc/CN=DigiCert TLS RSA SHA256 2020 CA1
-----BEGIN CERTIFICATE-----
MIIG8jCCBdqgAwIBAgIQCn5zvdee2Vg6XXlzFLM1XDANBgkqhkiG9w0BAQsFADBP
...
evZ35QEWOlwhphLyHhUL6QFCuAe0wL2arESMXnxgaYE7Ka+SexxEiT5ZmdyrcFwg
BL7FKjOM
-----END CERTIFICATE-----
 1 s:/C=US/O=DigiCert Inc/CN=DigiCert TLS RSA SHA256 2020 CA1
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA
-----BEGIN CERTIFICATE-----
MIIE6jCCA9KgAwIBAgIQCjUI1VwpKwF9+K1lwA/35DANBgkqhkiG9w0BAQsFADBh
...
as6xuwAwapu3r9rxxZf+ingkquqTgLozZXq8oXfpf2kUCwA/d5KxTVtzhwoT0JzI
8ks5T1KESaZMkE4f97Q=
-----END CERTIFICATE-----
---
Server certificate
subject=/C=US/ST=California/L=San Francisco/O=GitHub, Inc./CN=*.actions.githubusercontent.com
issuer=/C=US/O=DigiCert Inc/CN=DigiCert TLS RSA SHA256 2020 CA1
---
No client certificate CA names sent
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3567 bytes and written 489 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 943700002B9E436955AF42DC0924EB0F9DC5B1E6617F321CAC1BA2B568ED2890
    Session-ID-ctx: 
    Master-Key: ADAC9291D92242641B15A1E9E016519FD0D14A3B066763852360522B01F1A4EDFDB70C7343542B5CE24E3A5EE39FAE98
    Start Time: 1684635042
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;AWS &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc_verify-thumbprint.html"&gt;uses&lt;/a&gt; the hash “thumbprint” of the CA that the identity provider uses to sign the HTTPS responses. If AWS pinned the hash of the signing certificate itself, then we would need to keep updating it every time the certificate changed/expired. The CA should change less often. AWS uses the last certificate in the chain, which the next command in the sequence finds.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/{/BEGIN/{h;d;};H;};${x;p;}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;sed&lt;/strong&gt; command does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;-ne&lt;/strong&gt; The -n option tells sed not to print the pattern space by default; the -e option introduces the sed expression that follows&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/&lt;/strong&gt; matches the range of lines starting with -BEGIN CERTIFICATE- and ending with -END CERTIFICATE-&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;{/BEGIN/{h;d;};H;};${x;p;}&lt;/strong&gt; 

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;{ … }&lt;/strong&gt; group commands together&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;/BEGIN/&lt;/strong&gt; match a line with BEGIN in the pattern space&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;h&lt;/strong&gt; replace the contents of the hold space with the contents of the pattern space&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;d&lt;/strong&gt; delete the contents of the pattern space&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;H&lt;/strong&gt; append a newline and contents of the pattern space to the hold space&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;${x;p;}&lt;/strong&gt; at the end of the file, x swaps the hold space and the pattern space and p prints the pattern space&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output of this command in the sequence is the last certificate in the openssl information (abbreviated for conciseness):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-----BEGIN CERTIFICATE-----
MIIE6jCCA9KgAwIBAgIQCjUI1VwpKwF9+K1lwA/35DANBgkqhkiG9w0BAQsFADBh
...
8ks5T1KESaZMkE4f97Q=
-----END CERTIFICATE-----
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The third command in the sequence again uses &lt;strong&gt;openssl&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl x509 -inform pem -outform der
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command converts the certificate from PEM format (&lt;code&gt;-inform pem&lt;/code&gt;) to binary DER format (&lt;code&gt;-outform der&lt;/code&gt;). The binary output is mostly unprintable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bG74Vw6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://skiscontent.com/wp-content/uploads/2023/05/github-ca-x509-binary.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bG74Vw6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://skiscontent.com/wp-content/uploads/2023/05/github-ca-x509-binary.png" alt="" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl dgst -sha1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This final command computes the SHA-1 digest of the input. The result is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;6938fd4d98bab03faadb97b34396831e3780aea1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
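&lt;p&gt;Putting the pieces together, the pipeline can be exercised end to end against a throwaway self-signed certificate rather than a live endpoint (the file names below are arbitrary). The thumbprint from the pipeline should agree with openssl's built-in fingerprint calculation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate a throwaway self-signed certificate as a stand-in for the CA cert
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example.test' \
  -keyout /tmp/tp-key.pem -out /tmp/tp-cert.pem -days 1 2&gt;/dev/null

# Thumbprint via the steps described above: PEM to DER, then SHA-1
T1=$(openssl x509 -in /tmp/tp-cert.pem -inform pem -outform der |
     openssl dgst -sha1 | sed 's/^.*= *//')

# Cross-check with openssl's own -fingerprint output
T2=$(openssl x509 -in /tmp/tp-cert.pem -noout -fingerprint -sha1 |
     sed 's/^.*=//' | tr -d ':' | tr 'A-F' 'a-f')

if [ "$T1" = "$T2" ]; then echo OK; fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The same pipeline, fed the last certificate from the identity provider's chain instead of this test certificate, yields the thumbprint shown above.&lt;/p&gt;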
&lt;h3&gt;
  
  
  Create a role
&lt;/h3&gt;

&lt;p&gt;Now that we have the thumbprint, we can create the identity provider in AWS.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-open-id-connect-provider \\  
\--url https://token.actions.githubusercontent.com \\  
\--thumbprint ${THUMBPRINT} \\  
\--tags Key=created-by,Value=sushil Key=environment,Value=development   

aws iam create-role \\  
\--role-name github-actions-role \\  
\--assume-role-policy-document \\  
'{  
  "Version": "2012-10-17",  
  "Statement": \[{  
    "Effect": "Allow",  
    "Principal": {  
      "Federated": "arn:aws:iam::ACCOUNT\_ID:oidc-provider/token.actions.githubusercontent.com"  
    },  
    "Action": "sts:AssumeRoleWithWebIdentity",  
    "Condition": {  
      "StringEquals": {  
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"  
      },  
      "StringLike": {  
        "token.actions.githubusercontent.com:sub": "repo:SKisContent/intuitive:\*"  
      }  
    }  
  }\]  
}' \\  
\--description "Role for github actions" \\  
\--tags Key=created-by,Value=sushil Key=environment,Value=development   

aws iam attach-role-policy \\  
\--role-name github-actions-role \\  
\--policy-arn arn:aws:iam::aws:policy/PowerUserAccess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first command configures an OIDC provider. We furnish the provider URL and the thumbprint. If we did this in the AWS console, the console would calculate the thumbprint for us.&lt;/p&gt;

&lt;p&gt;The second command creates a role. The assume-role-policy-document, also referred to as the trusted entity in the console, tells AWS that the principal using the role is a federated identity. We also specify conditions that the request must meet. The first is that the &lt;strong&gt;aud&lt;/strong&gt; value must be &lt;strong&gt;sts.amazonaws.com&lt;/strong&gt;. The second requires the &lt;strong&gt;sub&lt;/strong&gt; value to match the wildcard string &lt;strong&gt;repo:SKisContent/intuitive:*&lt;/strong&gt;. These values (called claims) are sent in a JWT that accompanies the inbound request. The JWT may include several other claims, which &lt;a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#understanding-the-oidc-token"&gt;depend&lt;/a&gt; on the platform. Breaking down the sub claim above: the GitHub organization must match &lt;strong&gt;SKisContent&lt;/strong&gt; and the repository must match &lt;strong&gt;intuitive&lt;/strong&gt;. We could also specify branches or tags, but in this case we allow them all with &lt;strong&gt;*&lt;/strong&gt; as a wildcard. We want to define these conditions as narrowly as possible while keeping the flexibility to allow multiple branches or even repositories. &lt;/p&gt;
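&lt;p&gt;As an aside, the payload of such a JWT is just base64url-encoded JSON, so the claims can be inspected directly. The sketch below round-trips a made-up payload; a real token's middle segment, obtained from GitHub's OIDC endpoint on the runner, would be decoded the same way:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Made-up sample claims; a real token's middle segment would be used instead
CLAIMS='{"aud":"sts.amazonaws.com","sub":"repo:SKisContent/intuitive:ref:refs/heads/main"}'
PAYLOAD=$(printf '%s' "$CLAIMS" | base64 | tr -d '\n=' | tr '+/' '-_')

# Decode: map the base64url alphabet back and restore the stripped padding
P=$(printf '%s' "$PAYLOAD" | tr '_-' '/+')
case $(( ${#P} % 4 )) in
  2) P="$P==" ;;
  3) P="$P=" ;;
esac
printf '%s\n' "$P" | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The decoded output is the original JSON, showing exactly the &lt;strong&gt;aud&lt;/strong&gt; and &lt;strong&gt;sub&lt;/strong&gt; values that the role's conditions are evaluated against.&lt;/p&gt;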

&lt;h3&gt;
  
  
  Configure the CI/CD pipeline
&lt;/h3&gt;

&lt;p&gt;Finally, we create the GitHub action. A simple &lt;em&gt;.github/workflows/main.yaml&lt;/em&gt; file in a repository for a Terraform project might look as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Plan on pull-request to main  
on:  
  pull\_request:  
    branches:  
      - main  
    types: \[opened, synchronize, reopened\]  

jobs:  
  plan:  
    name: Plan  
    runs-on: ubuntu-latest  
    permissions:  
      id-token: write   
      contents: read  

    steps:  
      - name: Check out code  
        uses: actions/checkout@v3  

      - name: Setup Terraform  
        uses: hashicorp/setup-terraform@v2  
        with:  
          terraform\_version: 1.4.5  

      - name: Configure AWS Credentials  
        uses: aws-actions/configure-aws-credentials@v2  
        with:  
          role-to-assume:arn:aws:iam::ACCOUNT\_ID:role/github-actions-role  
          role-session-name: githubrolesession  
          aws-region: us-east-1  

      - name: Initialize Terraform  
        run: |  
          terraform init -input=false  

      - name: Plan Terraform  
        id: plan  
        continue-on-error: true  
        run: |  
          terraform plan -input=false -no-color
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use the &lt;a href="https://github.com/aws-actions/configure-aws-credentials"&gt;predefined&lt;/a&gt; &lt;strong&gt;configure-aws-credentials&lt;/strong&gt; action to connect the workflow to AWS. When we run the action, we will see the results below. The expanded section shows the task that connects to AWS and fetches the temporary credentials that will be used as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iCCw6pwS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://skiscontent.com/wp-content/uploads/2023/05/github-action-run-with-tf-plan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iCCw6pwS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://skiscontent.com/wp-content/uploads/2023/05/github-action-run-with-tf-plan.png" alt="" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>security</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
