<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sgrilux</title>
    <description>The latest articles on DEV Community by sgrilux (@sgrilux_41).</description>
    <link>https://dev.to/sgrilux_41</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F729637%2Febbe006a-93dd-4db9-9b79-da4ad6ded2ea.png</url>
      <title>DEV Community: sgrilux</title>
      <link>https://dev.to/sgrilux_41</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sgrilux_41"/>
    <language>en</language>
    <item>
      <title>From Cloudformation to Terraform</title>
      <dc:creator>sgrilux</dc:creator>
      <pubDate>Thu, 08 Sep 2022 20:59:53 +0000</pubDate>
      <link>https://dev.to/sgrilux_41/from-cloudformation-to-terraform-2f86</link>
      <guid>https://dev.to/sgrilux_41/from-cloudformation-to-terraform-2f86</guid>
      <description>&lt;p&gt;When it comes to Infrastructure as Code, most companies choose either AWS Cloudformation or Terraform, but sometimes (often) they start with one and, at some point, decide to switch to the other, usually from Cloudformation to Terraform.&lt;/p&gt;

&lt;p&gt;So in this post, I'll give some tips for migrating resources from Cloudformation to Terraform.&lt;/p&gt;

&lt;p&gt;The migration involves the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update Cloudformation stacks&lt;/li&gt;
&lt;li&gt;Create Terraform code&lt;/li&gt;
&lt;li&gt;Import resources into terraform&lt;/li&gt;
&lt;li&gt;Plan and Apply Terraform&lt;/li&gt;
&lt;li&gt;Delete Cloudformation stacks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Looks easy, right? It might be, or it might not! It all depends on what you are trying to migrate: is it a big project, or just a couple of Cloudformation stacks? One region, or multiple regions?&lt;/p&gt;

&lt;p&gt;But don't worry, I've got you covered :)&lt;/p&gt;

&lt;p&gt;Let's start!&lt;/p&gt;

&lt;h2&gt;
  
  
  Update Cloudformation stacks
&lt;/h2&gt;

&lt;p&gt;This doesn't necessarily have to be the first step, it could also be done after step 3 or 4, but it's certainly the most important part and needs to be completed before deleting any stacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  DeletionPolicy
&lt;/h3&gt;

&lt;p&gt;The first thing we want to make sure of is that all resources have the attribute &lt;code&gt;DeletionPolicy&lt;/code&gt; set to &lt;code&gt;Retain&lt;/code&gt;. This prevents resources from being deleted when you delete the stack.&lt;/p&gt;

&lt;p&gt;Example: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rYQRK4au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1662534411730/gYzjs4ARy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rYQRK4au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1662534411730/gYzjs4ARy.png" alt="carbon.png" width="880" height="941"&gt;&lt;/a&gt;&lt;/p&gt;
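&lt;p&gt;In text form, a resource with the policy set looks roughly like this (the resource name and property values below are illustrative, not taken from the screenshot):&lt;/p&gt;

```yaml
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    DeletionPolicy: Retain   # the instance survives when the stack is deleted
    Properties:
      ImageId: ami-0123456789abcdef0
      InstanceType: t3.micro
```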

&lt;h3&gt;
  
  
  Drift
&lt;/h3&gt;

&lt;p&gt;It is possible that these old Cloudformation stacks have somehow been abandoned and, because of a lack of knowledge, some resources have been modified manually (very bad), so your infrastructure might have drifted from what was deployed through CFN.&lt;/p&gt;

&lt;p&gt;You have two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update and redeploy your cfn templates with the changes that were applied manually&lt;/li&gt;
&lt;li&gt;As you are going to delete these stacks anyway, you can just take note of the manual changes and add them to the terraform code (just make sure nobody uses these templates until the migration is completed)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create Terraform code
&lt;/h2&gt;

&lt;p&gt;Now it's time to write the Terraform code that will replace the Cloudformation templates. This is the "easy" part. The Terraform &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs"&gt;documentation&lt;/a&gt; is your best friend.&lt;/p&gt;

&lt;p&gt;For example, the instance we saw earlier can be turned into terraform with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P381nPAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1662665940815/KPXxOgpG2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P381nPAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1662665940815/KPXxOgpG2.png" alt="carbon (1).png" width="880" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Import resources into terraform
&lt;/h2&gt;

&lt;p&gt;Once you think everything is coded in terraform, you can start importing the resources created by Cloudformation into the terraform state file.&lt;/p&gt;

&lt;p&gt;To do so you will use the &lt;code&gt;terraform import&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;This is the most tedious part: you need to check your cfn stacks and the AWS console to make sure everything is imported correctly.&lt;/p&gt;

&lt;p&gt;Again, the terraform documentation is a great resource here. At the end of each resource's documentation page there is a short paragraph that explains how to import it. Most of the time you just specify the ID of the resource, but sometimes the ID is a concatenation of multiple attributes.&lt;/p&gt;

&lt;p&gt;An example is the &lt;code&gt;aws_route53_record&lt;/code&gt; resource, which needs the zone ID + record name + record type, all separated by a &lt;code&gt;_&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform import aws_route53_record.myrecord Z4KAPRWWNC7JR_dev.example.com_NS

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While you are importing resources into your state file, I would suggest also running some terraform plans, just to make sure you are importing them correctly and your code is aligned.&lt;/p&gt;

&lt;p&gt;You can also build a few bash scripts (or use your preferred scripting language) to help you with the import.&lt;/p&gt;
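&lt;p&gt;As a sketch of such a helper (the file format and function names below are made up for illustration): it reads "address id" pairs from a file and builds the corresponding &lt;code&gt;terraform import&lt;/code&gt; commands, so you can review them before running anything.&lt;/p&gt;

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build a single import command from a resource address and its ID.
build_import_cmd() {
  printf 'terraform import %s %s' "$1" "$2"
}

# Read "address id" pairs from a file and print one command per line.
# Swap the printf for an eval (or call terraform directly) once you
# are happy with the generated commands.
import_all() {
  while read -r address id; do
    [ -n "$address" ] || continue
    printf '%s\n' "$(build_import_cmd "$address" "$id")"
  done < "$1"
}
```

&lt;p&gt;For example, &lt;code&gt;import_all imports.txt&lt;/code&gt; where &lt;code&gt;imports.txt&lt;/code&gt; contains lines like &lt;code&gt;aws_route53_record.myrecord Z4KAPRWWNC7JR_dev.example.com_NS&lt;/code&gt;.&lt;/p&gt;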

&lt;h2&gt;
  
  
  Plan and Apply terraform
&lt;/h2&gt;

&lt;p&gt;Now that the terraform code has been created, it's time to run a final &lt;em&gt;plan&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Sometimes it is useful to target specific resources instead of showing the full plan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -target "aws_ec2_instance.ec2_instance"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, after the terraform code has been created and the resources imported, I expect there to be nothing to "apply", or at most a few intentional changes.&lt;/p&gt;
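&lt;p&gt;One way to verify this is &lt;code&gt;terraform plan -detailed-exitcode&lt;/code&gt;, which exits with 0 when there is nothing to change, 2 when changes are pending, and 1 on error. A small wrapper (the messages are my own) could look like:&lt;/p&gt;

```shell
# Interpret the exit code of `terraform plan -detailed-exitcode`.
interpret_plan_exit() {
  case "$1" in
    0) echo "clean: nothing to apply" ;;
    2) echo "drift: plan shows pending changes" ;;
    *) echo "plan failed" ;;
  esac
}

# Usage (don't run under `set -e`, since exit code 2 is expected):
#   terraform plan -detailed-exitcode -input=false; interpret_plan_exit $?
```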

&lt;h2&gt;
  
  
  Delete Cloudformation stacks
&lt;/h2&gt;

&lt;p&gt;Finally, your infrastructure is now managed by Terraform and you are free to delete all Cloudformation stacks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember to double check that &lt;code&gt;DeletionPolicy: Retain&lt;/code&gt; is in place for all resources.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Some tips
&lt;/h2&gt;

&lt;p&gt;Depending on the size of your migration you might be overwhelmed by all the resources, import commands, subnet IDs, security groups and plans.&lt;/p&gt;

&lt;p&gt;Before starting remember the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't rush&lt;/strong&gt;: if it's just a few simple cfn templates, the process is pretty straightforward. However, if you are managing a lot of templates, take your time to think about how you want to structure your project: is it multi-region? Do you have multiple environments? You want to be sure that your new code is easy to manage and maintain, and that it scales.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start with the basics&lt;/strong&gt;: this might sound obvious, but try to start with the foundations and then build on top of them. This also helps you think about how you want to structure your code. You can build all your network resources first, like the VPC, subnets, NAT gateways, etc., and then the application stacks that depend on them: EC2, ECS, Lambda. You probably want to use separate repositories, for instance a network repository and then an application repository that uses terraform data sources to query the data created by the network part. Or use modules, which is my next point.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use modules&lt;/strong&gt;: try to use terraform modules as much as possible. This will help you build more readable code and also makes importing and planning your resources with terraform easier.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7zoq1DWQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1662668843507/1fJjLSRNb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7zoq1DWQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1662668843507/1fJjLSRNb.png" alt="carbon (3).png" width="840" height="1116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it. Good luck with your migration and thanks for reading this post. If you have any questions or need any help, please reach out.&lt;/p&gt;

&lt;p&gt;Ciao&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudformation</category>
      <category>terraform</category>
      <category>iac</category>
    </item>
    <item>
      <title>Meet Up: Modeling for relationships with DynamoDB</title>
      <dc:creator>sgrilux</dc:creator>
      <pubDate>Tue, 06 Sep 2022 12:28:36 +0000</pubDate>
      <link>https://dev.to/sgrilux_41/meet-up-modeling-for-relationships-with-dynamodb-je0</link>
      <guid>https://dev.to/sgrilux_41/meet-up-modeling-for-relationships-with-dynamodb-je0</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZXhWd10s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3toq2genu0j62fg5vicb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZXhWd10s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3toq2genu0j62fg5vicb.png" alt="Image description" width="676" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I thought this would be an interesting thing to share.&lt;/p&gt;

&lt;p&gt;My company is sponsoring &lt;a href="https://www.meetup.com/aws-specialists-stockholm/events/288136015/"&gt;this&lt;/a&gt; meetup in Stockholm, which will be hosted by AWS on 11 October.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Evolate, specialists in Amazon Web Services, invite you to an interactive and unforgettable evening with AWS Hero and DynamoDB specialist Alex DeBrie. He is the author of The DynamoDB Book, the comprehensive guide to data modeling with DynamoDB, and a world-renowned expert on data modeling, serverless architectures, and general AWS usage. Alex has helped thousands of developers learn DynamoDB and worked with a variety of impressive clients, including government agencies and publicly-traded enterprises. He had the second-most viewed session at AWS re:Invent, and has helped write some of the official guides for AWS database services. Alex is joining us to discuss the basics of DynamoDB and the various mechanisms to handle relationships in your DynamoDB data modeling.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Please join us: we have a limited number of spots available, so sign up now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.google.com/maps/search/?api=1&amp;amp;query=59.333157%2C%2018.066673"&gt;https://www.google.com/maps/search/?api=1&amp;amp;query=59.333157%2C%2018.066673&lt;/a&gt;&lt;/p&gt;

</description>
      <category>meetup</category>
      <category>aws</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>ECS Container credentials</title>
      <dc:creator>sgrilux</dc:creator>
      <pubDate>Sun, 08 May 2022 11:20:01 +0000</pubDate>
      <link>https://dev.to/sgrilux_41/ecs-container-credentials-5d92</link>
      <guid>https://dev.to/sgrilux_41/ecs-container-credentials-5d92</guid>
      <description>&lt;p&gt;I've recently come across an issue assuming a role from an ECS container.&lt;/p&gt;

&lt;p&gt;The task role of my ECS task had a policy allowing it to assume the role in the destination account, and I also followed the steps and troubleshooting tips from the AWS documentation (see the links at the end of this post). However, I was still unable to assume the role.&lt;/p&gt;

&lt;p&gt;My main problem was that the process wasn't able to see the &lt;code&gt;AWS_CONTAINER_CREDENTIALS_RELATIVE_URI&lt;/code&gt; variable so it couldn't get the credentials from the role.&lt;/p&gt;

&lt;p&gt;As per the AWS docs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The environment variable &lt;strong&gt;AWS_CONTAINER_CREDENTIALS_RELATIVE_URI&lt;/strong&gt; is available only to PID 1 processes within a container. If the container is running multiple processes or init processes (such as wrapper script, start script, or supervisord), the environment variable is unavailable to non-PID 1 processes.&lt;/p&gt;

&lt;p&gt;To set your environment variable so that it's available to non-PID 1 processes, export the environment variable in the .profile file. For example, run the following command to export the variable in the Dockerfile for your container image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' &amp;gt;&amp;gt; /root/.profile&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now additional processes can access the environment variable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As suggested, I tried adding the command to the Dockerfile, but .... it didn't help. I also tried adding the same line to the docker entrypoint script ... same result. I tried different things, but still nothing.&lt;/p&gt;

&lt;p&gt;My container was running as part of a gitlab pipeline (see the &lt;a href="https://dev.to/sgrilux_41/serverless-gitlab-runner-part1-20p-temp-slug-8638790"&gt;Gitlab Runner Job&lt;/a&gt;) and the only way I got it working was to export &lt;code&gt;AWS_CONTAINER_CREDENTIALS_RELATIVE_URI&lt;/code&gt; in the pipeline job itself.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test-job1:
  stage: test
  script:
    - export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
    - aws s3 ls --profile cross-account-role

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the moment this is the only way I could get the credentials from the task role. It's not elegant, but it works.&lt;/p&gt;

&lt;p&gt;If anyone had the same issue and managed to find a better solution, I'll be happy to hear from you, and if this helped even a bit to fix your problem, then I am even happier.&lt;/p&gt;

&lt;p&gt;CIAO!&lt;/p&gt;




&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-iam-role-another-account/"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/ecs-iam-role-another-account/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-iam-task-roles-config-errors/"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/ecs-iam-task-roles-config-errors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Serverless Gitlab Runner #Part1</title>
      <dc:creator>sgrilux</dc:creator>
      <pubDate>Sat, 30 Apr 2022 14:47:49 +0000</pubDate>
      <link>https://dev.to/sgrilux_41/serverless-gitlab-runner-part1-4ci7</link>
      <guid>https://dev.to/sgrilux_41/serverless-gitlab-runner-part1-4ci7</guid>
      <description>&lt;p&gt;I come from a sysadmin background and I've spent many years supporting infrastructures and applications during nights and weekends. Nowadays, with the Cloud, you can easily build solutions that require little or zero maintenance.&lt;/p&gt;

&lt;p&gt;So today I want to talk about how to implement a serverless Gitlab runner solution on an AWS ECS cluster using Fargate. This is the first part, where I explain the concept and show a diagram of how to implement it. In the next part I will share some code, which I will post on Github as soon as it's ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  What?
&lt;/h2&gt;

&lt;p&gt;Let's start with a couple of questions: what is a Gitlab runner? And what is Fargate?&lt;/p&gt;

&lt;p&gt;A Gitlab runner is nothing more than an application that executes Gitlab jobs in pipelines. It gets the code from gitlab and the configuration from &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; and executes it. It can be a virtual machine or a docker container. You can use the shared runners provided by Gitlab or host your own. I'm not going to go deep into how gitlab pipelines work or how to configure &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;; if you are here you probably know that already, and probably better than me :)&lt;/p&gt;

&lt;p&gt;Fargate is an AWS managed service on which you can run your ECS or EKS cluster. Being a managed service means that you don't need to do anything: AWS manages it for you, including configuration and patching. In the case of ECS, what you need to do is create the configuration your application needs to run on the cluster, like memory, cpu, network and the docker image. This configuration is called a task definition. Again, you all know how ECS works, right? :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;After the What? there is always the Why?. So why do we want to build such a solution, and why on ECS and not on EKS? &lt;strong&gt;Good question&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Well, for a start, we do it for fun, why not! And second, ECS is a bit easier than EKS. ECS is the AWS container orchestrator, a proprietary solution. EKS is basically kubernetes on AWS, without the need to install k8s manually on EC2 instances.&lt;/p&gt;

&lt;p&gt;With ECS you just need to configure a task definition and you're done. It requires less expertise to set up a cluster and run your application.&lt;/p&gt;

&lt;p&gt;On the other hand, Fargate is the AWS managed compute engine. You don't need to provision any instances; AWS manages them for you. That means less operational overhead from scaling, patching, securing and managing servers. Fargate is compatible with both ECS and EKS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.gitlab.com/runner/executors/kubernetes.html"&gt;Here&lt;/a&gt; you can find also how to run runners on EKS.&lt;/p&gt;

&lt;h2&gt;
  
  
  HOW?
&lt;/h2&gt;

&lt;p&gt;Now comes the fun question: how can we build this?&lt;/p&gt;

&lt;p&gt;Here's a diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--95kwfgoM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1651328704522/BMDuIQyOF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--95kwfgoM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1651328704522/BMDuIQyOF.png" alt="gitlab-ecs.png" width="831" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, everything is mainly managed by a few lambda functions. Let me explain how the flow works.&lt;/p&gt;

&lt;p&gt;What we need is a Docker image for the Gitlab runner executor with the Fargate driver, and some other images for the runner jobs, which are then executed by the executor.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First you create a webhook in your repository whose endpoint is a lambda, passing the token that the runner will use to register itself automatically with gitlab. When you configure the webhook you also pass the secret that will be used for authentication. Depending on your needs, you can pass more parameters in the query string.&lt;/li&gt;
&lt;li&gt;The webhook lambda authenticates the call against a predefined secret we have stored in SSM Parameter Store and puts an event onto EventBridge.&lt;/li&gt;
&lt;li&gt;An EventBridge rule filters the event and calls another lambda that will start a runner with the Fargate driver.&lt;/li&gt;
&lt;li&gt;That lambda gets the event from EventBridge, reads some parameters from SSM (like the runner token) and checks whether other runners are already running. If all is good, it starts the gitlab runner executor, passing the token and a few more pieces of information.&lt;/li&gt;
&lt;li&gt;The runner registers with Gitlab and is now ready to accept jobs.&lt;/li&gt;
&lt;li&gt;When a new pipeline is triggered, the runner starts new jobs in separate containers.&lt;/li&gt;
&lt;li&gt;A scheduled EventBridge rule also runs at a regular interval to check whether the runner is doing anything (i.e. whether any jobs are running); if not, it stops the runner to save money.&lt;/li&gt;
&lt;/ol&gt;
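&lt;p&gt;As an illustration of the last step, the idle check could be sketched like this (the cluster name, task family and function are made up; where the "running jobs" count comes from depends on how you track job containers):&lt;/p&gt;

```shell
# Decide whether the runner task should be stopped: $1 is the number
# of gitlab jobs currently running on the runner.
should_stop_runner() {
  if [ "$1" -eq 0 ]; then
    echo "stop"
  else
    echo "keep"
  fi
}

# Roughly what the scheduled lambda does, expressed with the AWS CLI:
#   running=$(aws ecs list-tasks --cluster gitlab-runners \
#     --family gitlab-job --desired-status RUNNING \
#     --query 'length(taskArns)' --output text)
#   [ "$(should_stop_runner "$running")" = "stop" ] && echo "stop the runner task"
```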

&lt;p&gt;We'll go more into the details in part 2, when I'll show you the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;This solution is quite nice, although if you have some complex pipelines it's not perfect, as there is a limitation in the taskDefinition that won't allow you to override the job image parameter in a pipeline. That means you need bigger Docker images to handle different cases, which is not ideal.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.gitlab.com/runner/configuration/runner%5C_autoscale%5C_aws%5C_fargate/"&gt;https://docs.gitlab.com/runner/configuration/runner\_autoscale\_aws\_fargate/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I know this solution is not ideal, but I like experimenting and improving on what's already there. I'm sure both Gitlab and AWS will solve the image problem soon.&lt;/p&gt;

&lt;p&gt;I hope you liked this article and that it gave you some inspiration. Hopefully I can push some code to Github soon to share with you in the next part.&lt;/p&gt;

&lt;p&gt;For now, thanks for stopping by and reading, and I'm happy to "hear" your thoughts, especially if you have already implemented something similar.&lt;/p&gt;

&lt;p&gt;Ciao!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Tagging with terraform</title>
      <dc:creator>sgrilux</dc:creator>
      <pubDate>Fri, 22 Apr 2022 19:25:33 +0000</pubDate>
      <link>https://dev.to/sgrilux_41/tagging-with-terraform-2m77</link>
      <guid>https://dev.to/sgrilux_41/tagging-with-terraform-2m77</guid>
      <description>&lt;p&gt;We all know that when we share a resource in AWS with a different account, tags are not shared along with the resource. Tags are account-based, so you need to assign them in the other account as well.&lt;/p&gt;

&lt;p&gt;Terraform comes to the rescue with &lt;code&gt;aws_ec2_tag&lt;/code&gt;, which allows us to tag individual resources that were created outside Terraform.&lt;/p&gt;

&lt;p&gt;So let's jump directly to an example.&lt;/p&gt;

&lt;p&gt;You have a networking account with a VPC that you are sharing with your production account. You want to give the VPC the same &lt;code&gt;Name&lt;/code&gt; tag it has in the networking account.&lt;/p&gt;

&lt;p&gt;First, you need to collect the VPC data (using a provider that connects to the networking account):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_vpc" "selected" {
  filter {
    name = "tag:Environment"
    values = ["production"]
  }

  provider = aws.central-networking
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, with a production provider, you can tag your VPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ec2_tag" "my_prod_vpc" {
  resource_id = data.aws_vpc.selected.id
  key = "Name"
  value = data.aws_vpc.selected.tags.Name

  provider = aws.production
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use &lt;code&gt;for_each&lt;/code&gt; to assign multiple tags to the same resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ec2_tag" "my_prod_vpc" {
  for_each = local.tags

  resource_id = data.aws_vpc.selected.id
  key = each.key
  value = each.value
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it, easy peasy!&lt;/p&gt;

&lt;p&gt;I hope you have enjoyed it. See you in the next post!&lt;/p&gt;

&lt;p&gt;CIAO!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_tag"&gt;Here&lt;/a&gt; for more information about aws_ec2_tag&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
