<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paweł Piwosz</title>
    <description>The latest articles on DEV Community by Paweł Piwosz (@pawelpiwosz).</description>
    <link>https://dev.to/pawelpiwosz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F664647%2F444c491d-4004-476d-9a95-54f8639864a2.png</url>
      <title>DEV Community: Paweł Piwosz</title>
      <link>https://dev.to/pawelpiwosz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pawelpiwosz"/>
    <language>en</language>
    <item>
      <title>AWS Re:Invent announcement: Lambda SnapStart for .Net - let's try it!</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Wed, 18 Dec 2024 23:53:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-reinvent-announcement-lambda-snapstart-for-net-2cfk</link>
      <guid>https://dev.to/aws-builders/aws-reinvent-announcement-lambda-snapstart-for-net-2cfk</guid>
      <description>&lt;h1&gt;
  
  
  AWS re:Invent 2024 announcements
&lt;/h1&gt;

&lt;p&gt;In a previous article I briefly covered my top 10 announcements. But there is one more that I find very important, even though I didn't place it on the top 10 list. Why?&lt;/p&gt;

&lt;p&gt;Well, there are a few reasons.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SnapStart itself isn't new. It has been available for Java for about a year now.&lt;/li&gt;
&lt;li&gt;It is yet another improvement for Lambda functions to minimize cold starts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These two are the most important, I think.&lt;/p&gt;

&lt;p&gt;Before we discuss SnapStart, let's talk about why AWS gave us this functionality.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is the infamous cold start?
&lt;/h1&gt;

&lt;p&gt;Everyone who knows AWS Lambda (and serverless compute in general) in depth understands the concept of a cold start. But let's go even deeper and explain it more thoroughly.&lt;/p&gt;

&lt;p&gt;There is no magic. Serverless means nothing more than that you are not the one who manages the server; there still is a server. To run your code, AWS uses Firecracker, a virtualization technology it built on top of KVM, which runs what AWS calls microVMs: very lightweight virtual machines.&lt;/p&gt;

&lt;p&gt;The fact that this VM is lightweight gives it the performance and speed needed to spin up and serve your requests. But you cannot beat physics, not today anyway. The VM still needs some time to spin up.&lt;/p&gt;

&lt;p&gt;When Firecracker runs your microVM, it prepares the environment for you: it installs all dependencies for your runtime, downloads your code, and finally starts the runtime. These steps are the part of the cold start managed by AWS.&lt;/p&gt;

&lt;p&gt;The other part of the cold start is in our hands. It depends on how our code is written, how many dependencies we load, and how we initialize things in the code.&lt;/p&gt;

&lt;p&gt;So, we have two parts of the cold start: the one we manage and can try to reduce, and the one managed by AWS, which is out of our reach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk2d9kjjvcp8jql925gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk2d9kjjvcp8jql925gw.png" alt="Cold and warm start" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see in the picture above, the cold start can take a long time. For a customer trying to do something in our serverless app, it takes ages :) The second part of the picture shows what we call a warm start. This happens when a previous microVM instance has finished its work and is free to take another task. The microVM stays in this state for several minutes, and during that time it picks up the first incoming request and processes it without any preparation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;It is not part of this article, but if you want to reuse warm functions, please remember about cache, stored data, etc. :)&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, the SnapStart.&lt;/p&gt;

&lt;p&gt;...or, well, not yet. &lt;/p&gt;

&lt;p&gt;There is one more element, important in the case which we explore today.&lt;/p&gt;

&lt;p&gt;If you look at the picture above carefully, you can see that running the Lambda handler during a warm start takes less time than during a cold start. And that makes perfect sense, at least in some languages, like .NET.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer: I am not a developer and I don't know .NET. This article is not an anatomy of .NET Lambda performance, just my test of SnapStart. I'd like to make it clear. Clear? ;)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;.NET uses dynamic (just-in-time) compilation, which means that a given part of the code is compiled only when it is executed by the runtime. As far as I know, this behavior can be changed, but I didn't try it, so my tests used only the standard approach.&lt;/p&gt;

&lt;p&gt;In other words: you run your function, it calculates things, performs many actions, and when it reaches the point where data must be saved to the DynamoDB table, that part (module, or whatever we call it) is compiled on the fly.&lt;/p&gt;

&lt;p&gt;Which means...&lt;/p&gt;

&lt;p&gt;Yes, you're right. Let's call it the third type of cold start :) At this point the compilation takes time. More or less, but it takes time. And we will see it soon in the screenshots.&lt;/p&gt;

&lt;p&gt;This information is important for our further exploration.&lt;/p&gt;

&lt;h1&gt;
  
  
  SnapStart
&lt;/h1&gt;

&lt;p&gt;Finally! Let's talk about SnapStart.&lt;/p&gt;

&lt;p&gt;During re:Invent 2024, AWS announced SnapStart for .NET and Python. SnapStart for Java has been available for about a year. &lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;When you deploy a new (or updated) function, AWS creates a snapshot of the microVM. The snapshot is taken after all steps of the cold start process are completed. Then, every time the function is invoked and there is no warm instance available, it is restored from the snapshot, with everything already prepared.&lt;/p&gt;

&lt;p&gt;Who sees the catch here already? :)&lt;/p&gt;

&lt;p&gt;The important prerequisite is to have versioning enabled for the Lambda function. Without it, SnapStart cannot be enabled.&lt;/p&gt;

&lt;p&gt;What about the price? SnapStart itself is free. However, the snapshot storage and the restore transfers are not. The pricing is on the AWS page; you can easily calculate how much you will pay.&lt;/p&gt;

&lt;p&gt;With SnapStart you have to be mindful of data stored or cached during the snapshot process. You have to ensure that no sensitive data ends up in the snapshot.&lt;/p&gt;

&lt;p&gt;It is worth remembering that the snapshot restore time depends on many factors. One of them is the amount of memory you configured for your Lambda.&lt;/p&gt;

&lt;p&gt;Enough of theory, let's see how it looks in practice.&lt;/p&gt;

&lt;h1&gt;
  
  
  Dotnet6
&lt;/h1&gt;

&lt;p&gt;I started my experiments with dotnet6. I have to confess, I didn't check which runtimes work with SnapStart. It turned out that dotnet6 doesn't; you have to use dotnet8. Still, I tested this version to see and measure the "clean" cold start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp4ntnx61hql6fw93akm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp4ntnx61hql6fw93akm.png" alt="Two executions" width="800" height="53"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above shows two executions of my Lambda. It is extremely easy to spot the cold start. And yes, it is unacceptable. Let's take a look at the traces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa47rh6coijl4kezv1uuj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa47rh6coijl4kezv1uuj.png" alt="Cold start" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What can we see here? I think the second picture will help to better understand the runtime.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sp5f7evovychdsteymd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sp5f7evovychdsteymd.png" alt="Warm start" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Initialization&lt;/code&gt; - this is our cold start. It took almost half a second. Is that a lot or not? Well, for me it is. But something else is more interesting. Do you remember when I wrote about dynamic compilation? &lt;/p&gt;

&lt;p&gt;It is there. Compare the execution times for &lt;code&gt;Invocation&lt;/code&gt;: 14 seconds (!!!) for the cold function versus less than 300 ms.&lt;/p&gt;

&lt;p&gt;It is unacceptable. Fortunately, we have a solution for it.&lt;/p&gt;

&lt;p&gt;Before we go there, you have to know that you cannot enable SnapStart for this runtime, as I mentioned a few paragraphs earlier.&lt;/p&gt;

&lt;h1&gt;
  
  
  Ready To Run and Dotnet8
&lt;/h1&gt;

&lt;p&gt;Well, what I did is not a perfect test, but I don't care; the goal was just to check SnapStart. So, I changed the runtime to dotnet8. Copilot had a lot of trouble making the code work again. As I mentioned, I don't know .NET, and I had to show Copilot exactly where the error was before it was able to fix it.&lt;/p&gt;

&lt;p&gt;Anyway, the code is available in &lt;a href="https://github.com/pawelpiwosz/dotnetsnapstart/tree/v1.0" rel="noopener noreferrer"&gt;this repository&lt;/a&gt;. I point you directly to tag v1.0, where you can find the dotnet8 code prepared with the Ready To Run functionality. And this is very important.&lt;/p&gt;

&lt;p&gt;Ready To Run precompiles the code to significantly decrease the time needed to start it. It doesn't solve all problems with dynamic compilation, but it helps a lot.&lt;/p&gt;
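&lt;p&gt;&lt;em&gt;As a sketch (assuming the standard .NET publish property; your project file layout may differ), Ready To Run is switched on in the csproj like this:&lt;/em&gt;&lt;/p&gt;

```
&lt;PropertyGroup&gt;
  &lt;TargetFramework&gt;net8.0&lt;/TargetFramework&gt;
  &lt;!-- Compile IL to native code at publish time, so less JIT work
       is left for the Lambda cold start --&gt;
  &lt;PublishReadyToRun&gt;true&lt;/PublishReadyToRun&gt;
&lt;/PropertyGroup&gt;
```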

&lt;p&gt;Let's see the picture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pwiohc8lfusfwn2z0vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pwiohc8lfusfwn2z0vg.png" alt="Response time distribution" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see improvements in all executions. Warm functions were faster than before, but most importantly, the cold one is much, much faster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkngb0ts3wzdvaobpbkxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkngb0ts3wzdvaobpbkxj.png" alt="Cold start" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although, we do see a longer initialization time!&lt;/p&gt;

&lt;p&gt;How does it look for a warm execution?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnb7af69pfz352lnwsna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnb7af69pfz352lnwsna.png" alt="Warm start" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time it was quick! The whole execution took 55 ms!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Yes, I know my Powertools instrumentation is lacking in this version; I forgot to add it. However, it doesn't change the conclusions!&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Enable SnapStart
&lt;/h1&gt;

&lt;p&gt;It is time to enable SnapStart. In order to do so, we need to change the SAM template a little: we need to enable versioning for the Lambda function. The code is under the &lt;a href="https://github.com/pawelpiwosz/dotnetsnapstart/tree/v2.0" rel="noopener noreferrer"&gt;v2.0 tag&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This template enables versioning and also enables SnapStart. &lt;/p&gt;
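&lt;p&gt;&lt;em&gt;A minimal sketch of the relevant part of such a SAM template (the function name and handler are placeholders; the SnapStart and AutoPublishAlias properties are the ones documented for AWS SAM):&lt;/em&gt;&lt;/p&gt;

```yaml
Resources:
  CounterFunction:                            # placeholder name
    Type: AWS::Serverless::Function
    Properties:
      Runtime: dotnet8
      Handler: MyApp::MyApp.Function::Handler # placeholder handler
      MemorySize: 512
      # SnapStart works only on published versions, so publish one automatically
      AutoPublishAlias: live
      SnapStart:
        ApplyOn: PublishedVersions
```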

&lt;p&gt;If you would like to do it manually, you need to go to the configuration options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg96plufybam2e3x4nmt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg96plufybam2e3x4nmt.png" alt="Configuration" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Edit&lt;/code&gt; in &lt;code&gt;General configuration&lt;/code&gt; and change the SnapStart from &lt;code&gt;None&lt;/code&gt; to &lt;code&gt;PublishedVersions&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foppbom3wexgrc8owfepj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foppbom3wexgrc8owfepj.png" alt="Configuration" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;
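&lt;p&gt;&lt;em&gt;The same switch can be flipped from the AWS CLI (a sketch that needs valid AWS credentials; the function name is a placeholder):&lt;/em&gt;&lt;/p&gt;

```
# Enable SnapStart for versions published from now on
aws lambda update-function-configuration \
  --function-name my-dotnet-function \
  --snap-start ApplyOn=PublishedVersions

# SnapStart applies only to published versions, so publish one
aws lambda publish-version --function-name my-dotnet-function
```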

&lt;h1&gt;
  
  
  Testing SnapStart!
&lt;/h1&gt;

&lt;p&gt;Let's run our function!&lt;/p&gt;

&lt;p&gt;I made sure I executed the function from a cold state, and I saw this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh87yhagsi36nsqy3ezv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh87yhagsi36nsqy3ezv6.png" alt="SnapStart cold start" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can clearly see there is no &lt;code&gt;Init&lt;/code&gt; phase; it was replaced by &lt;code&gt;Restore&lt;/code&gt;. This means the function was restored and no initialization was needed. The whole function run is a little longer (no worries, we will come back to that), and the &lt;code&gt;Restore&lt;/code&gt; duration is very similar to what &lt;code&gt;Init&lt;/code&gt; used to be. This needs some explanation, as it is extremely important; I cover it in the &lt;code&gt;Conclusions&lt;/code&gt; part.&lt;/p&gt;

&lt;p&gt;For the record, below is the warm function execution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs38es3fn493jbc13i9r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs38es3fn493jbc13i9r7.png" alt="SnapStart warm start" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nothing surprising here; the result is exactly as we expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why don't I care about the longer execution?
&lt;/h2&gt;

&lt;p&gt;It is simple: a single execution is not really a measurement. In the next section you will see the effect of multiple executions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next experiments
&lt;/h2&gt;

&lt;p&gt;I run some "load test". 200 executions with concurency of 25. A few of the functions were throttled (well, my bad :) but it doesn't matter). What are the averages?&lt;/p&gt;

&lt;p&gt;Cold start average duration: 2.75s&lt;br&gt;&lt;br&gt;
Warm start average duration: 0.07s&lt;/p&gt;

&lt;p&gt;The times are quite good. Especially when the function is warm.&lt;/p&gt;

&lt;p&gt;Let me provide some more numbers:&lt;/p&gt;

&lt;p&gt;Cold start:&lt;/p&gt;

&lt;p&gt;The longest execution: 8.5s &lt;strong&gt;(!!!)&lt;/strong&gt; I believe this was just an accident, however... it increased the average a little.&lt;br&gt;&lt;br&gt;
The longest repeatable execution time: 2.74s (without the 8.5s run it is 2.5s)&lt;br&gt;&lt;br&gt;
The shortest execution time: 2.15s&lt;/p&gt;

&lt;p&gt;As we can see, these times are quite close to each other (except one :) )&lt;/p&gt;

&lt;p&gt;Warm start:&lt;/p&gt;

&lt;p&gt;The longest execution: 0.18s&lt;br&gt;&lt;br&gt;
The shortest execution time: 0.027s&lt;/p&gt;

&lt;p&gt;Of course, there is still API Gateway to consider. But as I said, I am not doing perfect performance tests :)&lt;/p&gt;
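&lt;p&gt;&lt;em&gt;To pull numbers like these out of CloudWatch Logs instead of averaging by hand, one option is to parse the REPORT line Lambda prints after every invocation; a plain cold start carries an &lt;code&gt;Init Duration&lt;/code&gt; field and a SnapStart cold start a &lt;code&gt;Restore Duration&lt;/code&gt; field. A minimal sketch (the sample log lines and their values are made up):&lt;/em&gt;&lt;/p&gt;

```python
import re

# Sample REPORT lines in the shape Lambda writes to CloudWatch Logs (made-up values)
LOGS = [
    "REPORT RequestId: 1 Duration: 2750.10 ms Billed Duration: 2751 ms "
    "Memory Size: 512 MB Max Memory Used: 120 MB Restore Duration: 480.30 ms",
    "REPORT RequestId: 2 Duration: 70.25 ms Billed Duration: 71 ms "
    "Memory Size: 512 MB Max Memory Used: 120 MB",
]

# Longer field names first, so "Billed Duration" is not matched as plain "Duration"
DURATION_RE = re.compile(
    r"(Restore Duration|Init Duration|Billed Duration|Duration): ([\d.]+) ms"
)

def parse_report(line):
    """Return the millisecond duration fields of one REPORT line as floats."""
    return {name: float(ms) for name, ms in DURATION_RE.findall(line)}

def split_cold_warm(lines):
    """Cold starts have an Init or Restore phase; warm starts have neither."""
    cold, warm = [], []
    for line in lines:
        fields = parse_report(line)
        is_cold = "Init Duration" in fields or "Restore Duration" in fields
        (cold if is_cold else warm).append(fields["Duration"])
    return cold, warm

cold, warm = split_cold_warm(LOGS)
print("avg cold ms:", sum(cold) / len(cold))
print("avg warm ms:", sum(warm) / len(warm))
```

&lt;p&gt;&lt;em&gt;Feeding it real REPORT lines (fetched, for example, via CloudWatch Logs Insights or the CLI) would reproduce averages like the ones above.&lt;/em&gt;&lt;/p&gt;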

&lt;h1&gt;
  
  
  Conclusions
&lt;/h1&gt;

&lt;p&gt;SnapStart works, that's for sure. Now you may ask: what is the benefit? I didn't show any, right? Well, consider this:&lt;/p&gt;

&lt;p&gt;I asked other Community Builders, and what I heard confirmed my thoughts. If you check the code, you'll see there is not much to do. It is simple: fetch a record from DynamoDB, increment a counter, and store the value back in DynamoDB. That's all.&lt;/p&gt;

&lt;p&gt;And this is the reason why SnapStart doesn't show its full potential here. The initialization time is simply not long enough to justify using SnapStart. What's more, I used 512 MB of memory; when I tested with 2048 MB, the snapshot restore took twice as long.&lt;/p&gt;

&lt;p&gt;This means that for simple functions SnapStart is not necessary and will only add something to your bill.&lt;/p&gt;

&lt;p&gt;SnapStart didn't help with dynamic compilation. In fact, this is something I expected: Lambda creates the snapshot when the function is created or updated, not after an execution.&lt;/p&gt;

&lt;p&gt;It is time to test something more complicated. I already plan to make the function bigger and have it perform more operations, which should better show the effectiveness of SnapStart. I will publish the second part in some time, stay tuned!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>smartstart</category>
      <category>serverless</category>
    </item>
    <item>
      <title>AWS re:Invent 2024 is a history now</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Sun, 15 Dec 2024 20:06:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-reinvent-2024-is-a-history-now-gec</link>
      <guid>https://dev.to/aws-builders/aws-reinvent-2024-is-a-history-now-gec</guid>
<description>&lt;p&gt;It was my first re:Invent in person. Was it worth going? This short text should answer that :)&lt;/p&gt;

&lt;p&gt;It is the major event in the cloud world, organized by AWS, obviously. This year we had the 13th edition of the event: 5 days in Las Vegas, fully packed with presentations, workshops, networking, and an expo. Re:Invent is also the place where a lot of announcements happen. In fact, AWS announces new products, updates, changes, etc. for a whole month before the event.&lt;/p&gt;

&lt;h1&gt;
  
  
  A few numbers
&lt;/h1&gt;

&lt;p&gt;Thanks to the work of Wojtek Gawronski, Developer Advocate at AWS, who prepared this data, I can share a few numbers about the conference. These numbers can give you a glimpse of the scale. And the scale is enormous.&lt;/p&gt;

&lt;p&gt;60,000 people joined the event in Las Vegas. That is a lot of people, right? Yes, you can feel the crowd. But imagine 5 big hotels in Las Vegas; there is a lot of space to accommodate it!&lt;/p&gt;

&lt;p&gt;3,500 speakers and 1,000 sessions (talks, panels, and workshops). Believe me, it is hard, almost impossible, to choose something for yourself. So many good sessions happen at the same time!&lt;/p&gt;

&lt;p&gt;But there is a catch. This year we could experience how hype works. I think more than 30% of sessions had “AI” in the title. The reality is simple: AI is a hype. However, I have a feeling that many sessions had AI in the title for one reason: to have a better chance of being selected for the conference. The main topics were hidden behind AI. I will explain this thesis in more depth in a video on my channel soon.&lt;/p&gt;

&lt;h1&gt;
  
  
  Announcements
&lt;/h1&gt;

&lt;p&gt;During re:Invent we heard 123 announcements about new services, changes, updates, and new functionalities. In total, together with all the announcements before and during the event, we heard 545 announcements.&lt;/p&gt;

&lt;h1&gt;
  
  
  My top 10
&lt;/h1&gt;

&lt;p&gt;In this first summary I selected the 10 announcements I found most interesting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Aurora DSQL&lt;/strong&gt;. A serverless, distributed, PostgreSQL-compatible database. We can create a distributed, multi-region, active-active database. Imagine the possibilities! I had the opportunity to work with DSQL during the workshops, and I have to say it is really promising.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch Database Insights&lt;/strong&gt;. Together with the other Insights services, we can monitor and analyze deeper data from databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Tables&lt;/strong&gt;. A new type of S3 bucket that allows storing data in the Apache Iceberg format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictive scaling for ECS&lt;/strong&gt;. Your infrastructure can scale proactively, based on machine learning that anticipates changes in the workload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EKS Auto Mode&lt;/strong&gt;. If you are not very familiar with AWS and/or Kubernetes, EKS Auto Mode helps you easily provision a cluster with all the resources needed, like databases, storage, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elastic VMware Service&lt;/strong&gt; (EVS). With the latest changes in VMware licensing and purchasing options, this solution helps clients migrate workloads from on-prem to AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trainium2&lt;/strong&gt; (Trn2). A new chip designed by Amazon's subsidiary Annapurna Labs. It is built to provide better capabilities for ML training, but also for inference.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ultraservers&lt;/strong&gt;. Now you can build a huge cluster of 64 Trn2 chips. With the internal Elastic Fabric Adapter (EFA), you can create a rack-wide cluster of servers with a very fast internal network, which is a huge advancement for machine learning workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nova models in Bedrock&lt;/strong&gt;. 4 text models (3 are available already), one for video and one for images. AWS claims these are the most multi-modal models, to the point where we talk about any-to-any.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bedrock Marketplace&lt;/strong&gt;. From now on we are not limited to the few models given to us by AWS; vendors can place their models in the marketplace, and clients can easily start using them with Amazon Bedrock.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I had some trouble completing this list. There were so many great announcements! Maybe I'll write about some of them in more detail. Are you interested? Let me know in the comments!&lt;/p&gt;

&lt;h1&gt;
  
  
  More numbers
&lt;/h1&gt;

&lt;p&gt;80 kilometers. This is the distance my watch measured during the event. That is a lot of walking! Re:Invent is truly a walkathon, not just a conference :)&lt;/p&gt;

&lt;p&gt;3 keynotes watched live.&lt;/p&gt;

&lt;p&gt;2 workshops attended (one without a laptop :) But thanks to Johannes Koch, I was able to actively participate :) )&lt;/p&gt;

&lt;p&gt;4 sessions attended. Someone might say, “What? Only 4?” Maybe. But consider this: the conference is not only about presentations. It is about networking; for me, mainly about networking.&lt;/p&gt;

&lt;p&gt;1 customer review session attended.&lt;/p&gt;

&lt;p&gt;An uncounted number of booths visited :)&lt;/p&gt;

&lt;p&gt;A lot of people met, a lot of discussions and a lot of ideas for more videos.&lt;/p&gt;

&lt;p&gt;2 re:Caps already planned. A re:Cap is an event where experts share their views on the conference, the announcements, etc. As of today, I have participated as a speaker in one event in Gdansk, and we have another in Warsaw tomorrow. Quite possibly this number will grow.&lt;/p&gt;

&lt;p&gt;Flights rebooked 4 times. Yeah, my trip to Vegas wasn't easy; getting there took almost 40 hours. I will record a video about it, and I am still thinking about the title: it will be re:Mess or re:Crap, we will see :D To be clear, it has nothing to do with re:Invent itself!&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrap up
&lt;/h1&gt;

&lt;p&gt;It was a great event. My first, as I said, and hopefully not my last. Stay tuned for videos and potentially more posts about re:Invent!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsreinvent</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Get your certificate easily! Knowledge? Who needs that?</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Sun, 02 Apr 2023 13:23:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/get-your-certificate-easily-knowledge-who-needs-that-3pg7</link>
      <guid>https://dev.to/aws-builders/get-your-certificate-easily-knowledge-who-needs-that-3pg7</guid>
      <description>&lt;h1&gt;
  
  
  Background
&lt;/h1&gt;

&lt;p&gt;Let me first explain how I see certificates today. I already published a text about this some time ago, and honestly speaking, my opinion is even stronger now. You can find that text &lt;a href="https://medium.com/@pawel.piwosz/certificates-certificates-c7cff03fbb13" rel="noopener noreferrer"&gt;here&lt;/a&gt;. In short: I am not a fan of passing exams just for the sake of passing exams, and then potentially listing dozens of certs on a LinkedIn profile :) Certification should be a confirmation of knowledge AND experience.&lt;/p&gt;

&lt;h1&gt;
  
  
  The story
&lt;/h1&gt;

&lt;p&gt;This story begins some time ago, when I read a post on LinkedIn offering to pass a certification exam on someone's behalf. It was quite hard to believe, but I was not THAT surprised. In fact, I saw many "interesting" things during technical interviews when I evaluated candidates.&lt;/p&gt;

&lt;p&gt;What makes me somewhat happy (I know how it sounds) is the growing awareness of the real value of certifications, especially among the people who collect them. For example, not so long ago I saw someone with more than 60 certificates. I mean, really???&lt;/p&gt;

&lt;p&gt;Anyway, let's get back to the main line of the story.&lt;/p&gt;

&lt;h1&gt;
  
  
  An offer
&lt;/h1&gt;

&lt;p&gt;I received an offer. An offer which could be treated as a kind of scam or an attempt to get my sensitive data, but I know that something like this might be genuine.&lt;/p&gt;

&lt;p&gt;So, I got an offer to obtain any certificate I want. Easily. With a 100% success rate. No, not very extensive training provided by the best of the best. Just pass the exam with 100% success.&lt;/p&gt;

&lt;p&gt;Great, right?&lt;/p&gt;

&lt;p&gt;Sounds like a fraud? It should!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh8h0rm1dcdqrhdaobn7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh8h0rm1dcdqrhdaobn7.jpg" alt="Conversation starts" width="456" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, first of all: yeah, I really believe you are a real person, Adria. Second... it doesn't smell that bad yet... right?&lt;/p&gt;

&lt;p&gt;Oh, don't be naive :)&lt;/p&gt;

&lt;p&gt;Anyway, &lt;em&gt;yeah, of course, I WANT THEM ALL!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8thvljmb08tlyt9tjg6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8thvljmb08tlyt9tjg6.jpg" alt="We wants them!" width="625" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, I decided to continue this game. &lt;/p&gt;

&lt;p&gt;What is interesting, though, is that I am an AWS Community Builder, I have (or should I say, I had :) ) a few certificates, and as you can easily see on my LinkedIn profile (at least, I hope so :D), I have &lt;em&gt;some&lt;/em&gt; knowledge about AWS.&lt;/p&gt;

&lt;p&gt;So, I answered "Adria".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzydi544sm6m9txbn4yp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzydi544sm6m9txbn4yp.jpg" alt="Conversation continues" width="448" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It looks like Adria was very eager to help me; the answer was instant.&lt;/p&gt;

&lt;p&gt;Considering what I told you just a few lines above, I asked about AWS certifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk81yjs6bbq188a4b39qb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk81yjs6bbq188a4b39qb.jpg" alt="Conversation continues" width="420" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, of course, it is never said outright that they will &lt;strong&gt;cheat&lt;/strong&gt; on my behalf. But I believe the conclusion is more than obvious.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99i1tzs7sibfrckvipqs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99i1tzs7sibfrckvipqs.jpg" alt="Conversation ends" width="408" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I said at the end that I will report this wherever I can. I decided to leave the name and phone number uncovered, as this is a &lt;strong&gt;scam&lt;/strong&gt;, a &lt;strong&gt;fraud&lt;/strong&gt;, and something that brings the industry down. I have no wish to see anything like this on the market.&lt;/p&gt;

&lt;p&gt;I hope that AWS, Microsoft, and GCP, to name just a few, will finally find a solution to remove these cheaters from the landscape.&lt;/p&gt;

&lt;h1&gt;
  
  
  Sad bucket of cold water
&lt;/h1&gt;

&lt;p&gt;I reported this person to LinkedIn. The sad thing is, LinkedIn doesn't see anything inappropriate here so far. I hope this will change too.&lt;/p&gt;

</description>
      <category>certificates</category>
      <category>fraud</category>
      <category>scam</category>
    </item>
    <item>
      <title>Drifts again</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Sun, 05 Mar 2023 17:45:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/drifts-again-j28</link>
      <guid>https://dev.to/aws-builders/drifts-again-j28</guid>
      <description>&lt;p&gt;In the previous episodes we learned what is drift and how to detect it. We also learned how to configure the workers pools and use them to run our workloads. Finally, we know that drift detection can be run on workers pool only. So, let's put this knowledge together, and schedule the detection.&lt;/p&gt;

&lt;h1&gt;
  
  
  Change the behavior
&lt;/h1&gt;

&lt;p&gt;First, we need to change the behavior for the whole stack. Navigate to the &lt;code&gt;Behavior&lt;/code&gt; tab and switch the &lt;code&gt;Worker pool&lt;/code&gt; from the default to the newly created one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9njn5ewo75ditxwrhgqu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9njn5ewo75ditxwrhgqu.jpg" alt="Change workers pool" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Create the schedule
&lt;/h1&gt;

&lt;p&gt;We have to repeat the steps we already tried once.&lt;/p&gt;

&lt;p&gt;Navigate to &lt;code&gt;Settings&lt;/code&gt; then &lt;code&gt;Scheduling&lt;/code&gt; and create new schedule for drift detection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyote7zxms0tvdzzzf0dc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyote7zxms0tvdzzzf0dc.jpg" alt="Create schedule" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, there is no need to set the workers pool - it is already set at the stack level.&lt;/p&gt;

&lt;p&gt;When done, we should see a confirmation and the details of the scheduled task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b515bfvcxgl6kd3equ0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b515bfvcxgl6kd3equ0.jpg" alt="Created schedule" width="421" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Drift detected!
&lt;/h1&gt;

&lt;p&gt;After the check is done, we can see a visualised report of the drift. We can view this report from a few places, but for now let's use the stack's &lt;code&gt;Resources&lt;/code&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0k1ricflpu029v6mb2g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0k1ricflpu029v6mb2g.jpg" alt="Drift reported" width="364" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The drifted resources are marked with a sign.&lt;/p&gt;

&lt;h1&gt;
  
  
  Reconcile
&lt;/h1&gt;

&lt;p&gt;The configuration we created detects drift only. As I mentioned briefly, we can also act on it. When we set up drift detection, we can switch &lt;code&gt;Reconcile&lt;/code&gt; on. In that case, when drift is detected, the corresponding stack will be triggered to fix it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;Clearing drifts automatically should be the goal of every team using IaC. Spacelift does it well.&lt;/p&gt;

&lt;p&gt;I think, though, there are some points for improvement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;Reload&lt;/code&gt; button. Even if the page reloads periodically, I would like to have this option.&lt;/li&gt;
&lt;li&gt;More details about drifted resources. Something like a &lt;code&gt;should be vs it is&lt;/code&gt; comparison for drifted resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Takeaways
&lt;/h1&gt;

&lt;p&gt;We learned how to set up drift detection and how to act on it. We also know how to check and visualise drifts.&lt;/p&gt;

</description>
      <category>spacelift</category>
      <category>aws</category>
      <category>configuration</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Workers pool</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Sat, 04 Mar 2023 20:28:04 +0000</pubDate>
      <link>https://dev.to/aws-builders/workers-pool-48n3</link>
      <guid>https://dev.to/aws-builders/workers-pool-48n3</guid>
      <description>&lt;p&gt;In this episode we will create workers pool, to run workloads on "our" servers. This process requires a few steps, let's see how quickly and easily we can add new worker to the app.&lt;/p&gt;

&lt;p&gt;During this episode I will rely heavily on the Spacelift documentation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create private key
&lt;/h1&gt;

&lt;p&gt;The first step is to create the key which will be used to authenticate the worker with the application.&lt;/p&gt;

&lt;p&gt;I'll use WSL2 for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:4096 &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-keyout&lt;/span&gt; spacelift.key &lt;span class="nt"&gt;-out&lt;/span&gt; spacelift.csr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need these files to set up our pool.&lt;/p&gt;
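&lt;p&gt;A quick sanity check may be worth running here (this is my sketch, not part of the Spacelift flow; the &lt;code&gt;-subj&lt;/code&gt; value is a placeholder I chose so the command runs non-interactively): the CSR must carry the same public key as the private key it was generated from.&lt;/p&gt;

```shell
# Generate the key and CSR non-interactively (subject is a placeholder)
openssl req -new -newkey rsa:4096 -nodes \
  -keyout spacelift.key -out spacelift.csr \
  -subj "/CN=spacelift-worker"

# The public key extracted from each must be identical
key_pub=$(openssl rsa -in spacelift.key -pubout 2>/dev/null | openssl md5)
csr_pub=$(openssl req -in spacelift.csr -noout -pubkey | openssl md5)
[ "$key_pub" = "$csr_pub" ] && echo "key and CSR match"
```

&lt;p&gt;If the two digests differ, the CSR was not produced from this key and the pool setup would fail later in a less obvious way.&lt;/p&gt;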

&lt;h1&gt;
  
  
  Create workers pool
&lt;/h1&gt;

&lt;p&gt;Navigate to &lt;code&gt;Worker pools&lt;/code&gt; in the Spacelift app. You should see something similar to the screen below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntzgwe2gs8cqxx3hpoc8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntzgwe2gs8cqxx3hpoc8.jpg" alt="Create workers pool" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;New worker pool&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwon7vxloydtn7osz843.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwon7vxloydtn7osz843.jpg" alt="Create workers pool" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the &lt;code&gt;csr&lt;/code&gt; file generated in the previous step. Also, I changed the space to &lt;code&gt;root&lt;/code&gt;. Yes, you can have pools scoped by space.&lt;/p&gt;

&lt;p&gt;Please remember that one worker is able to run one execution at a time. Whether this is good or bad, I leave to you; however, it doesn't sound very appealing if I use virtual machines, am I right? But! We can use this VM as a container platform, and then it starts to look a little bit better.&lt;/p&gt;

&lt;p&gt;Ok, we can create our pool.&lt;/p&gt;

&lt;p&gt;My pool is created and the file with the token is downloaded automatically. Well, I am not fully sure here. I almost missed this download - my browser didn't show anything. I knew there would be a token, so I was careful, but I think it would be nice if Spacelift improved the UX here.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create worker
&lt;/h1&gt;

&lt;p&gt;I decided to go the easy way and run an EC2 instance on AWS using the Spacelift image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2apxacf80r4fcz5tluk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2apxacf80r4fcz5tluk.jpg" alt="Select Spacelift AMI on AWS" width="755" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I changed only the default volume: I selected the gp3 type and increased the size to 20G.&lt;/p&gt;

&lt;p&gt;The provided AMI is not complete. I mean, it contains everything we need to &lt;em&gt;download and configure&lt;/em&gt; the launcher. Let's do it then!&lt;/p&gt;

&lt;p&gt;First, we need to encode the key we generated earlier&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;spacelift.key | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can log in to the newly created machine and run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt;
wget https://downloads.spacelift.io/spacelift-launcher
&lt;span class="nb"&gt;install &lt;/span&gt;spacelift-launcher /usr/bin/spacelift-launcher
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/bin/spacelift-launcher
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SPACELIFT_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;token_from_file&amp;gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SPACELIFT_POOL_PRIVATE_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key_from_encode_operation&amp;gt;"&lt;/span&gt;
/usr/bin/spacelift-launcher
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
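&lt;p&gt;A small convenience sketch (the file names are my assumptions, not Spacelift's; also note that &lt;code&gt;base64 -w 0&lt;/code&gt; is the GNU coreutils flag - on macOS use plain &lt;code&gt;base64&lt;/code&gt; without &lt;code&gt;-w&lt;/code&gt;): the two variables can be filled from files instead of pasting the values by hand.&lt;/p&gt;

```shell
# Stand-ins so the sketch is self-contained; in reality you already have
# spacelift.key (generated earlier) and the downloaded token file.
openssl genrsa -out spacelift.key 2048 2>/dev/null
printf 'dummy-token' > worker-pool-token.txt

# Fill the variables the launcher expects without manual copy-paste
export SPACELIFT_TOKEN="$(cat worker-pool-token.txt)"
export SPACELIFT_POOL_PRIVATE_KEY="$(base64 -w 0 < spacelift.key)"

[ -n "$SPACELIFT_TOKEN" ] && [ -n "$SPACELIFT_POOL_PRIVATE_KEY" ] && echo "env ready"
```

&lt;p&gt;Reading from files also keeps the secrets out of your shell history, which matters if you ever script this for real.&lt;/p&gt;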



&lt;p&gt;What I do not like: to do this quickly, I had to use the &lt;code&gt;root&lt;/code&gt; account.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What we've just done is very fragile and should be treated as a "let me check it" approach only. When done properly, all of this should be automated and run without any human access. In fact... Spacelift &lt;a href="https://github.com/spacelift-io/terraform-aws-spacelift-workerpool-on-ec2" rel="noopener noreferrer"&gt;provides a repository with IaC to provision the infrastructure&lt;/a&gt;!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's look at our workers pool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1glgkppek05wb11m02x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1glgkppek05wb11m02x.jpg" alt="Copnfigured workers pool" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yep, we are done!&lt;/p&gt;

&lt;h1&gt;
  
  
  The 'hmmm' moment
&lt;/h1&gt;

&lt;p&gt;I need to dig deeper into the workers pool. I saw a couple of things which made me think.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What will happen if I stop my worker node? &lt;/li&gt;
&lt;li&gt;Logs for spacelift-launcher are set to &lt;code&gt;debug&lt;/code&gt; by default (at least, this is what I see after a quick check).&lt;/li&gt;
&lt;li&gt;In the logs I saw information about S3, and this is something I have to explore. Which S3 bucket? Why? Who owns it? I have some ideas about what is behind it, but I have to confirm them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Takeaways
&lt;/h1&gt;

&lt;p&gt;Spacelift allows us to create private pools of workers which can be used for different workloads. Creation is simple and management is quite effective.&lt;/p&gt;

</description>
      <category>spacelift</category>
      <category>aws</category>
      <category>configuration</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Drift detection</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Wed, 01 Mar 2023 22:31:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/drift-detection-g02</link>
      <guid>https://dev.to/aws-builders/drift-detection-g02</guid>
      <description>&lt;p&gt;This episode tells the story about what drift is and why it is important to understand how crucial this process is in order to ensure proper configuration and security of the infrastructure.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is drift
&lt;/h1&gt;

&lt;p&gt;I am sure many of you have at least heard this term. Let's make it simple. Drift is an unwanted, undocumented, most often manual change in the configuration, made outside the process. In other words, someone logged into AWS (for example) and added something to a Security Group or an IAM Policy. "For a moment, to check something". And this "small change" opens a lot of possibilities for attackers for a long period of time, until it is detected. If it is detected.&lt;/p&gt;

&lt;p&gt;It is because of drift that many teams are afraid to run their IaC templates, so as not to "break something". Well, the main issues here are a missing process, best practices that are not followed, and IaC used as "scripted Operator's hands".&lt;/p&gt;

&lt;p&gt;So, let me be harsh. If anyone in your team says "do we really need to run this? I am not sure what will happen", don't be angry at him. Be angry at his manager :)&lt;/p&gt;

&lt;p&gt;In the case of IaC we have two states, or phases, to care about. They are quite distinct and easy to define: one is before deployment, the second is the lifecycle after deployment.&lt;/p&gt;

&lt;p&gt;Implementing a proper mechanism based on best practices for the first phase is relatively easy. I have spoken about it multiple times at different events (and probably I'll write something about it). The second one, however, is tricky. While the first one is activated, or triggered, by some event (like a push to a repo, for example), the second must be executed continuously. Another factor here is the quality of drift detection, which is... questionable in many aspects.&lt;/p&gt;

&lt;p&gt;But we have Spacelift! Let's see what it can do for us.&lt;/p&gt;

&lt;h1&gt;
  
  
  Preparations
&lt;/h1&gt;

&lt;p&gt;Before we start checking for drift, let's add more resources to our template. Simple resources: a Security Group and an EC2 instance.&lt;/p&gt;

&lt;p&gt;First, the Security Group. A simple one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"default-vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"my-sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test drifts"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default-vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test entry"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.10.10.10/32"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test entry"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For those of you who are unfamiliar with Terraform: in the first resource (well, data source, in fact) we gather information about the default VPC in the Region, and then we use this information to create the Security Group in this VPC.&lt;/p&gt;

&lt;p&gt;Now it is time for the EC2 instance&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu-recent"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"virtualization-type"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;owners&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Canonical&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet_ids"&lt;/span&gt; &lt;span class="s2"&gt;"list-of-subnets"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default-vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;locals&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_ids_list&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tolist&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet_ids&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;list-of-subnets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ids&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_network_interface"&lt;/span&gt; &lt;span class="s2"&gt;"ec2-eni"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet_ids_list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;security_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;my-sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"drift-test"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ubuntu-recent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;

  &lt;span class="nx"&gt;network_interface&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;network_interface_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_network_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ec2-eni&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;device_index&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What we've done here is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;get the latest ID of the Ubuntu AMI&lt;/li&gt;
&lt;li&gt;get the list of Subnets available in VPC (we had to manipulate this a little with &lt;code&gt;locals&lt;/code&gt; and &lt;code&gt;tolist&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;create the ENI (network interface)&lt;/li&gt;
&lt;li&gt;create EC2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple :)&lt;/p&gt;

&lt;p&gt;After we apply the template through Spacelift, let's see our resources in the &lt;code&gt;Resources&lt;/code&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nzllzyb4s4ntaxxhboz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nzllzyb4s4ntaxxhboz.jpg" alt="Created resources" width="380" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We should see 7 resources created.&lt;/p&gt;

&lt;h1&gt;
  
  
  Time to create a drift
&lt;/h1&gt;

&lt;p&gt;Let's log in to the console and make some mess. What I've done:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change the instance type&lt;/li&gt;
&lt;li&gt;Change inbound rule in Security Group&lt;/li&gt;
&lt;li&gt;Add another inbound rule&lt;/li&gt;
&lt;li&gt;Remove outbound rule&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ok, we generated some drift.&lt;/p&gt;

&lt;h1&gt;
  
  
  Drift detection
&lt;/h1&gt;

&lt;p&gt;Drift detection is a process which allows us to catch drift. And now you should see the difficulty already. Drift can happen at any time, so the detection should be continuous. But how do we achieve that? If we continuously detect drifts using Terraform, we will permanently lock the state. We don't want that.&lt;/p&gt;

&lt;p&gt;Ah well, this is the topic for another story :)&lt;/p&gt;
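&lt;p&gt;For comparison, a plain-Terraform drift check is usually sketched like this (my sketch, not Spacelift's mechanism; &lt;code&gt;-refresh-only&lt;/code&gt; requires Terraform 0.15.4+, and &lt;code&gt;-detailed-exitcode&lt;/code&gt; surfaces drift in the exit status):&lt;/p&gt;

```shell
# terraform plan -detailed-exitcode returns:
#   0 = no changes (no drift), 1 = error, 2 = changes present (drift suspected)
if command -v terraform >/dev/null 2>&1; then
  # -refresh-only compares real infrastructure against the state;
  # -lock-timeout bounds how long we wait for the state lock
  terraform plan -refresh-only -detailed-exitcode -lock-timeout=30s
  rc=$?
else
  rc=0   # terraform not installed here; nothing to check in this sketch
fi

case $rc in
  0) status="no drift detected" ;;
  2) status="drift detected" ;;
  *) status="plan failed" ;;
esac
echo "$status"
```

&lt;p&gt;Run continuously (e.g. from cron), exactly such a loop keeps taking the state lock - which is the problem mentioned above.&lt;/p&gt;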

&lt;p&gt;Let's create our check using Spacelift.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzenu994dh8pfnvqyoshy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzenu994dh8pfnvqyoshy.jpg" alt="Schedules" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to &lt;code&gt;Settings&lt;/code&gt; and select the &lt;code&gt;Scheduling&lt;/code&gt; tab in your stack. Click &lt;code&gt;Add new schedule&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp2b1yajr685r2w4zhxa.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp2b1yajr685r2w4zhxa.jpg" alt="Schedules" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, there is bad news. This does not work on public workers for now. So, we need to set up a private worker - we will do it in the next episode and then continue with drifts!&lt;/p&gt;

&lt;p&gt;But let's explore the configuration now. We can add different types of actions; we selected &lt;code&gt;Drift detection&lt;/code&gt;. The &lt;code&gt;Reconcile&lt;/code&gt; option is very powerful and provides a kind of &lt;em&gt;self-healing&lt;/em&gt; approach. It means that if we enable the option, when drift is detected, the stored template will be applied to restore the desired state of the infrastructure. Desired means here: stored in VCS, in the template.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Ignore state&lt;/code&gt; might be risky. Normally, we shouldn't run detection if our stack is in a state other than &lt;code&gt;Finished&lt;/code&gt;, as this might cause problems.&lt;/p&gt;

&lt;p&gt;And finally, &lt;code&gt;Schedule&lt;/code&gt; is a cron-like expression to set the schedule.&lt;/p&gt;
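&lt;p&gt;For illustration only (I am showing standard five-field cron syntax here; check Spacelift's documentation for the exact dialect it accepts), expressions such as these would run the check on a fixed cadence:&lt;/p&gt;

```
# minute hour day-of-month month day-of-week
0 */4 * * *     # every 4 hours, on the hour
0 3 * * 1-5     # 03:00 on weekdays only
```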

&lt;h1&gt;
  
  
  Takeaways
&lt;/h1&gt;

&lt;p&gt;We learned what drifts are and what their main cause is. We discussed how we can detect drifts, and we checked whether we can detect this problem using Spacelift. We learned that, as of this day (3rd of March 2023), we have to do some additional work, which we will do in the next episode.&lt;/p&gt;

</description>
      <category>spacelift</category>
      <category>aws</category>
      <category>configuration</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Look around the stack functionality</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Tue, 28 Feb 2023 00:15:05 +0000</pubDate>
      <link>https://dev.to/aws-builders/look-around-the-stack-functionality-5d4i</link>
      <guid>https://dev.to/aws-builders/look-around-the-stack-functionality-5d4i</guid>
      <description>&lt;p&gt;So, we already configured the Spacelift environment and we run our first template through it. It is time to look around the Spacelift console a little bit.&lt;/p&gt;

&lt;p&gt;In this episode we will take a look into Stacks and what information we can collect.&lt;/p&gt;

&lt;p&gt;But the preparations first.&lt;/p&gt;

&lt;p&gt;I've created a new branch and added this part to &lt;code&gt;main.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"mysecondbucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"thisismysecondtestbucketforspacelift"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then I created another branch (from the newly created one) and added &lt;code&gt;outputs.tf&lt;/code&gt; with this content&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"mysecondbucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"thisismysecondtestbucketforspacelift"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I published my branches, of course.&lt;/p&gt;

&lt;p&gt;Now, I merged the third branch into the second one through a Pull Request. Why did I do this? To prove that Spacelift is smart :) With the configuration we did earlier, nothing should happen. And indeed, nothing did.&lt;/p&gt;

&lt;p&gt;Now it is time to create a PR to the main branch. This time, something did happen.&lt;/p&gt;

&lt;p&gt;Navigate to the &lt;code&gt;PRs&lt;/code&gt; tab in the Stacks section (select your stack, of course).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4byvizg0mpeh9pju19b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4byvizg0mpeh9pju19b.jpg" alt="Pull Request" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have our PR here, in Spacelift! Let's click on it, and here we can see quite interesting things.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6hz38o1bvj3hqbj4477.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6hz38o1bvj3hqbj4477.jpg" alt="Pull Request details" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc029xxc1efqagzq27wfa.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc029xxc1efqagzq27wfa.jpg" alt="Pull Request details" width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the screenshots above we see the details of the run performed against the code in the PR. Spacelift did a &lt;code&gt;terraform plan&lt;/code&gt; for us.&lt;/p&gt;

&lt;h1&gt;
  
  
  More tabs
&lt;/h1&gt;

&lt;p&gt;We have already visited the &lt;code&gt;Settings&lt;/code&gt;, &lt;code&gt;Runs&lt;/code&gt; and &lt;code&gt;PRs&lt;/code&gt; tabs. Let's take a look at what else is there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tasks
&lt;/h2&gt;

&lt;p&gt;In this tab we can run commands against the collected inventory, by which I mean the information and data available to Spacelift.&lt;/p&gt;
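
&lt;p&gt;As examples, here are a few read-only Terraform commands that should be safe to run as tasks (just examples, not an exhaustive list):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform output
terraform state list
terraform providers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;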

&lt;p&gt;Let's take a look.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6reuszfvj5h5ze0crvt9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6reuszfvj5h5ze0crvt9.jpg" alt="Task" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, the command should be &lt;code&gt;terraform output&lt;/code&gt;; I noticed that typo after the first run :)&lt;/p&gt;

&lt;p&gt;And the effect is below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84h2r78o9nzx8flnhozt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84h2r78o9nzx8flnhozt.jpg" alt="Task output" width="657" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, the process is quite similar to a normal run. In fact, we could call this functionality 'ad-hoc execution'. So, if anyone wondered how to deal with the state file... well, you can do it here. Let's try &lt;code&gt;terraform state list&lt;/code&gt; as an example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm5d0pm4jrhozv9as0p8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm5d0pm4jrhozv9as0p8.jpg" alt="Task output" width="776" height="699"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looks good, doesn't it? I miss one thing, though. Let me show you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc968q0xwudp2elpsre1s.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc968q0xwudp2elpsre1s.jpg" alt="Task output" width="597" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now imagine an even longer command. I would like an option to give these tasks names. Why? Because tasks can be re-run, and with a long, complicated command as the title, it takes extra "brain power" to locate and understand the right task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment
&lt;/h2&gt;

&lt;p&gt;In this tab we can see the default environment variables and define our own, both plain values and secrets. Nothing fancy, nothing different from the typical behavior of a CI/CD tool.&lt;/p&gt;
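
&lt;p&gt;One convention worth knowing for Terraform stacks (standard Terraform behavior, not Spacelift-specific): an environment variable named &lt;code&gt;TF_VAR_&amp;lt;name&amp;gt;&lt;/code&gt; is picked up as the value of the Terraform variable &lt;code&gt;&amp;lt;name&amp;gt;&lt;/code&gt;. A hypothetical example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# declared in the Terraform code
variable "bucket_name" {}

# defined as an environment variable in this tab
TF_VAR_bucket_name=thisismytestbucketforspacelift
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;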

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Here we can see a graphical representation of the resources created during the lifetime of the project. We can change the grouping and get details about each resource.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outputs
&lt;/h2&gt;

&lt;p&gt;For our project, this shows exactly the same as &lt;code&gt;terraform output&lt;/code&gt;. I do not like the layout, though; it seems inconsistent with the other pages of the service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dependencies and Notifications
&lt;/h2&gt;

&lt;p&gt;At this point there is not much to say about them.&lt;/p&gt;

&lt;p&gt;In the Dependencies tab we can monitor, control and manage dependencies between stacks. In our case, nothing is there yet.&lt;/p&gt;

&lt;p&gt;We also do not have any notifications, so this tab is empty too.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;We went through all the functionalities in the &lt;code&gt;Stacks&lt;/code&gt; menu. We learned what information is there, how to get it, and how to make it useful. In the next episode we will do something even more interesting :)&lt;/p&gt;

</description>
      <category>spacelift</category>
      <category>aws</category>
      <category>configuration</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>First configuration and execution</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Sun, 26 Feb 2023 21:40:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/first-configuration-and-execution-2h2e</link>
      <guid>https://dev.to/aws-builders/first-configuration-and-execution-2h2e</guid>
      <description>&lt;p&gt;In this episode we will prepare our Spacelift account and we will look around the console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before we start
&lt;/h2&gt;

&lt;p&gt;Before we start playing with Spacelift, we should create a new repository (I called mine &lt;code&gt;spacelift-demo&lt;/code&gt;) and put two files there. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;providers.tf&lt;/code&gt;, which is exactly the same as in the first episode of this series&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main.tf&lt;/code&gt; with this content:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;assume_role_with_web_identity&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;role_arn&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::1234567890:role/spacelift-role"&lt;/span&gt;
    &lt;span class="nx"&gt;web_identity_token_file&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/mnt/workspace/spacelift.oidc"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;default_tags&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sandbox"&lt;/span&gt;
      &lt;span class="nx"&gt;Terraform&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"True"&lt;/span&gt;
      &lt;span class="nx"&gt;Repo&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"spacelift-prep"&lt;/span&gt;
      &lt;span class="nx"&gt;Project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Spacelift tutorial"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"mybucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;thisismytestbucketforspacelift&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple stuff: we create just one AWS S3 Bucket.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please remember, S3 Bucket name must be &lt;strong&gt;globally unique!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
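
&lt;p&gt;If you want to reduce the chance of a name collision, one common trick (a sketch, not part of this demo; it requires the &lt;code&gt;hashicorp/random&lt;/code&gt; provider) is to append a random suffix to the bucket name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "mybucket" {
  # produces something like thisismytestbucketforspacelift-1a2b3c4d
  bucket = "thisismytestbucketforspacelift-${random_id.suffix.hex}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;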

&lt;p&gt;We configured the IAM Role which we want to use. Of course, again, it is hardcoded, which is not a good practice, but it is enough for demo purposes.&lt;/p&gt;
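
&lt;p&gt;If you want to avoid the hardcoded ARN, a minimal improvement (just a sketch; the variable name is my own) is to pass it in as a variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;variable "spacelift_role_arn" {
  type = string
}

provider "aws" {
  assume_role_with_web_identity {
    role_arn                = var.spacelift_role_arn
    web_identity_token_file = "/mnt/workspace/spacelift.oidc"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;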

&lt;p&gt;And we need to commit this to the main branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finally we can go forward
&lt;/h2&gt;

&lt;p&gt;After the login we can see the menu on the left side: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu387ewe1jil2uq9j1ja8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu387ewe1jil2uq9j1ja8.jpg" alt="Spacelift menu" width="226" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This menu should give us a glimpse of what Spacelift really is. As stated on the webpage: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Spacelift is a sophisticated CI/CD platform for Terraform, CloudFormation, Pulumi, Kubernetes, and Ansible&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Spacelift is a CI/CD tool. But is it only that? What we will explore is whether Spacelift is more of an orchestrator for IaC.&lt;/p&gt;

&lt;p&gt;So, let's get started!&lt;/p&gt;

&lt;p&gt;The first position in the menu is &lt;code&gt;Stacks&lt;/code&gt;. A stack is the main entity of a project: it uses a code repository and manages the resources and state.&lt;/p&gt;

&lt;p&gt;Let's create our first stack. Click the &lt;code&gt;Add stack&lt;/code&gt; button at the top right side of the screen. It seems a little bit hidden; I think that could be improved :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooiebko3jdi28lqoafeu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooiebko3jdi28lqoafeu.jpg" alt="Add stack" width="167" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the first screen we define the basic information: the name of the stack, the space (we will learn more about spaces later), a description, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsoq80nt5kzeavkown5m4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsoq80nt5kzeavkown5m4.jpg" alt="Add stack - first step" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok, done, let's click &lt;code&gt;continue&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And here we can be hit by something unexpected. Or should I say, expected? :) We need to provide the repository, and if we create one now, we can't just refresh the list. That is exactly why we created the repo earlier. It would be good to have the possibility to refresh the list, or even... to create a repo from here.&lt;/p&gt;

&lt;p&gt;I provided the information like on screen below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb99ocr2wmuwg7vtnyp2q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb99ocr2wmuwg7vtnyp2q.jpg" alt="Add stack - second step" width="506" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After we click &lt;code&gt;Continue&lt;/code&gt; we get to a very important, I would say the most important, part of the configuration. We deal with Terraform here, and this part allows us to configure its behavior. This is not the place to explain what a state file is, so let me go through these fields briefly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyij7p50kswogcaws0h0t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyij7p50kswogcaws0h0t.jpg" alt="Add stack - third step" width="783" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Backend&lt;/code&gt; in our case is Terraform; there are more to choose from. You may ask why there is no backend for Ansible, as Spacelift claims to manage it too. Well, Ansible does not keep a state file, so there is no state to manage.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Smart sanitization&lt;/code&gt; - a very nice feature. Spacelift will try to determine whether a value is sensitive and should be hidden (useful for variables).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Version&lt;/code&gt; - quite obvious, isn't it?&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Manage state&lt;/code&gt; - a crucial setting. We can let Spacelift take care of the state file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Import existing statefile&lt;/code&gt; - a very cool feature. It means we can migrate the management of existing resources to Spacelift.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And now we go to the final step of configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5y62aa9zio0fcbxvvbl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5y62aa9zio0fcbxvvbl.jpg" alt="Add stack - fourth step" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A nice thing: we can mark the stack as administrative. It means we can use Terraform to manage Spacelift itself! How cool is that? And which came first, the chicken or the egg? ;)&lt;/p&gt;

&lt;p&gt;I set some advanced options too. First, &lt;code&gt;Autodeploy&lt;/code&gt;. For this simple repo we don't need any additional actions, so let's deploy it directly. Second, I enabled &lt;code&gt;Local preview&lt;/code&gt;. This looks like a very cool option: we can check the stack locally with the &lt;code&gt;spacectl&lt;/code&gt; command! We will test it soon.&lt;/p&gt;
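
&lt;p&gt;For reference, running a local preview looks roughly like this (a sketch; check &lt;code&gt;spacectl --help&lt;/code&gt; for the exact syntax, and replace the stack ID with your own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# trigger a proposed run (plan) based on the local working directory
spacectl stack local-preview --id my-first-stack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;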

&lt;p&gt;Finally, we can decide about runners and their images and we can even customize the workflows.&lt;/p&gt;

&lt;p&gt;Let's save the stack now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xhwni4t6e09y0ctmnjc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xhwni4t6e09y0ctmnjc.jpg" alt="Add stack - save" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nicely done!&lt;/p&gt;

&lt;h2&gt;
  
  
  Burn the AWS!!
&lt;/h2&gt;

&lt;p&gt;Or maybe not :) We have a simple template prepared, so let's trigger it!&lt;/p&gt;

&lt;p&gt;Let's click this small button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o76ubm9utb8m4sqw7sq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o76ubm9utb8m4sqw7sq.jpg" alt="Purple, but powerful ;)" width="97" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And see what happened!&lt;/p&gt;

&lt;p&gt;The process is clean and clear.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe5yeyhyo50zcrd85tey.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe5yeyhyo50zcrd85tey.jpg" alt="Execution report screen" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screen above shows the report after a run triggered by a commit; however, it doesn't really matter, as the information is very similar either way.&lt;/p&gt;

&lt;p&gt;First, I need to say what I do not like: the order. Well, yes, this works, but somehow I prefer to see it in the opposite direction, the oldest on top, the newest at the bottom. A matter of habit, I suppose.&lt;/p&gt;

&lt;p&gt;We see a couple of stages. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Queued&lt;/code&gt; - the time between the trigger and the actual start of the execution. Spacelift prepares the resources where our demo is executed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Preparing&lt;/code&gt; - the actual preparation of the resources mentioned in the step above.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Initializing&lt;/code&gt; - in the case of our demo, the initialization of Terraform - &lt;code&gt;terraform init&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Planning&lt;/code&gt; - well, &lt;code&gt;terraform plan&lt;/code&gt; :)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Applying&lt;/code&gt; - ... you know, right? :) Do you remember that we enabled &lt;code&gt;Autodeploy&lt;/code&gt; when we set up the stack? That is why apply was executed immediately after plan.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Finished&lt;/code&gt; - just the info that the run has ended.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What did our Terraform do? We can check it right from Spacelift!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv926rdfrg79w3y10e2c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv926rdfrg79w3y10e2c.jpg" alt="Resources manipulation" width="800" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do you see the &lt;code&gt;ADD&lt;/code&gt; tab? Next to it there is a &lt;code&gt;+1&lt;/code&gt; note. This means... yes, we added 1 resource! When we go to this tab, we will be able to see all the actions performed by the execution. Sweet.&lt;/p&gt;

</description>
      <category>spacelift</category>
      <category>aws</category>
      <category>configuration</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>And we have (Space)lift (off)!</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Sun, 26 Feb 2023 21:40:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/and-we-have-spacelift-off-1eal</link>
      <guid>https://dev.to/aws-builders/and-we-have-spacelift-off-1eal</guid>
      <description>&lt;p&gt;This time we will configure and run our first small template through &lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt;. Our first step will be to configure the connection between AWS and the service. I assume, that the Spacelift account is created and connected with GitHub.&lt;/p&gt;

&lt;p&gt;In this short series we will learn how to configure, connect and start to use Spacelift.&lt;/p&gt;

&lt;p&gt;The first thing any real developer does is switch the GUI to dark mode :D Go to your account settings in the bottom left corner of the screen and find the &lt;code&gt;dark mode&lt;/code&gt; setting. Right, now we are ready to go :)&lt;/p&gt;

&lt;h1&gt;
  
  
  AWS
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Connect Spacelift with Cloud provider
&lt;/h2&gt;

&lt;p&gt;To work with AWS we need to configure our connection to the vendor. The documentation provided by Spacelift is really rich and clear; however, I'd like to mention the security aspect. The first way, the easier one, is to provide a user with programmatic access and a role to assume. Is it a good solution? Well, good enough. However, I prefer another way, which is also supported by Spacelift: the OIDC connection. At the end of the day it does the same thing, but from a security standpoint it is better.&lt;/p&gt;

&lt;p&gt;OIDC needs some configuration (obviously), and part of it is a thumbprint. As this is a tutorial, not full-blown production-ready stuff, I will show you how to get this thumbprint using the CLI.&lt;/p&gt;

&lt;p&gt;First, let's collect the URL of our Spacelift app. Simply look at your browser :) In my case it is &lt;code&gt;https://&amp;lt;subdomain&amp;gt;.app.spacelift.io/&lt;/code&gt;. We have it, so let's generate the thumbprint.&lt;/p&gt;

&lt;p&gt;I use Ubuntu to get this information, in fact WSL2 on Windows :). Execute:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl https://&amp;lt;subdomain&amp;gt;.app.spacelift.io/.well-known/openid-configuration|jq&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Please note, I added &lt;code&gt;jq&lt;/code&gt; at the end to get nicer output.&lt;/p&gt;

&lt;p&gt;Find the line with &lt;code&gt;jwks_uri&lt;/code&gt;. Copy the domain name &lt;strong&gt;only&lt;/strong&gt; from it and use it in the following command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;openssl s_client -servername &amp;lt;subdomain&amp;gt;.app.spacelift.io -showcerts -connect &amp;lt;subdomain&amp;gt;.app.spacelift.io:443&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Make sure you don't include https://, etc. Just the domain.&lt;/p&gt;

&lt;p&gt;Scroll through the output and find the certificate. It will look something like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-----BEGIN CERTIFICATE-----
somestring
-----END CERTIFICATE-----
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file (for example &lt;code&gt;certificate.crt&lt;/code&gt;) and copy this whole part there.&lt;/p&gt;

&lt;p&gt;Now we are ready to generate the thumbprint.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;openssl x509 -in certificate.crt -fingerprint -sha1 -noout |tr -d :&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As you can see, I used the &lt;code&gt;tr&lt;/code&gt; command with a pipe to get rid of the &lt;code&gt;:&lt;/code&gt; characters. If you are not familiar with pipes and redirections in Linux, no worries, &lt;a href="https://killercoda.com/pawelpiwosz/course/linuxFundamentals/lf-05-pipes" rel="noopener noreferrer"&gt;here is my lab about it&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The string in the output is the part we need to complete our OIDC configuration.&lt;/p&gt;
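
&lt;p&gt;Putting it all together, the whole thumbprint flow from above looks like this (a sketch of the steps, not a copy-paste script; replace the placeholders with your own values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Read the jwks_uri from the OIDC discovery document
curl https://&amp;lt;subdomain&amp;gt;.app.spacelift.io/.well-known/openid-configuration | jq -r '.jwks_uri'

# 2. Fetch the certificate chain for that domain and save the
#    certificate block into certificate.crt
openssl s_client -servername &amp;lt;domain&amp;gt; -showcerts -connect &amp;lt;domain&amp;gt;:443

# 3. Generate the thumbprint
openssl x509 -in certificate.crt -fingerprint -sha1 -noout | tr -d :
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;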

&lt;blockquote&gt;
&lt;p&gt;We can do it with Terraform, of course. In fact, if you plan to use this in a real project, I strongly recommend doing it with IaC. However, now you know how to do it from the CLI :)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Let's terraform it
&lt;/h2&gt;

&lt;p&gt;You know what? Creating all these resources on AWS from the GUI is so old-fashioned :) Let's have a small Terraform template for it! We need to create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the OIDC provider itself&lt;/li&gt;
&lt;li&gt;an IAM Role for Spacelift to assume&lt;/li&gt;
&lt;li&gt;an IAM Policy which describes what Spacelift can do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, let's create the file &lt;code&gt;providers.tf&lt;/code&gt; with this content&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="nx"&gt;required_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;=1.3"&lt;/span&gt;

  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 4.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we can forget about this file from now on.&lt;/p&gt;

&lt;p&gt;We know our thumbprint now, so we can create the first block in &lt;code&gt;main.tf&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;default_tags&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sandbox"&lt;/span&gt;
      &lt;span class="nx"&gt;Terraform&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"True"&lt;/span&gt;
      &lt;span class="nx"&gt;Repo&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"spacelift-prep"&lt;/span&gt;
      &lt;span class="nx"&gt;Project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Spacelift tutorial"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_openid_connect_provider"&lt;/span&gt; &lt;span class="s2"&gt;"spacelift"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://&amp;lt;subdomain&amp;gt;.app.spacelift.io"&lt;/span&gt;

  &lt;span class="nx"&gt;client_id_list&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"&amp;lt;subdomain&amp;gt;.app.spacelift.io"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;thumbprint_list&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;thumbprint&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note, we also have the provider defined here.&lt;/p&gt;

&lt;p&gt;Now it is time to define the IAM Role. Within this definition, we will ensure that the Role can be assumed by Spacelift only. We want to build a trust relationship between this entity and Spacelift to secure our connection as much as possible.&lt;/p&gt;

&lt;p&gt;The Role, and the condition inside it, is described well in Spacelift's documentation, so I will not go into details; I will just explain a few parts.&lt;/p&gt;

&lt;p&gt;First, we use federated access, based on the OIDC provider we defined earlier.&lt;/p&gt;

&lt;p&gt;Second, please note the &lt;code&gt;Condition&lt;/code&gt; section of the Role. This makes the connection more secure by narrowing down the entities which can use this role. We can create an even more precise boundary; it is all in the documentation. However, this one is enough for us at this moment.&lt;/p&gt;

&lt;p&gt;And here is the Terraform code for our Role&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"spacelift-role"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;aws_iam_openid_connect_provider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spacelift&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"spacelift-role"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Role to assume by spacelift"&lt;/span&gt;

  &lt;span class="nx"&gt;assume_role_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;ROLE&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${aws_iam_openid_connect_provider.spacelift.arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "&amp;lt;subdomain&amp;gt;.app.spacelift.io:aud": "&amp;lt;subdomain&amp;gt;.app.spacelift.io"
        }
      }
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;ROLE
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might ask, &lt;em&gt;why do you use this old-fashioned way with &lt;code&gt;&amp;lt;&amp;lt;ROLE&lt;/code&gt;?&lt;/em&gt; Well, good question :) For two reasons: I learned Terraform this way, and this approach helps me see where a Role or Policy document ends. Quite handy, especially for long documents.&lt;/p&gt;

&lt;p&gt;Ok, finally, we will create a "very secure" Policy and attach it to the Role. Please keep in mind that the Policy should be tailored to your needs, not wide open like here. We do it for demo purposes, so we can live with it for now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy"&lt;/span&gt; &lt;span class="s2"&gt;"spacelift-policy"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"spacelift-policy"&lt;/span&gt;

  &lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;POLICY&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": ["*"]
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;POLICY
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"spacelift-iam-attachment"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spacelift-role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;policy_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_policy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spacelift-policy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No magic here, I suppose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Init
&lt;/h2&gt;

&lt;p&gt;We have our template, we can execute it.&lt;/p&gt;

&lt;p&gt;First, we need to initialize Terraform&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can format and validate the template&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform fmt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform validate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Well, I have to do one more thing. I use many AWS profiles, so I need to specify which one to use. There are many ways to do so; I use the easiest and least flexible one and put it into the template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;profile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"demos"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please remember, this is for demo purposes only, so it is not an issue here.&lt;/p&gt;
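&lt;p&gt;If you prefer not to hardcode the profile in the template, one of the other ways is to set it in the environment before running Terraform, for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ &lt;span class="nb"&gt;export &lt;/span&gt;AWS_PROFILE=demos
$ terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;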

&lt;h2&gt;
  
  
  Execute Terraform
&lt;/h2&gt;

&lt;p&gt;We can deploy our stack.&lt;/p&gt;

&lt;p&gt;First, let's check that everything is OK&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If all went well, you should see something like&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Plan: 4 to add, 0 to change, 0 to destroy.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Ok, so let's rock!&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply -auto-approve&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This &lt;code&gt;Apply complete! Resources: 4 added, 0 changed, 0 destroyed.&lt;/code&gt; means success.&lt;/p&gt;

&lt;p&gt;But wait... What Role should I use later? We can go to the GUI and... &lt;strong&gt;NO!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Outputs
&lt;/h2&gt;

&lt;p&gt;Let's create one more file, called &lt;code&gt;outputs.tf&lt;/code&gt; and put there&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"IAM-Role-to-assume"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"IAM Role to assume by Spacelift"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spacelift-role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can run &lt;code&gt;terraform refresh&lt;/code&gt; and immediately see the ARN of the Role. If this info is needed later, we can run &lt;code&gt;terraform output&lt;/code&gt;.&lt;/p&gt;
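&lt;p&gt;For example, when scripting, the &lt;code&gt;-raw&lt;/code&gt; flag makes &lt;code&gt;terraform output&lt;/code&gt; print just the bare value, which is handy for passing the ARN to other tools:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ terraform output -raw IAM-Role-to-assume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;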

&lt;h1&gt;
  
  
  Spacelift
&lt;/h1&gt;

&lt;p&gt;Well, it looks like this episode is more about Terraform than Spacelift :) It is important, though, to have the connection set up properly, so I believe this is not a big deal :)&lt;/p&gt;

&lt;p&gt;Ok, let's do our work on Spacelift side!&lt;/p&gt;

&lt;p&gt;Go to your dashboard, to &lt;code&gt;Cloud integrations&lt;/code&gt; (bottom left of the screen).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2sgpbwkdjcqy2tdxdx5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2sgpbwkdjcqy2tdxdx5.jpg" alt="Add integration" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And configure your AWS integration accordingly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbe6fqtzpg27fkzznazen.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbe6fqtzpg27fkzznazen.jpg" alt="Configure integration" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And... Yes, that's it :)&lt;/p&gt;

&lt;h1&gt;
  
  
  Key takeaways
&lt;/h1&gt;

&lt;p&gt;For now, we know how to connect the dots. In the next episode we will learn what Spacelift is and how to start with it.&lt;/p&gt;

</description>
      <category>spacelift</category>
      <category>aws</category>
      <category>configuration</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Serverless Game</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Mon, 09 Jan 2023 22:50:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/serverless-game-5415</link>
      <guid>https://dev.to/aws-builders/serverless-game-5415</guid>
      <description>&lt;p&gt;I started 2023 with something new. I was wondering, what could be interesting, fresh and exciting. And I found it.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Game
&lt;/h1&gt;

&lt;p&gt;I invented... the game. Well, no, I invented &lt;strong&gt;The Game&lt;/strong&gt; :) The Game of Serverless. My idea was to make it very interactive, community-driven, and community-building. That is why, as the platform, I selected&lt;/p&gt;

&lt;h1&gt;
  
  
  LinkedIn
&lt;/h1&gt;

&lt;p&gt;This is the place where we can find so many brilliant and smart people. People eager to share their vast knowledge and experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  First rule of The Game
&lt;/h2&gt;

&lt;p&gt;Every week (most probably Sunday evening CEST) I publish new requirements for the application and infrastructure. The Players' task is to propose a solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Second rule of The Game
&lt;/h2&gt;

&lt;p&gt;The important rule is that the solution must use AWS Serverless services.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Quite an important disclaimer, I believe. This is a game, for fun. I do not expect, and do not ask for, truly mature enterprise solutions. That is what we are all paid for at work. Here we share tips and play the game for fun (I repeat myself, I know :) )&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;I play the role of "Game Master". My goal is to try to steer you onto the rocks. In other words, to force a situation where the Players agree that Serverless is not an option. If this happens, I win :)&lt;/p&gt;

&lt;p&gt;You, my dear Players, receive the requirements and, based on them, propose an architecture. You may even draw it, if you wish :)&lt;/p&gt;

&lt;p&gt;The set of requirements is simple, at least for now. In time the level should increase. For now we have business requirements (more or less). Soon we will start to discuss non-functional requirements (or quality attributes, if you wish).&lt;/p&gt;

&lt;h2&gt;
  
  
  Third rule of The Game
&lt;/h2&gt;

&lt;p&gt;The requirements. Well, the rule here is simple. If I didn't reveal something, you are free to propose whatever you wish (from the Serverless area, of course). I collect proposals from all Players and, based on them, create a new set of requirements.&lt;/p&gt;

&lt;p&gt;So, as you can see, it is indeed an interactive game. I have some requirements in mind. But... On the day I wrote this article, we were in the second week and I had already had to modify the requirements I gave to the Players :) This makes it interesting, doesn't it?&lt;/p&gt;

&lt;h1&gt;
  
  
  I feel the thrill, how to join?
&lt;/h1&gt;

&lt;p&gt;It is simple, really. I invite everyone to play.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fourth rule of The Game
&lt;/h2&gt;

&lt;p&gt;This is a game for fun (did I say that already?). Be nice, kind, and happy :)&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;p&gt;The best starting point is the &lt;a href="https://github.com/pawelpiwosz/TheGame-Serverless" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. I collect all data and progress there.&lt;/p&gt;

&lt;p&gt;You can join directly via my &lt;a href="https://www.linkedin.com/in/pawelpiwosz/" rel="noopener noreferrer"&gt;LinkedIn profile&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  I wait for you there!
&lt;/h1&gt;

&lt;p&gt;Let's have fun!&lt;/p&gt;

</description>
      <category>game</category>
      <category>serverless</category>
      <category>aws</category>
      <category>linkedin</category>
    </item>
    <item>
      <title>Deeper dive into SBOM</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Fri, 09 Dec 2022 19:42:20 +0000</pubDate>
      <link>https://dev.to/pawelpiwosz/deeper-dive-into-sbom-4ca9</link>
      <guid>https://dev.to/pawelpiwosz/deeper-dive-into-sbom-4ca9</guid>
      <description>&lt;p&gt;This episode is not about tools. This time we will take a look on the SBOM itself. We discussed basics of SBOMs, it is time to go a little bit deeper.&lt;/p&gt;

&lt;p&gt;The first thing that needs explanation is the difference between Software Composition Analysis (SCA) and an SBOM. Both analyze dependencies!&lt;/p&gt;

&lt;p&gt;Well, not exactly.&lt;/p&gt;

&lt;p&gt;SCA is an automated process that identifies open source components in a code base and evaluates them against licenses, vulnerabilities, security issues, etc. An SBOM is the report generated by the SCA software. An SBOM is a highly defined and structured document, and not all SCA tools generate their reports in a format acceptable as an SBOM. These SBOMs are then compared against multiple up-to-date databases to ensure the report quality.&lt;/p&gt;

&lt;p&gt;So, put very simply, SCA is a tool, whereas an SBOM is data. An SBOM contains a list of used components with some specific information, but the SBOM itself doesn't care how it was generated.&lt;/p&gt;

&lt;p&gt;We are talking here about different aspects of one approach to ensuring security and quality, which can be applied as part of &lt;a href="https://csrc.nist.gov/projects/cyber-supply-chain-risk-management" rel="noopener noreferrer"&gt;C-SCRM&lt;/a&gt; in the organization.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I am reading a lot about this at the moment, and I have seen many reports where the authors predict what percentage of companies will use SCA and SBOMs in the next few years. Honestly, I do not believe these predictions. Awareness of these solutions is quite low now, and adoption is even lower. Yes, we are more aware of SCA, but the SBOM, which looks like a natural extension of the process, is not very well known. That is my personal opinion :)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;How an SBOM should look is described, as I mentioned, quite strictly in the standard. If you would like to go deeper, &lt;a href="https://www.ntia.gov/files/ntia/publications/ntia_sbom_framing_2nd_edition_20211021.pdf" rel="noopener noreferrer"&gt;here is the link&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Let's take a look at the three main perspectives (as they are called) where an SBOM is very useful. The perspectives are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Produce&lt;/li&gt;
&lt;li&gt;Choose&lt;/li&gt;
&lt;li&gt;Operate&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Produce software
&lt;/h2&gt;

&lt;p&gt;A company that creates software and attaches an SBOM to its package gains not only external (sales) benefits but internal (development) ones as well: for example, by monitoring vulnerabilities in the used packages, by knowing the potential end-of-life date of a specific component used in its software, and by knowing all the dependencies included in the code.&lt;/p&gt;

&lt;p&gt;What is the benefit for external use? Well, simple: it gives partners a better view of "what is inside" and also creates, let's call it, a "better picture" of the seller. In other words: "they know what they sell".&lt;/p&gt;

&lt;p&gt;Below is a representation of areas of interest in &lt;strong&gt;Produce software&lt;/strong&gt; perspective.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93grp8m23p3t0i712cw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93grp8m23p3t0i712cw9.png" alt="Produce perspective" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Choose software
&lt;/h2&gt;

&lt;p&gt;Now the ball flies to the other side of the field. The company interested in buying the product is (also) able to verify vulnerabilities, control and be aware of the lifetime of the used components, control and understand the components' licenses, and target security analysis at already defined targets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbomdu73y3i1azkkksub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbomdu73y3i1azkkksub.png" alt="Choose perspective" width="800" height="688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Operate software
&lt;/h2&gt;

&lt;p&gt;The final phase is when the software is about to be bought and then operated. In this perspective, the organization can use SBOM analysis as one of the decisive elements. During the operational phase, an SBOM can help with independent mitigations (so the organization does not rely solely on the vendor), and helps it better administer its assets and evaluate risks and usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpiauox3ta9kd3x8zu4wi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpiauox3ta9kd3x8zu4wi.png" alt="Operate perspective" width="580" height="729"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That concludes the second theory lesson in this series.&lt;/p&gt;




&lt;p&gt;Cover image by &lt;a href="https://pixabay.com/pl/users/threemilesperhour-9661546/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3614254" rel="noopener noreferrer"&gt; Suzy&lt;/a&gt; from &lt;a href="https://pixabay.com/pl//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3614254" rel="noopener noreferrer"&gt; Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sbom</category>
      <category>cybersecurity</category>
      <category>compliance</category>
      <category>process</category>
    </item>
    <item>
      <title>SBOM with Checkov</title>
      <dc:creator>Paweł Piwosz</dc:creator>
      <pubDate>Fri, 25 Nov 2022 09:05:50 +0000</pubDate>
      <link>https://dev.to/pawelpiwosz/sbom-with-checkov-37ll</link>
      <guid>https://dev.to/pawelpiwosz/sbom-with-checkov-37ll</guid>
      <description>&lt;p&gt;This episode might be quite surprising, at least for those of us who know IaC and did quality and security scans of IaC templates.&lt;/p&gt;

&lt;p&gt;Well, yes, &lt;a href="https://www.checkov.io/" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt; is a quality scanner, but for some time now it has been more than that! Let's look at the frameworks that can be scanned by Checkov:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;--framework&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;bitbucket_pipelines,circleci_pipelines,argo_workflows,arm,azure_pipelines,bicep,cloudformation,dockerfile,github_configuration,github_actions,gitlab_configuration,gitlab_ci,bitbucket_configuration,helm,json,yaml,kubernetes,kustomize,openapi,sca_package,sca_image,secrets,serverless,terraform,terraform_plan,all&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;[{&lt;/span&gt;bitbucket_pipelines,circleci_pipelines,argo_workflows,arm,azure_pipelines,bicep,cloudformation,dockerfile,github_configuration,github_actions,gitlab_configuration,gitlab_ci,bitbucket_configuration,helm,json,yaml,kubernetes,kustomize,openapi,sca_package,sca_image,secrets,serverless,terraform,terraform_plan,all&lt;span class="o"&gt;}&lt;/span&gt; ...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Quite a number, don't you think?&lt;/p&gt;

&lt;p&gt;But... What about SBOMs? Can Checkov generate SBOM?&lt;/p&gt;

&lt;p&gt;No. Well, not really.&lt;/p&gt;

&lt;p&gt;But the report generated by Checkov can be exported in CycloneDX (CDX) format, which means it can be consumed in the process!&lt;/p&gt;

&lt;p&gt;Let's take a look. I will install Checkov and clone some random repos from GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;CloudFormation&lt;/li&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;Serverless&lt;/li&gt;
&lt;li&gt;Kubernetes&lt;/li&gt;
&lt;li&gt;Helm
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;checkov
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, installation is not that hard, is it? ;P&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/dwmkerr/terraform-consul-cluster.git
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/splunk/splunk-aws-cloudformation.git
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/webdevops/Dockerfile.git
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/softprops/serverless-aws-rust-http.git
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/kubernetes/examples.git
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/prometheus-community/helm-charts.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ok. I'll generate a report for each repo with CycloneDX output. Also, I will not specify the framework, so there is a good chance that some of these repos contain not only the "main" framework but others as well. We will see.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;checkov &lt;span class="nt"&gt;-d&lt;/span&gt; terraform-consul-cluster/ &lt;span class="nt"&gt;-o&lt;/span&gt; cyclonedx &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; tf.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For some reason, Checkov didn't save the report to the specified file but created a folder. It is not an issue, though; I used simple redirection and didn't spend time on it :)&lt;/p&gt;
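&lt;p&gt;As a side note, Checkov also has an &lt;code&gt;--output-file-path&lt;/code&gt; option that writes the report into a given directory, which may be the behavior I hit here; something like this should avoid the redirection (please check the help output of your Checkov version first):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;checkov &lt;span class="nt"&gt;-d&lt;/span&gt; terraform-consul-cluster/ &lt;span class="nt"&gt;-o&lt;/span&gt; cyclonedx &lt;span class="nt"&gt;--output-file-path&lt;/span&gt; results
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;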

&lt;p&gt;The report is not very readable for a human, but that doesn't matter; it should be (and is) readable for a machine. Checkov uses the newest CycloneDX version, 1.4.&lt;/p&gt;

&lt;p&gt;Let's take a look at the details. In the "standard report" I found this issue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Check: CKV2_AWS_12: &lt;span class="s2"&gt;"Ensure the default security group of every VPC restricts all traffic"&lt;/span&gt;
        FAILED &lt;span class="k"&gt;for &lt;/span&gt;resource: module.consul-cluster.aws_vpc.consul-cluster
        File: /modules/consul/01-vpc.tf:2-10
        Guide: https://docs.bridgecrew.io/docs/networking_4

                2  | resource &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"consul-cluster"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                3  |   cidr_block           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.vpc_cidr&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; // i.e. 10.0.0.0 to 10.0.255.255
                4  |   enable_dns_hostnames &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true
                &lt;/span&gt;5  |
                6  |   tags &lt;span class="o"&gt;{&lt;/span&gt;
                7  |     Name    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Consul Cluster VPC"&lt;/span&gt;
                8  |     Project &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"consul-cluster"&lt;/span&gt;
                9  |   &lt;span class="o"&gt;}&lt;/span&gt;
                10 | &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What do we have in the generated SBOM?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;vulnerability bom-ref&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"070be6ca-0732-4cf3-b0c7-a423fc0f45be"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;CKV2_AWS_12&amp;lt;/id&amp;gt;
    &amp;lt;&lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;name&amp;gt;checkov&amp;lt;/name&amp;gt;
    &amp;lt;/source&amp;gt;
    &amp;lt;ratings&amp;gt;
        &amp;lt;rating&amp;gt;
            &amp;lt;severity&amp;gt;unknown&amp;lt;/severity&amp;gt;
        &amp;lt;/rating&amp;gt;
    &amp;lt;/ratings&amp;gt;
    &amp;lt;description&amp;gt;Resource: module.consul-cluster.aws_vpc.consul-cluster. Ensure the default security group of every VPC restricts all traffic&amp;lt;/description&amp;gt;
    &amp;lt;advisories&amp;gt;
        &amp;lt;advisory&amp;gt;
            &amp;lt;url&amp;gt;https://docs.bridgecrew.io/docs/networking_4&amp;lt;/url&amp;gt;
        &amp;lt;/advisory&amp;gt;
    &amp;lt;/advisories&amp;gt;
    &amp;lt;affects&amp;gt;
        &amp;lt;target&amp;gt;
            &amp;lt;ref&amp;gt;pkg:terraform/cli_repo/terraform-consul-cluster/modules/consul/01-vpc.tf/module.consul-cluster.aws_vpc.consul-cluster@sha1:26077595ad94ad61098ccc203af70aaf518a847b&amp;lt;/ref&amp;gt;
        &amp;lt;/target&amp;gt;
    &amp;lt;/affects&amp;gt;
    &amp;lt;/vulnerability&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks quite nice. &lt;/p&gt;

&lt;p&gt;I generated SBOM reports from all the repos I cloned, and I am really satisfied with the results. Well done, Bridgecrew! :)&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;I really like Checkov, and I have been saying so for a few years now. It is becoming a more and more comprehensive tool, even in the version available for free. I am really happy to see the SBOM option, as it is becoming a very important part of the process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The great news is that SBOMs can also cover Infrastructure as Code. Imagine you buy a car and receive a report showing that every single component in the car passed verification and validation. Every single one, except the wheels. What can go wrong? Those wheels here are the IaC.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Why did I say &lt;em&gt;no&lt;/em&gt; at the beginning, when I asked myself whether Checkov is an SBOM tool? Well, the point is that an SBOM should contain all dependencies, and Checkov's focus is on templates. Don't get me wrong, that is OK; there are other tools that should take care of the code's dependencies. I say it to emphasize that Checkov cannot be the only tool used in the SBOM generation process.&lt;/p&gt;

&lt;p&gt;So, to be precise, Checkov is not an SCA tool, but it can generate an SBOM report for its part.&lt;/p&gt;




&lt;p&gt;Cover image by &lt;a href="https://pixabay.com/pl/users/threemilesperhour-9661546/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3614254" rel="noopener noreferrer"&gt; Suzy&lt;/a&gt; from &lt;a href="https://pixabay.com/pl//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3614254" rel="noopener noreferrer"&gt; Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sbom</category>
      <category>cybersecurity</category>
      <category>compliance</category>
      <category>process</category>
    </item>
  </channel>
</rss>
