<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jana Hockenberger</title>
    <description>The latest articles on DEV Community by Jana Hockenberger (@janahockenberger).</description>
    <link>https://dev.to/janahockenberger</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2150917%2F2edd54ca-fb2e-478f-8a6e-806fac6fb1e3.png</url>
      <title>DEV Community: Jana Hockenberger</title>
      <link>https://dev.to/janahockenberger</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/janahockenberger"/>
    <language>en</language>
    <item>
      <title>Automatically Upload Lambda Layer Packages to S3</title>
      <dc:creator>Jana Hockenberger</dc:creator>
      <pubDate>Thu, 12 Jun 2025 10:26:46 +0000</pubDate>
      <link>https://dev.to/janahockenberger/automatically-upload-lambda-layer-packages-to-s3-3pf0</link>
      <guid>https://dev.to/janahockenberger/automatically-upload-lambda-layer-packages-to-s3-3pf0</guid>
      <description>&lt;h1&gt;
  
  
  Why Use Lambda Layers?
&lt;/h1&gt;

&lt;p&gt;The benefits of AWS Lambda Layers should not be missed: they let you package code once and reuse it across different functions.&lt;/p&gt;

&lt;p&gt;In large environments, all Lambda Layers are usually located in one dedicated account and shared via layer version permissions with the functions that need them.&lt;/p&gt;
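&lt;p&gt;Sharing is done per layer version with the &lt;code&gt;add_layer_version_permission&lt;/code&gt; API. A minimal sketch with boto3 - the layer name and account ID below are illustrative, not from this article:&lt;/p&gt;

```python
def layer_permission_statement(layer_name, version, account_id):
    """Build the arguments for add_layer_version_permission.
    The names and IDs passed in are illustrative assumptions."""
    return {
        "LayerName": layer_name,
        "VersionNumber": version,
        "StatementId": f"share-with-{account_id}",
        "Action": "lambda:GetLayerVersion",
        "Principal": account_id,
    }


def share_layer_version(layer_name, version, account_id):
    """Apply the statement; requires AWS credentials when actually run."""
    import boto3  # imported lazily so the sketch stays importable offline

    boto3.client("lambda").add_layer_version_permission(
        **layer_permission_statement(layer_name, version, account_id)
    )
```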

&lt;p&gt;You can use Lambda Layers either to share your own helper code or to make entire packages available that are not included in the Lambda &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html" rel="noopener noreferrer"&gt;runtimes&lt;/a&gt;. This article provides a solution for uploading Lambda layer packages with complex folder structures to an S3 bucket. Since you should always deploy your resources with Infrastructure as Code, a manual upload is not a satisfying solution here.&lt;/p&gt;

&lt;h1&gt;
  
  
  Deep-Dive into the Lambda Function
&lt;/h1&gt;

&lt;p&gt;All steps are covered in one Lambda function. Depending on your preference, you can also split them up. In this example I will guide you through all steps executed in one function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary of the procedure
&lt;/h2&gt;

&lt;p&gt;The base procedure is as follows: &lt;br&gt;
The package is placed in our Customizations for Control Tower (CfCT) repository. A Lambda function checks the path where the package is located and recursively loops through all folders, subfolders, and files. &lt;br&gt;
The solution leverages the non-persistent storage provided by default in every Lambda function: the same folder and file structure gets created in the local &lt;code&gt;/tmp&lt;/code&gt; directory. &lt;br&gt;
After that, the local directory gets zipped and uploaded to S3, where another automatism picks up the zip file to add it to the Lambda layers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi698rmwb8e8myqj2ymed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi698rmwb8e8myqj2ymed.png" alt="Architecture" width="487" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog article focuses only on the upload to S3. This part is quite tricky, because the folder structure is not known in advance and the automatism should stay dynamic enough to also handle further packages with different folder structures that should get added as layers.&lt;/p&gt;
&lt;h2&gt;
  
  
  Necessary environment variables
&lt;/h2&gt;

&lt;p&gt;This example uses Bitbucket as the repository hosting service. A prerequisite is a working CodeStar connection from the CfCT pipeline, including authentication, so that the function has the necessary access to the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

BITBUCKET_WORKSPACE_NAME = os.environ['WORKSPACE_NAME']
BITBUCKET_REPO_NAME = os.environ['REPOSITORY_NAME']
BITBUCKET_TOKEN_PARAMETER = os.environ['BITBUCKET_TOKEN_PARAMETER']
BITBUCKET_BRANCH_NAME = os.environ['BRANCH_NAME']
BUCKET_NAME = os.environ['BUCKET_NAME']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Corresponding environment variables should be set on the Lambda function, containing all necessary Bitbucket information: the Bitbucket workspace, the repository name, the token parameter, and the branch name. &lt;/p&gt;

&lt;p&gt;Another environment variable is a list of the folder paths where the packages are located. The S3 bucket to which the zip file will get uploaded should also be set as an environment variable.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to trigger the Lambda function
&lt;/h2&gt;

&lt;p&gt;The Lambda function gets triggered as soon as the CodePipeline starts. You can realize this with a separate CodePipeline or an EventBridge Trigger which listens to the corresponding CloudTrail event.&lt;/p&gt;
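&lt;p&gt;Instead of going through CloudTrail, a rule on the native pipeline state-change event also works. A sketch with boto3 - the rule and pipeline names below are assumptions:&lt;/p&gt;

```python
import json


def pipeline_start_pattern(pipeline_name):
    """EventBridge event pattern matching the start of one specific
    CodePipeline execution (pipeline name is an assumption here)."""
    return {
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"state": ["STARTED"], "pipeline": [pipeline_name]},
    }


def create_trigger_rule(rule_name, pipeline_name):
    """Create the EventBridge rule; requires AWS credentials when run."""
    import boto3  # imported lazily so the sketch stays importable offline

    boto3.client("events").put_rule(
        Name=rule_name,
        EventPattern=json.dumps(pipeline_start_pattern(pipeline_name)),
        State="ENABLED",
    )
```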

&lt;p&gt;First, the API token is retrieved from an encrypted SSM parameter to be able to set up the connection to the Bitbucket repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from botocore.exceptions import ClientError

def getApiToken():
    ssmClient = boto3.client('ssm')
    try:
        response = ssmClient.get_parameter(
            Name=BITBUCKET_TOKEN_PARAMETER,
            WithDecryption=True
        )
        token = response['Parameter']['Value']
        return token
    except ClientError as e:
        print(e)
        raise e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Collecting all items from the repository
&lt;/h2&gt;

&lt;p&gt;The Lambda function loops through every entry of the folder path variable and calls the &lt;code&gt;getFolder&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;getFolder&lt;/code&gt; method, the base URL for Bitbucket is assembled and the token is set in the headers variable. This step is necessary to access the remote repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def getFolder(token, folderPath, s3Client):
    print(f"Checking Folder Path {folderPath}")
    baseUrl = f'https://api.bitbucket.org/2.0/repositories/{BITBUCKET_WORKSPACE_NAME}/{BITBUCKET_REPO_NAME}/src/{BITBUCKET_BRANCH_NAME}/{folderPath}'
    print(f"Base Url: {baseUrl}")
    headers = {'Authorization': f'Bearer {token}'}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that the &lt;code&gt;getAllItems&lt;/code&gt; method gets called. An empty list gets initialized and, with the help of the &lt;code&gt;requests&lt;/code&gt; package, a &lt;code&gt;get&lt;/code&gt; call captures all files from the provided folder path in the repository, following pagination until no &lt;code&gt;next&lt;/code&gt; page is left.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def getAllItems(url, headers):
    files = []
    while url:
        response = requests.get(url, headers=headers)
        response.raise_for_status()  
        data = response.json()
        files.extend(data.get('values', []))

        url = data.get('next', None)
    return files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Back in the &lt;code&gt;getFolder&lt;/code&gt; method, the &lt;code&gt;localFolderPath&lt;/code&gt; gets set to &lt;code&gt;/tmp&lt;/code&gt; because this is where the package structure should be temporarily saved. With the &lt;code&gt;os&lt;/code&gt; package's &lt;code&gt;makedirs&lt;/code&gt; function, a folder with the same name gets created in the Lambda function's environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;localFolderPath = os.path.join('/tmp', folderPath.lstrip('/'))
os.makedirs(localFolderPath, exist_ok=True)
print(f"Created local folder: {localFolderPath}")
newFolderPath = ""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Local creation of the package structure
&lt;/h2&gt;

&lt;p&gt;Now comes the complicated part: the function iterates through all items retrieved from the &lt;code&gt;getAllItems&lt;/code&gt; method using a for loop. An item object looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "path": "lambda/layers/xxxxx",
  "commit": {
    "hash": "xxxxx",
    "links": {
      "self": {
        "href": "https://api.bitbucket.org/2.0/repositories/xxxxx/commit/xxxxx"
      },
      "html": {
        "href": "https://bitbucket.org/xxxxx/commits/xxxxx"
      }
    },
    "type": "commit"
  },
  "type": "commit_file",
  "attributes": [],
  "escaped_path": "lambda/layers/xxxx",
  "size": 1779,
  "mimetype": "text/x-python",
  "links": {
    "self": {
      "href": "https://api.bitbucket.org/2.0/repositories/xxxxx"
    },
    "meta": {
      "href": "https://api.bitbucket.org/2.0/repositories/xxxxx"
    },
    "history": {
      "href": "https://api.bitbucket.org/2.0/repositories/xxxxx"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The item path and the item type of the current entry get saved into two variables.&lt;/p&gt;

&lt;p&gt;If the &lt;code&gt;itemType&lt;/code&gt; equals &lt;code&gt;commit_directory&lt;/code&gt;, the old path plus the name of the item gets set as the &lt;code&gt;newFolderPath&lt;/code&gt; and the folder gets created in the &lt;code&gt;/tmp&lt;/code&gt; directory as well. After that, the &lt;code&gt;getFolder&lt;/code&gt; function gets called again with the new folder path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for item in repoItems:
        itemPath = item['path']
        itemType = item['type']

        # Check whether itemType is a directory
        if itemType == 'commit_directory':
            itemPath = itemPath.split('/')[-1]
            newFolderPath = os.path.join(folderPath, itemPath).lstrip('/')

            # Folder gets created locally
            localSubfolderPath = os.path.join('/tmp', newFolderPath)
            os.makedirs(localSubfolderPath, exist_ok=True)
            print(f"Found folder and created local subfolder: {localSubfolderPath}")

            # getFolder function gets called again with new folder path
            getFolder(token, newFolderPath, s3Client)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the &lt;code&gt;itemType&lt;/code&gt; equals &lt;code&gt;commit_file&lt;/code&gt;, the file name gets read out of the whole path. Then the &lt;code&gt;url&lt;/code&gt; variable gets set to point to the file in the Bitbucket repository, and the file gets downloaded and created in the &lt;code&gt;/tmp&lt;/code&gt; directory under the correct subfolder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;elif itemType == 'commit_file':
            print(f"Found file: {itemPath}")
            fullPath = item['path']
            pathParts = fullPath.split('/')
            fileName = pathParts[-1]

            # Get the file content from Bitbucket and upload it to S3
            url = f'https://api.bitbucket.org/2.0/repositories/{BITBUCKET_WORKSPACE_NAME}/{BITBUCKET_REPO_NAME}/src/{BITBUCKET_BRANCH_NAME}/{folderPath}/{fileName}'
            headers = {'Authorization': f'Bearer {token}'}
            response = requests.get(url, headers=headers)
            if response.status_code == 200:
                localFilePath = os.path.join(localFolderPath, fileName)
                with open(localFilePath, 'wb') as file:
                    file.write(response.content)
                print(f"Found file and created local file: {localFilePath}")
            else:
                print(f"Failed to download {fileName}: HTTP {response.status_code}")
        else:
            print(f"Unknown item type: {itemType} for {itemPath}")
    return newFolderPath
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Zip and upload of the package to S3
&lt;/h2&gt;

&lt;p&gt;After the for loop is finished, the &lt;code&gt;processFolder&lt;/code&gt; function gets called to read out the package name of the Lambda package.&lt;br&gt;
The &lt;code&gt;addFolderArchive&lt;/code&gt; function gets called next.&lt;br&gt;
&lt;/p&gt;
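&lt;p&gt;&lt;code&gt;processFolder&lt;/code&gt; is not shown here in full; a minimal sketch, assuming it only derives the package name (the last path component) and the remaining parent path before handing over to &lt;code&gt;addFolderArchive&lt;/code&gt;:&lt;/p&gt;

```python
def processFolder(folderPath):
    """Minimal sketch (an assumption, not the article's exact code):
    split the folder path into the parent path and the package name."""
    cleaned = folderPath.strip("/")
    parentPath, _, folderName = cleaned.rpartition("/")
    return parentPath, folderName
```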

&lt;p&gt;This method is responsible for uploading the package structure to S3. First the current timestamp is generated, then the name of the zip file is set. With the &lt;code&gt;shutil&lt;/code&gt; package the folder gets zipped via the &lt;code&gt;make_archive&lt;/code&gt; function and uploaded to the bucket provided by the environment variable. The last step is to save the package name, with the timestamp appended, in an SSM parameter, which can then be used by the part where the layer itself gets created and shared with the other accounts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import datetime
import os
import shutil

import boto3

def addFolderArchive(folderName, folderPath, s3Client):
    timeStamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    zipFileName = f'{folderName}_{timeStamp}.zip'

    fullFolderPath = f'/tmp/{folderPath}/{folderName}'
    tempZipFile = f'/tmp/{zipFileName}'

    # Zip File gets created and uploaded to S3
    shutil.make_archive(tempZipFile[:-4], 'zip', fullFolderPath)
    s3Client.upload_file(tempZipFile, BUCKET_NAME, zipFileName)

    # SSM Parameter gets set with package name
    ssm_client = boto3.client('ssm')
    ssm_client.put_parameter(
        Name=f'/org/layer/package/{folderPath}/{folderName}/zipArchive',
        Description=f'Archive name for {folderName} in s3 Bucket {BUCKET_NAME}',
        Value=zipFileName,
        Type='String',
        Overwrite=True
    )

    # Local files are removed
    if os.path.exists(tempZipFile):
        os.remove(tempZipFile)
    if os.path.exists(fullFolderPath):
        shutil.rmtree(fullFolderPath)
    print(f"Folder {folderName} archived as {zipFileName} and uploaded to S3")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The local zip file gets removed from the Lambda environment and the function is finished. &lt;/p&gt;

&lt;p&gt;The whole Lambda code can be found in my &lt;a href="https://github.com/janahockenberger/lambda-layer-package-upload" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; account.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;Hi! My name is Jana, I live in the southwest of Germany, and when I'm not smashing weights in the gym I love to architect solutions in AWS, making my and the customers' lives easier. &lt;/p&gt;

&lt;p&gt;My computer science journey started as an on-premises system administrator and over time developed into an AWS architect role. As I know both the "old" and the "new" world, I know common pain points in architectures and can provide solutions that make them not only more efficient but also cheaper! &lt;/p&gt;

&lt;p&gt;I enjoy learning, and as the AWS portfolio is evolving all the time, I try to stay up to date by getting certified and checking out newly launched products and services.&lt;/p&gt;

&lt;p&gt;If you want to lift your environment to the cloud, or want your already migrated environment to leverage more of the cloud's services, hit me up or check out Public Cloud Group GmbH!&lt;/p&gt;

&lt;p&gt;If you want to support me, you can buy me a &lt;a href="https://coff.ee/janahockenberger" rel="noopener noreferrer"&gt;coffee&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  About PCG
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pcg.io/" rel="noopener noreferrer"&gt;Public Cloud Group&lt;/a&gt; supports companies in their digital transformation through the use of public cloud solutions.&lt;/p&gt;

&lt;p&gt;With a product portfolio designed to accompany organisations of all sizes on their cloud journey, and a level of competence that is synonymous with highly qualified staff whom clients and partners like to work with, PCG is positioned as a reliable and trustworthy partner for the hyperscalers, with repeatedly validated competence and credibility.&lt;/p&gt;

&lt;p&gt;We have the highest partnership status with the three relevant hyperscalers: Amazon Web Services (AWS), Google, and Microsoft. As experienced providers, we advise our customers independently with cloud implementation, application development, and managed services.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>infrastructureascode</category>
      <category>lambda</category>
      <category>automation</category>
    </item>
    <item>
      <title>How To: Update AWS SRA in your Control Tower environment</title>
      <dc:creator>Jana Hockenberger</dc:creator>
      <pubDate>Mon, 14 Apr 2025 06:46:47 +0000</pubDate>
      <link>https://dev.to/janahockenberger/how-to-update-aws-sra-in-your-control-tower-environment-4naf</link>
      <guid>https://dev.to/janahockenberger/how-to-update-aws-sra-in-your-control-tower-environment-4naf</guid>
      <description>&lt;p&gt;This blogpost provides you with instructions on how to update the AWS SRA in your CfCT environment. With just a handful of steps you can easily update this on your own to make sure that you always have the current version installed.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does SRA provide?
&lt;/h3&gt;

&lt;p&gt;The AWS Security Reference Architecture (SRA) GitHub repository provides you with a broad range of security services including granular parameterization. You can enable services like GuardDuty, define which services you want to manage, or include a solution which, for example, notifies you when you have unencrypted EBS volumes. With all these settings in two files, you save a lot of time on developing custom-built StackSets which you would need to manage yourself. AWS SRA follows a best-practices approach and sets delegated administrators for services where it makes sense. It is under constant development, supplying you with the newest services and solutions AWS launches. You can find further information about this repo at the following &lt;a href="https://github.com/aws-samples/aws-security-reference-architecture-examples?tab=readme-ov-file#aws-sra-easy-setup-with-an-aws-control-tower-landing-zone-recommended" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Unfortunately, so far SRA doesn't provide an update procedure, which results in new services being added as separate StackSets outside of SRA - not a really smooth solution.&lt;/p&gt;

&lt;p&gt;As we faced this issue in our environment too, we took a deeper look at the whole SRA setup process and at what we needed to touch to be able to update the framework. Indeed, we were then able to update our SRA with just a few easy steps. Just follow the instructions below!&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Update SRA:
&lt;/h3&gt;

&lt;p&gt;The whole code for the SRA services and solutions is located in an S3 bucket named &lt;code&gt;sra-staging-ACCOUNTID-REGION&lt;/code&gt; in the account from which you deployed this solution. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ruzkgc4mkjcp1s6xf8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ruzkgc4mkjcp1s6xf8d.png" alt="Image description" width="761" height="710"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step is to create a folder named &lt;code&gt;archive&lt;/code&gt; and to move all folders inside this S3 bucket into the &lt;code&gt;archive&lt;/code&gt; folder. This way, if anything goes wrong in the following steps, you are still able to roll back.&lt;/p&gt;
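&lt;p&gt;Since S3 has no real folders, "moving" means copying each object under an &lt;code&gt;archive/&lt;/code&gt; prefix and deleting the original. A boto3 sketch of this step, assuming you pass in the name of your staging bucket:&lt;/p&gt;

```python
def archive_key(key):
    """Destination key when moving an object under the archive/ prefix."""
    return "archive/" + key


def move_all_to_archive(bucket_name):
    """Copy every object below the archive/ prefix and delete the
    originals. Requires AWS credentials when actually run."""
    import boto3  # imported lazily so the sketch stays importable offline

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.startswith("archive/"):
                continue  # already archived
            s3.copy_object(
                Bucket=bucket_name,
                CopySource={"Bucket": bucket_name, "Key": key},
                Key=archive_key(key),
            )
            s3.delete_object(Bucket=bucket_name, Key=key)
```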

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gxyoumkjj2rppuxjg83.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gxyoumkjj2rppuxjg83.png" alt="Image description" width="715" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqbwkfxhpgu4hn1j2lei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqbwkfxhpgu4hn1j2lei.png" alt="Image description" width="715" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you now look at the root directory of the S3 Bucket it should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8bql7owpj82fzh7d4oe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8bql7owpj82fzh7d4oe.png" alt="Image description" width="761" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open the CodeBuild console and click on the CodeBuild project &lt;code&gt;sra-codebuild-project&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t933kvh3zpu1nvqdp5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t933kvh3zpu1nvqdp5x.png" alt="Image description" width="747" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This CodeBuild project calls the public SRA GitHub repository and copies all the files to the S3 bucket. In case it detects that the folders are not present, it will copy them again - this enables us to get the most recent version of the SRA solutions.&lt;/p&gt;

&lt;p&gt;To achieve this, run “Start build” for the CodeBuild project. After around 5 minutes, the run should show as successful:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjcudw84i0gbdr1a189w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjcudw84i0gbdr1a189w.png" alt="Image description" width="745" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As new solutions may also include new available parameters, this is the next step we need to check. When first setting up SRA you download a &lt;code&gt;manifest.yaml&lt;/code&gt; and a &lt;code&gt;sra-easy-setup.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Download these files again with the curl command provided on the SRA instruction page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjebm3q2oef87bodgrm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjebm3q2oef87bodgrm6.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the files in your IDE and compare them to the current SRA files in your repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpd6xtikob178vrdod3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpd6xtikob178vrdod3f.png" alt="Image description" width="800" height="713"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do this with both the &lt;code&gt;manifest.yaml&lt;/code&gt; and the &lt;code&gt;sra-easy-setup.yaml&lt;/code&gt;. Transfer the new content into your existing files, &lt;strong&gt;do not replace them!&lt;/strong&gt; These files include all the parameters you set, and you don’t want them to be overwritten.&lt;/p&gt;

&lt;p&gt;After you have finished this step, you can push your changes and wait for your CodePipeline to update the SRA StackSet. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t78a8gphwxpcib3rkf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t78a8gphwxpcib3rkf5.png" alt="Image description" width="671" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In case you did everything correctly, your StackSet update should be green and you can now use the new SRA features!&lt;/p&gt;

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;Hi! My name is Jana, I live in the southwest of Germany, and when I'm not smashing weights in the gym I love to architect solutions in AWS, making my and the customers' lives easier. &lt;/p&gt;

&lt;p&gt;My computer science journey started as an on-premises system administrator and over time developed into an AWS architect role. As I know both the "old" and the "new" world, I know common pain points in architectures and can provide solutions that make them not only more efficient but also cheaper! &lt;/p&gt;

&lt;p&gt;I enjoy learning, and as the AWS portfolio is evolving all the time, I try to stay up to date by getting certified and checking out newly launched products and services.&lt;/p&gt;

&lt;p&gt;If you want to lift your environment to the cloud, or want your already migrated environment to leverage more of the cloud's services, hit me up or check out Public Cloud Group GmbH!&lt;/p&gt;

&lt;p&gt;If you want to support me, you can buy me a &lt;a href="https://coff.ee/janahockenberger" rel="noopener noreferrer"&gt;coffee&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  About PCG
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pcg.io/" rel="noopener noreferrer"&gt;Public Cloud Group&lt;/a&gt; supports companies in their digital transformation through the use of public cloud solutions.&lt;/p&gt;

&lt;p&gt;With a product portfolio designed to accompany organisations of all sizes on their cloud journey, and a level of competence that is synonymous with highly qualified staff whom clients and partners like to work with, PCG is positioned as a reliable and trustworthy partner for the hyperscalers, with repeatedly validated competence and credibility.&lt;/p&gt;

&lt;p&gt;We have the highest partnership status with the three relevant hyperscalers: Amazon Web Services (AWS), Google, and Microsoft. As experienced providers, we advise our customers independently with cloud implementation, application development, and managed services.&lt;/p&gt;

</description>
      <category>controltower</category>
      <category>security</category>
      <category>aws</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>🧽 Cleaning up Security Hub with AWS Resource Explorer 🫧</title>
      <dc:creator>Jana Hockenberger</dc:creator>
      <pubDate>Mon, 09 Dec 2024 14:02:55 +0000</pubDate>
      <link>https://dev.to/janahockenberger/cleaning-up-security-hub-with-aws-resource-explorer-1nfo</link>
      <guid>https://dev.to/janahockenberger/cleaning-up-security-hub-with-aws-resource-explorer-1nfo</guid>
      <description>&lt;p&gt;Config and Security Hub are probably one of the most used services to get an overview over the compliance of your resources and your overall security store. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Config&lt;/strong&gt; lets you run predefined or custom rules over your AWS resources to check whether they are compliant or not. &lt;br&gt;
&lt;strong&gt;Security Hub&lt;/strong&gt; uses different security standards, including predefined severity categories, to categorize findings and provide an overall overview of your security status. For some time already, Config results have been automatically transferred to Security Hub, giving you the possibility to check just one tool for your current security status.&lt;/p&gt;
&lt;h2&gt;
  
  
  Config and Security Hub as Mess-Makers
&lt;/h2&gt;

&lt;p&gt;As AWS environments are hardly ever static, a lot of resources get removed, created, or modified, keeping the environment in a constant state of change. If a resource which has been scanned by Config gets deleted, it will result in a &lt;code&gt;NOT_AVAILABLE&lt;/code&gt; finding in Security Hub. The default view in Security Hub will still show you this finding, as the finding's record state is still set to active. &lt;br&gt;
If AWS realizes that the state of these &lt;code&gt;NOT_AVAILABLE&lt;/code&gt; findings didn't change for over 90 days, they get archived automatically - quite a long time period, right? &lt;/p&gt;

&lt;p&gt;Everyone working with Security Hub in bigger environments, using several Config rules and security standards, knows how overwhelming it feels to open the Security Hub console, as if you could never get through this amount of findings. But as written above, not all of these findings belong to still-existing resources.&lt;/p&gt;

&lt;p&gt;If you're orderly like me, you would prefer a solution which automatically checks the existence of resources with the &lt;code&gt;NOT_AVAILABLE&lt;/code&gt; state and resolves the findings belonging to resources that are already deleted.&lt;/p&gt;

&lt;p&gt;So let's dive into an early &lt;strong&gt;Spring Cleaning&lt;/strong&gt; and clean up this mess with a simple Lambda function!&lt;/p&gt;
&lt;h2&gt;
  
  
  What is the Resource Explorer?
&lt;/h2&gt;

&lt;p&gt;This solution leverages the Resource Explorer service; a correctly configured Resource Explorer is a prerequisite to keep this automation running. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AWS Resource Explorer&lt;/strong&gt; is a resource search and discovery service that lets you find all resources in your organization. You can provide a broad range of inputs like an ARN, a string, or a tag key. The output provides all the information you would also get when querying the resource directly.&lt;/p&gt;

&lt;p&gt;So if we search for a resource which doesn't exist, we just don't get any output. Makes sense, right?&lt;/p&gt;
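
&lt;p&gt;That "empty output means deleted" logic can be sketched as a small helper, assuming a response shaped like the &lt;code&gt;resource-explorer-2&lt;/code&gt; Search API output (the sample data in the comment is made up for illustration):&lt;/p&gt;

```python
def arn_found(search_response, resource_id):
    """Return True if the Resource Explorer response still lists the resource."""
    resources = search_response.get("Resources", [])
    # An existing resource shows up with its ARN in the result list;
    # a deleted one simply does not appear at all.
    return any(resource_id in resource.get("Arn", "") for resource in resources)
```

&lt;p&gt;An empty &lt;code&gt;Resources&lt;/code&gt; list therefore translates directly into a &lt;code&gt;False&lt;/code&gt; result.&lt;/p&gt;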

&lt;p&gt;Before deploying the Lambda function, make sure to set up Resource Explorer correctly, including all relevant regions where you have deployed resources. All information regarding the setup can be found in the AWS &lt;a href="https://docs.aws.amazon.com/resource-explorer/latest/userguide/welcome.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All set up? Let's look at the Lambda!&lt;/p&gt;
&lt;h2&gt;
  
  
  Cleaning up the mess
&lt;/h2&gt;

&lt;p&gt;The function starts off with a Security Hub paginator to gather all current findings. We filter the output by the &lt;code&gt;ComplianceStatus&lt;/code&gt;, which should be &lt;code&gt;NOT_AVAILABLE&lt;/code&gt;, the &lt;code&gt;RecordState&lt;/code&gt; set to &lt;code&gt;ACTIVE&lt;/code&gt; to exclude already archived findings, and the &lt;code&gt;WorkflowStatus&lt;/code&gt; not being &lt;code&gt;RESOLVED&lt;/code&gt;.&lt;br&gt;
Then we loop over the findings, capturing some variables from the output such as the ARN of the resource belonging to the finding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
    securityhub = boto3.client('securityhub')
    paginator = securityhub.get_paginator('get_findings')


    finding_filters = {
        'ComplianceStatus': [
            {
                'Value': "NOT_AVAILABLE",
                'Comparison': 'EQUALS'
            }
        ],
        'RecordState': [
            {
                'Value': "ACTIVE",
                'Comparison': 'EQUALS'
            }
        ],
        'WorkflowStatus': [
            {
                'Value': "RESOLVED",
                'Comparison': 'NOT_EQUALS'
            }
        ]
    }

    page_iterator = paginator.paginate(Filters=finding_filters)
    for page in page_iterator:
        for finding in page['Findings']:
            resource = finding['Resources'][0] 
            resource_id = resource['Id']

            exists = resource_exists(resource_id)

            if not exists:
                resolve_sechub_finding(finding)


    return {
        'statusCode': 200,
        'body': json.dumps('Security Hub Finding Compliance Status Check Completed.')
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the &lt;code&gt;resource_exists&lt;/code&gt; method gets called, using the Resource Explorer to check whether the resource still exists. As we learned earlier, if it has been deleted, the Resource Explorer simply delivers no output, which makes our function return &lt;code&gt;False&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def resource_exists(resource_id):

    resexp = boto3.client('resource-explorer-2')

    try:
        print(f"Search for resource {resource_id}")
        results = resexp.search(QueryString=resource_id)
        if resource_id in results['Resources'][0]['Arn']:
            print(f"Resource {resource_id} still exists.")
            return True
        else:
            print(f"Resource {resource_id} doesn't exist anymore.")
    except Exception as e:
        print(e)
        return False

    return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is probably obvious. We check whether the resource check returned &lt;code&gt;False&lt;/code&gt;, and if so, we set the &lt;code&gt;WorkflowStatus&lt;/code&gt; of the corresponding finding to &lt;code&gt;RESOLVED&lt;/code&gt;, adding a note that this state change was executed by the automation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def resolve_sechub_finding(finding):

    sechub = boto3.client('securityhub')

    try:
        print(f"Finding {finding['Id']} will be resolved as the resource no longer exists.")
        response = sechub.batch_update_findings(
                    FindingIdentifiers = [{'Id': finding['Id'], 'ProductArn': finding['ProductArn']}],
                    Workflow = {
                        'Status': 'RESOLVED'
                    },
                    Note = {
                        "Text": "This resource no longer exists. Findings for this resource have been set to RESOLVED.",
                        "UpdatedBy": "DeletedresourceFindingResolver"
                    }
                )

    except Exception as e:
        print(e)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if you now open the default view of the Security Hub findings, you can enjoy the tidied-up results showing just the relevant findings.&lt;/p&gt;

&lt;p&gt;I would also suggest adding an EventBridge scheduled rule to run this function on a regular basis. In huge environments it may also make sense to play around with the memory and timeout settings so the function doesn't run into a timeout.&lt;/p&gt;
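
&lt;p&gt;As a minimal sketch of the scheduling idea, the following builds the request parameters for a daily EventBridge rule. The rule name and target ID are placeholders, and the dicts would be passed to &lt;code&gt;put_rule&lt;/code&gt; and &lt;code&gt;put_targets&lt;/code&gt; on a boto3 &lt;code&gt;events&lt;/code&gt; client:&lt;/p&gt;

```python
def build_daily_schedule(function_arn, rule_name="SecurityHubFindingCleanup"):
    """Build put_rule/put_targets parameters for a once-a-day schedule."""
    rule = {
        "Name": rule_name,
        "ScheduleExpression": "rate(1 day)",  # run the cleanup once per day
        "State": "ENABLED",
    }
    targets = {
        "Rule": rule_name,
        "Targets": [{"Id": "CleanupLambda", "Arn": function_arn}],
    }
    return rule, targets
```

&lt;p&gt;Don't forget the Lambda resource policy allowing EventBridge to invoke the function.&lt;/p&gt;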

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;Hi! My name is Jana, I live in the Southwest of Germany, and when I'm not smashing weights in the gym I love to architect solutions in AWS, making my and my customers' lives easier. &lt;/p&gt;

&lt;p&gt;My computer science journey started as an on-premise system administrator and developed over time into an AWS architect role. As I know both the "old" and the "new" world, I know the common pain points in architectures and can provide solutions that make them not only more efficient but also cheaper! &lt;/p&gt;

&lt;p&gt;I enjoy learning, and as the AWS portfolio is constantly evolving, I stay up to date by getting certified and checking out newly launched products and services.&lt;/p&gt;

&lt;p&gt;If you want to lift your environment to the cloud, or want your already migrated environment to leverage more cloud services, hit me up or check out Public Cloud Group GmbH!&lt;/p&gt;

&lt;p&gt;If you want to support me, you can buy me a &lt;a href="https://coff.ee/janahockenberger" rel="noopener noreferrer"&gt;coffee&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  About PCG
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pcg.io/" rel="noopener noreferrer"&gt;Public Cloud Group&lt;/a&gt; supports companies in their digital transformation through the use of public cloud solutions.&lt;/p&gt;

&lt;p&gt;With a product portfolio designed to accompany organisations of all sizes on their cloud journey, and competence that is synonymous with highly qualified staff that clients and partners like to work with, PCG is positioned as a reliable and trustworthy partner for the hyperscalers, with repeatedly validated competence and credibility.&lt;/p&gt;

&lt;p&gt;We have the highest partnership status with the three relevant hyperscalers: Amazon Web Services (AWS), Google, and Microsoft. As experienced providers, we advise our customers independently with cloud implementation, application development, and managed services.&lt;/p&gt;

</description>
      <category>resourceexplorer</category>
      <category>securityhub</category>
      <category>aws</category>
      <category>python</category>
    </item>
    <item>
      <title>Automated Control Rollout in AWS Control Tower</title>
      <dc:creator>Jana Hockenberger</dc:creator>
      <pubDate>Fri, 15 Nov 2024 07:35:25 +0000</pubDate>
      <link>https://dev.to/janahockenberger/automated-control-rollout-in-aws-control-tower-21oc</link>
      <guid>https://dev.to/janahockenberger/automated-control-rollout-in-aws-control-tower-21oc</guid>
      <description>&lt;h2&gt;
  
  
  The Power of Control Tower Controls
&lt;/h2&gt;

&lt;p&gt;Control Tower provides a lot of helpful features to assist you in managing your multi-account environment. One of these features is the usage of the so-called &lt;strong&gt;Controls&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Control Tower Controls help you set up guardrails that make your environment more secure and ensure governance across all OUs and accounts. &lt;/p&gt;

&lt;p&gt;These Controls can be split up into three different groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Preventive Controls&lt;/strong&gt; → Implemented with Service Control Policies, this type of control prevents you from doing something by setting a Deny on a specific action. An example of a Preventive Control is the &lt;code&gt;Disallow Creation of Access Keys for the Root User&lt;/code&gt; Control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detective Controls&lt;/strong&gt; → These create a Config Rule checking whether the corresponding resource is in a non-compliant state, which can then be leveraged for a remediation action. An example Control here is &lt;code&gt;Detect Whether Unrestricted Incoming TCP Traffic is Allowed&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proactive Controls&lt;/strong&gt; → These check resources before they are provisioned to proactively make sure that specific non-compliant configurations for a service or user cannot be deployed in the first place. An example is &lt;code&gt;Require AWS Lambda function policies to prohibit public access&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bad News
&lt;/h2&gt;

&lt;p&gt;When using a Control Tower environment with a nested OU structure, all underlying OUs inherit the Preventive Controls set on a higher level.&lt;/p&gt;

&lt;p&gt;Unfortunately, this inheritance does not occur for Detective and Proactive Controls. As AWS keeps releasing new Controls and your environment might also be expanding, this can result in OUs not having the full range of needed Controls enabled. So in case you don't want to generate overhead by manually activating the Controls on every underlying OU, keep reading to see how I resolved this issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solving the Problem
&lt;/h2&gt;

&lt;p&gt;The infrastructure is built as shown in the picture below. The main actions are implemented with an AWS Step Functions state machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kdqncg2uv7tlyxcbtud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kdqncg2uv7tlyxcbtud.png" alt="Image description" width="672" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have two main topics to take a look here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Inherit activated Controls to all underlying OUs &lt;/li&gt;
&lt;li&gt;Making sure new OUs and accounts also get the corresponding Controls&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Let’s look at Step 1 first:
&lt;/h4&gt;

&lt;p&gt;First of all, we need a place to document all the Controls we want to activate in our environment. I used the SSM Parameter Store as it lets you save values in JSON format, which we will work with in this solution. The Control list is defined in the CloudFormation code and then deployed to the Parameter Store.&lt;/p&gt;
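
&lt;p&gt;As a small sketch, the transformation later done by a separate Lambda function essentially just parses the JSON string stored in the parameter into a plain list the Step Function can iterate over (the Control ARNs in the example below are placeholders):&lt;/p&gt;

```python
import json

def controls_from_parameter(parameter_value):
    """Parse the SSM parameter string into a list of Control identifiers."""
    # The parameter holds a JSON array of Control ARNs as one string,
    # e.g. '["arn:aws:controltower:...:control/ONE", "arn:...:control/TWO"]'.
    return json.loads(parameter_value)
```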

&lt;p&gt;As soon as a change happens to this parameter in the code, a Lambda-backed Custom Resource gets triggered. Custom Resources in CloudFormation are usually connected to a Lambda function which gets executed as soon as the Custom Resource gets updated. So in case we update our Control list later, the Lambda function gets executed. The function then starts the execution of a Step Function which enables all the Controls on all OUs in the following way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnga29ykb5qfxmuvhrbg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnga29ykb5qfxmuvhrbg3.png" alt="Image description" width="624" height="1397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The step function first executes a parallel state:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First State: Read out the SSM parameter value using the &lt;code&gt;GetParameter&lt;/code&gt; API call and transform the JSON into an array with a separate Lambda function. This is done so the Step Function can loop over the different Control values set in the Parameter Store.&lt;/li&gt;
&lt;li&gt;Second State: Execute a Lambda function reading out all OUs of the organization. In case an OU should be left out, it will not be part of the output here.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both outputs are passed to the following Map State which then first loops over all Controls which were read out earlier from the parameter store. &lt;/p&gt;

&lt;p&gt;An inner loop then iterates over all OUs, followed by a Pass State combining the Control value and the OU value. This gets passed to the &lt;code&gt;EnableControl&lt;/code&gt; API call to activate the Control on the corresponding OU. As the automation runs into an error in case the Control has already been enabled, a Catch expression was inserted here so the Step Function doesn't fail and continues with the next OU.&lt;/p&gt;

&lt;p&gt;The Map States then loop over every Control and every OU, making sure all combinations are handled.&lt;/p&gt;
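
&lt;p&gt;The nested Map logic boils down to building every (Control, OU) combination, each of which is handed to &lt;code&gt;EnableControl&lt;/code&gt;. A minimal sketch with placeholder identifiers, using the parameter names of boto3's &lt;code&gt;enable_control&lt;/code&gt; call:&lt;/p&gt;

```python
def control_ou_pairs(control_arns, ou_arns):
    """Return every (Control, OU) combination the Map States produce."""
    return [
        {"controlIdentifier": control, "targetIdentifier": ou}
        for control in control_arns  # outer Map: all Controls
        for ou in ou_arns            # inner Map: all OUs
    ]
```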

&lt;p&gt;But how do we make sure that also newly created OUs will get the Controls?&lt;/p&gt;

&lt;h4&gt;
  
  
  This is done in Step 2:
&lt;/h4&gt;

&lt;p&gt;For this case we just need to implement an EventBridge rule listening to the &lt;code&gt;ManageOrganizationalUnit&lt;/code&gt; CloudTrail event and handing it over to the Step Function used above. As Control Tower takes some time to register the OU and roll out all necessary stacks, a Wait State is set at the beginning of the Step Function. This makes sure that all previously needed steps have already finished when the Control rollout takes place. &lt;/p&gt;

&lt;h2&gt;
  
  
  How to deploy
&lt;/h2&gt;

&lt;p&gt;Check out my &lt;a href="https://github.com/janahockenberger/automated-control-rollout" rel="noopener noreferrer"&gt;Github&lt;/a&gt; for all necessary code to this solution!&lt;/p&gt;

&lt;p&gt;Of course the code should be deployed in the account where Control Tower has been activated. When deploying the template you will need to insert the Controls in ARN format, separated by commas as seen in the screenshot below. As the SSM Parameter Store saves everything as a string, make sure not to forget the quotes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixj1zp3j8yc8kp8js6v5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixj1zp3j8yc8kp8js6v5.png" alt="Image description" width="728" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the deployment the Custom Resource will already get triggered, so take a look at the Step Function to see how the Controls are getting enabled!&lt;/p&gt;


</description>
      <category>controltower</category>
      <category>aws</category>
      <category>stepfunction</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>AWS Automatically Accept Transit Gateway Attachments for allowed CIDR and Account pairs</title>
      <dc:creator>Jana Hockenberger</dc:creator>
      <pubDate>Mon, 28 Oct 2024 11:49:27 +0000</pubDate>
      <link>https://dev.to/janahockenberger/aws-automatically-accept-transit-gateway-attachments-for-allowed-cidr-and-account-pairs-384o</link>
      <guid>https://dev.to/janahockenberger/aws-automatically-accept-transit-gateway-attachments-for-allowed-cidr-and-account-pairs-384o</guid>
      <description>&lt;p&gt;This solution will provide an approach to automatically accept Transit Gateway attachments by checking a centrally managed list of allowed CIDR and AccountId value pairs when using a centralized Transit Gateway.&lt;/p&gt;

&lt;p&gt;Prerequisite for this solution is the existence of a centralized Transit Gateway which is shared with all accounts via Resource Access Manager.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problems when creating a Transit Gateway
&lt;/h2&gt;

&lt;p&gt;When creating a Transit Gateway you are offered the option &lt;code&gt;AutoAcceptSharedAttachments&lt;/code&gt;, which means that if any account that uses the centralized Transit Gateway creates an attachment, it is automatically accepted. &lt;br&gt;
This in turn means that if we do not activate this feature, we have to manually accept all created Transit Gateway attachments. Depending on the size of your environment, this can lead to a great overhead, wasting valuable time. &lt;/p&gt;

&lt;p&gt;Also, probably not all created Transit Gateway attachments are wanted; often just a defined list of CIDR blocks and account IDs should be allowed to communicate with all VPCs and with the on-premise datacenter.&lt;/p&gt;

&lt;p&gt;So what if we just want an auto-accept for specific CIDR blocks we defined beforehand? Then this solution will help you!&lt;/p&gt;
&lt;h2&gt;
  
  
  Overview over the Solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0x5ta4uqkpbcd3zmxng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0x5ta4uqkpbcd3zmxng.png" alt="Image description" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This solution uses a DynamoDB table to manage the list of allowed CIDR block and account ID pairs. Using &lt;code&gt;cidr_block&lt;/code&gt; as the partition key makes sure the same CIDR block is not included multiple times in the Transit Gateway attachments. As one account can include multiple VPCs with Transit Gateway attachments, using the account ID as the partition key wouldn't be suitable here. &lt;/p&gt;

&lt;p&gt;The values of the DynamoDB table are defined in the code in a Custom Resource which is used to initialize the table. The Custom Resource executes a Lambda function during deployment, or when the stack changes, which creates the entries documented in the code.&lt;/p&gt;

&lt;p&gt;Also, in every account an EventBridge rule gets created which sends an event to the default bus as soon as it captures the CloudTrail event &lt;code&gt;CreateTransitGatewayVpcAttachment&lt;/code&gt;. The default bus of the corresponding account forwards the event to the account where the Transit Gateway is located and triggers an EventBridge rule there which executes a Lambda function. &lt;br&gt;
A cross-account Lambda role is also created in every account to read out the CIDR block of the VPC referenced in the event.&lt;/p&gt;

&lt;p&gt;The Lambda function then checks if an entry for the provided CIDR Block and Account Id exists in the DynamoDB table and accepts the Attachment when a match is found. &lt;/p&gt;
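
&lt;p&gt;The core of that check can be sketched as follows. In the real function the pair comes from a DynamoDB &lt;code&gt;get_item&lt;/code&gt; lookup and a match leads to an &lt;code&gt;accept_transit_gateway_vpc_attachment&lt;/code&gt; call; here a plain set stands in for the table:&lt;/p&gt;

```python
def should_accept(allowed_pairs, cidr_block, account_id):
    """Accept the attachment only if the (CIDR, account) pair is allowed."""
    # allowed_pairs mirrors the DynamoDB items as (cidr_block, account_id)
    # tuples; in the Lambda this is a table.get_item(Key={...}) lookup.
    return (cidr_block, account_id) in allowed_pairs
```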
&lt;h3&gt;
  
  
  Deployment Steps
&lt;/h3&gt;

&lt;p&gt;The solution consists of two stacks.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;tgw-attach-auto-accept-core.yaml&lt;/code&gt; file needs to be deployed in the account where the Transit Gateway is located.&lt;br&gt;
The following resources will be created by the first stack:&lt;/p&gt;

&lt;p&gt;A DynamoDB table for the entries of allowed CIDR blocks and account IDs. The CIDR block functions as the partition key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TGWAttachmentDynamoDB:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      TableName: 'TGWAttachmentAcceptedCIDR'
      AttributeDefinitions:
        - AttributeName: 'cidr_block'
          AttributeType: 'S'
        - AttributeName: 'account_id'
          AttributeType: 'S'
      KeySchema:
        - AttributeName: 'cidr_block'
          KeyType: 'HASH'  
        - AttributeName: 'account_id'
          KeyType: 'RANGE' 
      BillingMode: PAY_PER_REQUEST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Custom Resource to initialize the DynamoDB table entries and set the values. Every time the list of values in the Custom Resource gets updated, the Lambda function will be executed again to update the DynamoDB table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DynamoDBTableInitializer:
    Type: Custom::DynamoDBTableInitializer
    Properties: 
      ServiceToken: !GetAtt TableInitializerFunction.Arn
      TableName: !Ref TGWAttachmentDynamoDB
      TableItems:
        - cidr_block: "1.2.3.4/5"
          account_id: "123456789012"
        - cidr_block: "6.7.8.9/12"
          account_id: "234567890123"
        ### more entries can be added here

  TableInitializerFunction:
    Type: AWS::Lambda::Function
    Properties: 
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse

          def lambda_handler(event, context):
              try:
                  table_name = event['ResourceProperties']['TableName']
                  items = event['ResourceProperties']['TableItems']

                  dynamodb = boto3.resource('dynamodb')
                  table = dynamodb.Table(table_name)

                  for item in items:
                      table.put_item(Item=item)

                  response_data = {'Status': 'Success'}
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, response_data)
              except Exception as e:
                  response_data = {'Status': str(e)}
                  cfnresponse.send(event, context, cfnresponse.FAILED, response_data)
      Handler: 'index.lambda_handler'
      Runtime: python3.11
      Role: !GetAtt TableInitializerFunctionRole.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An Eventbridge Rule which receives the "CreateTransitGatewayVpcAttachment" event from the default bus and triggers the Lambda function to check and accept the attachment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CloudWatchRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: "Accepts the TGW Attachment and sets a Route in the Route Table to On-Premise"
      State: ENABLED
      EventPattern: {
        "source": [
          "aws.ec2"
        ],
        "detail-type": [
          "AWS API Call via CloudTrail"
        ],
        "detail": {
          "eventSource": [
            "ec2.amazonaws.com"
          ],
          "eventName": [
            "CreateTransitGatewayVpcAttachment"
          ]
        }
      }
      Name: TGWAcceptAutoAttach
      Targets:
      - Arn: !GetAtt LambdaFunction.Arn
        Id: LambdaFunction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Lambda Function to check the source account which created the attachment, read out the DynamoDB table and if required accept the Transit Gateway Attachment (code in &lt;a href="https://github.com/janahockenberger/tgw-attach-auto-accept" rel="noopener noreferrer"&gt;Github&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;After that, the &lt;code&gt;tgw-attach-auto-accept-global.yaml&lt;/code&gt; needs to be deployed to all accounts; deploying it as a StackSet is recommended. This will create the following resources:&lt;/p&gt;

&lt;p&gt;An Eventbridge Rule to capture &lt;code&gt;CreateTransitGatewayVpcAttachment&lt;/code&gt; CloudTrail events from the specific accounts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TGWAttachAutoAcceptRule:
    Type: "AWS::Events::Rule"
    Condition: IsNotTGWAccount
    Properties: 
      Description: CloudWatch Event Rule for automatically accepting TGW Attachments
      EventPattern: {
        "source": [ 
          "aws.ec2"
        ], 
        "detail-type": [ 
          "AWS API Call via CloudTrail" 
        ], 
        "detail": { 
          "eventSource": [ 
            "ec2.amazonaws.com" 
          ], 
          "eventName": [
            "CreateTransitGatewayVpcAttachment"
          ] 
        } 
      } 
      Name: "RuleForTGWAttachAutoAccept"
      Targets: 
        - Arn: !Sub "arn:aws:events:${AWS::Region}:${TgwAccount}:event-bus/default"
          Id: "TGWAttachAutoAcceptRule"
          RoleArn: !GetAtt TGWAttachAutoAcceptRole.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A cross-account IAM role to read out the CIDR block of the VPC referenced in the event.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CrossAccountLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: CrossAccountLambdaRole
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${TgwAccount}:root'
            Action:
              - sts:AssumeRole
      MaxSessionDuration: 3600
      Path: /
      Policies:
        - PolicyName: CrossAccountLambdaRolePolicy
          PolicyDocument: 
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ec2:DescribeVpcs
                Resource: "*"
              - Effect: Allow
                Action:
                  - iam:PassRole
                Resource: 'arn:aws:iam::*:role/*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the successful deployment and adding the relevant CIDR block and account ID pairs, you should see that Transit Gateway attachments get accepted automatically!&lt;/p&gt;

&lt;p&gt;The complete solution is accessible in &lt;a href="https://github.com/janahockenberger/tgw-attach-auto-accept" rel="noopener noreferrer"&gt;Github&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>aws</category>
      <category>lambda</category>
      <category>transitgateway</category>
      <category>cloudformation</category>
    </item>
  </channel>
</rss>
