<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rajesh Murali Nair</title>
    <description>The latest articles on DEV Community by Rajesh Murali Nair (@luffy7258).</description>
    <link>https://dev.to/luffy7258</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1482775%2F61b43afd-6365-4ffd-b7bf-9d32b814335d.jpg</url>
      <title>DEV Community: Rajesh Murali Nair</title>
      <link>https://dev.to/luffy7258</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/luffy7258"/>
    <language>en</language>
    <item>
      <title>Deploy-Time Intelligence in AWS CDK: Custom Resources in Action</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Mon, 19 Jan 2026 15:48:43 +0000</pubDate>
      <link>https://dev.to/luffy7258/environment-aware-eks-add-on-configuration-in-a-multi-account-platform-2441</link>
      <guid>https://dev.to/luffy7258/environment-aware-eks-add-on-configuration-in-a-multi-account-platform-2441</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In real-world AWS platforms, a single CDK codebase is often deployed across multiple AWS accounts, each representing a different environment such as &lt;strong&gt;development, staging, or production&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While AWS CDK excels at defining infrastructure, it has a limitation:&lt;br&gt;
&lt;strong&gt;it cannot make decisions at deploy time based on values stored inside the account&lt;/strong&gt;, such as values stored in AWS Systems Manager (SSM) Parameter Store.&lt;/p&gt;

&lt;p&gt;In this blog, we solve a practical platform-engineering problem using a &lt;strong&gt;Lambda-backed Custom Resource&lt;/strong&gt; to make &lt;strong&gt;environment-aware decisions&lt;/strong&gt; when installing an &lt;strong&gt;EKS Helm add-on&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real-Life Problem
&lt;/h2&gt;

&lt;p&gt;You are a platform engineer managing EKS clusters across multiple AWS accounts:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Environment&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Expectation&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Development&lt;/td&gt;
&lt;td&gt;Low cost, minimal redundancy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Staging&lt;/td&gt;
&lt;td&gt;Production-like but smaller&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Production&lt;/td&gt;
&lt;td&gt;High availability&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Your organization already stores the environment type centrally as an SSM parameter:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/platform/account/env&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;with values like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;development&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;staging&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;production&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you want to install &lt;strong&gt;ingress-nginx&lt;/strong&gt; on every EKS cluster, but configure it differently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;development&lt;/code&gt; → &lt;code&gt;replicaCount = 1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;staging&lt;/code&gt; / &lt;code&gt;production&lt;/code&gt; → &lt;code&gt;replicaCount = 2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Not Use CDK Context? (Short Version)
&lt;/h2&gt;

&lt;p&gt;At first glance, CDK context variables may seem like a simpler solution for environment-based configuration. However, context values are resolved &lt;strong&gt;at synthesis time&lt;/strong&gt;, not during deployment. This means they must be provided externally (via &lt;code&gt;cdk.json&lt;/code&gt; or CI/CD pipelines) and are unaware of &lt;strong&gt;account-level metadata&lt;/strong&gt; such as values stored in SSM Parameter Store. In multi-account platforms, this often leads to manual coordination, configuration drift, and governance issues. Since the environment classification already lives inside the AWS account and should be owned by the platform, using a &lt;strong&gt;deploy-time Custom Resource&lt;/strong&gt; ensures the configuration is accurate, consistent, and centrally controlled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why CDK Alone Is Not Enough
&lt;/h2&gt;

&lt;p&gt;AWS CDK evaluates logic &lt;strong&gt;during synthesis&lt;/strong&gt;, but the SSM parameter value is only reliably available &lt;strong&gt;during deployment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You cannot branch on the value with Python &lt;code&gt;if&lt;/code&gt; statements, because at synthesis time it is only an unresolved token&lt;/li&gt;
&lt;li&gt;You cannot hardcode environment values without forking the codebase per account&lt;/li&gt;
&lt;li&gt;You cannot rely on CDK context safely, because it is resolved before deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What you need is &lt;strong&gt;deploy-time logic&lt;/strong&gt;.&lt;/p&gt;
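
&lt;p&gt;To make the pitfall concrete, here is a toy simulation (not real CDK code): the stand-in function below simply returns the kind of unresolved placeholder token CDK hands back at synthesis time.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Toy illustration only: at synthesis time CDK returns an unresolved token
# string, not the real SSM value, so any branch on it misfires.
def value_for_string_parameter(name):
    # stand-in for ssm.StringParameter.value_for_string_parameter
    return "${Token[TOKEN.42]}"

env = value_for_string_parameter("/platform/account/env")
replica_count = 2 if env == "production" else 1
print(replica_count)  # always 1, even when the account really is production

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;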

&lt;h2&gt;
  
  
  The Solution: Lambda-Backed Custom Resource
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;Custom Resource&lt;/strong&gt; allows CloudFormation to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Invoke a Lambda function during stack creation or update&lt;/li&gt;
&lt;li&gt;Wait for the result&lt;/li&gt;
&lt;li&gt;Consume returned attributes as inputs for other resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this case, the Custom Resource:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads the environment value from SSM&lt;/li&gt;
&lt;li&gt;Computes the correct Helm value&lt;/li&gt;
&lt;li&gt;Returns it to CDK&lt;/li&gt;
&lt;li&gt;CDK passes it into the Helm chart&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Deployment flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;CDK creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS cluster&lt;/li&gt;
&lt;li&gt;SSM parameter &lt;code&gt;/platform/account/env&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Lambda function&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Custom Resource triggers the Lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lambda computes Helm values&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm chart is installed using returned values&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This keeps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One CDK codebase&lt;/li&gt;
&lt;li&gt;Zero manual steps&lt;/li&gt;
&lt;li&gt;Environment-aware behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CDK Stack Code (Python)
&lt;/h2&gt;

&lt;p&gt;Below is the &lt;strong&gt;CDK stack&lt;/strong&gt; that creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS cluster&lt;/li&gt;
&lt;li&gt;SSM parameter&lt;/li&gt;
&lt;li&gt;Lambda function&lt;/li&gt;
&lt;li&gt;Custom Resource&lt;/li&gt;
&lt;li&gt;Helm chart with dynamic values
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_cdk import (
    Stack,
    aws_eks as eks,
    aws_ec2 as ec2,
    aws_iam as iam,
    aws_ssm as ssm,
    aws_signer as signer,
    aws_lambda as _lambda,
    custom_resources as cr,
    CustomResource,
    Token
)
from aws_cdk.lambda_layer_kubectl_v34 import KubectlV34Layer
from constructs import Construct
import os

class TestStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, *, vpc: ec2.IVpc = None, **kwargs) -&amp;gt; None:
        super().__init__(scope, construct_id, **kwargs)

        aws_account_id = self.node.try_get_context("aws-account-id")

        # Create EKS Cluster
        cluster = eks.Cluster(
            self, "MyEKS",
            version=eks.KubernetesVersion.V1_34,
            endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE,
            default_capacity=0,
            default_capacity_instance=ec2.InstanceType.of(
                ec2.InstanceClass.T3,
                ec2.InstanceSize.MEDIUM
            ),
            kubectl_layer=KubectlV34Layer(self, "kubectl"),
            vpc=vpc,
            vpc_subnets=[
                ec2.SubnetSelection(
                    subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS
                )
            ],
            cluster_name="MyEKS",
            tags={"Name": "MyEKS", "Purpose": "Swisscom-Interview"}
        )

        # EKS Admin Access
        admin_user = iam.User(self, "EKSAdmin")
        cluster.aws_auth.add_user_mapping(
            admin_user, groups=["system:masters"]
        )

        # Store environment in SSM
        ssm.StringParameter(
            self, "MyEnvParam",
            parameter_name="/platform/account/env",
            string_value="development",
            description="Environment Name"
        )

        # Lambda code signing
        signing_profile = signer.SigningProfile(
            self, "SigningProfile",
            platform=signer.Platform.AWS_LAMBDA_SHA384_ECDSA
        )

        code_signing_config = _lambda.CodeSigningConfig(
            self, "CodeSigningConfig",
            signing_profiles=[signing_profile]
        )

        # Lambda Function
        fn = _lambda.Function(
            self, "MySSMParamLambda",
            runtime=_lambda.Runtime.PYTHON_3_13,
            handler="index.lambda_handler",
            code=_lambda.Code.from_asset(
                os.path.join(
                    os.path.dirname(__file__),
                    "lambda_functions"
                )
            ),
            environment={
                "SSM_PARAM_NAME": "/platform/account/env"
            },
            code_signing_config=code_signing_config
        )

        fn.add_to_role_policy(
            iam.PolicyStatement(
                actions=["ssm:GetParameter"],
                resources=[
                    f"arn:aws:ssm:{self.region}:{self.account}:parameter/platform/account/env"
                ]
            )
        )

        # Custom Resource Provider
        provider = cr.Provider(
            self, "EnvToHelmProvider",
            on_event_handler=fn
        )

        env_cr = CustomResource(
            self, "EnvToHelmValues",
            service_token=provider.service_token
        )

        replica_count = Token.as_number(
            env_cr.get_att("ReplicaCount")
        )

        # Install ingress-nginx Helm chart
        cluster.add_helm_chart(
            "ingress-nginx",
            chart="ingress-nginx",
            repository="https://kubernetes.github.io/ingress-nginx",
            namespace="kube-system",
            values={
                "controller": {
                    "replicaCount": replica_count
                }
            }
        )

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Lambda Code (Custom Resource Logic)
&lt;/h2&gt;

&lt;p&gt;This Lambda:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads the environment from SSM&lt;/li&gt;
&lt;li&gt;Computes the correct replica count&lt;/li&gt;
&lt;li&gt;Returns values back to CloudFormation
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import os
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

ssm = boto3.client("ssm")
PARA_NAME = os.environ["SSM_PARAM_NAME"]

def get_parameter(para_name):
    parameter = ssm.get_parameter(Name=para_name)
    env = parameter["Parameter"]["Value"].strip().lower()

    if env == "development":
        value = 1
    elif env in ["staging", "production"]:
        value = 2
    else:
        raise ValueError(
            f"Invalid environment {env} in SSM Parameter {para_name}"
        )

    logger.info(
        f"Computed replicaCount={value} for env={env}"
    )

    return {
        "Environment": env,
        "ReplicaCount": value
    }

def lambda_handler(event, context):

    if event.get("RequestType") == "Delete":
        return {
            "PhysicalResourceId": event.get(
                "PhysicalResourceId", "env"
            ),
            "Data": {}
        }

    data = get_parameter(PARA_NAME)

    return {
        "PhysicalResourceId": f"{PARA_NAME}:{data['Environment']}",
        "Data": data
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why This Pattern Works Well
&lt;/h2&gt;

&lt;p&gt;This approach provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Single CDK codebase&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Environment-aware behavior&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No manual Helm overrides&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deploy-time decision making&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Testable business logic&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
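
&lt;p&gt;The "testable business logic" point is easy to demonstrate: because the environment-to-replica mapping is a pure function, it can be unit-tested without any AWS access. A minimal sketch (the function name is illustrative, mirroring &lt;code&gt;get_parameter&lt;/code&gt; in the Lambda above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def replica_count_for(env):
    # Same decision rules as the Lambda, extracted for local testing
    env = env.strip().lower()
    if env == "development":
        return 1
    if env in ("staging", "production"):
        return 2
    raise ValueError(f"Invalid environment {env}")

# Unit tests run locally, no SSM or CloudFormation needed
assert replica_count_for("development") == 1
assert replica_count_for(" Production ") == 2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;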

&lt;p&gt;It scales well as you add more add-ons such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cert-manager&lt;/li&gt;
&lt;li&gt;external-dns&lt;/li&gt;
&lt;li&gt;cluster-autoscaler&lt;/li&gt;
&lt;li&gt;logging agents&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS CDK is declarative by nature, but real platforms require &lt;strong&gt;deploy-time intelligence&lt;/strong&gt;. By combining &lt;strong&gt;Lambda-backed Custom Resources&lt;/strong&gt; with CDK, you can make infrastructure decisions based on &lt;strong&gt;real account metadata&lt;/strong&gt;, not hardcoded assumptions.&lt;/p&gt;

&lt;p&gt;This pattern is a powerful tool for platform teams aiming for &lt;strong&gt;consistency, safety, and automation across environments&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cdk</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Building a Receipt-Scanning Feature for a Budgeting App with Amazon Bedrock</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Tue, 16 Dec 2025 12:57:17 +0000</pubDate>
      <link>https://dev.to/luffy7258/building-a-receipt-scanning-feature-for-a-budgeting-app-with-amazon-bedrock-4cn3</link>
      <guid>https://dev.to/luffy7258/building-a-receipt-scanning-feature-for-a-budgeting-app-with-amazon-bedrock-4cn3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;While building a budgeting app, I identified a feature that had value beyond personal expense tracking. By enabling users to scan supermarket receipts, the application can extract structured purchase data and analyze individual spending patterns automatically.&lt;/p&gt;

&lt;p&gt;This capability not only simplifies budgeting for users but also highlights a broader opportunity for the retail industry. Receipt-level data can provide insights into consumer behavior and enable retailers to deliver more targeted, data-driven promotional offers tailored to specific customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem Statement: Manual Expense Tracking Doesn’t Scale
&lt;/h2&gt;

&lt;p&gt;Tracking expenses manually is time-consuming and error-prone. Most budgeting applications rely on users to enter purchase details by hand, which often leads to incomplete data and poor long-term adoption.&lt;/p&gt;

&lt;p&gt;Supermarket receipts contain rich information—item names, prices, categories, totals—but this data is usually locked away in unstructured formats such as images or PDFs. Without automation, extracting and organizing this information becomes a significant challenge, limiting both accurate budget tracking and deeper spending analysis.&lt;/p&gt;

&lt;p&gt;This problem becomes more pronounced as transaction volume grows, making a scalable, automated receipt-processing solution essential.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Overview
&lt;/h2&gt;

&lt;p&gt;The receipt-scanning feature allows users to capture supermarket receipts and automatically convert them into structured expense data within the budgeting app.&lt;/p&gt;

&lt;p&gt;From a user’s perspective, the workflow is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user uploads a photo of a supermarket receipt.&lt;/li&gt;
&lt;li&gt;The application processes the image and extracts key purchase details such as store name, items, prices, total amount, and purchase date.&lt;/li&gt;
&lt;li&gt;The extracted data is then categorized and stored, making it immediately available for budget tracking and spending analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By automating this process, the feature removes the need for manual expense entry while enabling more accurate, item-level insights into consumer spending patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Walkthrough: Receipt Processing Pipeline
&lt;/h2&gt;

&lt;p&gt;This section walks through the architecture shown below, focusing on how each AWS service contributes to the receipt-scanning feature, from ingestion to persistent storage.&lt;/p&gt;

&lt;p&gt;The goal of this design is to keep the workflow event-driven, scalable, and simple, while clearly separating responsibilities between OCR, AI reasoning, and data storage.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Architecture Diagram&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse6a2ow1oqm05hzefbmo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse6a2ow1oqm05hzefbmo.jpeg" alt=" " width="800" height="1692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Ingestion: Uploading Receipts to Amazon S3&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The workflow starts when a user uploads a receipt image using either &lt;strong&gt;a mobile application or a web interface&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All receipt images are stored in an &lt;strong&gt;Amazon S3 bucket&lt;/strong&gt; named &lt;code&gt;receipts&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;S3 acts as a &lt;strong&gt;durable, cost-effective entry point&lt;/strong&gt; for unstructured data (images)&lt;/li&gt;
&lt;li&gt;The bucket is configured with an &lt;strong&gt;event notification&lt;/strong&gt; that triggers processing as soon as a new object is uploaded&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using S3 for ingestion removes the need for a dedicated API layer just to accept images and ensures uploads scale automatically with usage.&lt;/p&gt;
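
&lt;p&gt;For reference, the event notification boils down to a small piece of configuration (a sketch; the bucket name and function ARN here are illustrative placeholders). This is the structure passed to S3, for example via boto3's &lt;code&gt;put_bucket_notification_configuration&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch of the S3-to-Lambda wiring; the ARN is a placeholder.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:receipt-analyzer",
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;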

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Event Trigger: AWS Lambda&lt;/strong&gt; (&lt;code&gt;receipt-analyzer&lt;/code&gt;)
&lt;/h4&gt;

&lt;p&gt;When a new receipt image is uploaded, S3 triggers an &lt;strong&gt;AWS Lambda function&lt;/strong&gt; called &lt;code&gt;receipt-analyzer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This Lambda function acts as the &lt;strong&gt;orchestrator&lt;/strong&gt; for the entire pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It reads the S3 event metadata&lt;/li&gt;
&lt;li&gt;Coordinates calls to downstream services&lt;/li&gt;
&lt;li&gt;Normalizes and persists the final output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because Lambda is event-driven and serverless, the system only runs compute when a receipt actually arrives.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from datetime import datetime
from decimal import Decimal

# Initialize clients
textract = boto3.client('textract', region_name='us-east-1')
bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')

# Configuration
BEDROCK_MODEL_ID = 'anthropic.claude-3-sonnet-20240229-v1:0'
DYNAMODB_TABLE_NAME = 'receipt-processing-results'

def convert_floats_to_decimal(obj):
    """Recursively convert float values to Decimal for DynamoDB compatibility"""
    if isinstance(obj, list):
        return [convert_floats_to_decimal(item) for item in obj]
    elif isinstance(obj, dict):
        return {key: convert_floats_to_decimal(value) for key, value in obj.items()}
    elif isinstance(obj, float):
        return Decimal(str(obj))
    else:
        return obj

def lambda_handler(event, context):
    # Extract S3 bucket and object key from event
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    document_name = event['Records'][0]['s3']['object']['key']

    # Step 1: Extract raw text using Textract OCR
    textract_response = textract.detect_document_text(
        Document={'S3Object': {'Bucket': bucket_name, 'Name': document_name}}
    )

    # Step 2: Concatenate lines of text
    text_lines = [block['Text'] for block in textract_response['Blocks'] if block['BlockType'] == 'LINE']
    full_text = "\n".join(text_lines)

    # Step 3: Prepare prompt for Bedrock
    prompt = f"""You are an AI assistant that extracts structured data from receipts. Given the receipt text below, return a JSON with the following fields:
- supermarket_name
- location (address)
- items (list of item name and price)
- total_amount
- date_of_purchase

Receipt Text:
\"\"\"
{full_text}
\"\"\"

Return only the JSON object, no explanation."""

    # Step 4: Invoke Bedrock
    bedrock_response = bedrock.invoke_model(
        modelId=BEDROCK_MODEL_ID,
        contentType='application/json',
        accept='application/json',
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [
                {
                    "role": "user",
                    "content": prompt
                }
            ],
            "temperature": 0.3,
            "top_p": 0.9
        })
    )

    # Step 5: Parse Bedrock response
    response_body = json.loads(bedrock_response['body'].read().decode())
    model_output = response_body['content'][0]['text'].strip()

    # Try to parse model output as JSON
    try:
        extracted_data = json.loads(model_output)
    except json.JSONDecodeError:
        extracted_data = {"error": "Failed to parse response", "raw_output": model_output}

    # Step 6: Convert floats to Decimal for DynamoDB
    extracted_data_decimal = convert_floats_to_decimal(extracted_data)

    # Step 7: Save to DynamoDB
    table = dynamodb.Table(DYNAMODB_TABLE_NAME)

    # Create DynamoDB item
    dynamodb_item = {
        'document_id': document_name,
        'bucket_name': bucket_name,
        'processed_timestamp': datetime.utcnow().isoformat(),
        'extracted_data': extracted_data_decimal,
        'raw_text': full_text
    }

    try:
        # Save to DynamoDB
        table.put_item(Item=dynamodb_item)

        return {
            'statusCode': 200,
            'body': json.dumps({
                'message': 'Receipt processed and saved successfully',
                'document_id': document_name,
                'extracted_data': extracted_data  # Return original for JSON serialization
            })
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps({
                'error': 'Failed to save to DynamoDB',
                'details': str(e),
                'extracted_data': extracted_data
            })
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;3. Text Extraction: Amazon Textract&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The first processing step inside the Lambda is &lt;strong&gt;optical character recognition (OCR)&lt;/strong&gt; using &lt;strong&gt;Amazon Textract&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Textract extracts raw text from the receipt image&lt;/li&gt;
&lt;li&gt;All detected &lt;code&gt;LINE&lt;/code&gt; blocks are concatenated into a single text representation&lt;/li&gt;
&lt;li&gt;No assumptions are made about receipt layout or formatting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this stage, the data is still &lt;strong&gt;unstructured&lt;/strong&gt;—just plain text—but it provides a reliable foundation for semantic analysis.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4. Semantic Parsing: Amazon Bedrock (Claude 3 Sonnet)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Once raw text is extracted, the Lambda invokes &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; using the &lt;code&gt;anthropic.claude-3-sonnet&lt;/code&gt; model.&lt;/p&gt;

&lt;p&gt;Instead of trying to manually parse receipts with rules or regex, the model is prompted to &lt;strong&gt;reason over the text&lt;/strong&gt; and return a clean JSON structure containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supermarket name&lt;/li&gt;
&lt;li&gt;Store location&lt;/li&gt;
&lt;li&gt;Item list (name and price)&lt;/li&gt;
&lt;li&gt;Total amount&lt;/li&gt;
&lt;li&gt;Date of purchase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The prompt explicitly instructs the model to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Return only &lt;strong&gt;JSON&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Follow a fixed schema&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach dramatically simplifies downstream processing and makes the output predictable enough for database storage.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;5. Persistence: Amazon DynamoDB&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;After successful extraction, the structured result is stored in Amazon DynamoDB, in a table named &lt;code&gt;receipt-processing-results&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Each receipt is saved as a single item with the following attributes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;document_id&lt;/strong&gt; (&lt;em&gt;String, primary key&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;bucket_name&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;extracted_data&lt;/strong&gt; (&lt;em&gt;structured JSON&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;processed_timestamp&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;raw_text&lt;/strong&gt; (&lt;em&gt;original OCR output&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example of &lt;code&gt;extracted_data&lt;/code&gt; field:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "location": {
    "S": "Markt 54, 3431 LB Nieuwegein"
  },
  "items": {
    "L": [
      {
        "M": {
          "name": {
            "S": "S. MARIA TORTILLA W.W"
          },
          "price": {
            "N": "2.99"
          }
        }
      }
    ]
  },
  "total_amount": {
    "N": "2.99"
  },
  "date_of_purchase": {
    "S": "14/06/2025"
  },
  "supermarket_name": {
    "S": "Dirk van den Broek"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
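
&lt;p&gt;When reading items back, boto3's &lt;code&gt;TypeDeserializer&lt;/code&gt; converts this typed representation into plain Python. A minimal sketch that handles only the type tags this receipt uses (&lt;code&gt;S&lt;/code&gt;, &lt;code&gt;N&lt;/code&gt;, &lt;code&gt;L&lt;/code&gt;, &lt;code&gt;M&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from decimal import Decimal

def untype(value):
    # value is a single-key dict like {"S": "..."} or {"N": "2.99"}
    (tag, inner), = value.items()
    if tag == "S":
        return inner
    if tag == "N":
        return Decimal(inner)
    if tag == "L":
        return [untype(v) for v in inner]
    if tag == "M":
        return {k: untype(v) for k, v in inner.items()}
    raise ValueError(f"Unhandled type tag {tag}")

item = {"total_amount": {"N": "2.99"}, "supermarket_name": {"S": "Dirk van den Broek"}}
plain = {k: untype(v) for k, v in item.items()}
print(plain["total_amount"])  # Decimal('2.99')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;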



&lt;p&gt;DynamoDB was chosen because it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scales automatically with receipt volume&lt;/li&gt;
&lt;li&gt;Provides low-latency access for dashboards and queries&lt;/li&gt;
&lt;li&gt;Works well for item-centric access patterns (one receipt per item)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Storing both structured data and raw text allows future reprocessing if extraction logic or prompts improve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Architecture Works Well
&lt;/h2&gt;

&lt;p&gt;This design has a few key advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fully serverless&lt;/strong&gt; – no servers to manage or scale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven&lt;/strong&gt; – processing happens only when new data arrives&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separation of concerns&lt;/strong&gt; – OCR, reasoning, and storage are cleanly isolated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensible&lt;/strong&gt; – easy to add user IDs, GSIs, or analytics pipelines later&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also keeps the system flexible: Textract can be swapped or enhanced, prompts can evolve, and DynamoDB schemas can grow without breaking the ingestion flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This feature demonstrates how a focused, single-purpose workflow can deliver meaningful value when built with the right AWS services. By combining &lt;strong&gt;Amazon S3, AWS Lambda, Amazon Textract, Amazon Bedrock (Claude 3 Sonnet), and Amazon DynamoDB&lt;/strong&gt;, unstructured receipt images are transformed into structured, queryable data with minimal operational complexity.&lt;/p&gt;

&lt;p&gt;The event-driven, serverless design scales automatically with usage and keeps costs aligned with demand. Separating OCR from AI-based reasoning also makes the solution flexible—models, prompts, or extraction logic can evolve over time without requiring architectural changes.&lt;/p&gt;

&lt;p&gt;Most importantly, this approach removes manual effort for users while creating a strong foundation for future capabilities such as spending analytics, budget insights, and personalized recommendations. With small incremental extensions, this same architecture can support more advanced financial intelligence use cases without sacrificing simplicity or scalability.&lt;/p&gt;

</description>
      <category>bedrock</category>
      <category>aws</category>
      <category>ai</category>
    </item>
    <item>
      <title>Exam Guide : GitHub Foundation Part 2</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Mon, 01 Sep 2025 12:42:12 +0000</pubDate>
      <link>https://dev.to/luffy7258/exam-guide-github-foundation-part-2-33ch</link>
      <guid>https://dev.to/luffy7258/exam-guide-github-foundation-part-2-33ch</guid>
      <description>&lt;h2&gt;
  
  
  Contribution to open-source project on GitHub
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Find an Open-Source Project That Needs Contributions
&lt;/h3&gt;

&lt;p&gt;The easiest way to start contributing is to look at the projects you already use or want to use. Since you’re familiar with them, you’ll have a better understanding of how they work and where they can improve.&lt;/p&gt;

&lt;p&gt;Here are some common opportunities to contribute:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fix typos or broken links in the README or documentation.&lt;/li&gt;
&lt;li&gt;Update outdated documentation to make it clearer for new users.&lt;/li&gt;
&lt;li&gt;Report bugs or fix small issues you’ve encountered.&lt;/li&gt;
&lt;li&gt;Add tests or improve existing test coverage.&lt;/li&gt;
&lt;li&gt;Improve accessibility or translations to make the project more inclusive.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Understand Project Guidelines Before Contributing
&lt;/h3&gt;

&lt;p&gt;Every open-source community is unique, with its own culture and participation rules. Once you’ve found a project you’d like to contribute to, take time to &lt;strong&gt;familiarize yourself with its guidelines&lt;/strong&gt;. This ensures your contributions align with the project’s expectations and community standards.&lt;/p&gt;

&lt;p&gt;Most open-source repositories include key documents at the top level:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LICENSE&lt;/strong&gt; – Defines how the project can be used, shared, or modified. If no license is present, the project is not truly open source.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;README&lt;/strong&gt; – The welcome page of the project, usually explaining what it does, how to get started, and how to engage with the community.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CONTRIBUTING&lt;/strong&gt; – Provides step-by-step instructions for contributing, including setup details and the review process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CODE_OF_CONDUCT&lt;/strong&gt; – Outlines expected behavior and community standards, ensuring a safe and welcoming environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Identify Tasks to Work On
&lt;/h3&gt;

&lt;p&gt;Once you’ve chosen a project and reviewed its contribution guidelines, the next step is to &lt;strong&gt;find tasks where you can contribute&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You might start small—fixing a broken link, updating documentation, or reporting a bug you’ve noticed. To discover structured opportunities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visit the project’s&lt;/strong&gt; &lt;code&gt;/contribute&lt;/code&gt; &lt;strong&gt;URL, which highlights beginner-friendly issues.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Look for labels such as:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;good first issue&lt;/code&gt; – &lt;strong&gt;simple issues ideal for new contributors.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;help wanted&lt;/code&gt; – &lt;strong&gt;tasks where maintainers are actively seeking assistance.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;beginner-friendly&lt;/code&gt; – &lt;strong&gt;issues designed to welcome new contributors.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;discussion&lt;/code&gt; – &lt;strong&gt;opportunities to provide input or feedback.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Sponsor a project
&lt;/h3&gt;

&lt;p&gt;Contributing to open source isn’t limited to writing code—financial support is also a powerful way to give back. Many open-source projects are maintained by volunteers who dedicate their time and skills to building, securing, and improving the tools we all rely on.&lt;/p&gt;

&lt;p&gt;With GitHub Sponsors, you can directly fund individuals or projects. Sponsorship helps maintainers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continue improving their projects.&lt;/li&gt;
&lt;li&gt;Cover costs for infrastructure, tools, or security.&lt;/li&gt;
&lt;li&gt;Receive recognition for their work and community impact.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Contribute to an open-source repository
&lt;/h3&gt;

&lt;p&gt;Once you’ve identified a task or area to contribute, the next step is preparing your contribution. Success in open source isn’t just about writing code—it’s also about clear communication and collaboration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Communication Matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It prevents duplication of effort (two people working on the same issue).&lt;/li&gt;
&lt;li&gt;It ensures your contribution aligns with the project’s goals and practices.&lt;/li&gt;
&lt;li&gt;It fosters collaboration and makes it more likely your work will be accepted.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Communicate Your Intent to Maintainers
&lt;/h3&gt;

&lt;p&gt;Before diving into any work, it’s important to &lt;strong&gt;communicate your intent to contribute&lt;/strong&gt;. This helps avoid duplication, ensures alignment with project goals, and opens the door for feedback from maintainers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Working on an existing issue&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the &lt;strong&gt;Assignees&lt;/strong&gt; section to see if anyone is already working on it.&lt;/li&gt;
&lt;li&gt;Look at the &lt;strong&gt;Linked pull requests&lt;/strong&gt; section—if there’s a PR, someone may already be addressing the issue.&lt;/li&gt;
&lt;li&gt;Read through the comments to confirm no one else has claimed it.&lt;/li&gt;
&lt;li&gt;If the issue is available, post a comment to state your interest. This signals to others that you’re taking it on and invites maintainers to provide guidance.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Proposing a new issue or feature&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your bug or feature isn’t listed in the issue tracker, create a new issue.&lt;/li&gt;
&lt;li&gt;Follow any issue template provided.&lt;/li&gt;
&lt;li&gt;Clearly explain the problem, your proposed solution, and your intent to work on it.&lt;/li&gt;
&lt;li&gt;For larger features or changes, wait for &lt;strong&gt;maintainer approval&lt;/strong&gt; before moving forward.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Manage an InnerSource program by using GitHub
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is InnerSource?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Open source&lt;/strong&gt; allows anyone to freely use, view, modify, and share software, with the belief that open collaboration leads to better and more reliable solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;InnerSource&lt;/strong&gt; applies these &lt;strong&gt;open-source practices inside an organization&lt;/strong&gt;. In other words, it’s like running an open-source project &lt;em&gt;behind your company’s firewall&lt;/em&gt;. An InnerSource program mirrors the structure of open source—issues, pull requests, discussions—but limits access to the company’s employees.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of InnerSource
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Internal Visibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers gain access to other teams’ code, issues, and project plans.&lt;/li&gt;
&lt;li&gt;Encourages code reuse and learning from existing solutions.&lt;/li&gt;
&lt;li&gt;Improves productivity by reducing duplication of effort.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Reduced Friction&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teams dependent on fixes or new features can directly propose changes.&lt;/li&gt;
&lt;li&gt;If contributions can’t be merged, consumer teams can fork the project to meet their needs.&lt;/li&gt;
&lt;li&gt;Creates smoother collaboration between otherwise siloed teams.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Standardized Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Promotes consistent contribution models across teams.&lt;/li&gt;
&lt;li&gt;Standardizes how processes and communication are documented.&lt;/li&gt;
&lt;li&gt;Makes it easier for any developer in the company to contribute to any project.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Set up an InnerSource program on GitHub
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Set repository visibility and permissions
&lt;/h4&gt;

&lt;p&gt;GitHub repositories can be configured with different &lt;strong&gt;visibility levels&lt;/strong&gt; and &lt;strong&gt;permission levels&lt;/strong&gt; to control who can see and interact with your project.&lt;/p&gt;

&lt;h4&gt;
  
  
  Repository Visibility
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Public&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visible to everyone, inside and outside your organization.&lt;/li&gt;
&lt;li&gt;Best for &lt;strong&gt;open-source projects&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Internal&lt;/strong&gt; (&lt;em&gt;GitHub Enterprise only&lt;/em&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visible only to members of the enterprise.&lt;/li&gt;
&lt;li&gt;Ideal for &lt;strong&gt;InnerSource projects&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Private&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visible only to the repository owner and explicitly added users or teams.&lt;/li&gt;
&lt;li&gt;Use this for projects requiring &lt;strong&gt;restricted access&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Users who don’t meet the visibility requirement will see a “&lt;strong&gt;not found&lt;/strong&gt;” page when trying to access the repo.&lt;/p&gt;

&lt;h4&gt;
  
  
  Repository Permission Levels
&lt;/h4&gt;

&lt;p&gt;Once visibility is set, you can assign fine-grained &lt;strong&gt;permissions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read&lt;/strong&gt; – View and discuss the project (good for non-code contributors).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Triage&lt;/strong&gt; – Manage issues and pull requests without write access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write&lt;/strong&gt; – Push code and actively contribute to the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain&lt;/strong&gt; – Manage repository settings (excluding sensitive/destructive actions).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admin&lt;/strong&gt; – Full access, including sensitive actions like security management or deleting the repo.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Create Discoverable Repositories
&lt;/h4&gt;

&lt;p&gt;As your InnerSource program grows, it’s important to make repositories easy to find and use. Follow these best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use a descriptive name&lt;/strong&gt; (e.g., warehouse-api, supply-chain-web).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add a concise description&lt;/strong&gt; (1–2 sentences explaining purpose).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License the repository&lt;/strong&gt; so users know how they can use and share it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Include a README.md&lt;/strong&gt; as the landing page with setup and usage info.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How GitHub Surfaces Your README
&lt;/h4&gt;

&lt;p&gt;When you add a &lt;strong&gt;README.md&lt;/strong&gt; file, GitHub automatically displays it on the repository’s landing page. The file can be placed in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.github&lt;/code&gt; &lt;strong&gt;directory&lt;/strong&gt; (highest priority)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Repository root directory&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docs&lt;/code&gt; &lt;strong&gt;directory&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If multiple README files exist, GitHub follows this order of priority when deciding which one to display.&lt;/p&gt;

&lt;h4&gt;
  
  
  Manage Projects on GitHub
&lt;/h4&gt;

&lt;p&gt;To make collaboration smoother, GitHub automatically looks for a &lt;code&gt;CONTRIBUTING.md&lt;/code&gt; file in your repository (in the root, &lt;code&gt;/docs&lt;/code&gt;, or &lt;code&gt;/.github&lt;/code&gt; directory).&lt;/p&gt;

&lt;p&gt;This file should outline your project’s &lt;strong&gt;contribution policy&lt;/strong&gt;, including conventions, workflows, and expectations for contributors.&lt;/p&gt;

&lt;p&gt;If a &lt;code&gt;CONTRIBUTING.md&lt;/code&gt; file exists, GitHub will display a link to it whenever someone opens an &lt;strong&gt;issue&lt;/strong&gt; or a &lt;strong&gt;pull request&lt;/strong&gt;, encouraging contributors to follow your guidelines.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create issue and pull request templates
&lt;/h4&gt;

&lt;p&gt;GitHub lets you create &lt;strong&gt;starter templates&lt;/strong&gt; for issues and pull requests to make contributions more structured and consistent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Issue Templates&lt;/strong&gt; (.github/ISSUE_TEMPLATE.md)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically load when someone creates a new issue.&lt;/li&gt;
&lt;li&gt;Help contributors provide all required details without referring back to CONTRIBUTING.md.&lt;/li&gt;
&lt;li&gt;Work like a form—users just fill in the template.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Pull Request Templates&lt;/strong&gt; (.github/PULL_REQUEST_TEMPLATE.md)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide a standard structure for PR descriptions.&lt;/li&gt;
&lt;li&gt;Encourage contributors to include details like the purpose of changes, testing steps, and related issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
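
&lt;p&gt;As a sketch, a minimal issue template might look like the following (the section headings are illustrative, not required by GitHub):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;## Describe the bug
A clear and concise description of the problem.

## Steps to reproduce
1. ...
2. ...

## Expected behavior
What you expected to happen instead.

## Environment
- OS:
- Version:
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Saved as &lt;code&gt;.github/ISSUE_TEMPLATE.md&lt;/code&gt;, this content pre-fills the description box whenever someone opens a new issue.&lt;/p&gt;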

&lt;h2&gt;
  
  
  Maintain a secure repository by using GitHub
&lt;/h2&gt;

&lt;p&gt;A secure development strategy is critical to protect data, prevent unauthorized access, and ensure compliance.&lt;/p&gt;

&lt;p&gt;Key considerations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Educate continuously&lt;/strong&gt; – Security evolves, so ongoing training is essential.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write secure code&lt;/strong&gt; – Design and implement features with security in mind.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ensure compliance&lt;/strong&gt; – Test against regulations during development and after deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security Tab
&lt;/h2&gt;

&lt;p&gt;GitHub offers security features that help keep data secure in repositories and across your organization. &lt;strong&gt;The Security tab in a repository provides the following options&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56zfjdzqh6dynlxybzqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56zfjdzqh6dynlxybzqa.png" alt="GitHub Security Tab" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the Security tab, you can add features to your GitHub workflow to help avoid vulnerabilities in your repository and codebase. These features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security policies&lt;/strong&gt; that allow you to specify how to report a security vulnerability in your project by adding a SECURITY.md file to your repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependabot alerts&lt;/strong&gt; that notify you when GitHub detects that your repository is using a vulnerable dependency or malware.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security advisories&lt;/strong&gt; that you can use to privately discuss, fix, and publish information about security vulnerabilities in your repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code scanning&lt;/strong&gt; that helps you find, triage, and fix vulnerabilities and errors in your code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret scanning&lt;/strong&gt; that detects tokens, credentials, and secrets committed to your repo and can block them before the push. Push protection is enabled by default on public repositories.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Communicate a Security Policy with &lt;code&gt;SECURITY.md&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Include a &lt;code&gt;SECURITY.md&lt;/code&gt; file in your repository to guide responsible disclosure of vulnerabilities. It helps researchers and contributors report issues safely and ensures faster resolution of security concerns.&lt;/p&gt;
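
&lt;p&gt;A minimal &lt;code&gt;SECURITY.md&lt;/code&gt; might look like the sketch below (the contact address and version table are placeholders, not anything GitHub prescribes):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Security Policy

## Supported Versions
| Version | Supported |
| ------- | --------- |
| 2.x     | yes       |
| 1.x     | no        |

## Reporting a Vulnerability
Please do not open a public issue for security problems.
Email security@example.com with a description of the issue
and steps to reproduce. We aim to respond within 5 business days.
&lt;/code&gt;&lt;/pre&gt;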

&lt;h2&gt;
  
  
  Keep Sensitive Files Out of Your Repository with .gitignore
&lt;/h2&gt;

&lt;p&gt;Accidentally committing sensitive files—like API keys or private configs—creates serious security risks. A .gitignore file prevents this by telling Git which files or patterns to leave untracked.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Helps avoid committing build artifacts, logs, or secrets.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Supports multiple .gitignore files; child directory rules override parent ones.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Maintain a root .gitignore for global rules, with optional project-specific files where needed.&lt;/li&gt;
&lt;/ul&gt;
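
&lt;p&gt;For example, a root &lt;code&gt;.gitignore&lt;/code&gt; for a typical project might contain patterns like these (common examples, not a complete set):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Build artifacts
dist/
build/

# Dependency directories
node_modules/

# Logs
*.log

# Local environment files and secrets
.env
*.pem
&lt;/code&gt;&lt;/pre&gt;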

&lt;h2&gt;
  
  
  Branch protection rules
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Branch protection&lt;/strong&gt; rules help enforce workflows and maintain code quality before changes are merged. For example, you can require an approving review or passing status checks for pull requests targeting protected branches.&lt;/p&gt;

&lt;p&gt;Common workflows include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running builds to verify code compiles.&lt;/li&gt;
&lt;li&gt;Running linters to enforce coding standards.&lt;/li&gt;
&lt;li&gt;Running automated tests to catch regressions.&lt;/li&gt;
&lt;li&gt;Enforcing reviews and status checks before merging.&lt;/li&gt;
&lt;/ul&gt;
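
&lt;p&gt;Branch protection is usually configured in the repository settings UI, but it can also be set through the REST API. As an illustrative sketch, a request body for &lt;code&gt;PUT /repos/{owner}/{repo}/branches/{branch}/protection&lt;/code&gt; might look like this (the status check name &lt;code&gt;ci/build&lt;/code&gt; is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "required_status_checks": {
    "strict": true,
    "contexts": ["ci/build"]
  },
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "required_approving_review_count": 1
  },
  "restrictions": null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This example requires one approving review and a passing &lt;code&gt;ci/build&lt;/code&gt; check before a pull request can be merged into the protected branch.&lt;/p&gt;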

&lt;h2&gt;
  
  
  Add a &lt;code&gt;CODEOWNERS&lt;/code&gt; File
&lt;/h2&gt;

&lt;p&gt;A &lt;code&gt;CODEOWNERS&lt;/code&gt; file defines who is responsible for specific parts of a repository. When someone opens a pull request that modifies those files, GitHub automatically requests a review from the designated code owners.&lt;/p&gt;

&lt;p&gt;Key points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Place the &lt;code&gt;CODEOWNERS&lt;/code&gt; file in the root, &lt;code&gt;.github/&lt;/code&gt;, or &lt;code&gt;docs/&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;Use file patterns to specify ownership (e.g., &lt;code&gt;*.js @frontend-team&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Multiple owners can be assigned to the same file or directory.&lt;/li&gt;
&lt;li&gt;Helps enforce accountability and ensures the right people review changes.&lt;/li&gt;
&lt;/ul&gt;
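
&lt;p&gt;A small &lt;code&gt;CODEOWNERS&lt;/code&gt; file might look like this (the team and user names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Default owners for everything in the repo
*        @org/platform-team

# Frontend code
*.js     @org/frontend-team

# Documentation
/docs/   @octocat
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that when multiple patterns match a file, the last matching pattern in the file takes precedence.&lt;/p&gt;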

&lt;h2&gt;
  
  
  Automated Security in GitHub
&lt;/h2&gt;

&lt;p&gt;GitHub provides several built-in &lt;strong&gt;automated security features&lt;/strong&gt; to help you identify and remediate risks in your repositories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository Dependency Graphs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub scans package manifests (e.g., package.json, requirements.txt).&lt;/li&gt;
&lt;li&gt;Creates a &lt;strong&gt;dependency graph&lt;/strong&gt; to show all direct and indirect dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dependabot Alerts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitors your dependency graph for known vulnerabilities.&lt;/li&gt;
&lt;li&gt;Cross-references package versions with &lt;strong&gt;GitHub Security Advisories&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Sends alerts when a risk is detected.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dependabot Automated Updates&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dependabot doesn’t just alert you—it can &lt;strong&gt;create pull requests&lt;/strong&gt; to update vulnerable dependencies automatically.&lt;/li&gt;
&lt;li&gt;Maintainers review, test, and merge these PRs to stay secure with minimal effort.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Automated Code Scanning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scans your code for &lt;strong&gt;security vulnerabilities and coding errors&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Prevents new issues from being introduced in pull requests.&lt;/li&gt;
&lt;li&gt;Powered by &lt;strong&gt;CodeQL&lt;/strong&gt;, which lets you query code as data and use custom or community-maintained queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Secret Scanning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detects &lt;strong&gt;hardcoded secrets&lt;/strong&gt; (API keys, tokens, credentials) in repositories.&lt;/li&gt;
&lt;li&gt;Enabled by default for public repos; can be enabled for private repos by admins.&lt;/li&gt;
&lt;li&gt;Alerts service providers, who may revoke or rotate exposed credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GitHub Administration
&lt;/h2&gt;

&lt;p&gt;GitHub provides different levels of &lt;strong&gt;administration&lt;/strong&gt;—team, organization, and enterprise—each offering specific controls and best practices for managing collaboration, security, and policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Administration at the Team Level&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Teams are substructures within organizations that simplify permission management and collaboration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Capabilities for team maintainers / repo admins&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create, delete, or rename teams.&lt;/li&gt;
&lt;li&gt;Add/remove members or sync membership with IdP groups (e.g., Microsoft Entra ID).&lt;/li&gt;
&lt;li&gt;Manage outside collaborators (consultants, contractors).&lt;/li&gt;
&lt;li&gt;Enable/disable team discussions and set visibility.&lt;/li&gt;
&lt;li&gt;Assign automatic code reviewers and schedule reminders.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Best practices&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create nested teams to reflect company hierarchy.&lt;/li&gt;
&lt;li&gt;Build interest/skill-based teams (e.g., JavaScript, data science) to streamline PR reviews.&lt;/li&gt;
&lt;li&gt;Use IdP synchronization to automate onboarding/offboarding and reduce manual updates.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Administration at the Organization Level&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Organizations are shared spaces where multiple teams collaborate across repositories.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Owner/administrator capabilities&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Invite or remove members.&lt;/li&gt;
&lt;li&gt;Create teams and grant maintainer permissions.&lt;/li&gt;
&lt;li&gt;Manage repository permissions and default settings.&lt;/li&gt;
&lt;li&gt;Add/remove outside collaborators.&lt;/li&gt;
&lt;li&gt;Configure security and billing.&lt;/li&gt;
&lt;li&gt;Apply organization-wide changes or migrations using scripts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Best practices&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ideally set up &lt;strong&gt;one organization&lt;/strong&gt; to simplify management.&lt;/li&gt;
&lt;li&gt;Multiple organizations increase setup effort, risk of misconfiguration, and may add costs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Administration at the Enterprise Level&lt;/strong&gt;&lt;br&gt;
Enterprise accounts (GitHub Enterprise Cloud or Server) allow centralized management across multiple organizations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Owner capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable &lt;strong&gt;SAML single sign-on&lt;/strong&gt; with IdP integration.&lt;/li&gt;
&lt;li&gt;Add or remove organizations.&lt;/li&gt;
&lt;li&gt;Manage enterprise-wide billing.&lt;/li&gt;
&lt;li&gt;Define &lt;strong&gt;global repository, project, and team policies&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Apply migrations or custom scripts across the enterprise.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;GitHub Connect&lt;/strong&gt; to integrate Enterprise Server with GitHub.com.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How do GitHub organizations and permissions work?
&lt;/h2&gt;

&lt;p&gt;Organizations in GitHub allow multiple people to collaborate on repositories while controlling access with &lt;strong&gt;permission levels&lt;/strong&gt;, &lt;strong&gt;forking rules&lt;/strong&gt;, and &lt;strong&gt;insights&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Repository Permission Levels&lt;/strong&gt;&lt;br&gt;
Repositories support five standard permission levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read&lt;/strong&gt; – View or discuss project content (non-code contributors).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Triage&lt;/strong&gt; – Manage issues and pull requests without write access (ideal for project managers).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write&lt;/strong&gt; – Push commits and make direct contributions (standard for developers).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain&lt;/strong&gt; – Manage repository settings without destructive actions (for project leads).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admin&lt;/strong&gt; – Full access, including sensitive actions like security management or deleting the repo (for admins/owners).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is Repository Forking?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Forking creates a &lt;strong&gt;personal copy of a repository&lt;/strong&gt; under your account. This allows experimentation or contributions without affecting the original project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public repos&lt;/strong&gt; → Always forkable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private repos&lt;/strong&gt; → Forking can be disabled or restricted to org members.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal repos&lt;/strong&gt; → Can only be forked within the same enterprise account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Note&lt;/u&gt;&lt;/strong&gt;: &lt;strong&gt;If you disable forking for a private repository, no one (including organization members) will be able to fork it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Viewing Repository Insights&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Insights tab&lt;/strong&gt; provides data to track repository health, activity, and security.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Contributors&lt;/strong&gt; – See commits, additions, and deletions by contributor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic&lt;/strong&gt; – Monitor unique visitors and page views.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commits&lt;/strong&gt; – Analyze commit activity over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Frequency&lt;/strong&gt; – Track lines added and deleted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency Graph&lt;/strong&gt; – Identify and monitor dependencies for vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices with Insights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor &lt;strong&gt;contributor activity&lt;/strong&gt; to identify engaged team members.&lt;/li&gt;
&lt;li&gt;Track &lt;strong&gt;traffic trends&lt;/strong&gt; to understand repository engagement.&lt;/li&gt;
&lt;li&gt;Regularly review the &lt;strong&gt;dependency graph&lt;/strong&gt; to address security risks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Managing enterprise permissions
&lt;/h2&gt;

&lt;p&gt;GitHub provides different &lt;strong&gt;permission levels&lt;/strong&gt; at both the &lt;strong&gt;organization&lt;/strong&gt; and &lt;strong&gt;enterprise levels&lt;/strong&gt;. These roles help maintain security, proper access, and governance across teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Organization Permission Levels
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Owner&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full control of the organization. Can add/remove members. For resiliency, an organization should have at least two owners, but keep the total number small.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Member&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Standard role. Can create and manage repositories and teams.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Moderator&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manage community interactions in public repositories (block/unblock contributors, set interaction limits, hide comments).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Billing Manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manage organization billing information.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security Manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manage security alerts and settings across the organization; read access to all repositories.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Outside Collaborator&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;External contributors (e.g., consultants, contractors) with access to one or more repositories, but not full org membership.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Enterprise Permission Levels
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Owner&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Complete control of the enterprise. Can manage administrators, add/remove organizations, enforce policies, and manage billing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Member&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same permissions as organization members.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Billing Manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;View and edit billing information, manage billing managers.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Guest Collaborator&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Limited access to specific repositories/organizations (Enterprise Managed Users only).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Team Synchronization
&lt;/h2&gt;

&lt;p&gt;If your company uses &lt;strong&gt;Microsoft Entra ID or Okta&lt;/strong&gt; as an identity provider (IdP), you can manage GitHub team membership through &lt;strong&gt;team synchronization&lt;/strong&gt;. With team sync enabled, GitHub automatically mirrors IdP group changes—removing the need for manual updates or custom scripts.&lt;/p&gt;

&lt;p&gt;This centralized approach simplifies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding&lt;/strong&gt; new employees&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adjusting access&lt;/strong&gt; when users move between teams&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Removing access&lt;/strong&gt; when employees leave the organization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Requirement&lt;/u&gt;: Team sync requires your IdP admin to enable SAML SSO and SCIM provisioning.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Searching GitHub
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Global search&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Searches across all of GitHub.&lt;/li&gt;
&lt;li&gt;Returns results from code, issues, PRs, users, Marketplace, etc.&lt;/li&gt;
&lt;li&gt;Supports full search syntax (&lt;code&gt;is:pr&lt;/code&gt;, &lt;code&gt;is:issue&lt;/code&gt;, etc.), though not every filter works for every result type.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Context search&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scoped to the current repo and specific tab (Issues, PRs).&lt;/li&gt;
&lt;li&gt;Exposes type-specific filters (author, labels, projects, etc.).&lt;/li&gt;
&lt;li&gt;Best when searching inside a single repo.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Common search filters&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;is:open is:issue assignee:@me&lt;/code&gt; → Open issues assigned to me.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;is:closed is:pr author:contoso&lt;/code&gt; → Closed PRs created by @contoso.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;is:pr sidebar in:comments&lt;/code&gt; → PRs with “sidebar” in comments.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;is:open is:issue label:bug -linked:pr&lt;/code&gt; → Open bug issues without a linked PR.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Organizing Work
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Milestones&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Group issues/PRs into goals (sprints, releases, phases).&lt;/li&gt;
&lt;li&gt;Track progress automatically (done vs remaining).&lt;/li&gt;
&lt;li&gt;Search by milestone: &lt;code&gt;is:open is:issue milestone:"Sprint 1"&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Labels&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metadata to classify issues/PRs (bug, priority, team).&lt;/li&gt;
&lt;li&gt;Searchable with &lt;code&gt;label:bug&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Assignees&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign issues/PRs to one or more people.&lt;/li&gt;
&lt;li&gt;Searchable with &lt;code&gt;assignee:@me&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Git History &amp;amp; Collaboration
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Git blame&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shows commit history for each line of a file.&lt;/li&gt;
&lt;li&gt;Helps identify who changed what and when.&lt;/li&gt;
&lt;li&gt;GitHub adds a UI view for easy navigation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Cross-linking&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Link issues, commits, PRs, and projects to provide context.&lt;/li&gt;
&lt;li&gt;Autolinks happen automatically with &lt;code&gt;#123&lt;/code&gt; or commit IDs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Saved replies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prewritten responses for maintainers.&lt;/li&gt;
&lt;li&gt;Great for common feedback (setup instructions, bug triage).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Issue templates&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better than saved replies for collecting structured info up front.&lt;/li&gt;
&lt;li&gt;Can define bug/feature request templates in repo settings.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Mentions (@username)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notifies specific users to join the discussion.&lt;/li&gt;
&lt;li&gt;Keeps context even after issues are closed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>github</category>
    </item>
    <item>
      <title>Exam Guide : GitHub Foundation Part 1</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Fri, 29 Aug 2025 11:50:07 +0000</pubDate>
      <link>https://dev.to/luffy7258/exam-guide-github-foundation-part-1-3hgp</link>
      <guid>https://dev.to/luffy7258/exam-guide-github-foundation-part-1-3hgp</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Git
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Version Control?
&lt;/h3&gt;

&lt;p&gt;A version control system (VCS) is a program or set of programs that tracks changes to a collection of files. One goal of a VCS is to easily recall earlier versions of individual files or of the entire project. Another goal is to allow several team members to work on a project, even on the same files, at the same time without affecting each other's work.&lt;/p&gt;

&lt;p&gt;Another name for a VCS is a software configuration management (SCM) system.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Git?
&lt;/h3&gt;

&lt;p&gt;Git is distributed, which means that a project's complete history is stored both on the client and on the server. You can edit files without a network connection, check them in locally, and sync with the server when a connection becomes available. If a server goes down, you still have a local copy of the project. Technically, you don't even have to have a server. Changes could be passed around in e-mail or shared by using removable media, but no one uses Git this way in practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Git Terminology?
&lt;/h3&gt;

&lt;p&gt;Before diving into Git commands, it’s important to get familiar with some basic terms you’ll hear often:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Working Tree – The collection of files and directories you’re actively working on.&lt;/li&gt;
&lt;li&gt;Repository (Repo) – The top-level directory where Git stores all history and metadata of your project.&lt;/li&gt;
&lt;li&gt;Hash – A unique ID Git generates (using SHA-1) to track file changes.&lt;/li&gt;
&lt;li&gt;Object – Core building blocks in Git (blobs, trees, commits, tags), each identified by a hash.&lt;/li&gt;
&lt;li&gt;Commit – A snapshot of changes you’ve saved into Git’s history.&lt;/li&gt;
&lt;li&gt;Branch – A series of commits with a name (like main), used to work on features independently.&lt;/li&gt;
&lt;li&gt;Remote – A link to another repository, often called origin, used for pushing and pulling changes.&lt;/li&gt;
&lt;li&gt;Commands, Subcommands &amp;amp; Options – How you interact with Git (e.g., git push, git reset --hard).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is the key difference between Git &amp;amp; GitHub?
&lt;/h3&gt;

&lt;p&gt;It’s easy to mix up Git and GitHub, but they are not the same thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git&lt;/strong&gt; is a &lt;strong&gt;distributed version control system&lt;/strong&gt; (DVCS). It helps developers track changes in code, work with branches, and collaborate by pushing and pulling updates between local and remote repositories. Git itself runs on your computer and doesn’t require the internet to work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;, on the other hand, is a &lt;strong&gt;cloud-based platform built on top of Git&lt;/strong&gt;. It provides an online place to host repositories, making collaboration much easier. Beyond storing code, GitHub adds collaboration tools such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Issues &amp;amp; Discussions – for bug tracking and communication.&lt;/li&gt;
&lt;li&gt;Pull Requests (PRs) – for code review and merging changes.&lt;/li&gt;
&lt;li&gt;Actions – for automating workflows like CI/CD.&lt;/li&gt;
&lt;li&gt;Projects &amp;amp; Labels – for organizing and managing work.&lt;/li&gt;
&lt;li&gt;Forks – for contributing to other projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short: &lt;strong&gt;Git is the tool, GitHub is the platform&lt;/strong&gt;. You use Git to manage your code versioning, and GitHub to share it, collaborate, and manage the bigger picture of software development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Git Commands
&lt;/h3&gt;

&lt;p&gt;Git works like a camera taking snapshots of your project. Every time you make changes, you can decide whether to keep a “snapshot” (commit) so Git can track the history of your project. Here are some essential commands to get started:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;git status&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Shows the current state of the working directory and staging area&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;git add &amp;lt;file&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Stages changes in a file (prepares it for the next commit)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;git commit -m "message"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Saves a snapshot of staged changes with a message describing the update&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;git log&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Displays the history of commits with author, date, and commit messages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;git help&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Provides help for Git commands. Use &lt;code&gt;git &amp;lt;command&amp;gt; --help&lt;/code&gt; for details.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
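&lt;p&gt;The commands in the table above can be tried end to end in a throwaway repository. Here is a minimal sketch (it assumes Git is installed and on your PATH; the file name and commit message are just examples):&lt;/p&gt;

```shell
# A walkthrough of the commands in the table above, in a throwaway repository.
set -e
cd "$(mktemp -d)"                 # work in a temporary directory
git init -q                       # create a new repository
git config user.name "Demo User"  # identity for this demo repo only
git config user.email "demo@example.com"
echo "hello" > notes.txt
git status --short                # prints "?? notes.txt" — an untracked file
git add notes.txt                 # stage the change for the next commit
git commit -q -m "Add notes"      # snapshot the staged change with a message
git log --oneline                 # history now shows the "Add notes" commit
```

&lt;p&gt;After the commit, &lt;code&gt;git status&lt;/code&gt; reports a clean working tree, and &lt;code&gt;git log&lt;/code&gt; shows the snapshot you just saved.&lt;/p&gt;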

&lt;h2&gt;
  
  
  Introduction to GitHub
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is GitHub?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt; is a cloud-based platform built on top of Git. While Git handles version control, GitHub makes it easier to &lt;strong&gt;collaborate, share, and manage projects online&lt;/strong&gt;. It provides a website, command-line tools, and workflows that bring developers, teams, and organizations together.&lt;/p&gt;

&lt;p&gt;At its core, GitHub Enterprise focuses on five pillars: &lt;strong&gt;AI, Collaboration, Productivity, Security, and Scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI&lt;/strong&gt; – GitHub leverages generative AI through tools like &lt;strong&gt;Copilot, Copilot Chat&lt;/strong&gt;, and &lt;strong&gt;Copilot Agents&lt;/strong&gt;. These help write code faster, generate documentation, suggest fixes, and even improve security by catching issues early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaboration&lt;/strong&gt; – With features like &lt;strong&gt;Repositories, Issues, Pull Requests, and Discussions&lt;/strong&gt;, GitHub makes teamwork seamless. Teams can work in parallel, review code efficiently, and shorten delivery cycles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Productivity&lt;/strong&gt; – Built-in CI/CD pipelines (&lt;strong&gt;GitHub Actions&lt;/strong&gt;) automate repetitive tasks such as testing, building, and deploying software. This lets developers spend more time solving problems rather than managing workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt; – GitHub integrates security at every stage of development with tools like &lt;strong&gt;CodeQL, Dependabot, secret scanning&lt;/strong&gt;, and a security overview dashboard. This ensures vulnerabilities are detected and resolved early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scale&lt;/strong&gt; – GitHub powers the world’s largest developer community, with 100M+ developers and 420M+ repositories. Its extensible platform continuously evolves through contributions from open source and enterprise developers alike.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a repository?
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;repository (repo)&lt;/strong&gt; is the home for your project in Git or GitHub. It contains all of your project’s files along with their &lt;strong&gt;entire revision history&lt;/strong&gt;, so you can track changes over time, restore older versions, and collaborate with others.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Gists?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Gists&lt;/strong&gt; are a GitHub feature designed for sharing &lt;strong&gt;small pieces of code, notes, or configuration files&lt;/strong&gt; in a lightweight way. Unlike full repositories, gists are quick to create and perfect for sharing short, useful snippets. Since they are essentially &lt;strong&gt;mini Git repositories&lt;/strong&gt;, you can fork, clone, and even apply version control to them.&lt;/p&gt;

&lt;h4&gt;
  
  
  Key Features of Gists
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public and Secret Gists&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Public Gists: Visible to everyone and discoverable through GitHub search. Great for sharing examples, fixes, or snippets with the community.&lt;/li&gt;
&lt;li&gt;Secret Gists: Hidden from search results and not publicly listed. However, anyone with the link can view them. Useful for sharing code privately with a limited group.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Version Control&lt;/strong&gt; : Every change you make is tracked, so you can view edit history or roll back to previous versions.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Forking and Cloning&lt;/strong&gt; : Just like repositories, gists can be forked and cloned, enabling others to adapt your snippet to their needs.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Embedding&lt;/strong&gt; : Gists can be embedded directly into blogs, forums, or websites, making them ideal for tutorials or documentation.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Markdown Support&lt;/strong&gt; : Gists support Markdown, letting you add headings, links, rich text, or even images alongside your code for better context.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Collaboration&lt;/strong&gt; : Others can fork and comment on your gists, enabling lightweight collaboration around snippets and solutions.&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Use Cases for Gists
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sharing quick code examples or bug fixes&lt;/li&gt;
&lt;li&gt;Storing personal configuration files or scripts&lt;/li&gt;
&lt;li&gt;Creating templates for reusable code patterns&lt;/li&gt;
&lt;li&gt;Sharing error logs or debugging details with collaborators&lt;/li&gt;
&lt;li&gt;Embedding snippets into blogs, articles, or documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Important Notes
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Do not store sensitive information&lt;/strong&gt; (like API keys, passwords, or secrets) in gists—even in secret gists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret ≠ Private&lt;/strong&gt;: Secret gists are simply hidden, but anyone with the link can still access them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Limitations of Gists
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Not suited for large projects or multi-file structures (use a repository instead).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret gists are still accessible via URL, so they are not a substitute for true privacy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What are Wikis?
&lt;/h3&gt;

&lt;p&gt;A wiki in GitHub is a dedicated space within a repository for &lt;strong&gt;long-form documentation&lt;/strong&gt;. While a &lt;strong&gt;README&lt;/strong&gt; gives a &lt;strong&gt;quick overview&lt;/strong&gt;, a &lt;strong&gt;wiki&lt;/strong&gt; is ideal for &lt;strong&gt;detailed guides, design notes, or tutorials&lt;/strong&gt;. It serves as a living knowledge base for your project. &lt;strong&gt;For private repos, only users with read access can view the wiki&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Components of the GitHub Flow
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;GitHub Flow&lt;/strong&gt; is a lightweight, branch-based workflow designed for continuous collaboration and deployment. It is built around three main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branches&lt;/strong&gt; – A branch is where you do your work independently from the main codebase. You can experiment, fix bugs, or add features without affecting the default branch (main).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commits&lt;/strong&gt; – A commit saves your changes to the repository, capturing a snapshot of what the code looked like at a certain point in time. Each commit includes metadata like the author, timestamp, and a unique identifier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull Requests&lt;/strong&gt; – A pull request is how you propose changes to be merged into the default branch. It’s the place for discussion, feedback, code reviews, and automated checks before integration. &lt;strong&gt;GitHub also supports Draft Pull Requests&lt;/strong&gt;, which let you open a pull request that's not yet ready for review.&lt;/li&gt;
&lt;/ul&gt;
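&lt;p&gt;The pull request itself happens on GitHub, but the branch-and-merge cycle underneath it can be sketched locally. This is an illustrative stand-in only (branch name and messages are made up; &lt;code&gt;git switch&lt;/code&gt; needs Git 2.23+ and &lt;code&gt;git init -b&lt;/code&gt; needs 2.28+):&lt;/p&gt;

```shell
# Local sketch of the GitHub Flow cycle: branch → commit → merge.
# On GitHub, the merge step would happen through a pull request instead.
set -e
cd "$(mktemp -d)"
git init -q -b main               # start with "main" as the default branch
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "base" > README.md
git add README.md
git commit -q -m "Initial commit"
git switch -c feature/sidebar     # branch: an isolated line of work
echo "sidebar" >> README.md
git commit -aq -m "Add sidebar"   # commit: a snapshot on the feature branch
git switch main
git merge -q --no-edit feature/sidebar   # stand-in for merging the pull request
git log --oneline                 # both commits now appear on main
```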

&lt;h4&gt;
  
  
  File States in Git
&lt;/h4&gt;

&lt;p&gt;Within a Git repository, every file goes through &lt;strong&gt;specific states&lt;/strong&gt; as part of version control. Understanding these states is critical for working with Git effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Untracked&lt;/strong&gt; – A new file that Git doesn’t know about yet. It hasn’t been added to version control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracked&lt;/strong&gt; – A file that Git is monitoring. Tracked files can exist in several substates:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unmodified&lt;/strong&gt; – The file hasn’t changed since the last commit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modified&lt;/strong&gt; – The file has been changed but not yet staged.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Staged&lt;/strong&gt; – The modified file has been added to the staging area (git add) and is ready to be committed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Committed&lt;/strong&gt; – The file is saved in the repository’s database as part of a commit (the latest snapshot).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
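&lt;p&gt;You can watch a single file move through these states with &lt;code&gt;git status --short&lt;/code&gt;. A small sketch in a throwaway repository (file name and contents are arbitrary):&lt;/p&gt;

```shell
# One file moving through the states above: untracked → staged → committed → modified.
set -e
cd "$(mktemp -d)"
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "v1" > app.txt               # untracked: Git doesn't know the file yet
git status --short                # prints "?? app.txt"
git add app.txt                   # staged: added to the staging area
git status --short                # prints "A  app.txt"
git commit -q -m "Add app.txt"    # committed: saved in the repository database
git status --short                # prints nothing — tracked and unmodified
echo "v2" > app.txt               # modified: changed but not yet staged
git status --short                # prints " M app.txt"
```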

&lt;h3&gt;
  
  
  GitHub Flow
&lt;/h3&gt;

&lt;p&gt;The GitHub Flow is a simple, lightweight workflow designed to help you safely make and share changes. It’s widely used because it supports &lt;strong&gt;continuous integration and continuous delivery (CI/CD)&lt;/strong&gt; while keeping things easy to manage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Git Flow
&lt;/h3&gt;

&lt;p&gt;While GitHub Flow is simple, Git Flow is a more structured branching model—suited for projects with &lt;strong&gt;scheduled or versioned releases&lt;/strong&gt;. It introduces long-lived branches and specific rules for release management.&lt;/p&gt;

&lt;h4&gt;
  
  
  GitHub Flow vs Git Flow
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature / Aspect&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;GitHub Flow&lt;/strong&gt; 🌐&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Git Flow&lt;/strong&gt; 🔀&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lightweight workflow for continuous delivery and collaboration.&lt;/td&gt;
&lt;td&gt;Structured branching model for release-driven development.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Default Branch&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;main&lt;/code&gt; (or &lt;code&gt;master&lt;/code&gt; in older repos).&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;master&lt;/code&gt; (production) and &lt;code&gt;develop&lt;/code&gt; (integration).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Branch Types&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Feature branches → Pull Requests → Merge into &lt;code&gt;main&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;master&lt;/code&gt;, &lt;code&gt;develop&lt;/code&gt;, &lt;code&gt;feature/*&lt;/code&gt;, &lt;code&gt;release/*&lt;/code&gt;, &lt;code&gt;hotfix/*&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Simplicity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Very simple and easy to adopt.&lt;/td&gt;
&lt;td&gt;More complex, with multiple long-lived branches.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Release Strategy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Continuous delivery—changes merged frequently.&lt;/td&gt;
&lt;td&gt;Versioned releases with explicit preparation branches.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Startups, small/medium teams, fast-moving projects.&lt;/td&gt;
&lt;td&gt;Enterprises, regulated industries, projects needing predictable release cycles.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;History Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Often uses squash merges or rebase for a clean history.&lt;/td&gt;
&lt;td&gt;Requires merge commits to maintain branching structure.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Speed vs Control&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Prioritizes speed and agility.&lt;/td&gt;
&lt;td&gt;Prioritizes control and release planning.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  GitHub Issues &amp;amp; Discussions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;GitHub Issues&lt;/strong&gt; are the primary way to &lt;strong&gt;track tasks, ideas, bugs, and feedback&lt;/strong&gt; within a repository. They can be created in multiple ways and help teams stay organized. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;labels, mentions, and reactions&lt;/strong&gt; to categorize and prioritize work.&lt;/li&gt;
&lt;li&gt;Create &lt;strong&gt;issue templates&lt;/strong&gt; to maintain consistency across contributions.&lt;/li&gt;
&lt;li&gt;Convert discussions into issues when an actionable task emerges.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GitHub Discussions&lt;/strong&gt;, on the other hand, are designed for &lt;strong&gt;open conversations not tied directly to code&lt;/strong&gt;. They work like a community forum where you can share ideas, ask questions, or provide feedback. Discussions can be &lt;strong&gt;public or private&lt;/strong&gt;, depending on repository settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key points about Discussions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Repository owners or users with Write access can enable Discussions.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Any authenticated user with repo access can create a discussion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainers&lt;/strong&gt; can &lt;strong&gt;pin important discussions&lt;/strong&gt; (e.g., announcements, FAQs, or active topics) for visibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discussions can be converted into issues when needed&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  GitHub platform management
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Managing Notifications and Subscriptions
&lt;/h4&gt;

&lt;p&gt;Staying updated on important activity is key when working in GitHub. &lt;strong&gt;Notifications&lt;/strong&gt; let you track updates across repositories, teams, and projects, while &lt;strong&gt;subscriptions&lt;/strong&gt; help you control what you actually want to hear about.&lt;/p&gt;

&lt;p&gt;You can subscribe to notifications for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Specific issues, pull requests, or gists&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repository activity&lt;/strong&gt; (issues, PRs, releases, discussions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow statuses&lt;/strong&gt; from GitHub Actions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;All activity across a repository&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You’re automatically subscribed when you interact with something&lt;/strong&gt; (e.g., commenting, opening an issue, or being assigned).&lt;/li&gt;
&lt;li&gt;You can also manually &lt;strong&gt;watch, unwatch, or customize&lt;/strong&gt; your subscriptions to match your preferences.&lt;/li&gt;
&lt;li&gt;If updates become irrelevant, simply &lt;strong&gt;unsubscribe&lt;/strong&gt; or adjust notification settings to reduce noise.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Subscribing to Threads and Finding Mentions
&lt;/h4&gt;

&lt;p&gt;GitHub gives you fine-grained control over how you follow conversations and stay in the loop.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Subscribing to threads&lt;/strong&gt; – &lt;strong&gt;You can manually subscribe to any issue, pull request, or discussion—even if you weren’t part of the original conversation&lt;/strong&gt;. Simply click &lt;strong&gt;Subscribe&lt;/strong&gt; in the right-hand sidebar.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Finding mentions&lt;/strong&gt; – To locate conversations where you’ve been tagged, use the search qualifier:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;mentions:&amp;lt;username&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This ensures you don’t miss discussions requiring your attention.&lt;/p&gt;

&lt;h4&gt;
  
  
  Filtering Notifications
&lt;/h4&gt;

&lt;p&gt;You can adjust your &lt;strong&gt;watch settings&lt;/strong&gt; on a repository to control the amount of updates you receive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Watching&lt;/strong&gt; – Get notifications for all activity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Not watching&lt;/strong&gt; – Only receive updates if you participate or are @mentioned.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ignore&lt;/strong&gt; – Receive no notifications for that repository.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Custom&lt;/strong&gt; – Fine-tune notifications (issues only, PRs only, etc.).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Manage this by selecting &lt;strong&gt;Watch&lt;/strong&gt; at the top of a repository page.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuring Notification Settings
&lt;/h4&gt;

&lt;p&gt;GitHub also lets you choose &lt;strong&gt;where&lt;/strong&gt; you receive notifications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Email&lt;/strong&gt; – Delivered to your registered address.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web&lt;/strong&gt; – Viewable directly in your GitHub dashboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile&lt;/strong&gt; – Push notifications via the GitHub mobile app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom&lt;/strong&gt; – Configure which events go to which channel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These preferences can be managed under User Settings → Notifications.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are GitHub Pages?
&lt;/h3&gt;

&lt;p&gt;GitHub Pages is a free static site hosting service offered by GitHub. It allows you to create and host a personal, organizational, or project website directly from a GitHub repository.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It serves &lt;strong&gt;HTML, CSS, and JavaScript&lt;/strong&gt; files straight from your repo.&lt;/li&gt;
&lt;li&gt;Optionally, you can run the files through a &lt;strong&gt;build process&lt;/strong&gt; (for example, with Jekyll) before publishing.&lt;/li&gt;
&lt;li&gt;You can specify a &lt;strong&gt;source branch and folder&lt;/strong&gt; (like /docs) as the content root for your site.&lt;/li&gt;
&lt;li&gt;Once published, GitHub makes the site publicly accessible under a github.io domain or a custom domain you configure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction to GitHub's Products
&lt;/h2&gt;

&lt;h3&gt;
  
  
  GitHub accounts &amp;amp; plans
&lt;/h3&gt;

&lt;p&gt;When working with GitHub, it’s important to understand two concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Account Types&lt;/strong&gt; – define who owns resources (personal, organization, or enterprise).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plans&lt;/strong&gt; – define what features are available (free or paid tiers).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  GitHub Account Types
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1. Personal Accounts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identity on GitHub (username + profile).&lt;/li&gt;
&lt;li&gt;Can own repositories, packages, and projects.&lt;/li&gt;
&lt;li&gt;Supports GitHub Free or GitHub Pro plans.&lt;/li&gt;
&lt;li&gt;Unlimited public and private repos (Free private repos have limited features).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Organization Accounts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shared accounts where teams collaborate across projects.&lt;/li&gt;
&lt;li&gt;Members log in with personal accounts; permissions are role-based (member, owner, security manager).&lt;/li&gt;
&lt;li&gt;Can own repositories, packages, and projects.&lt;/li&gt;
&lt;li&gt;Supports GitHub Free for Organizations or GitHub Team/Enterprise plans.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Enterprise Accounts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designed for large-scale organizations with multiple teams and orgs.&lt;/li&gt;
&lt;li&gt;Enterprise owners can manage orgs, enforce policies, and control billing centrally.&lt;/li&gt;
&lt;li&gt;Supports GitHub Enterprise (Cloud or Server).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  GitHub Plans Comparison
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature / Plan&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Free&lt;/strong&gt; (Personal/Org)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Pro&lt;/strong&gt; (Personal)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Team&lt;/strong&gt; (Organizations)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Enterprise&lt;/strong&gt; (Cloud/Server)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Repositories&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Unlimited public &amp;amp; private (limited features on private for Free)&lt;/td&gt;
&lt;td&gt;Unlimited public &amp;amp; private&lt;/td&gt;
&lt;td&gt;Unlimited public &amp;amp; private&lt;/td&gt;
&lt;td&gt;Unlimited public &amp;amp; private&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Collaborators&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited + org-wide policies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Community only&lt;/td&gt;
&lt;td&gt;Email support&lt;/td&gt;
&lt;td&gt;Email support&lt;/td&gt;
&lt;td&gt;Dedicated Enterprise support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitHub Actions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2,000 minutes/month&lt;/td&gt;
&lt;td&gt;3,000 minutes/month&lt;/td&gt;
&lt;td&gt;3,000 minutes/month&lt;/td&gt;
&lt;td&gt;50,000 minutes/month (Enterprise Cloud)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Packages Storage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;500 MB&lt;/td&gt;
&lt;td&gt;2 GB&lt;/td&gt;
&lt;td&gt;2 GB&lt;/td&gt;
&lt;td&gt;50 GB (Enterprise Cloud)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Codespaces&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;120 core hours + 15 GB storage/month&lt;/td&gt;
&lt;td&gt;180 core hours + 20 GB storage/month&lt;/td&gt;
&lt;td&gt;Org-level management + same limits as Pro&lt;/td&gt;
&lt;td&gt;Custom enterprise limits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security Features&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Dependabot alerts, 2FA enforcement&lt;/td&gt;
&lt;td&gt;Same as Free&lt;/td&gt;
&lt;td&gt;Required reviewers, protected branches, code owners&lt;/td&gt;
&lt;td&gt;Advanced Security (CodeQL, secret scanning, policies)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Collaboration Tools&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Issues, PRs, Wikis, Pages (limited for Free orgs)&lt;/td&gt;
&lt;td&gt;Wikis, Pages, Insights&lt;/td&gt;
&lt;td&gt;Draft PRs, team reviewers, scheduled reminders, advanced insights&lt;/td&gt;
&lt;td&gt;Centralized policies, GitHub Connect, EMU (Enterprise Managed Users)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best For&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Individuals or small teams starting out&lt;/td&gt;
&lt;td&gt;Individual developers needing advanced insights&lt;/td&gt;
&lt;td&gt;Teams needing structured collaboration&lt;/td&gt;
&lt;td&gt;Large orgs needing compliance, governance, and scale&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hosting Options&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GitHub.com&lt;/td&gt;
&lt;td&gt;GitHub.com&lt;/td&gt;
&lt;td&gt;GitHub.com&lt;/td&gt;
&lt;td&gt;GitHub.com (Enterprise Cloud) or Self-hosted (Enterprise Server)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Enterprise Managed Users (EMU)
&lt;/h4&gt;

&lt;p&gt;Enterprise Managed Users allow organizations to control identities using their identity provider, enabling central access management and increased security.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Mobile vs GitHub Desktop
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature / Tool&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;GitHub Mobile&lt;/strong&gt; (iOS/Android)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;GitHub Desktop&lt;/strong&gt; (Windows/macOS)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Primary Use&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;On-the-go collaboration &amp;amp; notifications&lt;/td&gt;
&lt;td&gt;Local Git workflow management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Commit &amp;amp; Stage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No (can edit PR files only)&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Manage Notifications&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Review PRs &amp;amp; Issues&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Edit Files&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ In pull requests&lt;/td&gt;
&lt;td&gt;✅ Locally with commits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Clone Repos&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Verify Sign-in&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Secure with 2FA&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CI Status View&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Platforms&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;iOS, Android&lt;/td&gt;
&lt;td&gt;Windows, macOS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  License Usage Stats for Machine Accounts and Peripheral Services
&lt;/h2&gt;

&lt;p&gt;Tracking license usage is essential in &lt;strong&gt;GitHub Enterprise&lt;/strong&gt; for both cost optimization and security compliance. Machine accounts and peripheral services can consume licenses just like regular users, which directly impacts enterprise costs and resource allocation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Understanding Machine Accounts and Peripheral Services
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Machine Accounts
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Special GitHub accounts used for automation, integrations, or running scripts.&lt;/li&gt;
&lt;li&gt;Commonly tied to CI/CD tools (e.g., GitHub Actions, Jenkins, CircleCI).&lt;/li&gt;
&lt;li&gt;Consume a GitHub license like any standard user.&lt;/li&gt;
&lt;li&gt;Act independently of human users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Peripheral Services
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;External services or tools interacting with GitHub via API requests.&lt;/li&gt;
&lt;li&gt;Examples include:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipelines&lt;/strong&gt; (GitHub Actions, self-hosted runners, Jenkins).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Tools&lt;/strong&gt; (Dependabot, CodeQL, Snyk).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third-party Integrations&lt;/strong&gt; (Slack, Jira, Datadog).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is GitHub Copilot and Its Different Plans?
&lt;/h2&gt;

&lt;p&gt;GitHub Copilot is your AI-powered pair programmer that helps you write code faster and smarter. Built by &lt;strong&gt;GitHub&lt;/strong&gt; and &lt;strong&gt;OpenAI&lt;/strong&gt;, it uses &lt;strong&gt;OpenAI Codex&lt;/strong&gt; and newer models such as GPT-4o to generate real-time suggestions in dozens of programming languages.&lt;/p&gt;

&lt;p&gt;Research shows that developers using GitHub Copilot experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;46% of new code being written by AI.&lt;/li&gt;
&lt;li&gt;A 55% boost in productivity.&lt;/li&gt;
&lt;li&gt;Feeling more focused on meaningful work (reported by 74% of developers).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Copilot integrates directly into popular IDEs like &lt;strong&gt;VS Code, Visual Studio, JetBrains, and Neovim&lt;/strong&gt;, offering code autocompletion, explanations, and even test generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Copilot Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Copilot for Chat&lt;/strong&gt; – AI-powered chat inside your IDE to explain code, generate tests, and suggest bug fixes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot for Pull Requests&lt;/strong&gt; – AI-generated PR descriptions and tags, powered by GPT-4.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot for the CLI&lt;/strong&gt; – Helps compose complex terminal commands and flags, reducing time spent searching syntax.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  GitHub Copilot Plans Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature / Plan&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Free&lt;/strong&gt; (Individual)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Pro&lt;/strong&gt; (Individual)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Pro+&lt;/strong&gt; (Individual)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Business&lt;/strong&gt; (Teams/Orgs)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Enterprise&lt;/strong&gt; (Large Orgs)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code Completions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2,000/month&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited (priority capacity)&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited (personalized w/ internal code)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Chat Requests&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;50/month&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI Models&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GPT-4o, Claude 3.5&lt;/td&gt;
&lt;td&gt;Latest models, priority access&lt;/td&gt;
&lt;td&gt;Premium models, priority infra&lt;/td&gt;
&lt;td&gt;Org-wide access to latest models&lt;/td&gt;
&lt;td&gt;Fine-tuned org-specific models&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IDE Integration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VS Code, Visual Studio, JetBrains, Neovim&lt;/td&gt;
&lt;td&gt;Same as Free&lt;/td&gt;
&lt;td&gt;Same as Pro&lt;/td&gt;
&lt;td&gt;Same as Pro + Mobile support&lt;/td&gt;
&lt;td&gt;Deep GitHub integration + enterprise IDEs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Test Generation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes + org customization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pull Request Enhancements&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Basic&lt;/td&gt;
&lt;td&gt;✅ Basic&lt;/td&gt;
&lt;td&gt;✅ AI-powered tags &amp;amp; security filtering&lt;/td&gt;
&lt;td&gt;✅ Advanced PR summaries &amp;amp; org-wide policies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CLI Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security Features&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ None&lt;/td&gt;
&lt;td&gt;❌ None&lt;/td&gt;
&lt;td&gt;❌ None&lt;/td&gt;
&lt;td&gt;Vulnerability filtering, IP indemnity, compliance tools&lt;/td&gt;
&lt;td&gt;Advanced Security + org-wide governance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Management Tools&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;Centralized management &amp;amp; policies&lt;/td&gt;
&lt;td&gt;Enterprise-wide management &amp;amp; identity (EMU)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Customization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;❌ Limited&lt;/td&gt;
&lt;td&gt;✅ Fine-tuned org/private models&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best For&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Beginners exploring AI coding&lt;/td&gt;
&lt;td&gt;Individual developers needing unlimited AI help&lt;/td&gt;
&lt;td&gt;Power users needing priority performance&lt;/td&gt;
&lt;td&gt;Organizations needing secure collaboration&lt;/td&gt;
&lt;td&gt;Enterprises needing personalization, compliance, and scale&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What is Code Scanning?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Code scanning&lt;/strong&gt; is a GitHub feature that uses &lt;strong&gt;CodeQL&lt;/strong&gt; to analyze the code in your repository for security &lt;strong&gt;vulnerabilities&lt;/strong&gt; and &lt;strong&gt;coding errors&lt;/strong&gt;. It helps developers catch problems early and ensure safer, cleaner code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Availability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabled by default for all public repositories.&lt;/li&gt;
&lt;li&gt;Available for private repositories when GitHub Advanced Security is enabled.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a potential vulnerability or error is detected, GitHub displays an alert in the repository’s Security tab.&lt;/li&gt;
&lt;li&gt;Once the issue is fixed, the alert is automatically closed.&lt;/li&gt;
&lt;li&gt;Developers can triage and prioritize fixes, preventing both existing and newly introduced issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Triggers for scanning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;on:push&lt;/code&gt; – runs scans whenever new code is pushed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;on:pull_request&lt;/code&gt; – scans pull requests before merging, ensuring vulnerabilities aren’t introduced.&lt;/li&gt;
&lt;li&gt;Scans can also be scheduled at specific days/times.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Powered by GitHub Actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code scanning workflows run on GitHub Actions.&lt;/li&gt;
&lt;li&gt;Each run consumes GitHub Actions minutes (free for public repositories and self-hosted runners).&lt;/li&gt;
&lt;li&gt;For private repositories, included usage depends on your GitHub plan.&lt;/li&gt;
&lt;li&gt;Usage beyond the free limit is controlled by spending limits (default $0/month for billed accounts, unlimited for invoiced customers).&lt;/li&gt;
&lt;li&gt;Minutes reset monthly, while storage usage accumulates until cleared.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
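&lt;p&gt;The triggers above map directly onto the &lt;code&gt;on:&lt;/code&gt; block of a code scanning workflow. Below is a minimal sketch of a CodeQL workflow using GitHub's official &lt;code&gt;github/codeql-action&lt;/code&gt;; the cron schedule and the &lt;code&gt;languages&lt;/code&gt; value are illustrative.&lt;/p&gt;

```yaml
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: '30 2 * * 1'   # also scan every Monday at 02:30 UTC

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload alerts to the Security tab
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: python
      - uses: github/codeql-action/analyze@v3
```

&lt;p&gt;The &lt;code&gt;security-events: write&lt;/code&gt; permission is what lets the analyze step publish alerts to the repository's Security tab.&lt;/p&gt;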

&lt;h2&gt;
  
  
  What is GitHub Codespaces?
&lt;/h2&gt;

&lt;p&gt;GitHub Codespaces is a cloud-hosted development environment powered by GitHub. It lets you spin up a fully configured, containerized workspace directly from a repository—without needing to set up dependencies or environments on your local machine.&lt;/p&gt;

&lt;p&gt;Codespaces are configurable, so teams can define a repeatable environment for all developers, ensuring consistency across projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Codespace Lifecycle
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create – Start a Codespace from GitHub.com, VS Code, or the GitHub CLI.&lt;/li&gt;
&lt;li&gt;Connect/Disconnect – You can disconnect and reconnect without interrupting processes.&lt;/li&gt;
&lt;li&gt;Stop/Restart – Resume where you left off without losing changes.&lt;/li&gt;
&lt;li&gt;Delete – Remove once work is complete (unpushed changes trigger a warning).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ways to Create a Codespace
&lt;/h3&gt;

&lt;p&gt;You can create a Codespace in four main ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From a template repository – to start a new project.&lt;/li&gt;
&lt;li&gt;From a branch – for feature development.&lt;/li&gt;
&lt;li&gt;From a pull request – to explore or review in-progress work.&lt;/li&gt;
&lt;li&gt;From a specific commit – to investigate bugs at a certain point in history.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Long-running vs Temporary – Use Codespaces for short tests or extended feature development.&lt;/li&gt;
&lt;li&gt;Prebuilds – Admins can enable prebuilds to speed up environment setup.&lt;/li&gt;
&lt;li&gt;Timeouts – Codespaces stop after 30 minutes of inactivity. Data is preserved until explicitly deleted.&lt;/li&gt;
&lt;li&gt;Rebuilds – Rebuild to apply configuration changes. Cached images speed things up, while a full rebuild clears the cache for fresh setup.&lt;/li&gt;
&lt;li&gt;Retention – Inactive Codespaces are automatically deleted after 30 days (configurable).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What Can You Customize in GitHub Codespaces?
&lt;/h3&gt;

&lt;p&gt;GitHub Codespaces is highly configurable, allowing you to personalize and optimize your development environment. Here are the key customization options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Settings Sync – Sync your VS Code settings between the desktop and web client.&lt;/li&gt;
&lt;li&gt;Dotfiles – Use a dotfiles repo to set up scripts, shell preferences, and custom configurations.&lt;/li&gt;
&lt;li&gt;Rename a Codespace – Change the autogenerated name to easily identify different Codespaces.&lt;/li&gt;
&lt;li&gt;Change Your Shell – Use your preferred shell (bash, zsh, etc.), set a new default, or configure via dotfiles.&lt;/li&gt;
&lt;li&gt;Change the Machine Type – Adjust CPU/RAM resources depending on workload.&lt;/li&gt;
&lt;li&gt;Set the Default Editor – Choose how Codespaces open:

&lt;ul&gt;
&lt;li&gt;VS Code (desktop or web)&lt;/li&gt;
&lt;li&gt;JetBrains Gateway (for JetBrains IDEs)&lt;/li&gt;
&lt;li&gt;JupyterLab (for data science workflows)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Set the Default Region – Decide where your data is stored geographically.&lt;/li&gt;

&lt;li&gt;Set the Timeout – Change the default 30-minute inactivity timeout to longer or shorter intervals.&lt;/li&gt;

&lt;li&gt;Configure Automatic Deletion – Define how long stopped Codespaces are retained (up to 30 days).&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Projects vs Projects (Classic)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Projects&lt;/strong&gt; (New)&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Projects (Classic)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tables &amp;amp; Boards&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tables and Boards with flexible layouts (including Lists and Timeline).&lt;/td&gt;
&lt;td&gt;Boards only.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sort, rank, and group items using &lt;strong&gt;custom fields&lt;/strong&gt; (text, number, date, iteration, single select).&lt;/td&gt;
&lt;td&gt;Columns and Cards only.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Insights&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Create &lt;strong&gt;visuals and charts&lt;/strong&gt; for historical and current progress tracking.&lt;/td&gt;
&lt;td&gt;Simple &lt;strong&gt;progress bar&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Automation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Advanced automation via &lt;strong&gt;GraphQL API, Actions, and column presets&lt;/strong&gt;.&lt;/td&gt;
&lt;td&gt;Limited automation with column presets when issues/PRs are added, edited, or closed.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  GitHub Actions with Workflows
&lt;/h3&gt;

&lt;p&gt;GitHub Actions allows you to add powerful automation to your projects. By creating workflows, &lt;strong&gt;you can define tasks that run automatically when certain events occur in your repository&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A workflow is made up of one or more jobs, and each job runs a series of steps.&lt;/li&gt;
&lt;li&gt;Workflows can be triggered by events such as issue creation, pull requests, or pushes to a branch.&lt;/li&gt;
&lt;/ul&gt;
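&lt;p&gt;As a sketch, here is a minimal workflow with one job and a few steps, triggered by pushes and issue creation; the file path, job name, and step names are illustrative.&lt;/p&gt;

```yaml
# .github/workflows/greet.yml
name: Greet
on:
  push:
    branches: [main]
  issues:
    types: [opened]

jobs:
  greet:                       # a workflow contains one or more jobs
    runs-on: ubuntu-latest
    steps:                     # each job runs a series of steps in order
      - uses: actions/checkout@v4
      - name: Say hello
        run: echo "Hello from GitHub Actions"
```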

</description>
      <category>github</category>
    </item>
    <item>
      <title>AWS CDK: The Beginner’s Guide</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Tue, 26 Aug 2025 16:27:31 +0000</pubDate>
      <link>https://dev.to/luffy7258/aws-cdk-the-beginners-guide-48em</link>
      <guid>https://dev.to/luffy7258/aws-cdk-the-beginners-guide-48em</guid>
      <description>&lt;h1&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;The other day, I had an interview with a bank for a Platform Engineer role, and one of the main tools they use is &lt;strong&gt;AWS CDK&lt;/strong&gt;. Up until then, my go-to Infrastructure as Code tool had always been &lt;strong&gt;Terraform&lt;/strong&gt;, so I was curious: &lt;em&gt;How does CDK really work, and what should someone know before using it in the real world?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That curiosity gave me the idea for this blog. Instead of just keeping my learnings to myself, I wanted to share them with you in a beginner-friendly way. Throughout this series, I’ll be creating multiple small projects to break down the concepts step by step, so it’s easier to follow along and connect the dots. My goal is to give you a clear picture of what you need to know before using AWS CDK in any company or environment—so that when the time comes, you’ll be ready to confidently create and manage AWS resources.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;What is CDK?&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Before diving into the details, let’s start with the basics—&lt;strong&gt;what exactly is AWS CDK?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS CDK (Cloud Development Kit) is an &lt;strong&gt;open-source Infrastructure as Code (IaC) framework&lt;/strong&gt; that allows you to define your cloud resources using familiar programming languages like Python, TypeScript, Java, or C#. Instead of writing long YAML or JSON templates, you can write actual code to describe your infrastructure.&lt;/p&gt;

&lt;p&gt;Think of it like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With &lt;strong&gt;Terraform/CloudFormation&lt;/strong&gt;, you usually declare what you want in a static way.&lt;/li&gt;
&lt;li&gt;With &lt;strong&gt;AWS CDK&lt;/strong&gt;, you get the power of a full programming language to describe what you want, along with the flexibility to use loops, conditions, and abstractions to make it reusable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, AWS CDK lets you bring your developer mindset into infrastructure management.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Understanding Important AWS CDK Core Concepts&lt;/strong&gt;
&lt;/h1&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Projects&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;An &lt;strong&gt;AWS CDK project&lt;/strong&gt; is just the folder where all your infrastructure code lives.&lt;/p&gt;

&lt;p&gt;When you create a CDK project, some files are &lt;strong&gt;always there&lt;/strong&gt; no matter which language you choose, while others are &lt;strong&gt;specific to the language&lt;/strong&gt; you’re using.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Universal Files &amp;amp; Folders (common in all CDK projects)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.git/&lt;/code&gt; → Git repo for version control (if Git is installed).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.gitignore&lt;/code&gt; → Tells Git which files to ignore.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;README.md&lt;/code&gt; → Basic instructions for your project (you can update it).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cdk.json&lt;/code&gt; → Configuration for CDK (how to run the app).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Python-Specific Files &amp;amp; Folders&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.venv/&lt;/code&gt; → Python virtual environment (for dependencies).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;app.py&lt;/code&gt; → The entry point of your CDK app (like main() in a program).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;my_cdk_project/&lt;/code&gt; → Folder with your stack files (where you define resources).

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;__init__.py&lt;/code&gt; → Makes it a Python package.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;*_stack.py&lt;/code&gt; → Defines your AWS resources.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;requirements.txt&lt;/code&gt; → Lists your Python dependencies.&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;requirements-dev.txt&lt;/code&gt; → Extra dependencies for development only.&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;source.bat&lt;/code&gt; → Helper script for Windows users.&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;tests/&lt;/code&gt; → Python unit tests for your stacks.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Constructs&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Constructs&lt;/strong&gt; are the &lt;strong&gt;basic building blocks&lt;/strong&gt; of an AWS CDK application.&lt;br&gt;
Think of them as &lt;strong&gt;Lego blocks&lt;/strong&gt;: each block represents one or more AWS resources (like an S3 bucket, a Lambda function, or a VPC) along with its configuration.&lt;/p&gt;

&lt;p&gt;When you build a CDK app, you’re really just &lt;strong&gt;putting together constructs&lt;/strong&gt; piece by piece to describe your infrastructure.&lt;/p&gt;
&lt;h4&gt;
  
  
  Types / Levels of Constructs
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;th&gt;When to Use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;L1&lt;/td&gt;
&lt;td&gt;Direct CloudFormation resources (lowest-level)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;s3.CfnBucket&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;For full control or new AWS features not yet in L2/L3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L2&lt;/td&gt;
&lt;td&gt;AWS CDK abstractions with defaults and helpers&lt;/td&gt;
&lt;td&gt;&lt;code&gt;s3.Bucket&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Everyday usage (balance of flexibility and ease)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L3&lt;/td&gt;
&lt;td&gt;Pre-built patterns (multiple resources together)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;apigateway.LambdaRestApi&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Quick solutions for common use cases&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Apps&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;An &lt;strong&gt;App&lt;/strong&gt; in AWS CDK is like the &lt;strong&gt;top-level container&lt;/strong&gt; for everything you build.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;App&lt;/strong&gt; can have one or many &lt;strong&gt;Stacks&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Each &lt;strong&gt;Stack&lt;/strong&gt; is a group of AWS resources.&lt;/li&gt;
&lt;li&gt;Each resource is created using &lt;strong&gt;Constructs&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;App → Stack(s) → Construct(s) → AWS Resource(s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Think of it like a house:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;App&lt;/strong&gt; = the whole house.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Stacks&lt;/strong&gt; = rooms in the house (living room, kitchen, bedroom).&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Constructs&lt;/strong&gt; = furniture in each room (sofa, fridge, bed).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example: Simple CDK App in Python&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class MyFirstStack(cdk.Stack):
    def __init__(self, scope: cdk.App, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Construct: An S3 bucket
        s3.Bucket(self, "MyFirstBucket",
                  versioned=True,
                  removal_policy=cdk.RemovalPolicy.DESTROY)

app = cdk.App()                        # App (the whole house)
MyFirstStack(app, "MyFirstStack")      # Stack (a room in the house)
app.synth()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What’s Happening Here?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;App&lt;/strong&gt; → &lt;code&gt;cdk.App()&lt;/code&gt; is the root of your project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stack&lt;/strong&gt; → &lt;code&gt;MyFirstStack&lt;/code&gt; groups resources into one deployable unit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Construct&lt;/strong&gt; → &lt;code&gt;s3.Bucket&lt;/code&gt; is the actual AWS resource created inside the stack.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you deploy this app, AWS CDK creates a &lt;strong&gt;CloudFormation stack&lt;/strong&gt; with &lt;strong&gt;one S3 bucket&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;AWS CDK Stacks&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Stack&lt;/strong&gt; in AWS CDK is the &lt;strong&gt;smallest unit you can deploy&lt;/strong&gt;.&lt;br&gt;
It’s basically a &lt;strong&gt;box that holds a group of AWS resources&lt;/strong&gt; (like an S3 bucket, a Lambda function, or a VPC).&lt;/p&gt;

&lt;p&gt;When you deploy a stack, all the resources inside it are created &lt;strong&gt;together&lt;/strong&gt; in AWS as a &lt;strong&gt;CloudFormation&lt;/strong&gt; stack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you delete the stack → all the resources inside it are deleted.&lt;/li&gt;
&lt;li&gt;If you update the stack → AWS updates those resources as a group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example: A Simple CDK Stack (Python)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class MyStorageStack(cdk.Stack):
    def __init__(self, scope: cdk.App, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Construct: An S3 bucket
        s3.Bucket(self, "MyStorageBucket",
                  versioned=True,
                  removal_policy=cdk.RemovalPolicy.DESTROY)

app = cdk.App()
MyStorageStack(app, "MyStorageStack")
app.synth()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;AWS CDK Stages&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Stage&lt;/strong&gt; in AWS CDK is a way to group &lt;strong&gt;one or more stacks&lt;/strong&gt; together and treat them as a &lt;strong&gt;single deployment unit&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Stages are especially useful when you want to promote your infrastructure through &lt;strong&gt;different environments&lt;/strong&gt; like &lt;strong&gt;Dev → Test → Prod&lt;/strong&gt;. Instead of deploying stacks one by one, you wrap them inside a stage and deploy the stage as a whole.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Stage with Multiple Stacks (Python)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;stacks/app_stack.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_cdk import Stack
from constructs import Construct

# Define the app stack
class AppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -&amp;gt; None:
        super().__init__(scope, construct_id, **kwargs)

        # The code that defines your application goes here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;stacks/database_stack.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_cdk import Stack
from constructs import Construct

# Define the database stack
class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -&amp;gt; None:
        super().__init__(scope, construct_id, **kwargs)

        # The code that defines your database goes here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;my_stage.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_cdk import Stage
from constructs import Construct
from .app_stack import AppStack
from .database_stack import DatabaseStack

# Define the stage
class MyAppStage(Stage):
    def __init__(self, scope: Construct, id: str, **kwargs) -&amp;gt; None:
        super().__init__(scope, id, **kwargs)

        # Add both stacks to the stage
        AppStack(self, "AppStack")
        DatabaseStack(self, "DatabaseStack")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;app.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
import os

import aws_cdk as cdk

from cdk_demo_app.my_stage import MyAppStage

#  Create a CDK app
app = cdk.App()

# Create the development stage
MyAppStage(app, 'Dev',
           env=cdk.Environment(account='123456789012', region='us-east-1'),
           )

# Create the production stage
MyAppStage(app, 'Prod',
           env=cdk.Environment(account='098765432109', region='us-east-1'),
           )

app.synth()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;That’s a wrap on the &lt;strong&gt;beginner’s guide to AWS CDK&lt;/strong&gt;—we covered what CDK is, how projects are structured, and the core pieces (apps → stacks → constructs → resources), with simple examples to get you moving.&lt;/p&gt;

&lt;p&gt;Next up, I’ll dive into &lt;strong&gt;real-world setups&lt;/strong&gt;: building and deploying &lt;strong&gt;multi-stack&lt;/strong&gt; apps across &lt;strong&gt;multiple environments&lt;/strong&gt; (Dev/Test/Prod) and setting up a &lt;strong&gt;pipeline&lt;/strong&gt; using AWS CDK with &lt;strong&gt;Python&lt;/strong&gt;. We’ll also touch on cross-stack references, environment configs, and CDK Pipelines basics.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscdk</category>
    </item>
    <item>
      <title>Building Brick Breaker with Amazon Q: AI-Powered Retro Game Development</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Tue, 10 Jun 2025 23:25:34 +0000</pubDate>
      <link>https://dev.to/luffy7258/building-brick-breaker-with-amazon-q-ai-powered-retro-game-development-7l</link>
      <guid>https://dev.to/luffy7258/building-brick-breaker-with-amazon-q-ai-powered-retro-game-development-7l</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Why Am I Building a Game?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Inspired by the &lt;a href="https://community.aws/content/2y6egGcPAGQs8EwtQUM9KAONojz/build-games-challenge-build-classics-with-amazon-q-developer-cli" rel="noopener noreferrer"&gt;Build Games Challenge&lt;/a&gt;, I wanted to take a hands-on approach to rediscover the joy of classic game development. It’s more than nostalgia — it’s a fun and practical way to learn modern tools like the Amazon Q Developer CLI. Building a retro game gives me a focused, creative playground to explore coding, problem-solving, and AI-driven development workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why I Chose Brick Breaker&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I selected Brick Breaker because it brought back memories of my first phone, the iconic Nokia 3310. Snake and Brick Breaker were two of the games I would play on it for hours on end. They were simple, they were addictive, and clearing all the bricks felt genuinely satisfying. Rebuilding it now with today’s technology and AI support feels like a tribute to those good old days of gaming.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How I Got the Most Out of Amazon Q (and Had Fun Doing It)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Working with AI, I quickly learned that well-defined, carefully worded prompts yield the most precise results. Whenever I explicitly specified the aim, structure, and context of a task, the output was considerably more accurate. The most effective prompt I gave was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create a complete Brick Breaker game in Python using PyGame.
It should include a paddle controlled by left/right arrow keys, a bouncing ball, a grid of breakable bricks, game over when the ball falls, and a score tracker.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Why did it succeed?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It clearly defines the language, library, core mechanics, and game flow. Keeping the prompt well-structured yet concise allowed Q's answers to be clean and ready to run. I also found it valuable to request incremental enhancements through follow-up prompts such as "Create multiple levels with increasing ball speed and more bricks" or "Add a sound effect for the ball collision with the brick", which enriched the game without changing its core flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How AI Solved Classic Game Dev Challenges&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Challenge&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Before Amazon Q&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;With Amazon Q&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Collision Detection&lt;/td&gt;
&lt;td&gt;Needed manual bounding-box logic; had bugs when deleting bricks while iterating&lt;/td&gt;
&lt;td&gt;Created safe iteration using &lt;code&gt;bricks[:]&lt;/code&gt;, handled direction of bounce of ball correctly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Angle-Based Ball Bounce&lt;/td&gt;
&lt;td&gt;Needed manual math to compute the bounce angle from the paddle hit location&lt;/td&gt;
&lt;td&gt;Automatically generated offset-based reflection logic with &lt;code&gt;ball.centerx - paddle.centerx&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Smoother Paddle Controls&lt;/td&gt;
&lt;td&gt;
Event-based &lt;code&gt;KEYDOWN&lt;/code&gt;/&lt;code&gt;KEYUP&lt;/code&gt; handling led to laggy or unresponsive paddles&lt;/td&gt;
&lt;td&gt;Used &lt;code&gt;pygame.key.get_pressed()&lt;/code&gt; with &lt;code&gt;clock.tick()&lt;/code&gt; for smooth input&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Power-Ups (such as additional ball)&lt;/td&gt;
&lt;td&gt;Fall behavior, randomness, collision, and reset were difficult to implement&lt;/td&gt;
&lt;td&gt;Created random drop logic, tracking through lists, and paddle collision interaction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Brick Level &amp;amp; Layout Design&lt;/td&gt;
&lt;td&gt;Tedious coordinate math and repetitive manual code&lt;/td&gt;
&lt;td&gt;Generated a grid-based placement function and even multi-level transitions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sound Effects&lt;/td&gt;
&lt;td&gt;Manual loading, event hooking, and format support needed&lt;/td&gt;
&lt;td&gt;Q properly added bounce and brick-hit sounds with &lt;code&gt;pygame.mixer.Sound()&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;High Score Saving&lt;/td&gt;
&lt;td&gt;Required file I/O with error handling (JSON or Pickle)&lt;/td&gt;
&lt;td&gt;Implemented a persistent &lt;code&gt;highscore.json&lt;/code&gt; with load/save functions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Game Over Screen &amp;amp; Restart&lt;/td&gt;
&lt;td&gt;Needed game-loop pause and state-reset logic&lt;/td&gt;
&lt;td&gt;Q implemented game over UI, &lt;code&gt;Press R to restart&lt;/code&gt; functionality, and safe initialization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Asset Management&lt;/td&gt;
&lt;td&gt;Missing file errors or inconsistent file paths&lt;/td&gt;
&lt;td&gt;Automated creation of &lt;code&gt;assets/&lt;/code&gt; directory, generated asset-safe code&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
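
&lt;p&gt;To make a couple of the patterns above concrete, here is a minimal, hypothetical sketch (not Q's actual output) of the safe-iteration and offset-based bounce ideas, using a tiny stand-in for &lt;code&gt;pygame.Rect&lt;/code&gt; so the logic runs even without PyGame installed:&lt;/p&gt;

```python
# Hypothetical sketch of two patterns from the table above (not Q's actual code).
# A tiny stand-in Rect lets the logic run and be tested without PyGame.

class Rect:
    """Minimal stand-in for pygame.Rect with only the fields used here."""

    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    @property
    def centerx(self):
        return self.x + self.w // 2

    def colliderect(self, other):
        # Positive overlap on both axes means the rectangles intersect.
        overlap_x = min(self.x + self.w, other.x + other.w) - max(self.x, other.x)
        overlap_y = min(self.y + self.h, other.y + other.h) - max(self.y, other.y)
        return bool(max(0, overlap_x) and max(0, overlap_y))


def handle_brick_collisions(ball, bricks):
    """Iterate over a copy (bricks[:]) so bricks.remove() is safe mid-loop."""
    hits = 0
    for brick in bricks[:]:
        if ball.colliderect(brick):
            bricks.remove(brick)
            hits += 1
    return hits


def paddle_bounce_dx(ball, paddle, max_speed=6):
    """Offset-based reflection: where the ball hits the paddle sets its x velocity."""
    offset = ball.centerx - paddle.centerx  # negative on the left half, positive on the right
    return max(-max_speed, min(max_speed, int(offset / 8)))
```

&lt;p&gt;In the real game these functions would operate on actual &lt;code&gt;pygame.Rect&lt;/code&gt; objects inside the main loop; the slice copy &lt;code&gt;bricks[:]&lt;/code&gt; is what makes &lt;code&gt;bricks.remove(brick)&lt;/code&gt; safe while iterating.&lt;/p&gt;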

&lt;h2&gt;
  
  
  &lt;strong&gt;Automation That Saved Time&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To my surprise, the entire game took only 30 minutes to build. Amazon Q produced the minimum playable version of Brick Breaker, with paddle control, ball bounce, and bricks, from a single prompt. I didn't type a single line of code; instead I spent my time deciding which features to add next. Q's speed at coding and debugging let me iterate on ideas quickly and spend more time on creative enhancements rather than boilerplate logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Screenshots &amp;amp; Gameplay&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s what the final game looked like:&lt;/p&gt;

&lt;p&gt;Game Start&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0sckivxj02lmpvamudw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0sckivxj02lmpvamudw.png" alt="Image description" width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ball Hitting Brick&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvd4dr8n1ubjrnwt8p6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvd4dr8n1ubjrnwt8p6y.png" alt="Image description" width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Game Over Screen&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fels4d9rwnrbj2itcly01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fels4d9rwnrbj2itcly01.png" alt="Image description" width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I am genuinely impressed by the Amazon Q Developer CLI — it is far more than just an autocompleter. It felt like having an AI pair programmer by my side. I was struck by its ability to think through abstract game logic, refactor my code on the fly, and provide thoughtful details like restart prompts and sound effects. Whether you are working on a retro mini-game or a larger prototype, Q has been a massive help for me: it lets me work quickly while still maintaining quality.&lt;/p&gt;

&lt;p&gt;🔗 Github Repo : &lt;a href="https://github.com/raju7258/brick-breaker" rel="noopener noreferrer"&gt;https://github.com/raju7258/brick-breaker&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>gamedev</category>
    </item>
    <item>
      <title>aws</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Tue, 06 May 2025 09:57:05 +0000</pubDate>
      <link>https://dev.to/luffy7258/aws-4hfg</link>
      <guid>https://dev.to/luffy7258/aws-4hfg</guid>
      <description></description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>My Experience Taking the AWS Solution Architect Exam: What’s New, What’s Harder, and Why It Still Matters</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Tue, 08 Apr 2025 17:55:04 +0000</pubDate>
      <link>https://dev.to/luffy7258/my-experience-taking-the-aws-solution-architect-exam-whats-new-whats-harder-and-why-it-still-1pp1</link>
      <guid>https://dev.to/luffy7258/my-experience-taking-the-aws-solution-architect-exam-whats-new-whats-harder-and-why-it-still-1pp1</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;I. Introduction: Why I’m Writing and Why It Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;So, I’ve recently renewed my AWS Certified Solutions Architect – Professional, and I’ve got to tell you — it’s not what it used to be. In this article, I want to share what’s new, what caught me off guard, and why I still believe it’s totally worth the time and effort. Whether you're renewing or tackling it for the first time, this might just save you a headache or two.&lt;/p&gt;

&lt;p&gt;Main Points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The need to stay current in a fast-changing cloud world.&lt;/li&gt;
&lt;li&gt;Why renewing every two years keeps your skills sharp and industry-relevant.&lt;/li&gt;
&lt;li&gt;AWS certifications have a real impact on how you approach and design solutions, so they're more than just a resume boost.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;II. Exam Format: More Questions, More Time, More Focus Needed&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Overview:&lt;br&gt;
Let’s start with the most obvious change — the number of questions and the overall exam structure. It’s a small shift on paper, but a big difference in reality.&lt;/p&gt;

&lt;p&gt;Key Points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The exam now includes &lt;strong&gt;75 questions&lt;/strong&gt;, up from the previous 65 questions.&lt;/li&gt;
&lt;li&gt;The duration has been extended to &lt;strong&gt;180 minutes&lt;/strong&gt; to match the added complexity.&lt;/li&gt;
&lt;li&gt;Increased mental fatigue — requires deeper focus and time management.&lt;/li&gt;
&lt;li&gt;Tips from my experience: pacing strategies, how I flagged questions, and how I dealt with tricky scenarios.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;III. SAP-C02: An In-Depth Look at Advanced Services&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The new AWS Certified Solutions Architect – Professional exam (SAP-C02) digs deeper into services you may not use on a daily basis — though you probably should. It pushes you to think outside the box and consider architecture from a broader, more modern perspective.&lt;/p&gt;

&lt;p&gt;Main Points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There’s a stronger focus on Analytics, Machine Learning, Containerization, and Application Integration.&lt;/li&gt;
&lt;li&gt;It challenges you to think not just in silos, but in terms of how these services work together to build something scalable and efficient.&lt;/li&gt;
&lt;li&gt;I’ll walk you through what I studied, what surprised me, and how real-world experience with these services gave me a serious edge.&lt;/li&gt;
&lt;li&gt;Some of the best prep resources for me: AWS whitepapers, hands-on labs, and a few workshop links that were absolute game-changers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Workshop Reference Link : &lt;a href="https://workshops.aws/" rel="noopener noreferrer"&gt;https://workshops.aws/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;IV. Changes in the Process of Renewal: What to Consider&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you're renewing your AWS certification, there are some updated rules you’ll want to keep on your radar. The criteria around what gets renewed have shifted — and not necessarily in a way that benefits everyone.&lt;/p&gt;

&lt;p&gt;Main Points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At the moment, only the &lt;strong&gt;AWS Certified Solutions Architect – Associate&lt;/strong&gt; is renewed automatically when you pass the Professional exam.&lt;/li&gt;
&lt;li&gt;Previously, this renewal would also cover certs like &lt;strong&gt;AWS Certified DevOps Engineer - Professional, AWS Certified SysOps Administrator - Associate, and AWS Certified Developer - Associate&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;So what does that mean for you? Well, depending on your job title or long-term goals, you might need to take on multiple exams just to keep everything active.&lt;/li&gt;
&lt;li&gt;I’ll also drop a few suggestions that helped me stay on top of renewals — &lt;strong&gt;without burning out or overloading myself&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reference Link : &lt;a href="https://aws.amazon.com/certification/recertification/" rel="noopener noreferrer"&gt;https://aws.amazon.com/certification/recertification/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;V. My Proposal: We Should Make These Exams More Practical&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After spending years working hands-on with AWS services, I’ve come to believe that these exams should do more than test how well you can pick the right answer from a list. They should test how well you can actually do the job.&lt;/p&gt;

&lt;p&gt;Main Points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It would be a game-changer if AWS introduced practical troubleshooting tasks or interactive lab-style challenges.&lt;/li&gt;
&lt;li&gt;Sure, it would raise the difficulty — but it would also build real-world skills instead of just theoretical knowledge.&lt;/li&gt;
&lt;li&gt;This shift would be especially valuable for the AWS Certified Solutions Architect – Professional and AWS Certified DevOps Engineer - Professional exams.&lt;/li&gt;
&lt;li&gt;A few ideas? Think drag-and-drop architecture design, fixing broken deployments, or tracking down permission errors in IAM under pressure.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>certification</category>
    </item>
    <item>
      <title>AWS AFT: Automating AWS Account Management – Benefits, Challenges, and Lessons Learned</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Wed, 05 Feb 2025 13:41:13 +0000</pubDate>
      <link>https://dev.to/luffy7258/aws-aft-automating-aws-account-management-benefits-challenges-and-lessons-learned-2m9e</link>
      <guid>https://dev.to/luffy7258/aws-aft-automating-aws-account-management-benefits-challenges-and-lessons-learned-2m9e</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding AWS AFT: A Practical Perspective&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS AFT, or AWS Account Factory for Terraform, is a powerful tool designed to simplify the provisioning and management of AWS accounts. Built on top of AWS Control Tower, it streamlines enterprise account management using Terraform. By adopting infrastructure-as-code principles, AWS AFT automates the creation and configuration of accounts based on predefined templates and policies, ensuring consistency, governance, and compliance across an organization.&lt;/p&gt;

&lt;p&gt;AWS AFT integrates seamlessly with Terraform, a popular open-source infrastructure-as-code tool, allowing users to define, update, and manage their cloud resources in a declarative way. Enterprises often find this integration particularly valuable as it bridges the gap between AWS account governance and DevOps practices, fostering efficiency and scalability.&lt;/p&gt;
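
&lt;p&gt;To make this concrete, here is a rough, hypothetical sketch of how a new account is requested through AFT. The field names follow the documented schema of the &lt;code&gt;aft-account-request&lt;/code&gt; module, but treat this as illustrative only — every value below is a placeholder, and the module source path will differ in your setup:&lt;/p&gt;

```hcl
# Illustrative AFT account request (all values are placeholders).
module "sandbox_account" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "sandbox@example.com"
    AccountName               = "sandbox"
    ManagedOrganizationalUnit = "Sandbox"
    SSOUserEmail              = "owner@example.com"
    SSOUserFirstName          = "Platform"
    SSOUserLastName           = "Team"
  }

  account_tags = {
    "environment" = "sandbox"
  }

  change_management_parameters = {
    change_requested_by = "platform-team"
    change_reason       = "initial account provisioning"
  }
}
```

&lt;p&gt;Committing a request like this to the account-request repository is what triggers AFT's pipeline to provision and baseline the new account.&lt;/p&gt;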

&lt;h2&gt;
  
  
  &lt;strong&gt;Why AWS AFT Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Managing multiple AWS accounts manually can be tedious and prone to errors. AWS AFT helps solve these issues by automating and standardizing account management. Here’s why enterprises find it valuable:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Simplifying Account Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Setting up and managing multiple AWS accounts for different teams or departments takes time. AWS AFT automates this process, reducing manual effort and ensuring every new account adheres to company policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Enforcing Consistency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Keeping configurations, security policies, and best practices consistent across multiple AWS accounts can be difficult. AWS AFT makes it easy by allowing organizations to define standard configurations that apply across all accounts, eliminating discrepancies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Strengthening Security &amp;amp; Compliance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For industries with strict security and compliance requirements, AWS AFT ensures every account follows necessary security standards automatically. This reduces the risk of misconfigurations and potential vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Scaling Without the Headache&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When managing dozens or even hundreds of AWS accounts, scaling becomes a challenge. AWS AFT makes this process much easier by automating tasks and reducing administrative overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How AWS AFT Fits in Enterprise Workflows&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS AFT isn’t just about account creation—it integrates deeply into enterprise operations, bringing several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Governance&lt;/strong&gt;: Works alongside AWS Control Tower for a unified governance model across multiple AWS accounts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increased Automation&lt;/strong&gt;: Reduces repetitive manual tasks, allowing DevOps teams to focus on more strategic initiatives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized Cost Management&lt;/strong&gt;: By enforcing standardized configurations and governance policies, organizations can control costs and avoid unnecessary spending.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With its ability to enhance automation while maintaining governance, AWS AFT is a must-have for enterprises looking to efficiently scale their AWS infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Challenges I Had to Overcome with AWS AFT&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While AWS AFT brings immense value, I encountered a few roadblocks along the way. Here are some of the key challenges and how I tackled them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Lack of Clear Documentation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although AWS AFT is a well-structured tool, I found that its documentation lacked detail, making it challenging to troubleshoot issues. I often had to rely on trial and error or engage with the GitHub community to find solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Workspaces Assigned to Default Projects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One unexpected issue I faced was AWS AFT creating Terraform workspaces under the default Terraform Enterprise (TFE) project instead of my designated ones. This required additional manual intervention or modifications to the AFT code to ensure workspaces were properly assigned for better visibility and control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. No Automated Deletion Process&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS AFT doesn’t include a built-in method to automatically delete Terraform workspaces, AWS accounts, or related resources. This meant manually tracking and cleaning up resources to avoid unnecessary costs and clutter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Lack of Third-Party Network Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS AFT lacks built-in integration with external networking components. Organizations using third-party networking solutions had to implement additional customization and logic to bridge this gap effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS AFT is a game-changer for enterprises looking to manage AWS accounts efficiently while maintaining compliance and security. By automating account provisioning, enforcing best practices, and improving scalability, it significantly reduces the operational burden on cloud teams.&lt;/p&gt;

&lt;p&gt;However, like any tool, it comes with challenges. Addressing documentation gaps, workspace assignments, and lack of automated cleanup can help make the implementation smoother. With the right workarounds and best practices, AWS AFT can become an indispensable part of an organization’s cloud automation and governance strategy.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A Practical Comparison of Terraform Organization and Team Tokens</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Wed, 22 Jan 2025 12:20:36 +0000</pubDate>
      <link>https://dev.to/luffy7258/a-practical-comparison-of-terraform-organization-and-team-tokens-17n4</link>
      <guid>https://dev.to/luffy7258/a-practical-comparison-of-terraform-organization-and-team-tokens-17n4</guid>
      <description>&lt;p&gt;When managing access and resources in Terraform, the distinction between &lt;strong&gt;Organization Tokens&lt;/strong&gt; and &lt;strong&gt;Team Tokens&lt;/strong&gt; can seem subtle but is critical to efficient and secure operations. I’ll admit, I only noticed the differences recently when I encountered an issue that left me scratching my head. A routine task in Terraform failed unexpectedly, and after some digging, I realized it was due to confusion between the scope of an organization token and a team token. This experience taught me how crucial it is to understand these tokens to avoid similar pitfalls.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll break down the differences between these two types of tokens, explore their use cases, and provide practical insights to help you make the right choice for your Terraform workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Terraform Tokens?
&lt;/h2&gt;

&lt;p&gt;Tokens in Terraform are access keys used for API authentication and authorization. They act as a way to securely interact with Terraform Enterprise or Cloud, enabling operations ranging from managing resources to initiating runs. By using tokens, you can automate tasks without relying on user-specific credentials, improving security and scalability.&lt;/p&gt;

&lt;p&gt;Terraform provides different types of tokens to cater to various levels of access: organization tokens and team tokens. Understanding their distinct scopes and use cases is essential to managing your infrastructure effectively.&lt;/p&gt;
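
&lt;p&gt;For context, a token of either type is supplied to Terraform as a bearer credential. A minimal sketch of the CLI configuration file (&lt;code&gt;~/.terraformrc&lt;/code&gt; on Unix; the token value is a placeholder):&lt;/p&gt;

```hcl
# ~/.terraformrc — authenticates the Terraform CLI against
# Terraform Cloud / Enterprise. The token string is a placeholder.
credentials "app.terraform.io" {
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzzzzz"
}
```

&lt;p&gt;The same token can instead be exported as the &lt;code&gt;TF_TOKEN_app_terraform_io&lt;/code&gt; environment variable, or sent as an &lt;code&gt;Authorization: Bearer&lt;/code&gt; header when calling the API directly.&lt;/p&gt;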

&lt;h2&gt;
  
  
  What Are Organization Tokens?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Organization Tokens&lt;/strong&gt; are designed for administrative tasks that apply to the entire organization. They provide access to organization-wide settings and resources, such as managing teams, workspaces, and policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Characteristics of Organization Tokens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scope: Grants access to the entire organization.&lt;/li&gt;
&lt;li&gt;Use Cases: Ideal for high-level administrative tasks, like creating teams, configuring organization settings, and managing workspaces.&lt;/li&gt;
&lt;li&gt;Management: Can only be created or revoked by members of the "owners" team.&lt;/li&gt;
&lt;li&gt;Permissions: Cannot initiate runs or create configuration versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security Considerations:&lt;/strong&gt;&lt;br&gt;
Due to their extensive access, organization tokens must be carefully managed. Limit their use to trusted administrators, rotate them regularly, and store them securely to minimize security risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Team Tokens?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Team Tokens&lt;/strong&gt;, on the other hand, are scoped to specific teams and provide access only to the workspaces that the team is authorized to manage. Unlike organization tokens, team tokens are not tied to an individual user but instead to the team as a whole.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Characteristics of Team Tokens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scope: Limited to specific workspaces assigned to the team.&lt;/li&gt;
&lt;li&gt;Use Cases: Ideal for performing API operations on workspaces, such as queuing plans and applying runs.&lt;/li&gt;
&lt;li&gt;Management: Each team can have one active token, which can be generated or revoked by team members or restricted to organization owners.&lt;/li&gt;
&lt;li&gt;Permissions: Inherits the permissions of the team, enabling actions like starting runs and managing configurations if allowed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security Considerations:&lt;/strong&gt;&lt;br&gt;
Since team tokens are often used for operational tasks, ensure they are issued only to teams with specific workspace access. Review token usage regularly and disable unused tokens to reduce potential vulnerabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Differences Between Organization and Team Tokens
&lt;/h2&gt;

&lt;p&gt;Here’s a quick comparison to highlight the key differences:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Organization Tokens&lt;/th&gt;
&lt;th&gt;Team Tokens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Scope&lt;/td&gt;
&lt;td&gt;Organization-wide; access to all organization settings.&lt;/td&gt;
&lt;td&gt;Team-specific; limited to assigned workspaces.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use Cases&lt;/td&gt;
&lt;td&gt;Administrative tasks like creating teams and configuring policies.&lt;/td&gt;
&lt;td&gt;Workspace-specific operations like queuing plans.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Management&lt;/td&gt;
&lt;td&gt;Managed only by the "owners" team.&lt;/td&gt;
&lt;td&gt;Can be managed by team members (or restricted).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Permissions&lt;/td&gt;
&lt;td&gt;Cannot start runs or modify configurations.&lt;/td&gt;
&lt;td&gt;Can perform actions allowed by team permissions.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use Organization Tokens vs. Team Tokens
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Organization Tokens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For creating and managing teams and organization-level settings.&lt;/li&gt;
&lt;li&gt;For configuring organization-wide policies and governance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Team Tokens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For API operations at the workspace level.&lt;/li&gt;
&lt;li&gt;For shared access among team members managing specific resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choosing the right token type depends on the task’s scope. For broader administrative access, organization tokens are essential, while team tokens are better suited for granular, team-specific operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Managing Terraform Tokens
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Adhere to the Principle of Least Privilege: Only grant the access necessary for a task.&lt;/li&gt;
&lt;li&gt;Securely Store Tokens: Use tools like HashiCorp Vault or environment variables to store tokens securely.&lt;/li&gt;
&lt;li&gt;Rotate Tokens Regularly: Replace tokens periodically to reduce the risk of misuse.&lt;/li&gt;
&lt;li&gt;Audit Token Usage: Monitor logs to detect unauthorized or unusual token usage.&lt;/li&gt;
&lt;li&gt;Restrict Token Management Permissions: Limit who can generate or revoke tokens to prevent accidental or malicious changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Understanding the differences between Terraform organization and team tokens can save you from potential issues and ensure smoother workflows. Organization tokens are your go-to for administrative tasks, while team tokens excel in managing workspace-specific operations. By choosing the right token type for each scenario and adhering to security best practices, you can optimize your Terraform workflows and maintain robust access control.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>AWS Terraform Provider v6.0.0 Changes: Overwrite Argument Deprecated for SSM Parameter Store</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Fri, 17 Jan 2025 17:09:57 +0000</pubDate>
      <link>https://dev.to/luffy7258/terraform-v60-and-v50-changes-overwrite-argument-deprecated-for-ssm-parameter-store-1i76</link>
      <guid>https://dev.to/luffy7258/terraform-v60-and-v50-changes-overwrite-argument-deprecated-for-ssm-parameter-store-1i76</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Terraform is a powerful tool for managing AWS infrastructure as code (IaC). Many developers rely on the &lt;code&gt;overwrite&lt;/code&gt; argument in the AWS SSM Parameter Store resource to manage configurations and secrets efficiently while avoiding the pitfalls of hardcoding sensitive values. The &lt;code&gt;overwrite&lt;/code&gt; argument has been a convenient way to update existing parameters automatically, streamlining workflows and reducing operational overhead.&lt;/p&gt;

&lt;p&gt;However, the &lt;code&gt;overwrite&lt;/code&gt; argument is deprecated and will be removed in AWS Terraform Provider v6.0.0, which has not yet been released at the time of writing; the deprecation was first announced in the v5 upgrade guide to give users time to prepare. This change introduces challenges for teams with extensive configurations relying on this feature, requiring them to refactor their code and adopt alternative approaches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7jlnzv0w1l6tqpj3pff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7jlnzv0w1l6tqpj3pff.png" alt="Image description" width="678" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're planning to upgrade to AWS Terraform Provider v6.0.0 in the future, understanding the implications of this change is crucial. In this article, we’ll explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The role of the &lt;code&gt;overwrite&lt;/code&gt; argument and the reasons behind its deprecation.&lt;/li&gt;
&lt;li&gt;The challenges this change introduces for Terraform users.&lt;/li&gt;
&lt;li&gt;Workarounds and strategies to manage parameter updates effectively in AWS Terraform Provider v6.0.0.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Role of the Overwrite Argument and the Reasons Behind Its Deprecation
&lt;/h2&gt;

&lt;p&gt;The deprecation of the &lt;code&gt;overwrite&lt;/code&gt; argument in AWS Terraform Provider v6.0.0 aims to align the AWS SSM Parameter Store resource with Terraform's standard practices. Previously, the &lt;code&gt;overwrite&lt;/code&gt; argument allowed two behaviors that are now considered problematic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adopting existing resources during creation ("import-on-create").&lt;/li&gt;
&lt;li&gt;Skipping updates without lifecycle rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By removing the &lt;code&gt;overwrite&lt;/code&gt; argument, Terraform encourages the use of explicit imports and lifecycle rules, providing better clarity and control over resource management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Import on Create&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Current Behavior&lt;/strong&gt;:&lt;br&gt;
Setting &lt;code&gt;overwrite = true&lt;/code&gt; automatically adopts existing parameters created outside Terraform. This implicit behavior bypasses Terraform’s explicit resource import process, potentially causing unintentional changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Future Behavior&lt;/strong&gt;:&lt;br&gt;
In AWS Terraform Provider v6.0.0, &lt;code&gt;overwrite&lt;/code&gt; will be removed, and the provider will default to treating all create operations as new (&lt;code&gt;overwrite = false&lt;/code&gt;). To manage existing resources, you’ll need to explicitly import them using the CLI or an import block.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  to = aws_ssm_parameter.example
  id = "/some/param"
}

resource "aws_ssm_parameter" "example" {
  name = "/some/param"
  # (other arguments...)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Skipping Updates Without a Lifecycle Rule&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Current Behavior&lt;/strong&gt;:&lt;br&gt;
Setting &lt;code&gt;overwrite = false&lt;/code&gt; skips updates when changes occur, without needing lifecycle rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Future Behavior&lt;/strong&gt;:&lt;br&gt;
In AWS Terraform Provider v6.0.0, this behavior will no longer be possible. The &lt;code&gt;overwrite&lt;/code&gt; argument will be removed, and the provider will default to always allowing updates (&lt;code&gt;overwrite = true&lt;/code&gt;). To skip specific updates, you’ll need to use the &lt;code&gt;ignore_changes&lt;/code&gt; lifecycle rule.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ssm_parameter" "example" {
  name  = "/some/param"
  type  = "String"
  value = "foo"

  lifecycle {
    ignore_changes = [value]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Changes Introduced in AWS Terraform Provider v5.0.0&lt;/strong&gt;&lt;br&gt;
The deprecation of the &lt;code&gt;overwrite&lt;/code&gt; argument began in AWS Terraform Provider v5.0.0 to give users time to adjust their configurations before its full removal in v6.0.&lt;/p&gt;

&lt;p&gt;How to Prepare for v6.0:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace &lt;code&gt;overwrite = true&lt;/code&gt; with an explicit import block for managing existing resources.&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;overwrite = false&lt;/code&gt; with an &lt;code&gt;ignore_changes&lt;/code&gt; lifecycle block to skip updates.&lt;/li&gt;
&lt;li&gt;Until v6.0, keep &lt;code&gt;overwrite = true&lt;/code&gt; if your setup relies on it for updates. Removing it prematurely in v5.x will default it to &lt;code&gt;false&lt;/code&gt;, potentially causing configuration issues.&lt;/li&gt;
&lt;/ul&gt;
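
&lt;p&gt;As a rough sketch of the first step (the resource name and parameter path here are illustrative, not from a real configuration), a v5.x setup that relies on &lt;code&gt;overwrite = true&lt;/code&gt; could migrate as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Before (v5.x): overwrite lets Terraform update a pre-existing parameter
resource "aws_ssm_parameter" "app_config" {
  name      = "/app/config"
  type      = "String"
  value     = "v2"
  overwrite = true
}

# After (v6.0-ready): bring the existing parameter under management
# with an explicit import block, then drop the overwrite argument
import {
  to = aws_ssm_parameter.app_config
  id = "/app/config"
}

resource "aws_ssm_parameter" "app_config" {
  name  = "/app/config"
  type  = "String"
  value = "v2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After one successful apply, the import block can be removed; the parameter is then tracked in state like any other resource.&lt;/p&gt;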
&lt;h2&gt;
  
  
  The Challenges This Change Introduces for Terraform Users
&lt;/h2&gt;

&lt;p&gt;The removal of the &lt;code&gt;overwrite&lt;/code&gt; argument in AWS Terraform Provider v6.0.0 introduces significant challenges, especially for users who rely on it heavily in existing configurations. Although the argument was already deprecated in v5.0.0, its removal still requires code refactoring and alternative logic, such as explicit &lt;code&gt;import&lt;/code&gt; blocks or lifecycle rules, to manage resource updates and pre-existing parameters. Without proper preparation, upgrading may lead to broken pipelines, failed deployments, and increased complexity for DevOps teams; under tight deadlines, the transition can become time-intensive and stressful. While the update promotes best practices, it demands careful planning and testing to ensure a smooth migration.&lt;/p&gt;
&lt;h2&gt;
  
  
  Workarounds and strategies to handle parameter updates effectively in AWS Terraform Provider v6.0.0
&lt;/h2&gt;

&lt;p&gt;With the removal of the &lt;code&gt;overwrite&lt;/code&gt; argument in AWS Terraform Provider v6.0.0, managing updates to AWS SSM Parameter Store requires alternative approaches. Below is a workaround that mimics the behavior of &lt;code&gt;overwrite = true&lt;/code&gt;, ensuring parameters can be updated even when there are no changes to the parameter value. This method relies on using a dynamic resource to force updates when needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Using &lt;code&gt;random_id&lt;/code&gt; to Trigger Updates&lt;/strong&gt;&lt;br&gt;
The following configuration demonstrates how to manage parameter updates without the overwrite argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Store the notification email ID in SSM Parameter Store
resource "aws_ssm_parameter" "notification_group_email" {
  name  = "/path/to/ssm"
  type  = "String"
  value = var.ssm_value

  lifecycle {
    create_before_destroy = true
    replace_triggered_by = [ random_id.force_ssm_update_trigger ]
  }
}

# Dynamic resource to trigger updates
resource "random_id" "force_ssm_update_trigger" {
  byte_length = 8
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How This Works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;create_before_destroy&lt;/code&gt;:&lt;br&gt;
Ensures that a new parameter is created before the existing one is destroyed, avoiding downtime during updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;replace_triggered_by&lt;/code&gt;:&lt;br&gt;
Links the parameter to the &lt;code&gt;random_id&lt;/code&gt; resource. Any replacement of the &lt;code&gt;random_id&lt;/code&gt; resource forces a replacement of the parameter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;random_id&lt;/code&gt; resource:&lt;br&gt;
Acts as the update trigger. Its value stays stable across ordinary applies; forcing the resource to be replaced (for example by tainting it or using Terraform's &lt;code&gt;-replace&lt;/code&gt; option) generates a new value and causes the parameter to be updated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
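
&lt;p&gt;Because &lt;code&gt;random_id&lt;/code&gt; keeps its value across ordinary applies, forcing an update on demand means forcing the trigger resource itself to be replaced. One way to do that (a sketch; the resource address matches the example above) is Terraform's &lt;code&gt;-replace&lt;/code&gt; option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Force a new random_id; replace_triggered_by then replaces the parameter
terraform apply -replace="random_id.force_ssm_update_trigger"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;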

&lt;p&gt;&lt;strong&gt;Benefits of This Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mimics the behavior of &lt;code&gt;overwrite = true&lt;/code&gt; without requiring manual intervention.&lt;/li&gt;
&lt;li&gt;Provides control over when updates occur, even when the parameter value hasn’t changed.&lt;/li&gt;
&lt;li&gt;Ensures smooth updates with minimal disruption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more details, check the &lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/25636#issuecomment-1623661159" rel="noopener noreferrer"&gt;official discussion&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>ssm</category>
    </item>
    <item>
      <title>Optimizing Security and Efficiency in AWS Database Migration with DMS: Understanding Outbound-Only Connectivity</title>
      <dc:creator>Rajesh Murali Nair</dc:creator>
      <pubDate>Thu, 05 Dec 2024 22:54:30 +0000</pubDate>
      <link>https://dev.to/luffy7258/optimizing-security-and-efficiency-in-aws-database-migration-with-dms-understanding-outbound-only-1ooo</link>
      <guid>https://dev.to/luffy7258/optimizing-security-and-efficiency-in-aws-database-migration-with-dms-understanding-outbound-only-1ooo</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Migrating your databases to the AWS cloud using the Database Migration Service (DMS) is a smart choice for businesses seeking enhanced scalability, reliability, and cost-efficiency. When configuring your DMS instance, one crucial aspect to consider is network security. In this blog, we will explore why DMS instances require outbound-only connectivity, eliminating the need for incoming connections.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding DMS Connectivity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;DMS operates by reading data from a source database, processing it, and writing it to a target database. This migration process is designed to be an outbound-oriented operation. Here’s why DMS instances do not need inbound connections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Outbound-Only Operations&lt;/strong&gt;: DMS, as the name suggests, is a migration service. It’s responsible for transferring data from the source to the target database. This means that the DMS instance is the initiator of connections to both the source and target databases. In other words, it doesn’t need incoming connections from the outside to perform its core functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Security&lt;/strong&gt;: By limiting inbound connections to your DMS instance, you are significantly improving its security. You are reducing the attack surface, making it less vulnerable to potential threats. AWS and industry best practices recommend minimizing the exposure of resources by restricting inbound access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easier Security Management&lt;/strong&gt;: When working with multiple DMS instances or various services in your AWS environment, maintaining and managing security can become complex if you need to define and maintain inbound security rules for each service. With outbound-only connectivity, you simplify the security group and Network Access Control List (NACL) configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Regulatory Compliance&lt;/strong&gt;: Many organizations, especially those in regulated industries, have stringent compliance requirements that restrict or prohibit incoming connections to certain resources. By adhering to outbound-only connectivity, you can maintain compliance with these security policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Network Isolation&lt;/strong&gt;: By isolating your DMS instances from incoming connections, you reduce the risk of unintended access or breaches. This is particularly essential in sensitive or regulated environments where data security and isolation are paramount.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Configuring DMS for Outbound-Only Connectivity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To configure your DMS instance with outbound-only connectivity, you need to ensure that your security groups and network configurations are set up properly. While the DMS instance doesn’t require incoming connections, the source and target databases may require specific configurations to allow traffic from the DMS instance. This ensures that DMS can successfully read data from the source and write it to the target without exposing itself to unnecessary risks.&lt;/p&gt;
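
&lt;p&gt;As a minimal sketch of this setup (the VPC variable, group name, and CIDR range below are illustrative assumptions), the replication instance's security group can define egress rules only, with no ingress block at all:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Outbound-only security group for a DMS replication instance
resource "aws_security_group" "dms_outbound_only" {
  name   = "dms-outbound-only"
  vpc_id = var.vpc_id

  # No ingress rules: DMS initiates every connection itself

  egress {
    description = "Allow DMS to reach the source and target databases"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["10.0.0.0/16"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On the database side, the inbound rules would then allow traffic from this security group's ID on the database port, rather than exposing the databases more broadly.&lt;/p&gt;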

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When using the AWS Database Migration Service, understanding the necessity of outbound-only connectivity for your DMS instance is key to optimizing security and efficiency. By embracing this approach, you can minimize security risks, simplify management, adhere to compliance requirements, and ensure the success of your database migration to the AWS cloud. Outbound-only connectivity is a secure and effective way to leverage the power of DMS while safeguarding your resources.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dms</category>
      <category>migration</category>
      <category>database</category>
    </item>
  </channel>
</rss>
