<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Olawale Adepoju</title>
    <description>The latest articles on DEV Community by Olawale Adepoju (@olawde).</description>
    <link>https://dev.to/olawde</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1040091%2F90cab868-8103-4ce0-abfc-025e93cc1d01.png</url>
      <title>DEV Community: Olawale Adepoju</title>
      <link>https://dev.to/olawde</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/olawde"/>
    <language>en</language>
    <item>
      <title>Accelerating AI Innovation with the AWS Cloud Adoption Framework</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Fri, 16 Jan 2026 20:37:49 +0000</pubDate>
      <link>https://dev.to/aws-builders/accelerating-ai-innovation-with-the-aws-cloud-adoption-framework-28hk</link>
      <guid>https://dev.to/aws-builders/accelerating-ai-innovation-with-the-aws-cloud-adoption-framework-28hk</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Cloud adoption is critical for organizations looking to leverage Artificial Intelligence (AI), Machine Learning (ML), and Generative AI (GenAI). But scaling AI in the cloud isn’t just about spinning up servers — it requires strategy, governance, and alignment across people, processes, and technology.&lt;/p&gt;

&lt;p&gt;The AWS Cloud Adoption Framework (CAF) provides a structured approach to navigate this journey, ensuring organizations can adopt AI and ML in a secure, scalable, and business-aligned way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to Artificial Intelligence
&lt;/h3&gt;

&lt;p&gt;Artificial Intelligence (AI) is the field focused on creating machines that can perform tasks traditionally requiring human intelligence, such as understanding language, perceiving images, making decisions, and solving problems. Many AI systems work by generating probabilistic outcomes—predictions or decisions with an associated level of confidence—helping automate or enhance knowledge-based work.&lt;/p&gt;

&lt;p&gt;A large part of modern AI relies on Machine Learning (ML), which allows computers to learn from data rather than being explicitly programmed. ML models generalize from examples, making them versatile across a wide range of applications. A specialized branch, Deep Learning, uses multi-layered neural networks to analyze complex data, especially unstructured information like images and text, enabling breakthroughs in areas such as image recognition, speech processing, and natural language understanding.&lt;/p&gt;

&lt;p&gt;Building on this, Generative AI represents a frontier in AI research, enabling machines to create new content—text, images, or even music—mimicking human-like reasoning and creativity. Advances in computing, data, and algorithms have made generative AI practical, unlocking applications across entertainment, art, research, and beyond.&lt;/p&gt;

&lt;h3&gt;
  
  
  Navigating Your AI Adoption Journey
&lt;/h3&gt;

&lt;p&gt;Adopting a transformative technology like AI is a long, evolving journey. While every organization’s path is unique, patterns from thousands of successful AI adopters have emerged. To help de-risk this journey, the AWS Cloud Adoption Framework for AI (CAF-AI) offers guidance and best practices.&lt;/p&gt;

&lt;p&gt;When approaching your AI transformation, consider four critical elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outcome:&lt;/strong&gt; Define the business outcomes you want to achieve and work backward from them. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Flywheel:&lt;/strong&gt; High-quality data fuels AI models, which generate predictions that improve business outcomes, creating more valuable data in a self-reinforcing cycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Strategy:&lt;/strong&gt; Strong data management keeps the AI flywheel spinning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Foundational Capabilities:&lt;/strong&gt; These core capabilities determine success or failure in AI adoption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The transformation is best approached iteratively, in four stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Envision:&lt;/strong&gt; Identify AI opportunities aligned with business objectives, map required data, and engage key stakeholders.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Align:&lt;/strong&gt; Establish cross-functional alignment, address dependencies, and ensure organizational readiness for AI adoption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Launch:&lt;/strong&gt; Deliver pilot projects or proofs of concept to demonstrate value, learn from outcomes, and refine strategies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale:&lt;/strong&gt; Expand successful pilots across the organization, maximizing both technical and business impact.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Throughout the journey, avoid trying to do everything at once. Pair long-term ambition with pragmatic, measurable steps to evolve capabilities, improve readiness, and deliver sustained business value. Incremental progress brings organizations closer to achieving their AI transformation goals.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the AWS Cloud Adoption Framework?
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://aws.amazon.com/cloud-adoption-framework/" rel="noopener noreferrer"&gt;AWS Cloud Adoption Framework&lt;/a&gt; for AI, ML, and Generative AI (CAF-AI) provides a structured guide for organizations embarking on or advancing their AI journey. It helps teams plan mid- to long-term strategies, align stakeholders, and move beyond isolated proofs of concept toward enterprise-wide adoption.&lt;/p&gt;

&lt;p&gt;CAF-AI can be used in different ways: you may focus on specific sections to develop targeted skills or leverage the full framework to assess organizational maturity and prioritize near-term improvements. Built on the same foundational capabilities as the AWS Cloud Adoption Framework (AWS CAF), CAF-AI extends and adapts them to meet the unique demands of AI adoption while introducing new capabilities critical for AI success.&lt;/p&gt;

&lt;p&gt;The AWS CAF organizes cloud adoption into six perspectives:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Business – aligns cloud adoption with organizational strategy and value creation.&lt;/li&gt;
&lt;li&gt;People – addresses skills, training, and change management.&lt;/li&gt;
&lt;li&gt;Governance – ensures policies, risk management, and compliance.&lt;/li&gt;
&lt;li&gt;Platform – focuses on the architecture, infrastructure, and cloud foundation.&lt;/li&gt;
&lt;li&gt;Security – manages risk, compliance, and secure AI/ML deployments.&lt;/li&gt;
&lt;li&gt;Operations – ensures operational excellence, monitoring, and continuous improvement.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When applied to AI/ML, each perspective helps organizations avoid common pitfalls like skill gaps, uncontrolled experimentation, and poor model governance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Applying CAF to AI, ML, and Generative AI
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Business Perspective:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define business outcomes for AI projects (e.g., predictive analytics, intelligent automation, personalized recommendations).&lt;/li&gt;
&lt;li&gt;Prioritize AI initiatives based on ROI and feasibility.&lt;/li&gt;
&lt;li&gt;Establish KPIs for AI adoption, such as model accuracy, time-to-deploy, and business impact.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;People Perspective:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build AI/ML capabilities through training in Python, TensorFlow, PyTorch, and AWS AI services.&lt;/li&gt;
&lt;li&gt;Empower teams with generative AI tools (e.g., Amazon Bedrock, SageMaker JumpStart).&lt;/li&gt;
&lt;li&gt;Create a culture of experimentation and innovation while maintaining responsible AI practices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Governance Perspective:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement AI governance frameworks: model versioning, data lineage, and bias mitigation.&lt;/li&gt;
&lt;li&gt;Ensure ethical AI practices and compliance with regulations (e.g., GDPR, HIPAA).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Platform Perspective:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build scalable AI/ML infrastructure using AWS services like SageMaker, Data Pipeline, and managed data lakes (S3 + Lake Formation).&lt;/li&gt;
&lt;li&gt;Standardize environments for reproducibility and collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security Perspective:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Protect sensitive data with encryption, IAM policies, and private endpoints.&lt;/li&gt;
&lt;li&gt;Secure ML pipelines and generative AI endpoints against misuse.&lt;/li&gt;
&lt;li&gt;Monitor model access, drift, and vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Operations Perspective:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor model performance, accuracy, and drift in production.&lt;/li&gt;
&lt;li&gt;Automate retraining and CI/CD pipelines for ML models.&lt;/li&gt;
&lt;li&gt;Apply MLOps best practices to reduce operational risk and downtime.&lt;/li&gt;
&lt;/ul&gt;
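&lt;p&gt;As an illustration of the drift-monitoring idea behind automated retraining (plain Python, not an AWS API; the 0.2 threshold is a conventional rule of thumb, not AWS guidance), a population stability index (PSI) check might look like this:&lt;/p&gt;

```python
# Minimal sketch: population stability index (PSI) between a model's
# training-time feature distribution and its live distribution, a common
# signal for triggering automated retraining in MLOps pipelines.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each summing to 1)."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

def needs_retraining(expected, actual, threshold=0.2):
    # PSI above roughly 0.2 is commonly treated as significant drift.
    return psi(expected, actual) > threshold
```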

</description>
      <category>ai</category>
      <category>genai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Architecting for AI Excellence: Exploring AWS’s Three New Well-Architected Lenses Announced at re:Invent 2025</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Fri, 16 Jan 2026 15:13:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/architecting-for-ai-excellence-exploring-awss-three-new-well-architected-lenses-announced-at-3dm</link>
      <guid>https://dev.to/aws-builders/architecting-for-ai-excellence-exploring-awss-three-new-well-architected-lenses-announced-at-3dm</guid>
      <description>&lt;p&gt;Artificial intelligence is no longer an experimental workload in AWS—it is rapidly becoming a core part of production architectures. From generative AI applications to large-scale machine learning pipelines, architects are now expected to design AI systems that are not only powerful, but also secure, reliable, cost-efficient, and responsible.&lt;/p&gt;

&lt;p&gt;At AWS re:Invent 2025, AWS expanded its AI guidance within the &lt;a href="https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-lens-whitepapers.sort-order=desc&amp;amp;awsm.page-wa-lens-whitepapers=1&amp;amp;wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-guidance-whitepapers.sort-order=desc" rel="noopener noreferrer"&gt;Well-Architected Framework&lt;/a&gt; by introducing one new lens and two major updates designed specifically for AI workloads: the Responsible AI Lens, the Machine Learning (ML) Lens, and the Generative AI Lens. Together, these lenses offer practical, end-to-end architectural guidance for organizations at every stage of their AI journey—whether teams are just beginning to explore machine learning or operating complex, production-grade AI systems at scale.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html?refid=cr_card" rel="noopener noreferrer"&gt;AWS Well-Architected Framework&lt;/a&gt; itself defines proven architectural best practices for building and operating workloads in the cloud that are secure, reliable, performance-efficient, cost-optimized, and sustainable. By extending the framework with AI-focused lenses, AWS enables architects to apply these core principles to the unique challenges and considerations of modern AI and machine learning workloads.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The Responsible AI Lens: Designing AI Systems with Trust, Fairness, and Transparency&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/responsible-ai-lens/responsible-ai-lens.html" rel="noopener noreferrer"&gt;Responsible AI Lens&lt;/a&gt; provides a structured framework that helps teams evaluate, track, and continuously improve their AI workloads against established best practices. It enables architects and developers to identify potential gaps in their AI implementations and offers actionable guidance to improve system quality while aligning with responsible AI principles. By applying the Responsible AI Lens, organizations can make informed architectural decisions that balance business objectives with technical requirements—accelerating the transition from AI experimentation to production-ready, trusted solutions.&lt;/p&gt;

&lt;p&gt;Key Takeaways from the Responsible AI Lens:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every AI system carries responsible AI considerations:&lt;/strong&gt;&lt;br&gt;
Whether intentionally designed or not, all AI systems introduce responsible AI implications. These considerations must be actively addressed throughout the system lifecycle rather than left to chance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI systems may be used beyond their original intent:&lt;/strong&gt;&lt;br&gt;
Applications are often adopted in ways developers did not initially anticipate. Combined with the probabilistic nature of AI, this can lead to unexpected outcomes—even within intended use cases—making early and deliberate responsible AI decisions essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Responsible AI enables innovation and builds trust:&lt;/strong&gt;&lt;br&gt;
Rather than limiting progress, responsible AI practices act as a catalyst for innovation by establishing stakeholder confidence, strengthening customer trust, and reducing long-term operational and reputational risks.&lt;/p&gt;

&lt;p&gt;The Responsible AI Lens serves as the foundational guidance for AI development on AWS, providing core principles that inform and support both the Machine Learning Lens and the Generative AI Lens implementations.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The Machine Learning Lens: Building Strong ML Foundations on AWS&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/machine-learning-lens.html" rel="noopener noreferrer"&gt;Machine Learning Lens&lt;/a&gt; acts as a practical foundation for teams designing and running ML workloads on AWS. It brings together proven, cloud-agnostic best practices mapped to the Well-Architected Framework pillars, covering every stage of the ML lifecycle. Whether you’re experimenting with your first model or operating complex AI systems in production, the updated ML Lens provides a consistent way to think about architecture, operations, and scale.&lt;/p&gt;

&lt;p&gt;Since its initial release in 2023, AWS’s ML ecosystem has evolved significantly—and the updated ML Lens reflects that progress. It incorporates modern tooling and services that help teams move faster, collaborate better, and operate ML workloads more efficiently and responsibly.&lt;/p&gt;

&lt;p&gt;What’s new in the updated Machine Learning Lens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streamlined collaboration between data and AI teams using Amazon SageMaker Unified Studio&lt;/li&gt;
&lt;li&gt;AI-assisted development to boost developer productivity with Amazon Q&lt;/li&gt;
&lt;li&gt;Scalable, distributed training for foundation models and fine-tuning using Amazon SageMaker HyperPod&lt;/li&gt;
&lt;li&gt;Flexible model customization, including fine-tuning and knowledge distillation, using Amazon Bedrock, Kiro, and Amazon Q Developer&lt;/li&gt;
&lt;li&gt;No-code ML workflows with Amazon SageMaker Canvas, now enhanced with Amazon Q&lt;/li&gt;
&lt;li&gt;Stronger bias detection and responsible AI practices with improved fairness metrics in Amazon SageMaker Clarify&lt;/li&gt;
&lt;li&gt;Faster access to business insights through automated dashboards in Amazon QuickSight&lt;/li&gt;
&lt;li&gt;Modular inference architectures that simplify deployment and scaling using Inference Components&lt;/li&gt;
&lt;li&gt;Deeper observability with improved debugging and monitoring across the ML lifecycle&lt;/li&gt;
&lt;li&gt;Better cost control through SageMaker Training Plans, Savings Plans, and Spot Instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the strengths of the ML Lens is its flexibility. You can apply it early during architecture design or use it later to review and improve existing production workloads. Regardless of where you are in your cloud or ML journey, the ML Lens—powered by services like Amazon SageMaker Unified Studio, Amazon Q, Amazon SageMaker HyperPod, and Amazon Bedrock—helps teams build ML systems that are scalable, efficient, and ready for production.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The Generative AI Lens: Practical Architecture Guidance for Foundation Models&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/generative-ai-lens/generative-ai-lens.html?did=wp_card&amp;amp;trk=wp_card" rel="noopener noreferrer"&gt;Generative AI Lens&lt;/a&gt; helps architects and builders take a structured, repeatable approach to designing systems that use large language models (LLMs) and other foundation models to deliver real business value. It focuses on the architectural decisions teams face most often when building generative AI applications—such as choosing the right model, designing effective prompts, customizing models, integrating workloads, and continuously improving system performance.&lt;/p&gt;

&lt;p&gt;Unlike the broader Machine Learning Lens, which applies across the entire ML spectrum, the Generative AI Lens zooms in on the unique requirements of foundation models and generative AI workloads. It distills best practices drawn from AWS’s experience working with thousands of customers and aligns them with the Well-Architected Framework, helping teams move from experimentation to production with confidence.&lt;/p&gt;

&lt;p&gt;What’s new in the updated Generative AI Lens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expanded guidance on orchestrating complex, long-running generative AI workflows using Amazon SageMaker HyperPod&lt;/li&gt;
&lt;li&gt;A stronger Responsible AI foundation, including a detailed breakdown of AWS’s eight core Responsible AI dimensions&lt;/li&gt;
&lt;li&gt;A new agentic AI preamble introducing architectural patterns for building AI agents and multi-step reasoning systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By building on the foundation provided by the ML Lens, the Generative AI Lens offers focused, practical guidance for teams tackling the distinct challenges—and opportunities—of generative AI and foundation model–based applications on AWS.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Implementing Well-Architected AI/ML Guidance&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The three AI-focused lenses—Responsible AI, Machine Learning, and Generative AI—are designed to work together as a single, cohesive guidance model rather than standalone frameworks. Each lens plays a specific role, but together they help teams build AI systems that are production-ready, trustworthy, and scalable.&lt;/p&gt;

&lt;p&gt;The Responsible AI Lens sets the baseline by focusing on safe, fair, and secure AI development. It helps teams balance business goals with technical and ethical requirements, making it easier to move from proof-of-concept experiments into production. The Machine Learning Lens then provides broader guidance across both traditional ML and modern AI workloads, with recent updates that improve collaboration between data and AI teams, introduce AI-assisted development, support large-scale infrastructure provisioning, and enable more flexible model deployment. On top of this foundation, the Generative AI Lens focuses specifically on LLM-based architectures, with new guidance for Amazon SageMaker HyperPod, emerging agentic AI patterns, and updated architectural scenarios for common generative AI applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  What’s Next?
&lt;/h4&gt;

&lt;p&gt;With the launch of these lenses at re:Invent 2025, AWS gives organizations a clear path to building AI systems that are not just powerful, but also responsible and trustworthy. By covering the full range of AI workloads—from traditional ML to generative AI—these lenses help teams accelerate innovation while maintaining strong architectural and responsible AI standards.&lt;/p&gt;

</description>
      <category>awsreinvent</category>
      <category>generativeai</category>
      <category>awswellarchitectedframework</category>
      <category>aws</category>
    </item>
    <item>
      <title>Detecting and Filtering Harmful Content with Amazon Bedrock Guardrails</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Thu, 08 Jan 2026 14:12:04 +0000</pubDate>
      <link>https://dev.to/aws-builders/detecting-and-filtering-harmful-content-with-amazon-bedrock-guardrails-1b1d</link>
      <guid>https://dev.to/aws-builders/detecting-and-filtering-harmful-content-with-amazon-bedrock-guardrails-1b1d</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Technical Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon Bedrock Guardrails provide a centralized control layer that sits between your application and the foundation models (FMs) used to generate responses. Guardrails allow you to define enforceable safety, privacy, and compliance rules that are applied consistently—regardless of which model is used underneath.&lt;/p&gt;

&lt;p&gt;From an architecture perspective, guardrails are evaluated on both inbound prompts and outbound responses, ensuring that unsafe content is blocked or transformed before it reaches the model or the end user.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;High-Level Architecture Flow&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;User Request Enters the Application&lt;br&gt;
A user interacts with the application (for example, a chatbot, banking portal, or call center system). The request is passed to the application backend through an API or UI layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prompt Evaluation via Bedrock Guardrails&lt;br&gt;
Before the request is sent to a foundation model, the application invokes Amazon Bedrock with an associated guardrail configuration.&lt;br&gt;
At this stage, guardrails inspect the user prompt for:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Harmful or toxic language&lt;/li&gt;
&lt;li&gt;Disallowed topics (such as financial or legal advice)&lt;/li&gt;
&lt;li&gt;Sensitive data patterns (PII, depending on configuration)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the prompt violates defined policies, Bedrock can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Block the request&lt;/li&gt;
&lt;li&gt;Return a predefined safe response&lt;/li&gt;
&lt;li&gt;Log the event for auditing and monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Model Invocation (If Prompt Is Allowed)&lt;br&gt;
Only prompts that pass guardrail evaluation are forwarded to the selected foundation model (for example, Claude, Titan, or other Bedrock-supported models).&lt;br&gt;
This decouples safety logic from the model itself and ensures consistent behavior even when models are swapped or upgraded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Response Evaluation via Guardrails&lt;br&gt;
After the foundation model generates a response, guardrails are applied again—this time on the model output.&lt;br&gt;
Guardrails can:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Detect and block toxic or unsafe responses&lt;/li&gt;
&lt;li&gt;Prevent disallowed advice or policy violations&lt;/li&gt;
&lt;li&gt;Redact or mask personally identifiable information (PII)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;p&gt;Final Response Returned to the User&lt;br&gt;
Only responses that comply with guardrail rules are returned to the application and displayed to the user. If the response violates policies, a controlled fallback message is returned instead.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
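&lt;p&gt;As a rough sketch of the flow above, a guardrail can be attached to an inference call through the Bedrock Converse API with boto3. The guardrail ID, version, and model ID below are placeholders:&lt;/p&gt;

```python
# Illustrative sketch: applying a guardrail during model inference with
# boto3. GUARDRAIL_ID, GUARDRAIL_VERSION, and MODEL_ID are placeholders.
GUARDRAIL_ID = "gr-example123"   # placeholder
GUARDRAIL_VERSION = "1"          # placeholder
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_converse_request(prompt: str) -> dict:
    """Build a Converse API request with the guardrail attached."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": GUARDRAIL_ID,
            "guardrailVersion": GUARDRAIL_VERSION,
        },
    }

def ask(prompt: str) -> str:
    import boto3  # requires AWS credentials at call time
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(prompt))
    # If a policy is violated, stopReason is "guardrail_intervened" and the
    # configured fallback message is returned as the output text.
    return response["output"]["message"]["content"][0]["text"]
```

Because the guardrail rides along with every call, swapping the model ID leaves the safety behavior unchanged.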

&lt;h4&gt;
  
  
  &lt;strong&gt;Example Architecture Use Cases&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Chatbot Architecture&lt;br&gt;
Guardrails validate user input before inference and scan model output after inference to ensure no abusive or harmful content is surfaced to users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Financial Services Architecture&lt;br&gt;
Guardrails act as a policy enforcement layer that blocks prompts or responses related to investment advice, reducing regulatory risk while still allowing general financial information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contact Center Summarization Pipeline&lt;br&gt;
Conversation transcripts are sent through Bedrock with guardrails configured to detect and redact PII before summaries are stored in downstream systems such as S3, OpenSearch, or CRM platforms.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Why This Architecture Matters&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;By separating safety controls from application logic and model selection, Amazon Bedrock Guardrails enable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized governance across multiple AI workloads&lt;/li&gt;
&lt;li&gt;Model-agnostic safety enforcement&lt;/li&gt;
&lt;li&gt;Easier auditing, compliance, and policy updates without code changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach allows teams to scale generative AI applications while maintaining predictable, controlled, and compliant behavior across environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Amazon Bedrock Guardrails Policies and Enforcement Capabilities&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon Bedrock Guardrails provide a set of configurable safeguards (referred to as policies) that are evaluated during prompt processing and model inference. These policies allow teams to detect, block, redact, or validate content before it reaches a foundation model and again before a response is returned to the user.&lt;/p&gt;

&lt;p&gt;Each policy type can be enabled independently and tuned to match application-specific risk tolerance.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Content Filters&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Content filters are used to detect and block harmful text or image content in user prompts and model responses.&lt;/p&gt;

&lt;p&gt;Guardrails classify content into predefined categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hate&lt;/li&gt;
&lt;li&gt;Insults&lt;/li&gt;
&lt;li&gt;Sexual&lt;/li&gt;
&lt;li&gt;Violence&lt;/li&gt;
&lt;li&gt;Misconduct&lt;/li&gt;
&lt;li&gt;Prompt Attacks (jailbreak attempts)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each category, you can configure the filter strength (for example, permissive vs. strict), allowing fine-grained control based on the sensitivity of your application.&lt;/p&gt;

&lt;p&gt;Both Classic and Standard &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-tiers.html" rel="noopener noreferrer"&gt;tiers&lt;/a&gt; support these categories.&lt;br&gt;
With the Standard tier, content detection is extended into code-level elements, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comments&lt;/li&gt;
&lt;li&gt;Variable and function names&lt;/li&gt;
&lt;li&gt;String literals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially important for developer tools, code assistants, and AI-generated scripts.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Denied Topics&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Denied topics allow you to explicitly define subjects that are out of scope or not allowed for your application.&lt;/p&gt;

&lt;p&gt;If a denied topic is detected in either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user query, or&lt;/li&gt;
&lt;li&gt;The model’s response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;the request can be blocked or replaced with a safe fallback message.&lt;/p&gt;

&lt;p&gt;In the Standard tier, denied topic detection also applies inside code elements such as comments, variables, function names, and strings—preventing policy violations from being hidden in generated code.&lt;/p&gt;

&lt;p&gt;This is commonly used in regulated environments (for example, blocking medical or investment advice).&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Word Filters&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Word filters allow exact-match blocking of specific:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Words&lt;/li&gt;
&lt;li&gt;Phrases&lt;/li&gt;
&lt;li&gt;Profanity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is useful for enforcing business-specific restrictions, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Offensive language&lt;/li&gt;
&lt;li&gt;Competitor names&lt;/li&gt;
&lt;li&gt;Brand misuse&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Word filters are deterministic and operate as a straightforward enforcement layer within the guardrail evaluation process.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Sensitive Information Filters&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sensitive information filters help detect and block or mask personally identifiable information (PII) in both prompts and responses.&lt;/p&gt;

&lt;p&gt;Detection is probabilistic and supports standard formats for entities such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Social Security Numbers&lt;/li&gt;
&lt;li&gt;Dates of birth&lt;/li&gt;
&lt;li&gt;Addresses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to built-in PII detection, you can configure custom regular expressions to identify organization-specific identifiers, such as customer IDs or internal reference numbers.&lt;/p&gt;

&lt;p&gt;This policy is critical for applications that store outputs in downstream systems like S3, OpenSearch, CRMs, or analytics platforms.&lt;/p&gt;
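&lt;p&gt;The four policy types can be defined together when a guardrail is created. The following boto3 sketch uses placeholder names, fallback messages, and a hypothetical customer-ID regex; filter strengths and entity types are examples, not recommendations:&lt;/p&gt;

```python
# Illustrative sketch: configuring content, topic, word, and sensitive
# information policies on one guardrail. All names/patterns are placeholders.
def build_guardrail_config() -> dict:
    return {
        "name": "demo-guardrail",  # placeholder
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                # Prompt-attack filtering applies to inputs only.
                {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
            ]
        },
        "topicPolicyConfig": {
            "topicsConfig": [
                {
                    "name": "investment-advice",
                    "definition": "Recommendations about specific investments.",
                    "type": "DENY",
                }
            ]
        },
        "wordPolicyConfig": {
            "wordsConfig": [{"text": "CompetitorBrand"}],  # placeholder phrase
            "managedWordListsConfig": [{"type": "PROFANITY"}],
        },
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "ANONYMIZE"}
            ],
            "regexesConfig": [
                {
                    "name": "customer-id",
                    "pattern": "CUST-[0-9]{8}",  # hypothetical internal ID format
                    "action": "ANONYMIZE",
                }
            ],
        },
        "blockedInputMessaging": "Sorry, I can't help with that request.",
        "blockedOutputsMessaging": "Sorry, I can't share that response.",
    }

def create_guardrail() -> str:
    import boto3  # requires AWS credentials at call time
    client = boto3.client("bedrock")
    response = client.create_guardrail(**build_guardrail_config())
    return response["guardrailId"]
```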

&lt;h3&gt;
  
  
  &lt;strong&gt;Policy Violation Handling&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In addition to defining policies, you can configure custom user-facing messages that are returned when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A user input violates a policy, or&lt;/li&gt;
&lt;li&gt;A model response fails guardrail evaluation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows applications to fail safely and consistently, rather than returning generic errors or silent failures.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Integration Options in the Architecture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Guardrails can be used in two primary ways:&lt;/p&gt;

&lt;p&gt;1. During Model Inference&lt;br&gt;
Guardrails are applied by specifying the guardrail ID and version during the Bedrock inference API call.&lt;br&gt;
In this mode, guardrails evaluate both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input prompts&lt;/li&gt;
&lt;li&gt;Model completions&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Standalone Guardrail Evaluation&lt;br&gt;
Using the ApplyGuardrail API, guardrails can be applied without invoking a foundation model.&lt;br&gt;
This is useful for:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Pre-validating user input&lt;/li&gt;
&lt;li&gt;Post-processing outputs from external systems&lt;/li&gt;
&lt;li&gt;Enforcing policies in RAG pipelines before inference&lt;/li&gt;
&lt;/ul&gt;
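&lt;p&gt;A minimal sketch of standalone evaluation with boto3 (placeholder guardrail ID and version):&lt;/p&gt;

```python
# Illustrative sketch: evaluating text against a guardrail without invoking
# a foundation model, via the ApplyGuardrail API. IDs are placeholders.
def build_apply_guardrail_request(text: str, source: str = "INPUT") -> dict:
    return {
        "guardrailIdentifier": "gr-example123",  # placeholder
        "guardrailVersion": "1",                 # placeholder
        "source": source,  # "INPUT" for prompts, "OUTPUT" for responses
        "content": [{"text": {"text": text}}],
    }

def is_allowed(text: str, source: str = "INPUT") -> bool:
    import boto3  # requires AWS credentials at call time
    client = boto3.client("bedrock-runtime")
    response = client.apply_guardrail(**build_apply_guardrail_request(text, source))
    # "GUARDRAIL_INTERVENED" means a policy matched and the fallback applies.
    return response["action"] != "GUARDRAIL_INTERVENED"
```

The same helper can pre-screen retrieved documents in a RAG pipeline before they are ever passed to a model.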

&lt;h4&gt;
  
  
  &lt;strong&gt;For RAG and Conversational Applications&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In RAG or multi-turn conversational architectures, you may want to evaluate only the user’s current input, while excluding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System instructions&lt;/li&gt;
&lt;li&gt;Retrieved search results&lt;/li&gt;
&lt;li&gt;Conversation history&lt;/li&gt;
&lt;li&gt;Few-shot examples&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach ensures that guardrails focus on user intent, rather than falsely flagging internal context or system-generated content.&lt;/p&gt;
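&lt;p&gt;With the Converse API, this selective evaluation is expressed by wrapping only the user's input in a guardContent block (a sketch with hypothetical helper and variable names):&lt;/p&gt;

```python
# Illustrative sketch: scoping guardrail evaluation to the user's input in a
# RAG-style message. When any guardContent block is present, the guardrail
# evaluates only that block; the surrounding context passes through.
def build_rag_message(user_input: str, retrieved_context: str) -> dict:
    return {
        "role": "user",
        "content": [
            # Retrieved documents and system-generated context: not evaluated.
            {"text": f"Context: {retrieved_context}"},
            # The user's actual query: the only part the guardrail inspects.
            {"guardContent": {"text": {"text": user_input}}},
        ],
    }
```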

</description>
      <category>aws</category>
      <category>awsguardrail</category>
      <category>ai</category>
      <category>amazonbedrock</category>
    </item>
    <item>
      <title>From Governance to Value: Running an Effective Architecture Review Board</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Mon, 05 Jan 2026 17:27:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/from-governance-to-value-running-an-effective-architecture-review-board-51ei</link>
      <guid>https://dev.to/aws-builders/from-governance-to-value-running-an-effective-architecture-review-board-51ei</guid>
      <description>&lt;p&gt;As I navigate the enterprise architecture world in my role as an Application Architect, one thing has become increasingly clear: the pace of change in modern computing landscapes is relentless. Cloud adoption, artificial intelligence, and continuous technology innovation are transforming how organizations build and operate systems. While these advancements create enormous opportunities, they also introduce complexity and risk—especially in large enterprises where consistency, security, and compliance cannot be compromised.&lt;/p&gt;

&lt;p&gt;Organizations are now challenged to move faster without losing control, ensuring that new initiatives and projects align with established enterprise standards, architectural principles, and regulatory requirements. This is where an effective Architecture Review Board (ARB) plays a critical role. When designed and operated well, an ARB helps organizations maintain strong enterprise guardrails while still accelerating the delivery of initiatives across an increasingly busy project pipeline.&lt;/p&gt;

&lt;p&gt;In this post, I explore what an Architecture Review Board really is, the key components of an efficient and practical architecture review process, and how to build and operate an effective enterprise ARB. Drawing from my own experience navigating enterprise architecture, I aim to show how an ARB can evolve from a perceived governance bottleneck into a strategic enabler of sound architectural decisions and sustainable innovation.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Is an Architecture Review Board?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;An Architecture Review Board (ARB) is a cross-functional group responsible for reviewing solution architectures to ensure alignment with enterprise standards, best practices, and long-term supportability. It typically includes representatives from Security, Development, Enterprise Architecture, Infrastructure, and Operations. Bringing these perspectives together early helps prevent rework caused by missed stakeholder input.&lt;/p&gt;

&lt;p&gt;An ARB does not operate in isolation. It is embedded within the project delivery lifecycle, reviewing solution designs, custom builds, and third-party products to ensure enterprise alignment. Reviews usually occur after the design phase—before build or purchase decisions—and again before deployment to confirm that the implemented solution matches the approved architecture.&lt;/p&gt;

&lt;p&gt;While most organizations recognize the value of an ARB, many struggle to run it effectively. When designed well, an efficient architecture review process reduces costs, lowers security risk, and limits the accumulation of technical debt—turning governance into a catalyst for better outcomes rather than a delivery bottleneck.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Is an Architecture Review Board in an AWS Cloud-Native Environment?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In an AWS cloud-native and agile environment, an Architecture Review Board (ARB) exists to help teams build solutions that align with the AWS Well-Architected Framework while maintaining enterprise guardrails. Rather than acting as a gatekeeper, a modern ARB enables teams to design architectures that are secure, reliable, performant, cost-efficient, operationally excellent, and sustainable—without slowing delivery.&lt;/p&gt;

&lt;p&gt;A cloud-native ARB is inherently cross-functional, bringing together Security, Engineering, Platform, Enterprise Architecture, and Operations. This mirrors the Well-Architected approach, where architectural quality is the result of shared ownership across disciplines. Early collaboration reduces late-stage rework and helps teams make informed trade-offs across the Well-Architected pillars.&lt;/p&gt;

&lt;p&gt;Unlike traditional review boards, a Well-Architected–aligned ARB is embedded into the delivery lifecycle. Reviews occur early in design to guide service selection and architectural patterns, and again before release to ensure the implemented solution matches the approved design. In mature AWS environments, many of these reviews are reinforced through infrastructure as code, policy as code, guardrails, and reusable “golden paths,” allowing teams to move fast while staying compliant.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Architecture Without a Review Framework&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;One of the biggest challenges in software architecture is achieving human consensus. In any organization, teams bring diverse priorities, perspectives, and constraints to the table. Without a formal architecture review process, these differences often turn into prolonged debates, inconsistent decisions, and stalled delivery.&lt;/p&gt;

&lt;p&gt;In the absence of a shared review model and clear architectural guardrails, discussions become opinion-driven rather than principle-driven. Over time, this slows down teams and increases friction between stakeholders. In practice, we often see individuals gravitate toward a few common personas:&lt;/p&gt;

&lt;h5&gt;
  
  
  The Late Reviewer
&lt;/h5&gt;

&lt;p&gt;Offers thoughtful feedback, but only at the final stages of delivery. Late input reduces the team’s ability to incorporate feedback effectively and often leads to rework.&lt;/p&gt;

&lt;h5&gt;
  
  
  The Central Gatekeeper
&lt;/h5&gt;

&lt;p&gt;Insists on being involved in every architectural decision. While well-intentioned, this behavior limits scalability and creates single points of decision failure.&lt;/p&gt;

&lt;h5&gt;
  
  
  The Over-Designer
&lt;/h5&gt;

&lt;p&gt;Passionate about craftsmanship and innovation, but tends to introduce unnecessary complexity. Solutions risk becoming difficult to operate and evolve.&lt;/p&gt;

&lt;h5&gt;
  
  
  The Idealist
&lt;/h5&gt;

&lt;p&gt;Strives for perfection at every step. This often delays decisions and prevents teams from delivering incremental value.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Benefits of an Architecture Review Board&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Establishing an Architecture Review Board (ARB) delivers measurable value by improving architectural quality while enabling teams to move fast with confidence.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Improved compliance&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;A consistent review process helps ensure architectures align with enterprise standards, regulatory requirements, and approved design patterns. By reviewing decisions early, the ARB reinforces shared guardrails without slowing delivery.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Reduced technical debt&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Technical debt often starts with small design compromises that scale into long-term problems. The ARB identifies these risks early, promoting sustainable patterns and long-term thinking. This results in cleaner architectures, more maintainable systems, and less rework over time.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Greater efficiency and lower costs&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Contrary to common perception, a well-run ARB reduces friction rather than creating it. Standardized architectures and reusable patterns improve delivery speed, resource utilization, and cost predictability—while avoiding expensive late-stage rewrites.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Improved supportability and reliability&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;By embedding operational considerations into design reviews, the ARB ensures systems are easier to operate, monitor, and troubleshoot. Cross-functional representation surfaces supportability concerns early, leading to more resilient systems and fewer production incidents.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Security by design&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Security is the most critical outcome of an effective ARB. By integrating security reviews into architectural decisions from day one, the ARB helps protect against data exposure, unauthorized access, and evolving threats. This proactive approach strengthens trust with customers and stakeholders while reducing downstream risk.&lt;/p&gt;

&lt;p&gt;For more details on the review process, see &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/the-review-process.html" rel="noopener noreferrer"&gt;Well-Architected Framework: The review process&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>awswellarchitectedframework</category>
      <category>enterprisearchitecture</category>
    </item>
    <item>
      <title>SCP Automation for AWS Organization</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Tue, 14 Jan 2025 05:16:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/scp-automation-for-aws-organization-569j</link>
      <guid>https://dev.to/aws-builders/scp-automation-for-aws-organization-569j</guid>
      <description>&lt;h2&gt;
  
  
  Understanding AWS Service Control Policies (SCPs) in AWS Organizations
&lt;/h2&gt;

&lt;p&gt;As organizations grow and adopt cloud technologies, managing access and ensuring compliance across multiple accounts becomes increasingly complex. AWS Organizations, a service that allows you to centrally manage and govern multiple AWS accounts, provides a powerful feature called Service Control Policies (SCPs) to help you enforce governance at scale.&lt;br&gt;
Used effectively, SCPs help your organization remain secure, compliant, and well-managed as you scale your cloud operations, whether you are restricting access to specific services, enforcing compliance standards, or managing permissions across multiple accounts.&lt;br&gt;
In this blog, we’ll explore what SCPs are, how they work, and why they are essential for managing your AWS environment.&lt;/p&gt;
&lt;h3&gt;
  
  
  What Are Service Control Policies (SCPs)?
&lt;/h3&gt;

&lt;p&gt;Service Control Policies (SCPs) are a type of policy in AWS Organizations that allow you to define and enforce permissions for AWS accounts within your organization. SCPs act as guardrails, specifying the maximum permissions that accounts in an organizational unit (OU) or the entire organization can have. They do not grant permissions themselves but instead restrict what actions can or cannot be performed, even if permissions are granted at the account level.&lt;/p&gt;

&lt;p&gt;Think of SCPs as a way to set boundaries for what is allowed in your AWS environment. For example, you can use SCPs to ensure that no account in your organization can delete critical resources, use specific regions, or access certain AWS services.&lt;/p&gt;
&lt;h4&gt;
  
  
  How Do SCPs Work?
&lt;/h4&gt;

&lt;p&gt;SCPs are applied at the organizational level and are evaluated alongside Identity and Access Management (IAM) policies. Here’s how they work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hierarchy of Application: SCPs can be attached to the root of your organization, specific organizational units (OUs), or individual accounts. Policies applied at a higher level (e.g., the root) cascade down to all child OUs and accounts.&lt;/li&gt;
&lt;li&gt;Deny by Default: SCPs operate on a "deny by default" principle. If an action is not explicitly allowed by the SCP, it is implicitly denied, even if an IAM policy grants the action.&lt;/li&gt;
&lt;li&gt;No Direct Permissions: SCPs do not grant permissions. They only define the maximum permissions that can be granted by IAM policies. For example, if an SCP denies access to a service, no IAM policy can override that denial.&lt;/li&gt;
&lt;li&gt;Policy Evaluation: When a user or role attempts to perform an action, AWS evaluates the SCPs attached to the account, the IAM policies, and any resource-based policies. If the action is not allowed by the SCP, it is denied.&lt;/li&gt;
&lt;/ul&gt;
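&lt;p&gt;The evaluation logic can be illustrated conceptually: an action is effectively allowed only in the intersection of what the SCPs permit and what IAM grants. This is a toy model for illustration, not an AWS API:&lt;/p&gt;

```python
# Toy model: SCPs cap the maximum permissions; IAM grants within that cap.
# An action succeeds only if BOTH the SCP allows it AND an IAM policy grants it.

def effectively_allowed(action, scp_allowed_actions, iam_granted_actions):
    """Model the intersection of SCP guardrails and IAM grants."""
    return action in scp_allowed_actions and action in iam_granted_actions

scp_allows = {"s3:GetObject", "ec2:DescribeInstances"}
iam_grants = {"s3:GetObject", "s3:PutObject"}

assert effectively_allowed("s3:GetObject", scp_allows, iam_grants)       # both allow
assert not effectively_allowed("s3:PutObject", scp_allows, iam_grants)   # SCP caps it out
assert not effectively_allowed("ec2:DescribeInstances", scp_allows, iam_grants)  # no IAM grant
```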
&lt;h4&gt;
  
  
  Why Use SCPs?
&lt;/h4&gt;

&lt;p&gt;SCPs are a critical tool for organizations that need to enforce governance and compliance across multiple AWS accounts. Here are some key benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized Control: SCPs allow you to manage permissions across all accounts in your organization from a single location, reducing the risk of misconfigurations.&lt;/li&gt;
&lt;li&gt;Enforce Compliance: You can use SCPs to enforce compliance with organizational policies, such as restricting the use of certain AWS regions or services.&lt;/li&gt;
&lt;li&gt;Limit Risk: By restricting access to sensitive services or actions, SCPs help reduce the risk of accidental or malicious changes to your AWS environment.&lt;/li&gt;
&lt;li&gt;Simplify Management: SCPs make it easier to manage permissions across multiple accounts by applying consistent policies at the OU or organizational level.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Use Cases for SCPs
&lt;/h4&gt;

&lt;p&gt;In this use case, we will look at two SCPs: one that denies member accounts the ability to leave the organization, applied at the root of the AWS Organization; and another that denies the creation of console login profiles for users in member accounts, applied at the Compliant OU level (a child OU of the root).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*"
      }
    ]
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "iam:CreateLoginProfile"
      ],
      "Resource": "arn:aws:iam::*:user/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
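&lt;p&gt;As a sketch of how the automation works, the first policy can be created and attached with boto3. The policy name, description, and target root/OU ID below are placeholders; the repo holds the full implementation:&lt;/p&gt;

```python
# Sketch: creating and attaching an SCP with the AWS Organizations API.
import json

DENY_LEAVE_ORG = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*",
    }],
}

def build_create_policy_request(name, description, document):
    """Build kwargs for organizations CreatePolicy (SCPs use Type SERVICE_CONTROL_POLICY)."""
    return {
        "Name": name,
        "Description": description,
        "Type": "SERVICE_CONTROL_POLICY",
        "Content": json.dumps(document),
    }

request = build_create_policy_request(
    "DenyLeaveOrganization",                       # placeholder name
    "Prevent member accounts from leaving the org",
    DENY_LEAVE_ORG,
)
# import boto3
# org = boto3.client("organizations")
# policy = org.create_policy(**request)
# org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
#                   TargetId="r-xxxx")  # placeholder root/OU ID
```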



&lt;p&gt;Kindly check out the repo for detailed code for setting up the SCP's at different OU levels and unit testing of each attached policies. &lt;a href="https://github.com/olawaleade/aws_scp_automation" rel="noopener noreferrer"&gt;https://github.com/olawaleade/aws_scp_automation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>awsorganization</category>
      <category>iam</category>
      <category>scp</category>
      <category>automation</category>
    </item>
    <item>
      <title>Deploy App on AWS ECS Fargate using Github Actions</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Tue, 23 Apr 2024 07:02:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploy-app-on-aws-ecs-fargate-using-github-actions-13mf</link>
      <guid>https://dev.to/aws-builders/deploy-app-on-aws-ecs-fargate-using-github-actions-13mf</guid>
      <description>&lt;p&gt;In this blog, i will show how to deploy an application on Amazon ECS using the Fargate for efficient containerized deployment and also Github actions will be used for the CI/CD.&lt;/p&gt;

&lt;p&gt;Step 1: Create your repository in Amazon ECR (Elastic Container Registry)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dog27r7hcwxrfft9yt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dog27r7hcwxrfft9yt3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Create a cluster in ECS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqsnz18l52tmufvbrteb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqsnz18l52tmufvbrteb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Create a Task Definition&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcdpps128e5pl312ruj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcdpps128e5pl312ruj7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can create a task role if you don't have an existing one. These are the two policies required for the role:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhb8gjy9khmoiiiumw4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhb8gjy9khmoiiiumw4f.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the container, give it a name and paste the URI of the image repository created in ECR earlier.&lt;br&gt;
Add the port on which your application is accessed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1v9iwpw8sa65mkjb1k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1v9iwpw8sa65mkjb1k5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Create a Service&lt;br&gt;
Click on the cluster you created, then create a service.&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; The service will initially fail because no image has been pushed to ECR yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksvlth4rbg1kurk0ptaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksvlth4rbg1kurk0ptaw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3njlmevmkkjfqb0jah44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3njlmevmkkjfqb0jah44.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsdqekg4nl3ajm69tkt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsdqekg4nl3ajm69tkt2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: While the service is being created, configure your application's GitHub Actions workflow. The application is packaged using Docker.&lt;/p&gt;

&lt;p&gt;Here is the Dockerfile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM node:16.20.1
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm","run","start"]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The GitHub Actions workflow is as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: CICD

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: [ ubuntu-latest ]
    steps:
      - name: Checkout source
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
        with:
          mask-password: 'true'

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: latest
          REPOSITORY: nodeapp
        run: |
          # Build a docker container and
          # push it to ECR so that it can
          # be deployed to ECS.
          docker build -t $ECR_REGISTRY/$REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$REPOSITORY:$IMAGE_TAG" &amp;gt;&amp;gt; $GITHUB_OUTPUT    

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: nodejs-app-task-definition.json 
          container-name: nodejs-app
          image: ${{ steps.build-image.outputs.image }}    
      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: nodejs-app-service
          cluster: DevCluster
          wait-for-service-stability: true


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create the task definition file (nodejs-app-task-definition.json) in the root directory of your project, then copy the task definition JSON into it.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k16j7kt8xyqzl385693.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k16j7kt8xyqzl385693.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The AWS credentials used by the workflow are stored as secrets in the GitHub repository settings.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bij4hycxlzm686alof9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bij4hycxlzm686alof9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
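&lt;p&gt;If you prefer to write the task definition file by hand, a minimal Fargate task definition looks roughly like this. The account ID, role ARN, and image URI are placeholders; the family and container name must match what the workflow references:&lt;/p&gt;

```json
{
  "family": "nodejs-app-task-definition",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "nodejs-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nodeapp:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 5000, "protocol": "tcp" }]
    }
  ]
}
```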

&lt;p&gt;After the image is built and pushed to ECR, the service becomes active and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw49curk40265ve6xtvgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw49curk40265ve6xtvgp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the task and open its configuration to find the public IP address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F572e3kmc56x3d6ax5e7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F572e3kmc56x3d6ax5e7j.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ecs</category>
      <category>githubactions</category>
      <category>devops</category>
      <category>fargate</category>
    </item>
    <item>
      <title>Integrating AzureAD to Auth0</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Mon, 15 Apr 2024 01:03:20 +0000</pubDate>
      <link>https://dev.to/olawde/integrating-azuread-to-auth0-1k3i</link>
      <guid>https://dev.to/olawde/integrating-azuread-to-auth0-1k3i</guid>
      <description>&lt;h3&gt;
  
  
  Introduction to Auth0
&lt;/h3&gt;

&lt;p&gt;Auth0 is a versatile plug-and-play solution for adding authentication and authorization services to your application. It saves your company the cost, effort, and risk of developing its own authentication and authorization solution.&lt;/p&gt;

&lt;p&gt;When you build an application, you may want to incorporate user authentication and authorization so that your users can log in either with a username/password or with their social accounts (such as a Microsoft account).&lt;/p&gt;

&lt;p&gt;Integrating Auth0 with Azure Active Directory can be challenging, so I will cover the procedure to follow when implementing the integration. The steps are broken down below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure App In Azure AD&lt;/li&gt;
&lt;li&gt;Create The Client Secret In Azure AD&lt;/li&gt;
&lt;li&gt;Configure API Permissions&lt;/li&gt;
&lt;li&gt;Create Enterprise Connection In Auth0&lt;/li&gt;
&lt;li&gt;Enable Enterprise Connection For Application&lt;/li&gt;
&lt;li&gt;Testing&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Configure App In Azure AD
&lt;/h4&gt;

&lt;p&gt;By now you should have an account registered in the Azure portal. Navigate to Azure AD in the &lt;a href="https://portal.azure.com/" rel="noopener noreferrer"&gt;Azure Portal&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the “App Registrations” button in the side menu.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24by5itbefl2j2kvmfpj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24by5itbefl2j2kvmfpj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Azure AD App Registrations, create a new App Registration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fterfqa7yta69dhix91r0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fterfqa7yta69dhix91r0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You should now see the App Registration screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1spdk4dzk7su3ysgujk7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1spdk4dzk7su3ysgujk7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the name for your application (you can change this later if you get it wrong).&lt;/li&gt;
&lt;li&gt;Select “Accounts in this organizational directory only” (multi-tenant is beyond the scope of this article).&lt;/li&gt;
&lt;li&gt;Configure the redirect URI by selecting “Web” and entering the callback URL https://{your-auth0-tenant}.auth0.com/login/callback (replace {your-auth0-tenant} with your Auth0 tenant name).&lt;/li&gt;
&lt;li&gt;Click “Register”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should now see the newly created app Overview screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wnrvtknj1s7ffaiyqnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wnrvtknj1s7ffaiyqnc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Copy the Application (client) ID from the Overview screen of your newly created app registration; we'll need it later.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create The Client Secret In Azure AD
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Select the “Certificates &amp;amp; Secrets” area from the App registration side menu&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgvks0fge7vw3okxiwsr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgvks0fge7vw3okxiwsr.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the “New client secret” button in the “Client secrets” section.&lt;/li&gt;
&lt;li&gt;You should now see the Client Secret creation dialog&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmai9m46a2u5e3lgnpble.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmai9m46a2u5e3lgnpble.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter a description for the secret.&lt;/li&gt;
&lt;li&gt;Select a “Custom” expiry depending on your preference.&lt;/li&gt;
&lt;li&gt;Click the “Add” button.&lt;/li&gt;
&lt;li&gt;You should now see the new client secret listed in the “Client secrets” section.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Configure API Permissions
&lt;/h4&gt;

&lt;p&gt;We need to configure access to the Microsoft Graph API to retrieve basic user profile and directory information. This is done with delegated permissions, which grant the "User.Read" and "Directory.Read.All" scopes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On your App registration overview screen, click on “API permissions”.&lt;/li&gt;
&lt;li&gt;You should now see the API permissions screen; click “Add a permission”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3xae3mgtj2aqcd7tvz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3xae3mgtj2aqcd7tvz2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under “Microsoft APIs”, click on “Microsoft Graph”.&lt;/li&gt;
&lt;li&gt;You should see that the “Delegated” permission for "User.Read" is already configured by default. Follow the steps below to add “Directory.Read.All”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2hzou2st2mtvx9y7rkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2hzou2st2mtvx9y7rkl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the search text field under the “Select Permissions” heading, enter "Directory.Read.All". Tick the checkbox next to the “Directory.Read.All” permission.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckt48npp5elf39dbb7m1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckt48npp5elf39dbb7m1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the “Add Permissions” button.&lt;/li&gt;
&lt;/ul&gt;
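&lt;p&gt;The delegated permissions configured above end up as OAuth scopes in the authorization request. As a rough sketch (the tenant, client ID, and redirect URI below are placeholders, not values from this walkthrough):&lt;/p&gt;

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own tenant and app registration.
TENANT_ID = "contoso.onmicrosoft.com"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"

# The delegated scopes granted in the steps above.
SCOPES = ["User.Read", "Directory.Read.All"]

def build_authorize_url(tenant_id, client_id, redirect_uri):
    """Build an Azure AD v2.0 authorization URL requesting the delegated scopes."""
    base = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize"
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(SCOPES),
    }
    return base + "?" + urlencode(params)

url = build_authorize_url(TENANT_ID, CLIENT_ID, "https://example.com/callback")
print(url)
```

&lt;p&gt;On first login, Azure AD prompts the signed-in user (or an admin) to consent to these scopes.&lt;/p&gt;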

&lt;h4&gt;
  
  
  Create Enterprise Connection in Auth0
&lt;/h4&gt;

&lt;p&gt;For this step you need to be logged in to your Auth0 tenant dashboard, and it is assumed you already have an application running in Auth0. We create an enterprise connection in Auth0 to connect Azure AD to the application in the Auth0 tenant.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click Authentication &amp;gt; Enterprise&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5dglq2pev5bzogoi4l6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5dglq2pev5bzogoi4l6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the + button next to Azure AD&lt;/li&gt;
&lt;li&gt;You should see the New Azure AD connection screen&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c3f7tblq2mnga4e4eak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c3f7tblq2mnga4e4eak.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the connection name (it needs to be unique).&lt;/li&gt;
&lt;li&gt;Enter your domain in the Microsoft Azure AD Domain field.&lt;/li&gt;
&lt;li&gt;Enter your Azure AD app registration's Application (client) ID and the Client Secret value. You should have saved these while creating your Azure app registration.&lt;/li&gt;
&lt;li&gt;Leave everything else at its default&lt;/li&gt;
&lt;li&gt;Click the “Create” button&lt;/li&gt;
&lt;/ul&gt;
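&lt;p&gt;If you prefer automation, the same connection can also be created through the Auth0 Management API. The sketch below only assembles the request payload; the names and credentials are placeholders, and you would still need a Management API token to send it:&lt;/p&gt;

```python
import json

# Placeholder values -- use the IDs you saved from the Azure app registration.
payload = {
    "name": "my-azure-ad-connection",  # must be unique within the tenant
    "strategy": "waad",                # Auth0's Azure AD connection strategy
    "options": {
        "tenant_domain": "contoso.onmicrosoft.com",
        "client_id": "00000000-0000-0000-0000-000000000000",
        "client_secret": "your-client-secret-value",
    },
}

# This payload would be POSTed to the Management API endpoint
# https://YOUR_TENANT.auth0.com/api/v2/connections with a Management API token.
print(json.dumps(payload, indent=2))
```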

&lt;h4&gt;
  
  
  Enable Enterprise Connection For Application
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Having created the enterprise connection, you should be looking at it already. If not, navigate to Connections &amp;gt; Enterprise &amp;gt; Microsoft Azure AD &amp;gt; Your-Enterprise-Connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpibnx3zo0x1no7nc2he.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpibnx3zo0x1no7nc2he.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the "Applications" tab in the main heading, click on your application and enable the toggle next to it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk1ngofmlq3xki2a74j4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk1ngofmlq3xki2a74j4.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Testing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Open Authentication &amp;gt; Enterprise &amp;gt; Microsoft Azure AD, then click the “Try” button&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tgnp25hs93q249cipom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tgnp25hs93q249cipom.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You should be redirected to the Azure AD login screen&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsvtlh0ala28h3daut01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsvtlh0ala28h3daut01.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After login, accept the permissions request (it may not be shown, depending on how your Azure AD tenant is configured to grant permissions).&lt;/li&gt;
&lt;li&gt;If successful, you should see the “It Works!” message.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lkpf0s9ia54zwnt12in.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lkpf0s9ia54zwnt12in.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>auth0</category>
      <category>azuread</category>
      <category>azure</category>
      <category>authentication</category>
    </item>
    <item>
      <title>Overview of AWS Cost Explorer</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Tue, 09 Apr 2024 01:27:21 +0000</pubDate>
      <link>https://dev.to/aws-builders/overview-of-aws-cost-explorer-252m</link>
      <guid>https://dev.to/aws-builders/overview-of-aws-cost-explorer-252m</guid>
      <description>&lt;h3&gt;
  
  
  Why AWS Cost Explorer?
&lt;/h3&gt;

&lt;p&gt;AWS Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 13 months, forecast how much you're likely to spend for the next 12 months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see trends that you can use to understand your costs (see the &lt;a href="https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html"&gt;AWS Cost Explorer documentation&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;AWS Cost Explorer offers an easy-to-use interface to visualize and understand your AWS cost and usage over time. Previously, Cost Explorer provided up to 13 months of cost and usage data at daily and monthly granularity as a free feature, with an option for hourly granularity over the past 14 days as a paid feature. However, customers requiring multi-year analysis, or insight into cost drivers at the resource level, couldn’t complete these tasks in Cost Explorer. Now, with extended multi-year history and more granular resource-level data within Cost Explorer, customers no longer need to leave Cost Explorer to perform these analyses.&lt;/p&gt;

&lt;p&gt;Cost Explorer now offers the following features for free:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-year data at monthly granularity: you can now access up to 38 months of historical data at monthly granularity, allowing for more comprehensive long-term trend analysis.&lt;/li&gt;
&lt;li&gt;Resource-level data at daily granularity: Cost Explorer offers resource-level data at daily granularity, spanning over the past 14 days, enabling you to dive into your cost drivers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Hourly data, as well as daily resource-level data, is available only for the past 14 days.&lt;/p&gt;
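&lt;p&gt;Programmatically, the same cost and usage data can be queried through the Cost Explorer API. The sketch below assembles a typical GetCostAndUsage request for roughly the last 13 months of monthly costs grouped by service; since calling AWS requires credentials, the boto3 call itself is shown only as a comment:&lt;/p&gt;

```python
from datetime import date, timedelta

# Assemble the request parameters; building the dict needs no AWS credentials.
end = date.today()
start = (end - timedelta(days=396)).replace(day=1)  # roughly 13 months back

params = {
    "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
}

# With credentials configured, the call would be:
#   import boto3
#   ce = boto3.client("ce")
#   response = ce.get_cost_and_usage(**params)
print(params["TimePeriod"])
```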

&lt;h3&gt;
  
  
  Enabling Cost Explorer
&lt;/h3&gt;

&lt;p&gt;You can enable Cost Explorer for your account by opening Cost Explorer for the first time in the AWS Cost Management console. &lt;strong&gt;You can't enable Cost Explorer using the API&lt;/strong&gt;. After you enable Cost Explorer, AWS prepares the data about your costs for the current month and the last 13 months, and then calculates the forecast for the next 12 months. The current month's data is available for viewing in about 24 hours. The rest of your data takes a few days longer. Cost Explorer refreshes your cost data at least once every 24 hours.&lt;br&gt;
You can launch Cost Explorer if your account is a member account in an organization where the management account enabled Cost Explorer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
An account’s status within an organization determines what cost and usage data are visible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A standalone account joins an organization. After this, the account can no longer access cost and usage data from when the account was a standalone account.&lt;/li&gt;
&lt;li&gt;A member account leaves an organization to become a standalone account. After this, the account can no longer access cost and usage data from when the account was a member of the organization. The account can access only the data that's generated as a standalone account.&lt;/li&gt;
&lt;li&gt;A member account leaves organization A to join organization B. After this, the account can no longer access cost and usage data from when the account was a member of organization A. The account can access only the data that's generated as a member of organization B.&lt;/li&gt;
&lt;li&gt;An account rejoins an organization that the account previously belonged to. After this, the account regains access to its historical cost and usage data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  To sign up for Cost Explorer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console and open the AWS Cost Management console.&lt;/li&gt;
&lt;li&gt;In the navigation pane, choose Cost Explorer.&lt;/li&gt;
&lt;li&gt;On the Welcome to Cost Explorer page, choose Launch Cost Explorer.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Starting Cost Explorer
&lt;/h3&gt;

&lt;p&gt;After you enable Cost Explorer, you can launch it from the AWS Cost Management console.&lt;br&gt;
Start Cost Explorer by opening the AWS Cost Management console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To open Cost Explorer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console and open the AWS Cost Management console at &lt;a href="https://console.aws.amazon.com/cost-management/home"&gt;https://console.aws.amazon.com/cost-management/home&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This opens the Cost dashboard that shows you the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your estimated costs for the month to date&lt;/li&gt;
&lt;li&gt;Your forecasted costs for the month&lt;/li&gt;
&lt;li&gt;A graph of your daily costs&lt;/li&gt;
&lt;li&gt;Your five top cost trends&lt;/li&gt;
&lt;li&gt;A list of reports that you recently viewed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  To set up multi-year and granular data
&lt;/h3&gt;

&lt;p&gt;Enable multi-year data at monthly granularity and resource-level data at daily granularity&lt;br&gt;
Using the management account, you can enable multi-year data and granular data in Cost Explorer. You do this in the Cost Management preferences in the console.&lt;/p&gt;

&lt;p&gt;However, in order to enable multi-year and granular data, you first need to manage access to view and edit your Cost Management preferences.&lt;/p&gt;

&lt;p&gt;You can enable multi-year data at monthly granularity and resource-level data at daily granularity from the Cost Management preferences page, which is available to the management account of your organization. Once these features are enabled, all accounts in your organization can use them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Enabling multi-year data at monthly granularity: select the checkbox to enable this feature. Once enabled, your data should be available in Cost Explorer within 48 hours.&lt;/p&gt;

&lt;p&gt;Enabling resource-level data at daily granularity: You can select specific services you want to enable resource data for. The services are listed in the order of their contribution to your AWS bill, with the most expensive service on top. Once enabled, your data will be available in Cost Explorer within 48 hours.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruls570pyizu8jy5ksx4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruls570pyizu8jy5ksx4.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to the AWS Management Console and open the AWS Cost Management console at &lt;a href="https://console.aws.amazon.com/cost-management/home"&gt;https://console.aws.amazon.com/cost-management/home&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;In the navigation pane, choose Cost Management preferences.&lt;/li&gt;
&lt;li&gt;To get historical data for up to 38 months, select Multi-year data at monthly granularity.&lt;/li&gt;
&lt;li&gt;To enable resource-level or hourly granular data, consider the following options:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Hourly granularity&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select Cost and usage data for all AWS services at hourly granularity to get hourly data for all AWS services without resource-level data.&lt;/li&gt;
&lt;li&gt;Select EC2-Instances (Elastic Compute Cloud) resource-level data to track EC2 cost and usage at instance level at hourly granularity.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Daily granularity&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select Resource-level data at daily granularity to get resource-level data for individual or all AWS services.&lt;/li&gt;
&lt;li&gt;Choose services from the AWS services at daily granularity dropdown list that you want to enable resource-level data for.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Enabling historical data display for 38 months
&lt;/h3&gt;

&lt;p&gt;Your business, applications, and architecture have matured in the past few years, and you may be wondering how your AWS spend has evolved along with them. You can now perform this analysis for the past three years in Cost Explorer and get a better understanding of your year-over-year or quarter-over-quarter spend patterns. In Cost Explorer, you can select a start date within the past three years and set an end date up to the present day to create a multi-year view. You can filter and group this data by various dimensions, such as service, account, Region, and usage type, to perform comprehensive analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxu6ul83manjcrveg54s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxu6ul83manjcrveg54s.png" alt="Image description" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Enable Resource-level data at daily granularity
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Hourly granularity (up to 14 days of past data) is a paid feature (see the &lt;a href="https://docs.aws.amazon.com/cost-management/latest/userguide/ce-hourly-granularity.html"&gt;hourly granularity documentation&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;You have noticed variance in your Lambda spend in the past two weeks and you are wondering what is causing that at the resource level. You can now perform this analysis in Cost Explorer and pinpoint the exact Lambda functions responsible for the variance. You can then discuss these functions with respective teams to differentiate intended from unintended spend.&lt;/p&gt;

&lt;p&gt;You can filter Cost Explorer for Lambda service to focus on Lambda cost and usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd74a7a0zpodm2js4xadb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd74a7a0zpodm2js4xadb.png" alt="Image description" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscostexplorer</category>
      <category>billing</category>
      <category>costmanagement</category>
    </item>
    <item>
      <title>Kinesis Producers</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Mon, 01 Apr 2024 06:28:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/kinesis-producers-43cp</link>
      <guid>https://dev.to/aws-builders/kinesis-producers-43cp</guid>
      <description>&lt;h4&gt;
  
  
  Kinesis Producers
&lt;/h4&gt;

&lt;p&gt;A producer for Amazon Kinesis Data Streams is an application that feeds user data records into a Kinesis data stream (also called data ingestion). The Kinesis Producer Library (KPL) makes it easier to construct producer applications by allowing developers to achieve high write throughput to a Kinesis data stream.&lt;/p&gt;

&lt;p&gt;There are different methods to stream data into Amazon Kinesis Data Streams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kinesis SDK &lt;/li&gt;
&lt;li&gt;Kinesis Producer Library (KPL) &lt;/li&gt;
&lt;li&gt;Kinesis Agent &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Other third-party options include Spark, Log4j appenders, Flume, Kafka Connect, and NiFi.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kinesis Producer SDK - PutRecord(s)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PutRecord (one record) and PutRecords (many records) APIs are utilized.&lt;/li&gt;
&lt;li&gt;PutRecords leverages batching and enhances performance, resulting in fewer HTTP calls.&lt;/li&gt;
&lt;li&gt;AWS Mobile SDKs: Android, iOS, etc...&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Managed Amazon Web Services sources for Kinesis Data Streams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS IoT&lt;/li&gt;
&lt;li&gt;CloudWatch Logs&lt;/li&gt;
&lt;li&gt;Kinesis Data Analytics&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Use cases:&lt;br&gt;
low throughput, higher latency, simple API, AWS Lambda&lt;/p&gt;
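&lt;p&gt;A minimal sketch of the PutRecords batching described above, using the AWS SDK for Python (boto3). The stream name and record contents are made up for illustration, and the actual call is left as a comment since it needs AWS credentials:&lt;/p&gt;

```python
import json

# Build a small batch of records; a well-distributed partition key
# helps avoid hot shards.
records = [
    {
        "Data": json.dumps({"sensor": i, "reading": 20 + i}).encode("utf-8"),
        "PartitionKey": f"sensor-{i}",
    }
    for i in range(3)
]

request = {"StreamName": "my-example-stream", "Records": records}

# With credentials configured, send the whole batch in one HTTP call:
#   import boto3
#   kinesis = boto3.client("kinesis")
#   response = kinesis.put_records(**request)
print(len(request["Records"]))
```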

&lt;p&gt;&lt;strong&gt;Kinesis Producer Library (KPL)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to use and highly configurable C++/Java library&lt;/li&gt;
&lt;li&gt;Used for building high-performance, long-running producers&lt;/li&gt;
&lt;li&gt;Automated and configurable retry mechanism&lt;/li&gt;
&lt;li&gt;Synchronous or Asynchronous APIs (better performance for async)&lt;/li&gt;
&lt;li&gt;Submits metrics to CloudWatch for monitoring.&lt;/li&gt;
&lt;li&gt;Batching (both mechanisms on by default) – increases throughput and decreases cost:

&lt;ul&gt;
&lt;li&gt;Collection – writes records destined for multiple shards in the same PutRecords API call.&lt;/li&gt;
&lt;li&gt;Aggregation – packs multiple records into a single stream record, at the cost of increased latency.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Kinesis Producer Library (KPL) Batching&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RecordMaxBufferedTime (default 100 ms) introduces a small delay that lets the KPL accumulate records, which improves batching efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g6ejvli63p9yjhcc4h9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g6ejvli63p9yjhcc4h9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; When not to use the Kinesis Producer Library &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The KPL can incur an additional processing delay of up to &lt;strong&gt;RecordMaxBufferedTime&lt;/strong&gt; within the library (user-configurable) &lt;/li&gt;
&lt;li&gt;Larger values of &lt;strong&gt;RecordMaxBufferedTime&lt;/strong&gt; result in higher packing efficiency and better performance&lt;/li&gt;
&lt;li&gt;Applications that cannot tolerate this additional delay may need to use the AWS SDK directly &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3724rg1ipbqpj00u391.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3724rg1ipbqpj00u391.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Kinesis Agent
&lt;/h4&gt;

&lt;p&gt;The Kinesis Agent monitors log files and sends them to Kinesis Data Streams.&lt;br&gt;
It is a Java-based agent built on top of the KPL.&lt;br&gt;
Install it in Linux-based server environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Watches multiple directories and writes to multiple streams&lt;/li&gt;
&lt;li&gt;Routing feature based on directory/log file&lt;/li&gt;
&lt;li&gt;Pre-process data before sending to streams (single line, CSV to JSON, log to JSON)&lt;/li&gt;
&lt;li&gt;The agent handles file rotation, checkpointing, and retry upon failures&lt;/li&gt;
&lt;li&gt;Emits metrics to CloudWatch for monitoring&lt;/li&gt;
&lt;/ul&gt;
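&lt;p&gt;The agent reads its settings from /etc/aws-kinesis/agent.json. The sketch below builds an example configuration as a Python dict for readability; the file pattern and stream name are placeholders for your environment:&lt;/p&gt;

```python
import json

# An example agent configuration: tail one directory of logs, convert
# Apache-style lines to JSON, and send them to one stream.
agent_config = {
    "cloudwatch.emitMetrics": True,
    "flows": [
        {
            "filePattern": "/var/log/app/*.log",
            "kinesisStream": "my-example-stream",
            "dataProcessingOptions": [
                {"optionName": "LOGTOJSON", "logFormat": "COMMONAPACHELOG"}
            ],
        }
    ],
}

# json.dumps produces the shape you would write to agent.json.
print(json.dumps(agent_config, indent=2))
```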

&lt;h4&gt;
  
  
  AWS Kinesis API - Exceptions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;ProvisionedThroughputExceededException&lt;/li&gt;
&lt;li&gt;Happens when you send data faster than a shard allows (exceeding the MB/s or TPS limit for any shard)&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure you don't have a hot shard (for example, a poorly chosen partition key that sends too much data to one shard). Solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retries with backoff &lt;/li&gt;
&lt;li&gt;Increase shards (scaling) &lt;/li&gt;
&lt;li&gt;Ensure your partition key is a good one &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
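&lt;p&gt;The "retries with backoff" solution can be sketched as exponential backoff with full jitter; the base delay, cap, and retry count below are illustrative choices, not Kinesis defaults:&lt;/p&gt;

```python
import random

# Retries with backoff: delay doubles per attempt, capped, with full jitter
# so many throttled producers don't retry in lockstep.
def backoff_delays(max_retries=5, base=0.1, cap=5.0):
    """Yield a jittered delay (in seconds) for each retry attempt."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield random.uniform(0, delay)  # full jitter

delays = list(backoff_delays())
print(delays)
```

&lt;p&gt;A producer would sleep for each yielded delay before re-sending the failed records from the PutRecords response.&lt;/p&gt;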

</description>
      <category>kinesisproducer</category>
      <category>datastream</category>
      <category>analytics</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Cost Optimization Hub</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Mon, 25 Mar 2024 08:34:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cost-optimization-hub-8io</link>
      <guid>https://dev.to/aws-builders/aws-cost-optimization-hub-8io</guid>
      <description>&lt;h2&gt;
  
  
  Overview of Cost Optimization Hub
&lt;/h2&gt;

&lt;p&gt;Cost Optimization Hub is an AWS Billing and Cost Management feature that helps you consolidate and prioritize cost optimization recommendations across your AWS accounts and AWS Regions, so that you can get the most out of your AWS spend.&lt;/p&gt;

&lt;p&gt;Cost Optimization Hub provides the following main benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically identify and consolidate your AWS cost optimization opportunities.&lt;/li&gt;
&lt;li&gt;Quantify estimated savings that incorporate your AWS pricing and discounts.&lt;/li&gt;
&lt;li&gt;Aggregate and deduplicate savings across related cost optimization opportunities.&lt;/li&gt;
&lt;li&gt;Prioritize your cost optimization recommendations with filtering, sorting, and grouping.&lt;/li&gt;
&lt;li&gt;Measure and benchmark your cost efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Accounts supported by Cost Optimization Hub
&lt;/h4&gt;

&lt;p&gt;The following AWS account types can opt in to Cost Optimization Hub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standalone AWS account&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A standalone AWS account that doesn't have AWS Organizations enabled. For example, if you opt in to Cost Optimization Hub while signed in to a standalone account, Cost Optimization Hub identifies cost optimization opportunities and consolidates recommendations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Member account of an organization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An AWS account that's a member of an organization. If you opt in to Cost Optimization Hub while signed in to a member account of an organization, Cost Optimization Hub identifies cost optimization opportunities and consolidates recommendations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Management account of an organization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An AWS account that administers an organization. If you opt in to Cost Optimization Hub while signed in to a management account of an organization, Cost Optimization Hub gives you the option to opt in the management account only, or the management account and all member accounts of the organization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; To opt in all member accounts for an organization, make sure that the organization has all features enabled. For more information, see Enabling All Features in Your Organization&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started with Cost Optimization Hub
&lt;/h2&gt;

&lt;p&gt;When you access Cost Optimization Hub for the first time, you're asked to opt in using the account that you’re signed in with; you must opt in before you can use the feature.&lt;br&gt;
You can also opt in using the Cost Optimization Hub API, AWS Command Line Interface (AWS CLI), or SDKs.&lt;/p&gt;
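&lt;p&gt;As a sketch of opting in programmatically, the parameters for the hub's UpdateEnrollmentStatus operation look roughly like this (the call itself needs AWS credentials, so it is shown only as a comment):&lt;/p&gt;

```python
# Assemble the enrollment parameters; no credentials needed for this part.
params = {
    "status": "Active",             # "Inactive" opts the account back out
    "includeMemberAccounts": True,  # from a management account: opt in all members
}

# With credentials configured (recommendations are stored in us-east-1):
#   import boto3
#   hub = boto3.client("cost-optimization-hub", region_name="us-east-1")
#   hub.update_enrollment_status(**params)
print(params)
```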

&lt;p&gt;By opting in, you authorize Cost Optimization Hub to import cost optimization recommendations generated by multiple AWS services in your account and all member accounts of your organization. These include rightsizing recommendations from AWS Compute Optimizer and Savings Plans recommendations from AWS Billing and Cost Management. These recommendations are saved in the US East (N. Virginia) Region.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enabling Cost Optimization Hub
&lt;/h3&gt;

&lt;h4&gt;
  
  
  To enable Cost Optimization Hub
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console.&lt;/li&gt;
&lt;li&gt;In the navigation pane, choose Cost Optimization Hub.&lt;/li&gt;
&lt;li&gt;On the Cost Optimization Hub page, choose your relevant organization and member account settings:

&lt;ul&gt;
&lt;li&gt;Enable Cost Optimization Hub for this account and all member accounts: Recommendations in this account and all member accounts will be imported into Cost Optimization Hub.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzz6f5bkpo3z58p3kz4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzz6f5bkpo3z58p3kz4z.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose Enable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After you enable Cost Optimization Hub, AWS starts to import cost optimization recommendations from various AWS products, such as AWS Compute Optimizer. It can take as long as 24 hours for Cost Optimization Hub to import recommendations for all supported AWS resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  Accessing the console
&lt;/h4&gt;

&lt;p&gt;When your setup is complete, access Cost Optimization Hub.&lt;/p&gt;

&lt;p&gt;To access Cost Optimization Hub&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console&lt;/li&gt;
&lt;li&gt;In the navigation pane, choose Cost Optimization Hub.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Opting out of Cost Optimization Hub
&lt;/h4&gt;

&lt;p&gt;You can opt out of Cost Optimization Hub at any time. However, the management account can't opt out all member accounts; each member account needs to opt out individually.&lt;/p&gt;

&lt;p&gt;To opt out of Cost Optimization Hub&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console.&lt;/li&gt;
&lt;li&gt;In the navigation pane, choose Cost Management Preferences.&lt;/li&gt;
&lt;li&gt;In Preferences, choose Cost Optimization Hub.&lt;/li&gt;
&lt;li&gt;On the Cost Optimization Hub tab, clear Enable Cost Optimization Hub.&lt;/li&gt;
&lt;li&gt;Choose Save preferences.&lt;/li&gt;
&lt;/ul&gt;
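
&lt;p&gt;The same opt-out can be performed from the CLI in each member account (again assuming a recent AWS CLI):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Opt this account out of Cost Optimization Hub
aws cost-optimization-hub update-enrollment-status --status Inactive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;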

&lt;h3&gt;
  
  
  Viewing your cost optimization opportunities
&lt;/h3&gt;

&lt;p&gt;Cost optimization findings for your resources are displayed on the Cost Optimization Hub dashboard. You can use this dashboard to filter cost optimization opportunities and aggregate estimated savings. You can compare your total savings opportunities against your previous month's AWS spend.&lt;/p&gt;

&lt;p&gt;Use the dashboard to group your savings opportunities by AWS account, AWS Region, resource types, and tags. View the distribution of your savings opportunities, explore the recommended actions, and identify the areas with the most savings opportunities. The dashboard is refreshed daily and all costs reflect your usage up to the previous day. For example, if today is December 2, the data includes your usage through December 1.&lt;/p&gt;

&lt;h4&gt;
  
  
  Viewing the dashboard
&lt;/h4&gt;

&lt;p&gt;Use the following procedure to view the dashboard and your cost optimization opportunities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console and open the AWS Billing and Cost Management console.&lt;/li&gt;
&lt;li&gt;In the navigation pane, choose Cost Optimization Hub.
By default, the dashboard displays an overview of cost optimization opportunities for AWS resources across all AWS Regions in the account that you're currently signed in to.&lt;/li&gt;
&lt;li&gt;You can perform the following actions on the dashboard:

&lt;ul&gt;
&lt;li&gt;To view the cost optimization findings for a particular AWS Region in the account, choose the Region in the chart.&lt;/li&gt;
&lt;li&gt;To view the cost optimization findings for resources in a particular account, under Aggregate estimated savings by, choose AWS account, and then choose an account ID in the chart.&lt;/li&gt;
&lt;li&gt;To view cost optimization findings by resource type, under Aggregate estimated savings by, choose Resource type.&lt;/li&gt;
&lt;li&gt;To view recommended actions, under Aggregate estimated savings by, choose Recommended action.&lt;/li&gt;
&lt;li&gt;To filter findings on the dashboard, under Filter, choose from the filter options.&lt;/li&gt;
&lt;li&gt;To go to the list of resources available for optimization, choose View opportunities.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
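
&lt;p&gt;The same groupings are available programmatically (a sketch; the exact &lt;code&gt;--group-by&lt;/code&gt; values, such as &lt;code&gt;ResourceType&lt;/code&gt;, are assumptions to verify against the API reference):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Summarize estimated savings grouped by resource type
aws cost-optimization-hub list-recommendation-summaries --group-by ResourceType

# List the individual optimization opportunities
aws cost-optimization-hub list-recommendations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;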

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5jeb22hg4j6lzhc8r94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5jeb22hg4j6lzhc8r94.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Switching the dashboard view
&lt;/h4&gt;

&lt;p&gt;The Cost Optimization Hub dashboard provides two styles for viewing your cost optimization opportunities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chart view&lt;/li&gt;
&lt;li&gt;Table view&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can switch between them by choosing one of the views in the top right corner of the chart or table.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reference
&lt;/h4&gt;

&lt;p&gt;For more details, you can also reference the AWS announcement post &lt;a href="https://aws.amazon.com/blogs/aws/new-cost-optimization-hub-to-find-all-recommended-actions-in-one-place-for-saving-you-money/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>costmanagement</category>
      <category>costoptimizationhub</category>
    </item>
    <item>
      <title>Migrating repository from github to gitlab</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Tue, 19 Mar 2024 05:15:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrating-repository-from-github-to-gitlab-3ape</link>
      <guid>https://dev.to/aws-builders/migrating-repository-from-github-to-gitlab-3ape</guid>
<description>&lt;p&gt;I will be showing how I migrated a repository from GitHub to GitLab. There are two methods:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1&lt;/strong&gt;&lt;br&gt;
First, you can do it in the console: go to GitLab, authenticate with your GitHub credentials there, and you will be able to import any repositories you want into GitLab.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your GitLab, and create a new project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht6bmffwmni06sqwi4tt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht6bmffwmni06sqwi4tt.png" alt="Image description" width="800" height="98"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the import project option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7s8q3mm3jlh40orteygw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7s8q3mm3jlh40orteygw.png" alt="Image description" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the GitHub option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil323aej2sv3qo3pw22c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil323aej2sv3qo3pw22c.png" alt="Image description" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You will be prompted to authorize GitLab's access to your GitHub account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm40ibarf4ls84xewnoux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm40ibarf4ls84xewnoux.png" alt="Image description" width="507" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can import all repositories or choose specific repositories to be migrated. There is also an option to import issues and pull request events.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jlkbk22nin6dffh35je.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jlkbk22nin6dffh35je.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2&lt;/strong&gt;&lt;br&gt;
For the second option, suppose your GitHub repository is in a private network and you want to take one repository and push it to GitLab, where GitLab is on some other network or hosted in some other cloud environment. How do you migrate in this case?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the GitHub repository you want to migrate using
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone --bare &amp;lt;GitHub URL&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wrjc4cr97hay4p1qurq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wrjc4cr97hay4p1qurq.png" alt="Image description" width="587" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new project in GitLab, and choose the Create blank project option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmaj8jvitb0s6zgdp9d8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmaj8jvitb0s6zgdp9d8.png" alt="Image description" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give the project a name and click Create project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg4eeufoxzlol9mdvbr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg4eeufoxzlol9mdvbr3.png" alt="Image description" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Push the cloned repository to GitLab.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push --mirror &amp;lt;GitLab URL&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
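
&lt;p&gt;Putting the two commands together, the whole migration looks like this (a sketch; replace the placeholder URLs with your own, and note that &lt;code&gt;repo.git&lt;/code&gt; is just a temporary directory name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Bare-clone the source repository (all branches and tags, no working tree)
git clone --bare &amp;lt;GitHub URL&amp;gt; repo.git
cd repo.git

# 2. Mirror-push everything to the new GitLab project
git push --mirror &amp;lt;GitLab URL&amp;gt;

# 3. Remove the temporary bare clone
cd ..
rm -rf repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;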



&lt;p&gt;&lt;strong&gt;Error Alert:&lt;/strong&gt; While pushing to GitLab, I encountered an error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjm5sffzujv6f58feai1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjm5sffzujv6f58feai1a.png" alt="Image description" width="577" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How did I solve it?&lt;/strong&gt;&lt;br&gt;
Go to the project: "Settings" → "Repository" → "Expand" next to "Protected branches", and click "Unprotect".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3g3ezoas7psj0mrq94q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3g3ezoas7psj0mrq94q.png" alt="Image description" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the GitHub repository is successfully migrated to GitLab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmo37n3x4z2xg5ooo02dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmo37n3x4z2xg5ooo02dg.png" alt="Image description" width="577" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>gitlab</category>
      <category>git</category>
    </item>
    <item>
      <title>Add alternate contacts to AWS Organization member accounts programmatically</title>
      <dc:creator>Olawale Adepoju</dc:creator>
      <pubDate>Tue, 12 Mar 2024 07:04:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/add-alternate-contacts-to-aws-organization-member-accounts-programmatically-4ipd</link>
      <guid>https://dev.to/aws-builders/add-alternate-contacts-to-aws-organization-member-accounts-programmatically-4ipd</guid>
<description>&lt;p&gt;Managing the alternate contacts (billing, operations, and security) on your member accounts in AWS Organizations can be daunting, especially when there is a large number of member accounts in the organization. Entering them one after the other is tedious, so I will be showing how to set the same alternate contacts across all of your accounts programmatically across the Organization.&lt;/p&gt;

&lt;h4&gt;Why Alternate Contacts?&lt;/h4&gt;

&lt;p&gt;Mostly, we want the right people to receive AWS notifications regarding billing, operations, and security on all of your accounts, so that your Cloud Center of Excellence (CCoE) team can receive important notifications about your AWS accounts and take due action.&lt;br&gt;
Managing alternate contacts becomes even more important as your organization scales to hundreds or thousands of accounts; automating it saves you time and reduces operational burden.&lt;br&gt;
We’re going to use &lt;a href="https://aws.amazon.com/cloudshell/"&gt;AWS CloudShell&lt;/a&gt;, a browser-based shell that is automatically authenticated with your AWS console credentials and accessible via the upper navigation bar of the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
First, make sure that the AWS Identity and Access Management (IAM) user or role you want to manage alternate contacts with has the following permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;account:GetAlternateContact – allows the user to view the current alternate contact&lt;/li&gt;
&lt;li&gt;account:PutAlternateContact – allows the user to set a new alternate contact&lt;/li&gt;
&lt;li&gt;account:DeleteAlternateContact – allows the user to delete an alternate contact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Better still, you can grant the requisite permissions to manage alternate contacts by attaching the &lt;strong&gt;AWSAccountManagementFullAccess&lt;/strong&gt; managed policy to your IAM user or role.&lt;/p&gt;
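
&lt;p&gt;For example, attaching the managed policy to a role could look like this (a sketch; &lt;code&gt;MyAdminRole&lt;/code&gt; is a placeholder role name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attach the AWS-managed policy to an existing IAM role
aws iam attach-role-policy \
  --role-name MyAdminRole \
  --policy-arn arn:aws:iam::aws:policy/AWSAccountManagementFullAccess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;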

&lt;p&gt;Next, you’ll need to enable the AWS Account Management service for your organization so you can centrally manage alternate contacts. You can do this by using this CLI command from the management account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws organizations enable-aws-service-access --service-principal account.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, you can register a delegated administrator so users don’t need access to the management account to manage alternate contacts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws organizations register-delegated-administrator --account-id &amp;lt;YOUR-CHOSEN-ACCOUNT-ID&amp;gt; --service-principal account.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
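
&lt;p&gt;You can then confirm the delegated administrator registration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List delegated administrators for the Account Management service
aws organizations list-delegated-administrators --service-principal account.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;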



&lt;h4&gt;Automating the Alternate Contacts&lt;/h4&gt;

&lt;p&gt;loop-accounts.sh – This script gathers a list of all accounts in your organization and then executes the security-contact.sh script for each one. Paste the script into your CloudShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; loop-accounts.sh
#! /bin/bash
    managementaccount=\`aws organizations describe-organization --query Organization.MasterAccountId --output text\`

    for account in \$(aws organizations list-accounts --query 'Accounts[].Id' --output text); do

            if [ "\$managementaccount" = "\$account" ]  # string comparison; safe for 12-digit account IDs
                     then
                         echo 'Skipping management account.'
                         continue
            fi
            ./security-contact.sh -a \$account
            sleep 0.2
    done
EOF
chmod 755 loop-accounts.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The management account is explicitly excluded from the account list. This is because alternate contacts for the management account can only be modified using the standalone context, not the organization context.&lt;/p&gt;

&lt;p&gt;security-contact.sh – This script sets the security alternate contact on a member account in the AWS Organization. Paste the script into your CloudShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; security-contact.sh
#! /bin/bash
while getopts a: flag
do
    case "\${flag}" in
        a) account_id=\${OPTARG};;
    esac
done

echo 'Put security contact for account '\$account_id'...'
aws account put-alternate-contact \
  --account-id \$account_id \
  --alternate-contact-type=SECURITY \
  --email-address=mysecurity-contact@example.com \
  --phone-number="+1(111)222-3333" \
  --title="Security Contact" \
  --name="My Name"
echo 'Done putting security contact for account '\$account_id'.'

EOF
chmod 755 security-contact.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;FYI:&lt;/strong&gt; Make sure to replace the contact details above with your actual contact information.&lt;/p&gt;
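
&lt;p&gt;Once the scripts have run, you can spot-check a member account with &lt;code&gt;get-alternate-contact&lt;/code&gt; (the account ID below is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Verify the security alternate contact set on a member account
aws account get-alternate-contact \
  --account-id 111122223333 \
  --alternate-contact-type SECURITY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;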

</description>
      <category>aws</category>
      <category>awsorganization</category>
    </item>
  </channel>
</rss>
