<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Wojciech Kaczmarczyk</title>
    <description>The latest articles on DEV Community by Wojciech Kaczmarczyk (@wojciech_piotrka_4898763).</description>
    <link>https://dev.to/wojciech_piotrka_4898763</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2374775%2F21cb2b98-f3ea-47a9-b424-e03956c3234a.png</url>
      <title>DEV Community: Wojciech Kaczmarczyk</title>
      <link>https://dev.to/wojciech_piotrka_4898763</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wojciech_piotrka_4898763"/>
    <language>en</language>
    <item>
      <title>Accelerate AI Workloads with Amazon EC2 Trn1 Instances and AWS Neuron SDK</title>
      <dc:creator>Wojciech Kaczmarczyk</dc:creator>
      <pubDate>Fri, 22 Nov 2024 13:36:32 +0000</pubDate>
      <link>https://dev.to/wojciech_piotrka_4898763/accelerate-ai-workloads-with-amazon-ec2-trn1-instances-and-aws-neuron-sdk-24cc</link>
      <guid>https://dev.to/wojciech_piotrka_4898763/accelerate-ai-workloads-with-amazon-ec2-trn1-instances-and-aws-neuron-sdk-24cc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As machine learning models grow in complexity, the need for cost-effective and high-performance infrastructure becomes crucial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon EC2 Trn1 Instances&lt;/strong&gt;, powered by AWS-designed Trainium chips, and the &lt;strong&gt;AWS Neuron SDK&lt;/strong&gt; offer a powerful combination to accelerate deep learning training workloads.&lt;/p&gt;

&lt;p&gt;These solutions are designed to deliver exceptional performance, scalability, and cost savings, making them ideal for &lt;strong&gt;AI developers&lt;/strong&gt; and &lt;strong&gt;data scientists&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This article explores the key benefits and features of &lt;strong&gt;Trn1&lt;/strong&gt; instances and the &lt;strong&gt;Neuron SDK&lt;/strong&gt;, along with guidance on getting started using &lt;strong&gt;AWS SageMaker&lt;/strong&gt;, &lt;strong&gt;Deep Learning AMIs&lt;/strong&gt;, and &lt;strong&gt;Neuron Containers&lt;/strong&gt; to supercharge your AI workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon EC2 Trn1 Instances&lt;/strong&gt; and the &lt;strong&gt;AWS Neuron SDK&lt;/strong&gt; deliver unparalleled performance and cost efficiency for training deep learning models.&lt;/p&gt;

&lt;p&gt;Built on AWS-designed Trainium chips, Trn1 instances provide up to &lt;strong&gt;50% lower training costs&lt;/strong&gt; compared to &lt;strong&gt;GPU-based&lt;/strong&gt; instances, making them ideal for organizations aiming to scale AI projects efficiently. Their high-speed interconnect and optimization with the &lt;strong&gt;Neuron SDK&lt;/strong&gt; ensure faster training times, enabling quicker insights and innovation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Amazon EC2 Trn1 Instances:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Trainium Chips&lt;/strong&gt;: Designed specifically for AI/ML training workloads, delivering high performance and energy efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High-Speed Networking&lt;/strong&gt;: Powered by AWS Elastic Fabric Adapter (EFA) for ultra-fast interconnect, supporting distributed training across multiple nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Supports up to 16 Trainium accelerators per instance, making it suitable for massive datasets and complex models.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Framework Compatibility&lt;/strong&gt;: Works seamlessly with popular ML frameworks like TensorFlow and PyTorch via the Neuron SDK.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Neuron SDK:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Optimization&lt;/strong&gt;: Includes libraries, compilers, and runtime tools for training and deploying models on Trainium and Inferentia chips.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Framework Integration&lt;/strong&gt;: Offers optimized plugins for TensorFlow, PyTorch, and Hugging Face Transformers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Profiling and Debugging Tools&lt;/strong&gt;: Enables users to fine-tune performance, ensuring efficient use of resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  AWS SageMaker
&lt;/h3&gt;

&lt;p&gt;Amazon SageMaker &lt;strong&gt;simplifies&lt;/strong&gt; building, training, and deploying machine learning models on Trn1 instances. It provides &lt;strong&gt;pre-configured environments&lt;/strong&gt;, easy integration with the Neuron SDK, and a fully &lt;strong&gt;managed experience&lt;/strong&gt; for distributed training.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Deep Learning AMIs
&lt;/h3&gt;

&lt;p&gt;AWS Deep Learning AMIs come pre-installed with the Neuron SDK, popular ML frameworks, and tools, &lt;strong&gt;allowing&lt;/strong&gt; developers to quickly set up &lt;strong&gt;environments&lt;/strong&gt; for training and inference on Trn1 instances.&lt;/p&gt;

&lt;h3&gt;
  
  
  Neuron Containers
&lt;/h3&gt;

&lt;p&gt;Neuron Containers are &lt;strong&gt;Docker&lt;/strong&gt; images optimized for Trainium and Inferentia-based workloads. They provide ready-to-use environments for running training jobs in containerized workflows, supporting &lt;strong&gt;Kubernetes&lt;/strong&gt; and &lt;strong&gt;ECS&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Practice
&lt;/h2&gt;

&lt;p&gt;Play around with Amazon SageMaker Studio by following this video tutorial:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=oBx_o57gDGY" rel="noopener noreferrer"&gt;&lt;strong&gt;Getting started on Amazon SageMaker Studio | Amazon Web Services&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Explore the Neuron SDK:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/models/training-trn1-samples.html#model-samples-training-trn1" rel="noopener noreferrer"&gt;Find training samples&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Explore the AWS Neuron samples GitHub repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/training/hf_text_classification/BertBaseCased.ipynb" rel="noopener noreferrer"&gt;BERT text classification training sample&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/README.md" rel="noopener noreferrer"&gt;aws-neuron-samples repository README&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To explore more, dive into the &lt;a href="https://aws.amazon.com/ec2/instance-types/trn1/" rel="noopener noreferrer"&gt;AWS EC2 Trn1 Documentation&lt;/a&gt; and the &lt;a href="https://aws.amazon.com/machine-learning/neuron/" rel="noopener noreferrer"&gt;AWS Neuron SDK Guide&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sagemaker</category>
      <category>deeplearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>RAG based Generative AI using Amazon Bedrock Knowledge Base</title>
      <dc:creator>Wojciech Kaczmarczyk</dc:creator>
      <pubDate>Fri, 22 Nov 2024 12:48:39 +0000</pubDate>
      <link>https://dev.to/wojciech_piotrka_4898763/rag-based-generative-ai-using-amazon-bedrock-knowledge-base-1h9o</link>
      <guid>https://dev.to/wojciech_piotrka_4898763/rag-based-generative-ai-using-amazon-bedrock-knowledge-base-1h9o</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;Generative AI chatbots, powered by advanced language models, offer natural, contextual, and versatile conversations by dynamically generating responses.&lt;/p&gt;

&lt;p&gt;Unlike traditional chatbots, they utilize techniques like transformers, attention mechanisms, and reinforcement learning to enhance coherence and relevance.&lt;/p&gt;

&lt;p&gt;These capabilities make them ideal for customer service, virtual assistance, and creative tasks like content generation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Retrieval Augmented Generation (RAG) enhances language models by integrating external knowledge retrieval with their generation process.&lt;/p&gt;

&lt;p&gt;Using vector embeddings to find relevant information from a knowledge base, RAG combines this data with the model's outputs to produce more accurate, context-aware, and informed responses.&lt;/p&gt;

&lt;p&gt;This approach excels in tasks like question answering, dialog systems, and content generation, improving text quality and coherence.&lt;/p&gt;

&lt;h1&gt;
  
  
  Scenario
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;The user makes a request to the GenAI app.&lt;/li&gt;
&lt;li&gt;The app passes the query to the &lt;strong&gt;Bedrock agent&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If the agent finds it relevant, it sends a request to the &lt;strong&gt;Knowledge base&lt;/strong&gt; to get context based on user input.&lt;/li&gt;
&lt;li&gt;The question is converted into embeddings using &lt;strong&gt;Bedrock via the Titan embeddings v1.2 model&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;The embedding is used to find similar documents from an &lt;strong&gt;OpenSearch Service Serverless index&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;OpenSearch output is returned to the &lt;strong&gt;Knowledge base&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The Knowledge base returns the context.&lt;/li&gt;
&lt;li&gt;The Bedrock agent sends the user’s request, along with the data retrieved from the index as context in the prompt, to the &lt;strong&gt;LLM&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The LLM returns a succinct response to the user request based on the retrieved data.&lt;/li&gt;
&lt;li&gt;The response from the LLM is sent back to the app.&lt;/li&gt;
&lt;li&gt;The app displays the &lt;strong&gt;Agent/LLM output&lt;/strong&gt; to the users.&lt;/li&gt;
&lt;/ol&gt;
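&lt;p&gt;Steps 2–8 above can be sketched in code. A minimal Python sketch of handing the user query to the Knowledge base: the field names follow the &lt;code&gt;bedrock-agent-runtime&lt;/code&gt; &lt;code&gt;RetrieveAndGenerate&lt;/code&gt; API as published by AWS, but the knowledge base ID and model ARN below are placeholders, so check the current boto3 documentation before relying on the exact shape:&lt;/p&gt;

```python
# Sketch: build the request that passes a user query to a Bedrock Knowledge
# Base. The knowledge_base_id and model_arn values are placeholders.
def build_rag_request(query, knowledge_base_id, model_arn):
    """Payload for a bedrock-agent-runtime retrieve_and_generate call."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": knowledge_base_id,
                "modelArn": model_arn,
            },
        },
    }

# With AWS credentials configured, the call itself would look like:
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.retrieve_and_generate(**build_rag_request(...))
#   print(response["output"]["text"])
request = build_rag_request(
    "What is our refund policy?",
    knowledge_base_id="EXAMPLEKBID",
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
)
```

&lt;p&gt;The commented-out &lt;code&gt;boto3&lt;/code&gt; call shows where this payload would be sent; the managed service then performs steps 4–7 (embedding, vector search, context assembly) on your behalf.&lt;/p&gt;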

&lt;h1&gt;
  
  
  Practice
&lt;/h1&gt;

&lt;p&gt;In this video, you can learn how to build a RAG-based Generative AI Chatbot in &lt;strong&gt;20 minutes&lt;/strong&gt; using &lt;strong&gt;Amazon Bedrock Knowledge Base&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  In this video, you'll learn:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;What &lt;strong&gt;Amazon Bedrock Knowledge Base&lt;/strong&gt; is and how to set it up.&lt;/li&gt;
&lt;li&gt;How to set up a managed &lt;strong&gt;Amazon OpenSearch Serverless vector database&lt;/strong&gt; with Amazon Bedrock Knowledge Base.&lt;/li&gt;
&lt;li&gt;How to sync data and test Amazon Bedrock Knowledge Base with a managed chatbot test feature using &lt;strong&gt;Amazon Bedrock LLMs&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Links/Sources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=hnyDDfo8e9Q" rel="noopener noreferrer"&gt;Build a RAG-based Generative AI Chatbot in 20 mins using Amazon Bedrock Knowledge Base&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/kb-how-it-works.html" rel="noopener noreferrer"&gt;Foundation Models&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html" rel="noopener noreferrer"&gt;Amazon Bedrock Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/kb-how-it-works.html" rel="noopener noreferrer"&gt;Retrieval Augmented Generation (RAG) Technique&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>vectordatabase</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Understanding BERT Tokens: Tokenization and Its Role in NLP</title>
      <dc:creator>Wojciech Kaczmarczyk</dc:creator>
      <pubDate>Wed, 20 Nov 2024 15:44:08 +0000</pubDate>
      <link>https://dev.to/wojciech_piotrka_4898763/understanding-bert-tokens-tokenization-and-its-role-in-nlp-58nc</link>
      <guid>https://dev.to/wojciech_piotrka_4898763/understanding-bert-tokens-tokenization-and-its-role-in-nlp-58nc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to BERT Tokens: A Beginner's Guide
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;BERT&lt;/strong&gt; (Bidirectional Encoder Representations from Transformers) is a powerful machine learning model used for processing natural language, like understanding text or answering questions. At the heart of how BERT works is something called &lt;strong&gt;tokenization&lt;/strong&gt;—a process that breaks down text into smaller pieces called tokens.&lt;/p&gt;

&lt;p&gt;You can think of tokens as the "&lt;strong&gt;building blocks&lt;/strong&gt;" of language that BERT uses to analyze and understand text. For example, the sentence "I love AI" would be split into individual words or subwords, which the model can process more effectively. BERT uses special tokens (such as [CLS] and [SEP]) to add structure and context to the text it analyzes, making it easier for BERT to perform tasks like sentiment analysis or language translation.&lt;/p&gt;

&lt;p&gt;This article will explain what BERT &lt;strong&gt;tokens&lt;/strong&gt; are, how they’re created, and why they’re so important for helping BERT understand and &lt;strong&gt;process language&lt;/strong&gt;. Whether you're curious about how BERT handles complex text or just want to know more about how &lt;strong&gt;tokenization&lt;/strong&gt; works, this guide will give you the key insights you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of BERT Tokens
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. WordPiece Tokens
&lt;/h3&gt;

&lt;p&gt;BERT uses a tokenization approach called &lt;strong&gt;WordPiece&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Words are split into smaller units (subwords) to handle out-of-vocabulary words effectively.&lt;/li&gt;
&lt;li&gt;Example:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input&lt;/strong&gt;: &lt;em&gt;"unbelievable"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tokens&lt;/strong&gt;: &lt;code&gt;["un", "##believable"]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"##"&lt;/code&gt; indicates the subword is part of a previous word.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
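&lt;p&gt;The splitting above can be reproduced with a short, self-contained sketch of WordPiece's greedy longest-match-first loop. The toy vocabulary here is invented purely to reproduce the example; real BERT vocabularies hold roughly 30,000 entries:&lt;/p&gt;

```python
# Minimal sketch of WordPiece's greedy longest-match-first algorithm over a
# toy vocabulary, small enough to trace by hand.
VOCAB = {"un", "##believ", "##believable", "##able", "believable", "[UNK]"}

def wordpiece(word, vocab=VOCAB):
    """Split one word into subword tokens, longest match first."""
    tokens, start = [], 0
    while start != len(word):
        end = len(word)
        found = None
        while end != start:
            piece = word[start:end]
            if start != 0:
                piece = "##" + piece  # mark continuation of a previous piece
            if piece in vocab:
                found = piece
                break
            end -= 1  # shrink the candidate and retry
        if found is None:
            return ["[UNK]"]  # no subword matched at this position
        tokens.append(found)
        start = end
    return tokens

print(wordpiece("unbelievable"))  # ['un', '##believable']
```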

&lt;h3&gt;
  
  
  2. Special Tokens
&lt;/h3&gt;

&lt;p&gt;BERT adds special tokens to inputs to provide additional context and structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;[CLS]&lt;/strong&gt; (Classification Token):

&lt;ul&gt;
&lt;li&gt;Placed at the beginning of every input sequence.
&lt;/li&gt;
&lt;li&gt;Used to aggregate information for classification tasks.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;[SEP]&lt;/strong&gt; (Separator Token):

&lt;ul&gt;
&lt;li&gt;Marks the end of one sentence and separates multiple sentences in a sequence.
&lt;/li&gt;
&lt;li&gt;Example: In sentence-pair tasks, &lt;code&gt;[SEP]&lt;/code&gt; separates the two sentences.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;[PAD]&lt;/strong&gt; (Padding Token):

&lt;ul&gt;
&lt;li&gt;Added to make sequences the same length in a batch.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  BERT Tokenization Workflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Text Cleaning&lt;/strong&gt;: Input text is lowercased (for uncased models) and punctuation is standardized.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tokenization&lt;/strong&gt;: Sentences are split into tokens using the WordPiece algorithm.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Special Tokens&lt;/strong&gt;: &lt;code&gt;[CLS]&lt;/code&gt; and &lt;code&gt;[SEP]&lt;/code&gt; tokens are added.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convert to IDs&lt;/strong&gt;: Tokens are mapped to integer IDs using a predefined vocabulary.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Padding and Truncation&lt;/strong&gt;: Sequences are padded or truncated to match the maximum length.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Example:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Input&lt;/strong&gt;: &lt;em&gt;"Hello world!"&lt;/em&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tokenization&lt;/strong&gt;: &lt;code&gt;["[CLS]", "hello", "world", "!", "[SEP]"]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IDs&lt;/strong&gt; (using a vocab): &lt;code&gt;[101, 7592, 2088, 999, 102]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
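&lt;p&gt;The five workflow steps can be traced end to end with a toy vocabulary. The IDs below match the familiar &lt;code&gt;bert-base-uncased&lt;/code&gt; values for these particular tokens, but the vocabulary itself is a stand-in, and real tokenization would apply WordPiece rather than whitespace splitting:&lt;/p&gt;

```python
# End-to-end sketch of the five workflow steps with a toy vocabulary.
VOCAB = {"[PAD]": 0, "[CLS]": 101, "[SEP]": 102,
         "hello": 7592, "world": 2088, "!": 999}

def encode(text, max_len=8):
    # 1. Text cleaning: lowercase (uncased model), separate punctuation.
    text = text.lower().replace("!", " !")
    # 2. Tokenization: whitespace split stands in for WordPiece here,
    #    which is enough because every token is already in the toy vocab.
    tokens = text.split()
    # 3. Add special tokens.
    tokens = ["[CLS]"] + tokens + ["[SEP]"]
    # 4. Convert tokens to integer IDs.
    ids = [VOCAB[t] for t in tokens]
    # 5. Pad (or truncate) to the maximum length.
    ids = ids[:max_len] + [VOCAB["[PAD]"]] * max(0, max_len - len(ids))
    return tokens, ids

tokens, ids = encode("Hello world!")
print(tokens)  # ['[CLS]', 'hello', 'world', '!', '[SEP]']
print(ids)     # [101, 7592, 2088, 999, 102, 0, 0, 0]
```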




&lt;h2&gt;
  
  
  BERT Tokens in Practice
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Single-Sentence Input:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: &lt;em&gt;"I love AI."&lt;/em&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tokens&lt;/strong&gt;: &lt;code&gt;[CLS] I love AI . [SEP]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IDs&lt;/strong&gt;: &lt;code&gt;[101, 146, 1567, 7270, 1012, 102]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Sentence Pair Input:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: &lt;em&gt;"What is AI?"&lt;/em&gt; / &lt;em&gt;"Artificial Intelligence."&lt;/em&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tokens&lt;/strong&gt;: &lt;code&gt;[CLS] What is AI ? [SEP] Artificial Intelligence . [SEP]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IDs&lt;/strong&gt;: &lt;code&gt;[101, 2054, 2003, 7270, 1029, 102, 7844, 10392, 1012, 102]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Padding:
&lt;/h3&gt;

&lt;p&gt;Sentences in a batch are padded to the same length.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input 1&lt;/strong&gt;: &lt;code&gt;[101, 2054, 2003, 7270, 102, 0, 0]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input 2&lt;/strong&gt;: &lt;code&gt;[101, 2154, 2731, 102, 0, 0, 0]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
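&lt;p&gt;A one-function sketch of this padding step, reproducing the two example inputs above (&lt;code&gt;0&lt;/code&gt; is the &lt;code&gt;[PAD]&lt;/code&gt; id):&lt;/p&gt;

```python
# Right-pad every ID sequence in a batch to a common width with the
# [PAD] id (0), so the batch forms a rectangular tensor.
def pad_batch(batch, width=None, pad_id=0):
    if width is None:
        width = max(len(seq) for seq in batch)  # longest sequence wins
    return [seq + [pad_id] * (width - len(seq)) for seq in batch]

padded = pad_batch([[101, 2054, 2003, 7270, 102],
                    [101, 2154, 2731, 102]], width=7)
print(padded[0])  # [101, 2054, 2003, 7270, 102, 0, 0]
print(padded[1])  # [101, 2154, 2731, 102, 0, 0, 0]
```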




&lt;h2&gt;
  
  
  Token Embeddings in BERT
&lt;/h2&gt;

&lt;p&gt;After tokenization, tokens are converted into embeddings that capture their contextual meaning:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Token Embeddings&lt;/strong&gt;: Represent the specific token.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Segment Embeddings&lt;/strong&gt;: Distinguish between sentences in sentence-pair tasks.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Position Embeddings&lt;/strong&gt;: Capture the order of tokens in the sequence.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The final embedding for each token is a combination of these three components.&lt;/p&gt;
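&lt;p&gt;Conceptually, the combination is a simple elementwise sum. The 4-dimensional vectors below are made up for illustration; BERT-base actually uses 768 dimensions:&lt;/p&gt;

```python
# The final embedding of a token is the elementwise sum of its token,
# segment, and position embeddings. Toy 4-dimensional vectors for clarity.
def combine(token_emb, segment_emb, position_emb):
    return [t + s + p for t, s, p in zip(token_emb, segment_emb, position_emb)]

token_emb    = [0.1, 0.2, 0.3, 0.4]      # looked up by token ID
segment_emb  = [0.0, 0.0, 0.0, 0.0]      # sentence A in a pair
position_emb = [0.01, 0.02, 0.03, 0.04]  # position 0 in the sequence
final = combine(token_emb, segment_emb, position_emb)
```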




&lt;h2&gt;
  
  
  Why Tokenization Matters in BERT
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Handles Out-of-Vocabulary Words&lt;/strong&gt;: Breaking words into subwords reduces issues with rare words.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized Context Understanding&lt;/strong&gt;: WordPiece tokens allow BERT to handle root words, prefixes, and suffixes effectively.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Universal Vocabulary&lt;/strong&gt;: The same vocabulary works across different languages and domains.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Tools for Tokenization
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hugging Face Transformers&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;transformers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BertTokenizer&lt;/span&gt;

&lt;span class="n"&gt;tokenizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;BertTokenizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bert-base-uncased&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tokenizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tokenize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello world!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;token_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tokenizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;convert_tokens_to_ids&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tokens&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tokens&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# ['hello', 'world', '!']
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token_ids&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# [7592, 2088, 999]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;TensorFlow or PyTorch Implementations&lt;/strong&gt;: Often include built-in tokenizers compatible with BERT models.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Summary: Understanding BERT Tokens in NLP
&lt;/h1&gt;

&lt;p&gt;BERT (Bidirectional Encoder Representations from Transformers) uses &lt;strong&gt;tokenization&lt;/strong&gt; to process text, breaking down raw input into smaller units called &lt;strong&gt;tokens&lt;/strong&gt;. It relies on the &lt;strong&gt;WordPiece&lt;/strong&gt; tokenization technique, which splits words into subwords to handle out-of-vocabulary words. Special tokens like &lt;code&gt;[CLS]&lt;/code&gt;, &lt;code&gt;[SEP]&lt;/code&gt;, and &lt;code&gt;[PAD]&lt;/code&gt; are added to structure input sequences for specific NLP tasks.&lt;/p&gt;

&lt;p&gt;The tokenization process involves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Cleaning and splitting text&lt;/li&gt;
&lt;li&gt;Converting tokens into integer IDs&lt;/li&gt;
&lt;li&gt;Adding padding or truncation where necessary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These tokens are then embedded into vectors, combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Token embeddings&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Segment embeddings&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Position embeddings&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This combination represents each token's contextual meaning.&lt;br&gt;
BERT's tokenization approach enables efficient handling of large datasets and complex language models, making it essential for tasks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text classification&lt;/li&gt;
&lt;li&gt;Question answering&lt;/li&gt;
&lt;li&gt;Sentence pair analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tools like the &lt;strong&gt;Hugging Face Transformers&lt;/strong&gt; library simplify tokenization and integration with BERT models for practical NLP applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=QEaBAZQCtwE" rel="noopener noreferrer"&gt;Getting Started With Hugging Face in 15 Minutes | Transformers, Pipeline, Tokenizer, Models&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
&lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;Hugging Face Model Hub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://huggingface.co/learn/nlp-course/chapter9/2" rel="noopener noreferrer"&gt;Hugging Face Hello World examples&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>nlp</category>
    </item>
    <item>
      <title>Why Generative AI Projects Struggle to Reach Production</title>
      <dc:creator>Wojciech Kaczmarczyk</dc:creator>
      <pubDate>Tue, 19 Nov 2024 16:27:54 +0000</pubDate>
      <link>https://dev.to/wojciech_piotrka_4898763/why-generative-ai-projects-struggle-to-reach-production-4gg3</link>
      <guid>https://dev.to/wojciech_piotrka_4898763/why-generative-ai-projects-struggle-to-reach-production-4gg3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The rapid advancements in Generative AI (GenAI) have captured the imagination of organizations worldwide. Yet, many Proof-of-Concept (PoC) initiatives fail to transition into production. Based on surveys with AWS GenAI partners, six critical roadblocks have been identified. Here, we unpack these challenges and explore potential solutions to help organizations overcome them.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Meticulous Business Scoping and ROI Modeling&lt;/strong&gt;
“We had FOMO, but also had higher hopes…”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Many organizations dive into GenAI projects without clear business objectives or Return on Investment (ROI) frameworks. The absence of well-defined use cases leads to disappointing benefits, leaving projects to flounder after initial enthusiasm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Start with a business-first approach:&lt;/p&gt;

&lt;p&gt;Define specific goals tied to measurable outcomes.&lt;br&gt;
Leverage AWS tools like Amazon Bedrock to build scalable, targeted solutions.&lt;br&gt;
Use frameworks like “working backward” to ensure alignment with business needs.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Lack of a Robust Data Strategy&lt;/strong&gt;
“Turned out that real life needs more robust data…”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Data discrepancies between PoC and production environments are common. Many organizations underestimate the data quality, volume, and diversity needed for GenAI solutions to perform effectively in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Invest in a comprehensive data strategy:&lt;br&gt;
Use Retrieval-Augmented Generation (RAG) for more reliable results.&lt;br&gt;
Regularly fine-tune models with real-world data.&lt;br&gt;
Employ AWS data services like Amazon S3 and AWS Glue to centralize, clean, and prepare data.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Advanced Optimization for ROI&lt;/strong&gt;
“It works. But it’s just too expensive…”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Generative AI projects can falter due to cost, performance, or latency concerns that make production deployment financially unsustainable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Optimize costs with targeted model usage:&lt;br&gt;
Utilize AWS tools for model evaluation to balance accuracy, cost, and speed.&lt;br&gt;
Leverage Amazon Bedrock for access to multiple cost-efficient foundation models.&lt;br&gt;
Explore model customization to fine-tune only the necessary components and reduce computational overhead.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Skilled ML/FM Engineers&lt;/strong&gt;
“We do not yet have those specialists…”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Transitioning GenAI projects into production requires specialized engineering skills, such as foundational model (FM) deployment and ML pipeline management. A shortage of such talent can halt progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Upskill your team with AWS training programs and certifications (e.g., AWS Certified Machine Learning – Specialty).&lt;br&gt;
Consider managed services like Amazon SageMaker for deploying and maintaining ML pipelines without deep expertise.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Strategic Priority&lt;/strong&gt;
“To be honest, this capability is a nice-to-have…”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Without a strong strategic commitment, AI initiatives are often deprioritized or shelved. This is especially true when GenAI projects are treated as exploratory rather than essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Embed GenAI initiatives into the organization’s broader digital transformation strategy.&lt;br&gt;
Highlight quick wins and measurable outcomes to secure buy-in from leadership.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;
&lt;strong&gt;Challenges in Governance, Security, and Compliance&lt;/strong&gt;
“We are unsure this won’t go wrong…”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Concerns around security, compliance, and legal risks frequently delay production deployment. Issues such as prompt injection vulnerabilities or personal data exposure exacerbate the hesitation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Use Amazon Bedrock’s built-in guardrails to mitigate risks:&lt;br&gt;
Implement word and topic filters to prevent harmful outputs.&lt;br&gt;
Ensure compliance with PII filters and robust security measures.&lt;br&gt;
Work with AWS experts or trusted GenAI partners to navigate regulatory landscapes effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Bridging the Gap Between PoC and Production
&lt;/h2&gt;

&lt;p&gt;The journey from experimentation to production in GenAI is complex but navigable. By addressing the above challenges systematically, organizations can unlock the full potential of their GenAI initiatives. AWS’s ecosystem of tools, from Amazon Bedrock to SageMaker, provides a robust foundation for deploying scalable, secure, and cost-effective GenAI solutions.&lt;/p&gt;

&lt;p&gt;Take the first step today. Explore AWS Generative AI resources, fine-tune your strategy, and move from PoC to production with confidence.&lt;/p&gt;

&lt;p&gt;Ready to get started? Visit AWS Generative AI to learn more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws-notes.hashnode.dev/why-generative-ai-projects-struggle-to-reach-production" rel="noopener noreferrer"&gt;My tech blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/apn/fuel-generative-ai-success-building-robust-data-foundation-with-aws-partners/" rel="noopener noreferrer"&gt;AWS Partner pages&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/ai/generative-ai/?gclid=Cj0KCQiAi_G5BhDXARIsAN5SX7pGIx09pzF-qHt-MXjH8NdLQ6A78tTNiURCOz7l8VYlADVQgIyCLjYaAnoHEALw_wcB&amp;amp;trk=c30d9c04-dff1-48dd-85e1-551e986fbb60&amp;amp;sc_channel=ps&amp;amp;ef_id=Cj0KCQiAi_G5BhDXARIsAN5SX7pGIx09pzF-qHt-MXjH8NdLQ6A78tTNiURCOz7l8VYlADVQgIyCLjYaAnoHEALw_wcB:G:s&amp;amp;s_kwcid=AL!4422!3!686122457191!p!!g!!aws%20generative%20ai!20894978760!155960704974" rel="noopener noreferrer"&gt;AWS Generative AI&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting Started with AWS Landing Zone: Tips for Terraform Setup</title>
      <dc:creator>Wojciech Kaczmarczyk</dc:creator>
      <pubDate>Thu, 07 Nov 2024 16:09:18 +0000</pubDate>
      <link>https://dev.to/wojciech_piotrka_4898763/getting-started-with-aws-landing-zone-tips-for-terraform-setup-47oi</link>
      <guid>https://dev.to/wojciech_piotrka_4898763/getting-started-with-aws-landing-zone-tips-for-terraform-setup-47oi</guid>
      <description>&lt;p&gt;AWS Landing Zone is a powerful framework designed to help organizations establish &lt;strong&gt;secure&lt;/strong&gt;, &lt;strong&gt;multi-account&lt;/strong&gt; environments in AWS. It provides a foundation for deploying and managing an enterprise-ready AWS environment with &lt;strong&gt;governance&lt;/strong&gt;, &lt;strong&gt;compliance&lt;/strong&gt;, and &lt;strong&gt;security best practices&lt;/strong&gt; baked in. For those working with &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;, Terraform can be an ideal choice to automate the setup of a Landing Zone. Here, we’ll go over the basics and provide some tips for using Terraform to manage your &lt;strong&gt;AWS Landing Zone&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is AWS Landing Zone?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS Landing Zone offers a standardized, secure, and scalable environment for managing AWS accounts. It's aimed at simplifying the setup of new AWS accounts within an organization while enforcing policies, security controls, and compliance requirements. Key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Account Vending Machine (AVM)&lt;/strong&gt; for automated account creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized logging&lt;/strong&gt; for tracking account activities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Baseline Configuration&lt;/strong&gt; including VPCs, subnets, and route tables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guardrails for consistent security policies&lt;/strong&gt; across accounts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up AWS Landing Zone with Terraform
&lt;/h2&gt;

&lt;p&gt;Using Terraform with AWS Landing Zone provides several benefits, such as reproducibility, scalability, and automation. Here’s a simple outline to get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Define Your Landing Zone Resources&lt;/strong&gt;: Start by &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/designing-control-tower-landing-zone/account-structure.html" rel="noopener noreferrer"&gt;defining essential resources like Organizational Units (OUs), Accounts, and necessary roles within your Terraform configuration files&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use the AWS Organizations Module&lt;/strong&gt;: AWS offers a &lt;a href="https://registry.terraform.io/modules/aws-ss/organizations/aws/latest" rel="noopener noreferrer"&gt;Terraform module for AWS Organizations&lt;/a&gt;, which simplifies the process of managing multi-account setups and can integrate with AWS Control Tower.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create Policies as Code&lt;/strong&gt;: With Terraform, you can define &lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html" rel="noopener noreferrer"&gt;Service Control Policies (SCPs)&lt;/a&gt; to manage account permissions. SCPs can restrict or allow specific services or actions, which is useful for maintaining a compliant setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate VPC and Network Setup&lt;/strong&gt;: Use Terraform modules to set up a standardized VPC architecture. AWS provides a &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest" rel="noopener noreferrer"&gt;VPC module&lt;/a&gt; that can help establish subnets, route tables, and NAT gateways across multiple accounts in your Landing Zone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enable CloudTrail and Centralized Logging&lt;/strong&gt;: &lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html" rel="noopener noreferrer"&gt;Set up CloudTrail and centralized logging&lt;/a&gt; to S3. This allows you to monitor activities across all accounts in a single location for better security and compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
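
&lt;p&gt;As a rough sketch of steps 1 and 5, the organization root and an organization-wide CloudTrail trail might look like this in Terraform (the resource names and the referenced log bucket are illustrative assumptions, not taken from an official module):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource "aws_organizations_organization" "example" {&lt;br&gt;
  feature_set          = "ALL"&lt;br&gt;
  enabled_policy_types = ["SERVICE_CONTROL_POLICY"]&lt;br&gt;
}&lt;br&gt;
&lt;br&gt;
resource "aws_cloudtrail" "org_trail" {&lt;br&gt;
  name                  = "org-trail"&lt;br&gt;
  s3_bucket_name        = aws_s3_bucket.central_logs.id&lt;br&gt;
  is_organization_trail = true&lt;br&gt;
  is_multi_region_trail = true&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;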

&lt;h2&gt;
  
  
  Tips for Using Terraform with AWS Landing Zone
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plan Organizational Units (OUs) Carefully&lt;/strong&gt;
Group accounts by purpose, such as Dev, Test, and Prod, into distinct OUs. This structure makes management and policy application easier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;resource "aws_organizations_organizational_unit" "dev" {&lt;br&gt;
  name      = "Development"&lt;br&gt;
  parent_id = aws_organizations_organization.example.root_id&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;
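
&lt;p&gt;The same pattern covers other environments; for example, a Prod OU under the same organization root:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource "aws_organizations_organizational_unit" "prod" {&lt;br&gt;
  name      = "Production"&lt;br&gt;
  parent_id = aws_organizations_organization.example.root_id&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;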

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leverage AWS Control Tower Tooling&lt;/strong&gt;&lt;br&gt;
If you're using AWS Control Tower, tooling such as AWS Control Tower Account Factory for Terraform (AFT) can help automate account creation. Control Tower adds guardrails, SCPs, and baselines, further simplifying multi-account management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create Custom Policies as Code&lt;/strong&gt;&lt;br&gt;
Define Service Control Policies (SCPs) in Terraform to enforce rules across your organization. This example restricts users in specific accounts from launching certain EC2 instance types:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;resource "aws_organizations_policy" "restrict_ec2_instances" {&lt;br&gt;
  name   = "RestrictEC2Instances"&lt;br&gt;
  type   = "SERVICE_CONTROL_POLICY"&lt;br&gt;
  content = &amp;lt;&amp;lt;POLICY&lt;br&gt;
{&lt;br&gt;
  "Version": "2012-10-17",&lt;br&gt;
  "Statement": [&lt;br&gt;
    {&lt;br&gt;
      "Effect": "Deny",&lt;br&gt;
      "Action": "ec2:RunInstances",&lt;br&gt;
      "Resource": "*",&lt;br&gt;
      "Condition": {&lt;br&gt;
        "StringEquals": {&lt;br&gt;
          "ec2:InstanceType": [&lt;br&gt;
            "t2.micro",&lt;br&gt;
            "t2.small"&lt;br&gt;
          ]&lt;br&gt;
        }&lt;br&gt;
      }&lt;br&gt;
    }&lt;br&gt;
  ]&lt;br&gt;
}&lt;br&gt;
POLICY&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;
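
&lt;p&gt;Note that an SCP has no effect until it is attached to a root, an OU, or an account. A minimal attachment sketch, assuming the Development OU defined earlier:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource "aws_organizations_policy_attachment" "restrict_ec2_dev" {&lt;br&gt;
  policy_id = aws_organizations_policy.restrict_ec2_instances.id&lt;br&gt;
  target_id = aws_organizations_organizational_unit.dev.id&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;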

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automate Account Provisioning&lt;/strong&gt;
Use Terraform's for_each functionality to iterate through a list of accounts and create each account in the Landing Zone setup:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;resource "aws_organizations_account" "accounts" {&lt;br&gt;
  for_each    = toset(var.account_names)&lt;br&gt;
  name        = each.key&lt;br&gt;
  email       = "${each.key}@yourdomain.com"&lt;br&gt;
  role_name   = "OrganizationAccountAccessRole"&lt;br&gt;
  parent_id   = aws_organizations_organizational_unit.prod.id&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;
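
&lt;p&gt;The snippet above assumes an &lt;code&gt;account_names&lt;/code&gt; variable; a matching declaration could look like this (the default values are placeholders):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;variable "account_names" {&lt;br&gt;
  type    = list(string)&lt;br&gt;
  default = ["analytics", "payments"]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;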

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Implement Centralized IAM Roles&lt;/strong&gt;
Create IAM roles with permissions that can be assumed by users across accounts, allowing &lt;a href="https://docs.aws.amazon.com/rolesanywhere/latest/userguide/security-iam.html" rel="noopener noreferrer"&gt;centralized access management&lt;/a&gt; and simplifying security.&lt;/li&gt;
&lt;/ul&gt;
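
&lt;p&gt;A cross-account role of this kind can be sketched as follows (the trusted account ID and role name here are illustrative assumptions):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource "aws_iam_role" "org_admin" {&lt;br&gt;
  name = "OrgAdminRole"&lt;br&gt;
  assume_role_policy = jsonencode({&lt;br&gt;
    Version = "2012-10-17"&lt;br&gt;
    Statement = [{&lt;br&gt;
      Effect    = "Allow"&lt;br&gt;
      Action    = "sts:AssumeRole"&lt;br&gt;
      Principal = { AWS = "arn:aws:iam::111111111111:root" }&lt;br&gt;
    }]&lt;br&gt;
  })&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;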

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;AWS Landing Zone is a fantastic framework for establishing secure, compliant, and scalable AWS environments. By using &lt;strong&gt;Terraform&lt;/strong&gt;, you can bring Infrastructure as Code &lt;strong&gt;best practices&lt;/strong&gt; to Landing Zone setups, allowing for repeatable, consistent, and efficient account provisioning and management.&lt;/p&gt;

&lt;p&gt;Whether you're setting up a new environment or looking to improve an existing one, Terraform can &lt;strong&gt;simplify AWS Landing Zone management&lt;/strong&gt;, streamline workflows, and enhance overall &lt;strong&gt;governance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Credits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://chatgpt.com/" rel="noopener noreferrer"&gt;Chat GPT
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.craiyon.com/" rel="noopener noreferrer"&gt;Craiyon&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
