<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jeya Shri</title>
    <description>The latest articles on DEV Community by Jeya Shri (@jeyy).</description>
    <link>https://dev.to/jeyy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2174453%2Fe941f1c7-520d-4658-88e9-136efaa6efba.png</url>
      <title>DEV Community: Jeya Shri</title>
      <link>https://dev.to/jeyy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jeyy"/>
    <language>en</language>
    <item>
      <title>Banking Ledgers: The Foundation Behind Decentralization</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Fri, 13 Feb 2026 09:17:47 +0000</pubDate>
      <link>https://dev.to/jeyy/banking-ledgers-the-foundation-behind-decentralization-20p0</link>
      <guid>https://dev.to/jeyy/banking-ledgers-the-foundation-behind-decentralization-20p0</guid>
      <description>&lt;p&gt;Hii guys, I'm starting off with blockchain from scratch like literal basics. So, the utmost first thing we need to know about is banking legders - they are the ones which sow seeds for invention of blockchain.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are they?
&lt;/h2&gt;

&lt;p&gt;Okay, suppose you have ₹10,000 in your account at Bank A and you have to transfer ₹5,000 to your friend's account at Bank B. When I was a child, I always thought the amount was physically transferred from Bank A to Bank B. But now we all know that's not the case; only the records are updated.&lt;/p&gt;

&lt;p&gt;A ledger is like a notebook that maintains records of those transactions and stores their data. Only the database is updated: the amount transferred, the resulting balances, and so on.&lt;/p&gt;

&lt;p&gt;So, that is what a ledger actually is, and banks maintain those ledgers.&lt;/p&gt;
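&lt;p&gt;The idea can be sketched in a few lines of Python (a toy illustration with made-up account names and amounts, not how a real banking system works):&lt;/p&gt;

```python
# Minimal sketch of a bank ledger: money is never "moved" anywhere,
# only the records are updated. Accounts and amounts are hypothetical.
balances = {"A": 10000, "B": 2000}   # account balances in rupees
ledger = []                          # the "notebook" of transactions

def transfer(src, dst, amount):
    """Record a transfer by updating balances and appending a ledger entry."""
    balances[src] -= amount
    balances[dst] += amount
    ledger.append({"from": src, "to": dst, "amount": amount})

transfer("A", "B", 5000)
print(balances)        # {'A': 5000, 'B': 7000}
print(len(ledger))     # 1 transaction recorded
```

&lt;p&gt;Notice that the "transfer" is nothing more than two record updates plus one appended entry; that entry is exactly what a ledger keeps.&lt;/p&gt;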

&lt;h2&gt;
  
  
  Centralization and the need for Decentralization:
&lt;/h2&gt;

&lt;p&gt;As we saw before, the ledgers are controlled by a central authority that maintains all records. This means the entire system is built on trust, because a single entity is responsible for managing, updating, and protecting the data.&lt;/p&gt;

&lt;p&gt;Now you might wonder — if banks have been using this system for decades, what could possibly go wrong?&lt;/p&gt;

&lt;p&gt;Let’s think about it.&lt;/p&gt;

&lt;p&gt;When one authority holds all the power:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If their system fails, transactions can get delayed or completely stopped.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If their database is compromised, sensitive financial data is at risk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If an incorrect entry is made, we depend entirely on them to identify and fix it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And most importantly, users have very little transparency into how these records are handled behind the scenes.&lt;/p&gt;

&lt;p&gt;This does not mean centralized systems are bad. In fact, they brought structure, reliability, and scalability to modern banking. But like every technological model, centralization comes with its own set of limitations.&lt;/p&gt;

&lt;p&gt;Over time, people began asking an important question:&lt;/p&gt;

&lt;p&gt;“What if we could build a system where trust isn’t placed in a single authority?”&lt;/p&gt;

&lt;p&gt;What if records were not stored in just one place, but shared across multiple participants?&lt;/p&gt;

&lt;p&gt;What if transactions could be verified collectively rather than approved by one controlling body?&lt;/p&gt;

&lt;p&gt;This curiosity is exactly what planted the idea for something revolutionary.&lt;/p&gt;

&lt;p&gt;And that is where &lt;strong&gt;decentralization&lt;/strong&gt; enters the picture.&lt;/p&gt;

&lt;p&gt;Instead of relying on one central ledger, decentralization distributes the ledger across a network of computers. Every participant holds a copy, and transactions are validated through consensus rather than blind trust.&lt;/p&gt;

&lt;p&gt;No single entity has complete control.&lt;br&gt;
No single point of failure exists.&lt;br&gt;
And transparency becomes a built-in feature rather than an optional one.&lt;/p&gt;

&lt;p&gt;This shift in thinking eventually led to the creation of blockchain — a technology designed to solve the trust problem without requiring intermediaries.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>blockchain</category>
      <category>fintech</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Blockchain - What's the fuss about?</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Wed, 11 Feb 2026 06:07:10 +0000</pubDate>
      <link>https://dev.to/jeyy/understanding-blockchain-the-what-and-ifs-5h0g</link>
      <guid>https://dev.to/jeyy/understanding-blockchain-the-what-and-ifs-5h0g</guid>
      <description>&lt;p&gt;Blockchain - They are digital databases, where the data is stored within blocks which are connected with one another forming a chain (Reminds me of Linked List). So, in recent days we are hearing a lot about financial threats and lack of security paving way for hackers to access our system which can be a huge threat. &lt;/p&gt;

&lt;p&gt;So, in order to make data tampering nearly impossible, we have blockchain. The recorded transactions are stored in blocks that are interconnected with one another.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is it exactly?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Blockchain&lt;/strong&gt;, as defined earlier, is a digital database that handles digital transactions with a high level of security. It consists of blocks connected in a chain. The key property is that the data is immutable, i.e. once data is stored in a block, it can't be changed or modified.&lt;/p&gt;

&lt;p&gt;This works because the blocks behave like nodes of a linked list: each block depends on the block before it through its hash. So if we alter one block, every block after it no longer matches, breaking the chain structure.&lt;/p&gt;

&lt;p&gt;Each block contains crucial data, such as a list of transactions, a timestamp, and a unique identifier called a cryptographic hash. This hash is generated from the block's contents and the hash of the previous block, ensuring that each block is tightly connected to the one before it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zmz9vgm11k39etbaw7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zmz9vgm11k39etbaw7b.png" alt="Blockchain" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;
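&lt;p&gt;The hash linkage described above can be sketched with Python's built-in &lt;code&gt;hashlib&lt;/code&gt; (a toy chain for illustration, not a real blockchain implementation):&lt;/p&gt;

```python
import hashlib

# Toy hash chain: each block's hash covers its own data plus the previous
# block's hash, so editing any block breaks every link after it.
def block_hash(data, prev_hash):
    return hashlib.sha256((data + prev_hash).encode()).hexdigest()

chain = []
prev = "0" * 64                        # genesis placeholder hash
for data in ["tx1", "tx2", "tx3"]:
    h = block_hash(data, prev)
    chain.append({"data": data, "prev": prev, "hash": h})
    prev = h

def is_valid(chain):
    """Recompute every hash and check each block points at the one before it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

print(is_valid(chain))      # True
chain[1]["data"] = "tx2-tampered"
print(is_valid(chain))      # False: the stored hash no longer matches
```

&lt;p&gt;Tampering with one block's data makes its recomputed hash disagree with the stored one, and every later block's &lt;code&gt;prev&lt;/code&gt; reference would also have to be rewritten, which is exactly why the chain resists modification.&lt;/p&gt;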

&lt;h2&gt;
  
  
  Decentralization
&lt;/h2&gt;

&lt;p&gt;A blockchain is not centrally managed by a single server that monitors and handles the data; instead, it is spread across several connected computers. This allows transparency, since the ledger is visible to all nodes in the network.&lt;/p&gt;

&lt;p&gt;This ensures that no single node can tamper with it: if an attempt is made to modify a record on one copy of the database, the other nodes will thwart the action by comparing block hashes. Consequently, no individual node within the network can alter information contained within the chain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main Components :&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Distributed Ledger&lt;/em&gt; - Blockchain is a shared record system spread across many computers, with each participant having a copy. Once data is added, it can't be changed or deleted, ensuring no single point of failure or control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Blocks&lt;/em&gt; - Data is stored in blocks, each containing a set of transactions, a timestamp, and a reference (hash) to the previous block. This creates a secure, linked chain of blocks, where any change would disrupt the chain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Nodes&lt;/em&gt; (Peer-to-Peer Network) -  Nodes are the devices in the network that store the blockchain and validate new transactions. They communicate directly, ensuring the blockchain remains decentralized and operates without a central authority.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Advantages :
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Decentralization&lt;/em&gt;: The decentralized nature of blockchain technology eliminates the need for intermediaries, reducing costs and increasing transparency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Security&lt;/em&gt;: Transactions on a blockchain are secured through cryptography, making them highly resistant to tampering and fraud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Transparency&lt;/em&gt;: Blockchain technology allows all parties in a transaction to have access to the same information, increasing transparency and reducing the potential for disputes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Efficiency&lt;/em&gt;: Transactions on a blockchain can be processed quickly and efficiently, reducing the time and cost associated with traditional transactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Trust&lt;/em&gt;: The transparent and secure nature of blockchain technology can help to build trust between parties in a transaction.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;References: GeeksforGeeks, TutorialsPoint&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>web3</category>
      <category>learning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>What are your thoughts on Blockchain?</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Sun, 08 Feb 2026 14:23:22 +0000</pubDate>
      <link>https://dev.to/jeyy/what-are-ur-thoughts-on-blockchain--1n6j</link>
      <guid>https://dev.to/jeyy/what-are-ur-thoughts-on-blockchain--1n6j</guid>
      <description>&lt;p&gt;I'm thinking of learning about Blockchain - which I have no idea about. But peculiarly attracted to it, I don't know why!&lt;/p&gt;

&lt;p&gt;So, tell me anything, like:&lt;br&gt;
What is your view on it?&lt;br&gt;
Is it worth the hype?&lt;br&gt;
Is it worth learning?&lt;br&gt;
What role does it play?&lt;br&gt;
Where should I start?&lt;br&gt;
How did you learn it?&lt;br&gt;
Any resources you can help us with?&lt;/p&gt;

&lt;p&gt;Or anything about it!!&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>ama</category>
      <category>blockchain</category>
      <category>learning</category>
    </item>
    <item>
      <title>Help people keep up with tech stacks to prevent being laid off!</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Fri, 06 Feb 2026 18:31:46 +0000</pubDate>
      <link>https://dev.to/jeyy/help-people-keep-up-with-tech-stacks-to-prevent-being-laid-off-p2f</link>
      <guid>https://dev.to/jeyy/help-people-keep-up-with-tech-stacks-to-prevent-being-laid-off-p2f</guid>
      <description>&lt;p&gt;Okayy people, this community comprises of people from different technical backgrounds who have enormous knowledge in distinct fields.&lt;/p&gt;

&lt;p&gt;I'm just a newbie to the tech world, and there are many others like me wondering what to study next and where to start. In order to survive in this world of layoffs, we need help and guidance from experienced people like you.&lt;/p&gt;

&lt;p&gt;So, keeping in mind the next generation of techies trying to survive in this AI era, can you please guide us amateurs with the knowledge you have gained throughout your journey?&lt;/p&gt;

&lt;p&gt;And yeah, nowadays a lot of technologies are evolving and major changes are being made, with companies replacing the human workforce faster than ever. So, in order to survive here, we need to keep evolving our knowledge.&lt;/p&gt;

&lt;p&gt;So people, come share it in the comments, so we can start learning those technologies asap.&lt;/p&gt;

</description>
      <category>anonymous</category>
      <category>help</category>
      <category>ama</category>
      <category>discuss</category>
    </item>
    <item>
      <title>A Beginner’s Guide to Amazon SageMaker (AI series)</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Fri, 06 Feb 2026 17:58:53 +0000</pubDate>
      <link>https://dev.to/jeyy/a-beginners-guide-to-amazon-sagemaker-ai-series-1jag</link>
      <guid>https://dev.to/jeyy/a-beginners-guide-to-amazon-sagemaker-ai-series-1jag</guid>
      <description>&lt;p&gt;There are situations where pre-built AI is not enough. You may need a model tailored specifically to your business data, capable of making predictions unique to your use case. This is where &lt;strong&gt;Amazon SageMaker&lt;/strong&gt; becomes essential.&lt;/p&gt;

&lt;p&gt;Amazon SageMaker is the service that moves you from &lt;em&gt;using AI&lt;/em&gt; to &lt;em&gt;building AI&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding What Amazon SageMaker Really Is
&lt;/h2&gt;

&lt;p&gt;Amazon SageMaker is a fully managed machine learning platform that allows developers and data scientists to build, train, tune, and deploy machine learning models at scale.&lt;/p&gt;

&lt;p&gt;Before platforms like SageMaker existed, building ML systems required setting up servers, configuring GPUs, managing distributed training clusters, handling deployment infrastructure, and monitoring production models. This process was not only complex but also expensive and time-consuming.&lt;/p&gt;

&lt;p&gt;SageMaker consolidates this entire lifecycle into a single environment.&lt;/p&gt;

&lt;p&gt;It is important to understand that SageMaker is not a single tool. It is an ecosystem of capabilities designed to support every stage of machine learning, from data preparation to production deployment.&lt;/p&gt;

&lt;p&gt;For beginners, this may sound overwhelming at first, but the platform is structured in a way that allows you to adopt it gradually.&lt;/p&gt;

&lt;h2&gt;
  
  
  When You Should Use SageMaker Instead of Pre-Built AI Services
&lt;/h2&gt;

&lt;p&gt;A common question beginners ask is whether they should use services like Bedrock or jump directly into SageMaker. The answer depends on the level of customization required.&lt;/p&gt;

&lt;p&gt;Pre-built AI services are ideal when the problem is already well understood, such as detecting faces, converting speech, or generating text. SageMaker becomes the right choice when your data is unique and your predictions must be tailored specifically to your domain.&lt;/p&gt;

&lt;p&gt;For example, a bank predicting loan defaults, a hospital estimating patient risk, or an e-commerce platform forecasting product demand would benefit from custom-trained models.&lt;/p&gt;

&lt;p&gt;In simple terms, if AI services are ready-made tools, SageMaker is the workshop where you build your own.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Machine Learning Fits Into the SageMaker Workflow
&lt;/h2&gt;

&lt;p&gt;To understand SageMaker clearly, it helps to visualize the machine learning lifecycle as a sequence of stages.&lt;/p&gt;

&lt;p&gt;The process typically begins with data collection. Models learn patterns from historical data, so the quality and quantity of data directly influence model performance.&lt;/p&gt;

&lt;p&gt;Next comes data preparation, where missing values are handled, formats are standardized, and features are engineered. Clean data is critical because even the most advanced algorithms cannot compensate for poor input.&lt;/p&gt;

&lt;p&gt;Training follows preparation. During training, an algorithm analyzes the dataset repeatedly, adjusting internal parameters to minimize prediction error.&lt;/p&gt;

&lt;p&gt;Once trained, the model must be evaluated to ensure it performs well on unseen data. Only after meeting performance expectations is it deployed as an endpoint that applications can call in real time.&lt;/p&gt;

&lt;p&gt;SageMaker supports each of these stages within a managed environment.&lt;/p&gt;
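&lt;p&gt;The lifecycle above can be sketched end to end in plain Python, without SageMaker at all: collect data, split it, "train" a tiny least-squares model, and evaluate it on held-out data. The numbers are made up purely for illustration:&lt;/p&gt;

```python
# Tiny end-to-end sketch of the ML lifecycle in plain Python:
# collect data, split it, train a least-squares line, then evaluate.
data = [(1, 2.1), (2, 3.9), (3, 6.1), (4, 8.0), (5, 9.9)]  # (feature, label)
train, test = data[:4], data[4:]                            # naive split

# Training: closed-form least squares for y = w * x (no intercept).
num = sum(x * y for x, y in train)
den = sum(x * x for x, y in train)
w = num / den

# Evaluation: mean squared error on the held-out point.
mse = sum((w * x - y) ** 2 for x, y in test) / len(test)
print(round(w, 3), round(mse, 4))   # 2.007 0.0178
```

&lt;p&gt;SageMaker automates the same stages at scale: the "training" step runs on managed instances, and the "evaluation" and "deployment" steps become managed jobs and endpoints.&lt;/p&gt;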

&lt;h2&gt;
  
  
  SageMaker Studio: The Central Workspace
&lt;/h2&gt;

&lt;p&gt;At the heart of SageMaker is &lt;strong&gt;SageMaker Studio&lt;/strong&gt;, a web-based integrated development environment for machine learning.&lt;/p&gt;

&lt;p&gt;Studio provides a unified workspace where you can access datasets, write training code, run experiments, and deploy models. It eliminates the need to switch between multiple tools.&lt;/p&gt;

&lt;p&gt;For beginners, Studio simplifies the learning curve because everything is organized in one place. You can launch notebooks, track experiments, visualize metrics, and manage models without configuring infrastructure manually.&lt;/p&gt;

&lt;p&gt;This centralized approach is one of SageMaker’s strongest advantages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Built-in Algorithms and Framework Support
&lt;/h2&gt;

&lt;p&gt;One of the biggest barriers to starting with machine learning is choosing the right algorithm and configuring the training environment. SageMaker reduces this friction by offering built-in algorithms optimized for performance and scalability.&lt;/p&gt;

&lt;p&gt;These algorithms cover common tasks such as classification, regression, recommendation systems, and anomaly detection.&lt;/p&gt;

&lt;p&gt;At the same time, SageMaker supports popular frameworks like TensorFlow, PyTorch, and Scikit-learn. This means developers who already have ML experience can bring their own code, while beginners can rely on pre-optimized options.&lt;/p&gt;

&lt;p&gt;The platform adapts to different skill levels rather than forcing a single workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training Models Without Managing Infrastructure
&lt;/h2&gt;

&lt;p&gt;Training machine learning models often requires significant compute power, especially for large datasets. SageMaker provisions the required resources automatically, runs the training job, and shuts down the infrastructure afterward.&lt;/p&gt;

&lt;p&gt;This on-demand model prevents unnecessary costs and removes the burden of capacity planning.&lt;/p&gt;

&lt;p&gt;Additionally, SageMaker supports distributed training, enabling large models to train faster by using multiple machines simultaneously. While beginners may not need this immediately, it becomes valuable as projects scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatic Model Tuning
&lt;/h2&gt;

&lt;p&gt;Choosing the right hyperparameters is one of the most challenging parts of machine learning. Hyperparameters control how a model learns, and small adjustments can dramatically affect accuracy.&lt;/p&gt;

&lt;p&gt;SageMaker includes automatic model tuning, which searches for the best hyperparameter combinations by running multiple training jobs in parallel.&lt;/p&gt;

&lt;p&gt;Instead of guessing optimal settings, developers can rely on systematic experimentation driven by the platform.&lt;/p&gt;
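&lt;p&gt;Conceptually, tuning is a search over hyperparameter combinations. Here is a minimal grid-search sketch in plain Python; the loss function and parameter values are made up, and this is not the SageMaker tuning API itself:&lt;/p&gt;

```python
from itertools import product

# Conceptual sketch of hyperparameter tuning: try every combination in a
# grid and keep the one with the lowest validation loss.
def validation_loss(learning_rate, depth):
    # Stand-in for "train a model with these settings and measure error";
    # this fake loss is minimized at learning_rate=0.1, depth=4.
    return abs(learning_rate - 0.1) + abs(depth - 4) * 0.05

grid = product([0.01, 0.1, 0.5], [2, 4, 8])
best = min(grid, key=lambda params: validation_loss(*params))
print(best)   # (0.1, 4): the combination with the smallest loss
```

&lt;p&gt;SageMaker's automatic model tuning does this kind of search for you, running the candidate training jobs in parallel and using smarter strategies than brute-force grids.&lt;/p&gt;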

&lt;h2&gt;
  
  
  Deploying Models Into Production
&lt;/h2&gt;

&lt;p&gt;A trained model becomes useful only when it can serve predictions to real applications. SageMaker makes deployment straightforward by allowing models to be exposed through secure API endpoints.&lt;/p&gt;

&lt;p&gt;Applications can send requests to these endpoints and receive predictions in milliseconds.&lt;/p&gt;

&lt;p&gt;SageMaker also supports auto-scaling, ensuring that endpoints adjust capacity based on traffic. This prevents performance bottlenecks during peak usage while controlling costs during quieter periods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring and Maintaining Model Performance
&lt;/h2&gt;

&lt;p&gt;Machine learning models can degrade over time as real-world data evolves, a phenomenon known as model drift. SageMaker provides monitoring capabilities that track prediction quality and detect anomalies.&lt;/p&gt;

&lt;p&gt;When performance drops, teams can retrain models using updated datasets.&lt;/p&gt;

&lt;p&gt;This continuous improvement cycle is essential for maintaining reliable AI systems.&lt;/p&gt;
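&lt;p&gt;A minimal sketch of the drift idea, assuming made-up accuracy numbers: compare recent prediction accuracy against a baseline window and flag retraining when the drop exceeds a threshold. This illustrates the concept only, not SageMaker Model Monitor itself:&lt;/p&gt;

```python
from operator import gt
from statistics import mean

# Toy drift check: accuracy at deployment time versus accuracy on fresh
# traffic. All numbers are hypothetical.
baseline_window = [0.92, 0.91, 0.93, 0.92]
recent_window = [0.84, 0.82, 0.85, 0.83]
threshold = 0.05

drop = mean(baseline_window) - mean(recent_window)
# operator.gt(a, b) is "a greater than b"; flag when the drop (about 0.085
# here) exceeds the threshold.
needs_retraining = gt(drop, threshold)
print(round(drop, 3), needs_retraining)
```

&lt;p&gt;In production this comparison would run continuously over live predictions, triggering alerts or retraining pipelines when quality degrades.&lt;/p&gt;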

&lt;h2&gt;
  
  
  A Simple Conceptual Example Using Python
&lt;/h2&gt;

&lt;p&gt;The following example illustrates what launching a training job might look like using the SageMaker Python SDK. The goal here is not to dive into algorithm details but to understand how easily training can be initiated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sagemaker&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sagemaker.sklearn.estimator&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SKLearn&lt;/span&gt;

&lt;span class="n"&gt;role&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-sagemaker-execution-role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;estimator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SKLearn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;entry_point&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;train.py&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instance_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ml.m5.large&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;framework_version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.2-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;estimator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;train&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3://your-bucket/training-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet defines a training configuration, points to a script containing the learning logic, and starts the training process using data stored in Amazon S3.&lt;/p&gt;

&lt;p&gt;SageMaker handles the infrastructure, environment setup, and execution automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing Awareness and Cost Control
&lt;/h2&gt;

&lt;p&gt;Amazon SageMaker follows a usage-based pricing model. Costs typically depend on compute instances used for training, storage, and deployed endpoints.&lt;/p&gt;

&lt;p&gt;Because resources are provisioned on demand, it is important to stop unused endpoints and notebooks. Cost management becomes especially important as experiments grow larger.&lt;/p&gt;

&lt;p&gt;For beginners, starting with smaller instances is a practical way to learn without overspending.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where SageMaker Fits in the Modern AI Stack
&lt;/h2&gt;

&lt;p&gt;After exploring multiple AWS AI services, it becomes clear that SageMaker occupies a different layer of the ecosystem.&lt;/p&gt;

&lt;p&gt;If services like Rekognition and Comprehend provide ready-made intelligence, and Bedrock provides generative capabilities through foundation models, SageMaker empowers organizations to create proprietary models trained on their own data.&lt;/p&gt;

&lt;p&gt;It represents the deepest level of AI customization available within AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Amazon SageMaker marks an important transition in your AI journey. It shifts your role from integrating intelligence into applications to designing intelligent systems yourself.&lt;/p&gt;

&lt;p&gt;For beginners, the key is not to master every SageMaker feature immediately, but to understand the workflow and gradually build familiarity. Machine learning can appear complex, but platforms like SageMaker make it significantly more approachable.&lt;/p&gt;

&lt;p&gt;AI on AWS is not just about models, it is about building intelligent, scalable systems that solve meaningful problems.&lt;/p&gt;

&lt;p&gt;What do you think about this?&lt;br&gt;
And what series do you think I should post next?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>Building Generative AI Applications Using Amazon Bedrock (AI Series)</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Sat, 31 Jan 2026 06:26:34 +0000</pubDate>
      <link>https://dev.to/jeyy/building-generative-ai-applications-using-amazon-bedrock-ai-series-27am</link>
      <guid>https://dev.to/jeyy/building-generative-ai-applications-using-amazon-bedrock-ai-series-27am</guid>
      <description>&lt;p&gt;So far in the AI on AWS series, we have discussed services that are task-specific like image analysis, document processing, text understanding, and speech processing. Amazon Bedrock is an indication that AI is moving towards general-purpose generative AI instead of task-specific AI.&lt;/p&gt;

&lt;p&gt;Generative AI allows applications to generate new content, not just analyze existing content. This includes text creation, document summarization, question-answering systems, and conversational systems. Amazon Bedrock, a managed AWS platform, enables developers to work with powerful foundation models without needing to manage infrastructure or train large models on their own.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Amazon Bedrock?
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock is a fully managed service that provides access to multiple foundation models through a single, consistent API. Foundation models are large AI models trained on very large datasets that can perform a variety of tasks, including text generation, reasoning, summarization, and classification.&lt;/p&gt;

&lt;p&gt;Prior to Bedrock, using such models meant managing GPUs, scaling infrastructure, handling security, and running complex deployments. Bedrock eliminates these obstacles by hosting the models on AWS and exposing them to developers as a service.&lt;/p&gt;

&lt;p&gt;This makes generative AI usable by programmers who are interested in developing applications, not in operating AI systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Foundation Models&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A foundation model is a large AI model that has learned patterns from vast volumes of data. Rather than being trained to complete one task, it can respond to a large number of tasks depending on the prompt.&lt;/p&gt;

&lt;p&gt;For example, the same model can answer questions, summarize text, create code, or rewrite material depending on how it is prompted. This flexibility is what makes foundation models powerful.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock offers foundation models from multiple providers, so developers can choose a model based on capability, performance, and cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Available Models on Amazon Bedrock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock hosts foundation models from providers such as Anthropic, Amazon, and Meta. These include text generation models, conversational AI models, and embedding models.&lt;/p&gt;

&lt;p&gt;The models have varying strengths: some are optimized for reasoning and safety, others for speed or cost. Bedrock abstracts these differences behind a common API, so there is no need to rewrite application code in order to switch models.&lt;/p&gt;

&lt;p&gt;This flexibility is one of Bedrock's greatest strengths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Amazon Bedrock Works for a Developer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To a developer, Amazon Bedrock behaves like a request-response service. You send a prompt to a chosen foundation model, and the model produces a response.&lt;/p&gt;

&lt;p&gt;Bedrock deals with model hosting, scaling, security, and availability. Requests and responses remain within the AWS environment, which is vital for organizations concerned with data privacy and compliance.&lt;/p&gt;

&lt;p&gt;You do not manage models or hardware. You simply consume the intelligence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt Engineering in Bedrock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deliberately designing the input text to steer the model's output is what is referred to as prompt engineering. Because foundation models are very flexible, the quality of the result depends on how the instructions are written.&lt;/p&gt;

&lt;p&gt;For example, asking a model to "summarize this document as three bullet points for a non-technical audience" produces very different results than a vague prompt.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock lets developers quickly experiment with prompts and iterate, refining outputs without having to retrain models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applications of Amazon Bedrock in Real Life&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Chatbots, document summarization systems, intelligent search systems, content generation systems, and internal knowledge assistants are built using Amazon Bedrock.&lt;/p&gt;

&lt;p&gt;Bedrock allows enterprises to build AI-based applications that help employees find information, generate reports, and automate repetitive processes. It enables startups to build generative AI features quickly without having to invest in infrastructure.&lt;/p&gt;

&lt;p&gt;For beginners, Bedrock opens up possibilities for creating modern AI applications that were not accessible before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Working with Amazon Bedrock in the AWS Console&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AWS Console has a playground-like interface that lets you experiment with various foundation models and prompts.&lt;/p&gt;

&lt;p&gt;You can choose a model, type a prompt, adjust settings such as temperature and maximum tokens, and instantly see the resulting response. This interactive space is useful for learning how generative AI behaves and how prompt changes influence the output.&lt;/p&gt;

&lt;p&gt;It also helps beginners build confidence before adding Bedrock to their applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Amazon Bedrock with Python (Conceptual Example)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is a simplified example that shows how a prompt might be sent to a Bedrock model using the AWS SDK. The exact API may vary depending on the model used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="n"&gt;bedrock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bedrock-runtime&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Explain cloud computing to a beginner in simple terms.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bedrock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;modelId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;foundation-model-id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example demonstrates how generative AI can be integrated into applications with minimal code. The complexity of large models is completely hidden from the developer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security, Privacy and Data Handling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Security is one of the main reasons companies choose Amazon Bedrock. By default, AWS does not use customer data to train the underlying foundation models.&lt;/p&gt;

&lt;p&gt;Your data stays within the AWS environment and benefits from the standard AWS security controls, including IAM, encryption and logging. This makes Bedrock suitable for enterprises and regulated industries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pricing and Cost Implications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock is a pay-as-you-go service, typically billed by input and output tokens. Pricing structures differ from model to model.&lt;/p&gt;

&lt;p&gt;Because generative AI invites large prompts and long outputs, cost management matters. Design prompts carefully and monitor usage in both development and production.&lt;/p&gt;
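
&lt;p&gt;A quick back-of-the-envelope calculation makes token-based pricing concrete. The per-1,000-token prices below are invented placeholders, not real Bedrock rates; check the pricing page for actual figures.&lt;/p&gt;

```python
# Placeholder prices, USD per 1,000 tokens -- NOT real Bedrock rates.
PRICE_PER_1K_INPUT = 0.003
PRICE_PER_1K_OUTPUT = 0.015

def estimate_cost(input_tokens, output_tokens):
    """Estimate the cost of one model invocation in USD."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# A long prompt with a long answer costs noticeably more than a short one.
print(estimate_cost(500, 1000))
print(estimate_cost(50, 100))
```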

&lt;p&gt;&lt;strong&gt;When Amazon Bedrock Is the Right Choice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock is the right choice when an application needs flexible, general-purpose AI capabilities such as text generation, reasoning, or dialogue.&lt;/p&gt;

&lt;p&gt;If you need to train a highly specialized machine learning model from the ground up, Amazon SageMaker may be the better option. Bedrock shines when you want powerful generative AI quickly and easily, without investing in infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock marks a significant shift in how developers use AI. Rather than building models, developers build experiences powered by foundation models.&lt;/p&gt;

&lt;p&gt;For getting started, Amazon Bedrock is the entry point to generative AI on AWS. It lets you build, experiment with and test intelligent applications on the same infrastructure businesses rely on.&lt;/p&gt;

&lt;p&gt;The next and final section of this introductory AI series will cover Amazon SageMaker and how it fits into the broader AI and machine learning ecosystem on AWS.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>cloud</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Converting Speech into Text Using Amazon Transcribe(AI series on AWS)</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Tue, 27 Jan 2026 11:00:00 +0000</pubDate>
      <link>https://dev.to/jeyy/converting-speech-into-text-using-amazon-transcribeai-series-on-aws-bic</link>
      <guid>https://dev.to/jeyy/converting-speech-into-text-using-amazon-transcribeai-series-on-aws-bic</guid>
      <description>&lt;p&gt;The last section of this series discussed Amazon Polly and the way in which an app can be used to transform written text into natural speech. We close the circle in this paper by taking human speech in reverse, to text.&lt;/p&gt;

&lt;p&gt;Speech to text technology has been integrated in the current-day applications. Audio data can be found everywhere, whether in virtual meetings and communication with customers via phone calls or podcasts and voice notes. This data is very slow, expensive and prone to mistakes when done manually. Amazon Transcribe fixes this issue through artificial intelligence.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Amazon Transcribe Is and Why It Matters
&lt;/h2&gt;

&lt;p&gt;Amazon Transcribe is a fully managed speech recognition service that converts speech into written text. It lets applications process audio recordings or live audio feeds and generate accurate transcriptions automatically.&lt;/p&gt;

&lt;p&gt;Historically, speech recognition systems required complex acoustic models, language models and tuning. Amazon Transcribe hides all of this complexity behind a simple interface that developers can use without prior experience in speech processing.&lt;/p&gt;

&lt;p&gt;This renders speech-to-text to be usable even by novices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Amazon Transcribe Works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When audio is sent to Amazon Transcribe, the service first analyzes the sound waves to identify speech patterns. It then splits the audio into phonemes, maps them to words using language models, and applies contextual knowledge to improve accuracy.&lt;/p&gt;

&lt;p&gt;Amazon Transcribe is trained on diverse datasets, which lets it handle different accents, speaking rates and conversational patterns. It also handles punctuation, sentence boundaries and speaker changes.&lt;/p&gt;

&lt;p&gt;To the developer, all these occur behind the scenes. You feed audio and get structured text back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Audio Formats, Languages and Features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe supports common audio formats such as MP3, WAV, FLAC, and MP4. It also supports many languages and regional dialects, making it suitable for worldwide use.&lt;/p&gt;

&lt;p&gt;Beyond basic transcription, the service offers features such as speaker identification, custom vocabularies and automatic punctuation. These features greatly improve the readability and usability of the generated text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Applications of Amazon Transcribe&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe is widely used in meeting transcription tools, call center analytics, media content indexing, and accessibility applications. Businesses use it to transcribe customer calls, analyze conversations and produce compliance records.&lt;/p&gt;

&lt;p&gt;Individual developers and students can use Transcribe to power applications such as voice-based note taking, podcast transcription, or interview documentation systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring Amazon Transcribe by Using the AWS Console&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Newcomers can easily test Amazon Transcribe from the AWS Console.&lt;/p&gt;

&lt;p&gt;After opening the Transcribe service, you can create a transcription job by pointing it at an audio file stored in Amazon S3. You pick the language and configuration options, then start the job. Once processing completes, AWS makes the transcription output available in text and JSON formats.&lt;/p&gt;

&lt;p&gt;The console-based workflow helps you understand the full lifecycle of a transcription job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Amazon Transcribe from Python (Example)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following Python code shows how to start a transcription job for an audio file stored in S3 (the bucket and file names are placeholders).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;transcribe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;transcribe&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;transcribe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_transcription_job&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;TranscriptionJobName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sample-transcription-job&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Media&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MediaFileUri&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s3://my-audio-bucket/sample-audio.mp3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;MediaFormat&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mp3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;LanguageCode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;en-US&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Transcription job started&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the job finishes, the transcription output is delivered to the configured S3 location. The output includes timestamps, confidence scores, and speaker labels when that feature is enabled.&lt;/p&gt;

&lt;p&gt;This batch processing approach suits recordings such as meetings or interviews.&lt;/p&gt;
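
&lt;p&gt;The JSON that a batch job writes to S3 can be parsed with a few lines of Python. The snippet below uses mock data shaped like a typical Transcribe result, not the output of a real job.&lt;/p&gt;

```python
import json

# Mock data shaped like the JSON a batch transcription job writes to S3
# (simplified; values are invented for illustration).
sample_output = json.dumps({
    "jobName": "sample-transcription-job",
    "results": {
        "transcripts": [{"transcript": "Hello and welcome to the meeting."}],
        "items": []  # word-level entries with timestamps and confidence scores
    }
})

result = json.loads(sample_output)
# The full transcript text lives under results -> transcripts.
text = result["results"]["transcripts"][0]["transcript"]
print(text)
```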

&lt;p&gt;&lt;strong&gt;Streaming and Real-Time Transcription&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe also offers streaming APIs for real-time transcription. This enables use cases such as live captions, voice assistants and real-time analytics.&lt;/p&gt;

&lt;p&gt;Streaming processes audio in chunks as they arrive and produces text in near real time. Although somewhat harder to implement than batch jobs, it opens the door to interactive voice-driven applications.&lt;/p&gt;
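
&lt;p&gt;The chunking idea behind streaming can be sketched in plain Python: audio arrives as small pieces rather than one complete file. This is only an illustration of the concept, not the actual streaming API.&lt;/p&gt;

```python
# Conceptual sketch: a streaming client sends audio as fixed-size chunks.
def chunk_audio(audio_bytes, chunk_size=4):
    """Yield fixed-size chunks, the way a streaming client sends audio."""
    for i in range(0, len(audio_bytes), chunk_size):
        yield audio_bytes[i:i + chunk_size]

fake_audio = b"0123456789"  # stand-in for real audio bytes
chunks = list(chunk_audio(fake_audio))
print(chunks)
```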

&lt;p&gt;&lt;strong&gt;Improving Accuracy with Custom Vocabularies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A common problem in speech-to-text applications is recognizing domain-specific terminology, names, or acronyms. Amazon Transcribe addresses this with custom vocabularies.&lt;/p&gt;

&lt;p&gt;Developers can define lists of specialized words and phrases, such as product names or technical terms. Transcribe then uses these vocabularies to improve recognition accuracy during transcription.&lt;/p&gt;

&lt;p&gt;This feature is particularly useful in industries such as healthcare, finance, and technology.&lt;/p&gt;
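
&lt;p&gt;Registering a custom vocabulary can be sketched as follows. The vocabulary name and phrases are invented examples, and the boto3 call is commented out so the snippet stays self-contained.&lt;/p&gt;

```python
# Invented domain terms that a default model might mishear.
phrases = ["Kubernetes", "boto3", "DynamoDB", "Textract"]

# Parameters for Transcribe's CreateVocabulary operation.
vocab_request = {
    "VocabularyName": "tech-terms",  # hypothetical name
    "LanguageCode": "en-US",
    "Phrases": phrases,
}

# import boto3
# transcribe = boto3.client('transcribe')
# transcribe.create_vocabulary(**vocab_request)
print(vocab_request["VocabularyName"], len(vocab_request["Phrases"]))
```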

&lt;p&gt;&lt;strong&gt;Pricing and Cost Awareness&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe pricing is based on the number of seconds of audio processed. Features such as custom vocabularies or streaming transcription can also affect the price.&lt;/p&gt;

&lt;p&gt;AWS offers a free tier with limited usage for learning and experimentation. When transcribing long audio files, developers should keep an eye on total audio duration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to use Amazon Transcribe?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use Amazon Transcribe when an application needs accurate, scalable, automated speech-to-text. It works for both stored recordings and live audio streams.&lt;/p&gt;

&lt;p&gt;If an application demands highly specialized speech recognition or offline processing with no cloud connectivity, other solutions may be necessary. For most applications, Transcribe is a solid and stable cloud-based solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe plays an important role in the AWS AI ecosystem by helping applications interpret human speech. Combined with services such as Amazon Polly and Amazon Comprehend, it lets developers build fully voice-enabled, language-aware systems.&lt;/p&gt;

&lt;p&gt;For beginners, Amazon Transcribe is a big step toward building intelligent applications that converse with people naturally.&lt;/p&gt;

&lt;p&gt;What are your thoughts about transcribe? Have you guys used it or did any projects with it yet?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>Amazon Polly - She's Holly Molly(AI on AWS series)</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Mon, 26 Jan 2026 09:37:44 +0000</pubDate>
      <link>https://dev.to/jeyy/amazon-polly-shes-holly-mollyai-on-aws-series-9lk</link>
      <guid>https://dev.to/jeyy/amazon-polly-shes-holly-mollyai-on-aws-series-9lk</guid>
      <description>&lt;p&gt;Now that we have this AI on AWS series, we do not deal with text knowledge anymore, but provide applications with a voice. Voice interface has also been freed of virtual assistants. They can be employed broadly in learning platforms, accessibility tools, customer support platforms, navigation applications and content platforms.&lt;/p&gt;

&lt;p&gt;This is made possible through Amazon Polly which is the AWS service that uses artificial intelligence to convert written text to natural-sounding speech.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Amazon Polly Is and What Problem It Solves
&lt;/h2&gt;

&lt;p&gt;Amazon Polly is a text-to-speech service that converts written text into natural-sounding audio. Traditionally, building a speech system meant recording human voices, managing audio files and handling different accents and languages. That approach was costly, time-consuming and hard to scale.&lt;/p&gt;

&lt;p&gt;Amazon Polly removes these obstacles by offering ready-to-use neural voices that can generate speech on demand. Developers simply send text to Polly and receive an audio stream back.&lt;/p&gt;

&lt;p&gt;This enables dynamic speech generation without pre-recorded audio files that need to be stored and managed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Amazon Polly Magic Behind the Scenes
&lt;/h2&gt;

&lt;p&gt;When text is sent to Amazon Polly, it passes through deep learning models trained on a wide variety of human speech samples. These models understand sentence structure, pronunciation, stress and intonation.&lt;/p&gt;

&lt;p&gt;Polly offers both standard voices and neural voices. Neural voices use more advanced models to produce more natural, expressive speech that comes closer to human narration.&lt;/p&gt;

&lt;p&gt;The output is delivered as an audio stream or file in formats such as MP3, OGG Vorbis or raw PCM, which can be played directly in applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Languages, Voices and Speech Styles&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Polly supports many languages and a wide variety of voices, both male and female, across different regions. Some voices also support speech styles such as conversational, newscaster, or empathetic tones.&lt;/p&gt;

&lt;p&gt;This flexibility lets developers pick the voice that best fits their application. For example, an educational app might use a soft, clear voice, while a news app might choose a more authoritative one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applications of Amazon Polly in the Real World&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Polly is widely used in e-learning systems to turn course content into audio lessons. Accessibility tools rely on Polly to read text aloud for visually impaired users. Customer support systems use it in automated call flows to generate spoken responses.&lt;/p&gt;

&lt;p&gt;Content creators also use Polly to produce audio versions of blogs, articles, and notifications, making information more accessible and engaging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring Amazon Polly in the AWS Console&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AWS Console offers a simple way to experiment with Amazon Polly.&lt;/p&gt;

&lt;p&gt;After opening the Polly service, you can type or paste text into the console, choose a language and voice, and immediately hear the generated speech. This hands-on approach helps beginners understand how different voices and styles affect the output.&lt;/p&gt;

&lt;p&gt;The generated audio can also be downloaded from the console for further testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Amazon Polly from Python (Example)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is a Python example that turns a piece of text into an MP3 audio file using Amazon Polly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;polly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;polly&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;polly&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synthesize_speech&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Welcome to the AI on AWS series. This audio was generated using Amazon Polly.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;OutputFormat&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mp3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;VoiceId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Joanna&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;speech.mp3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;AudioStream&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Audio file generated successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code sends the text to Amazon Polly, receives an audio stream, and saves it as an MP3 file. Polly's API is simple and can easily be integrated into web and mobile applications as well as backend services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Speech Synthesis Markup Language (SSML)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Polly also supports SSML, which gives developers control over pronunciation, pauses, pitch and speaking rate. With SSML, speech can be made more natural and expressive.&lt;/p&gt;

&lt;p&gt;For example, you can insert pauses between sentences, stress certain words, or have specific words pronounced differently.&lt;/p&gt;

&lt;p&gt;This level of control is valuable for narration, storytelling and teaching materials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streaming&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Polly supports both streaming audio and file-based generation. Streaming suits applications that need real-time speech, such as chatbots or voice assistants. File-based generation works better for pre-recorded material such as audiobooks or announcements.&lt;/p&gt;

&lt;p&gt;This distinction lets developers choose the right integration strategy for their application's needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pricing and Cost Factors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Polly pricing is based on the number of characters synthesized. Neural voices cost more than standard voices because of their higher quality.&lt;/p&gt;

&lt;p&gt;The free tier is enough for beginners exploring the service and for small projects. When scaling applications, character usage is an important metric to monitor to avoid surprises.&lt;/p&gt;
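
&lt;p&gt;Because billing is character-based, a quick length check on the input text helps anticipate cost. The per-million-character price below is a made-up placeholder, not a real Polly rate.&lt;/p&gt;

```python
# Placeholder price, USD per million characters -- NOT a real Polly rate.
PRICE_PER_MILLION_CHARS = 16.0

def estimate_polly_cost(text):
    """Estimate synthesis cost in USD for one piece of text."""
    return len(text) / 1_000_000 * PRICE_PER_MILLION_CHARS

sample = "Welcome to the AI on AWS series."
# Character count drives the bill, so check it before synthesizing.
print(len(sample), estimate_polly_cost(sample))
```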

&lt;p&gt;&lt;strong&gt;When Is Amazon Polly the Right Choice?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Polly is the best fit when applications need dynamic, scalable, natural-sounding speech. It is especially useful for accessibility, education, and voice-based user experiences.&lt;/p&gt;

&lt;p&gt;If an application needs a custom brand voice or highly personalized speech, additional services or professional voice work may be required. For most general use cases, Polly is a solid and powerful solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Polly shows how AI can make applications more user-friendly by adding voice features. By abstracting a complex speech synthesizer behind a simple API, AWS lets developers build voice-enabled systems with very little effort.&lt;/p&gt;

&lt;p&gt;For those just starting out, Amazon Polly is a helpful entry point into AI-based speech synthesis and an important step toward making applications more interactive and inclusive.&lt;/p&gt;

&lt;p&gt;Leave your suggestions below! What do you think of Polly? &lt;/p&gt;

</description>
      <category>ai</category>
      <category>cloud</category>
      <category>aws</category>
      <category>learning</category>
    </item>
    <item>
      <title>Understanding Text Using Amazon Comprehend(AI series on AWS)</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Sat, 24 Jan 2026 04:26:27 +0000</pubDate>
      <link>https://dev.to/jeyy/understanding-text-using-amazon-comprehendai-series-on-aws-11f5</link>
      <guid>https://dev.to/jeyy/understanding-text-using-amazon-comprehendai-series-on-aws-11f5</guid>
      <description>&lt;p&gt;We have already learned how to use AWS to interpret images in the case of Amazon Rekognition and documents in the case of Amazon Textract. Here we change the focus of images and documents to what all applications are interacting with on a daily basis: &lt;em&gt;text&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The information that is valuable is in the form of emails, reviews, support tickets, social media posts, chat messages, feedback forms, and survey responses. Nevertheless, it becomes impractical to read and analyze this data manually when it becomes large. &lt;/p&gt;

&lt;p&gt;Amazon Comprehend allows applications to initialise and unlock meaning in text with Natural Language Processing (NLP).&lt;/p&gt;

&lt;h2&gt;
  
  
  What Amazon Comprehend Is
&lt;/h2&gt;

&lt;p&gt;Amazon Comprehend is a fully managed artificial intelligence service that reads text and extracts sentiment, key phrases, entities, language, and topics. Unlike traditional keyword-matching approaches to text processing, Comprehend understands words in context and infers intent.&lt;/p&gt;

&lt;p&gt;For example, it can tell whether a customer review is positive or negative, recognize that "Amazon" refers to an organization rather than a river, or pull the key phrases out of a lengthy paragraph.&lt;/p&gt;

&lt;p&gt;This matters because today's applications generate huge volumes of unstructured text. Amazon Comprehend lets programmers turn that unstructured text into structured, actionable information without building NLP models themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;When you send text to Amazon Comprehend, it processes the input with pre-trained deep learning models designed to understand language. These models are trained on large datasets covering many languages and writing styles.&lt;/p&gt;

&lt;p&gt;The service splits text into tokens, parses grammar, evaluates semantic meaning, and applies classification methods to generate insights. All of this happens behind the scenes; to a developer, it is still a simple API call that returns a JSON response.&lt;/p&gt;

&lt;p&gt;This is the abstraction that makes Comprehend usable even to software developers who have no experience in either linguistics or machine learning.&lt;/p&gt;
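
&lt;p&gt;To make that JSON response concrete, here is a mock of the shape returned for sentiment detection. The score values are invented for illustration, not real output.&lt;/p&gt;

```python
# Mock of the JSON shape Comprehend returns for sentiment detection
# (values invented; a real call returns this structure from detect_sentiment).
mock_response = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {
        "Positive": 0.93,
        "Negative": 0.01,
        "Neutral": 0.05,
        "Mixed": 0.01,
    },
}

# Applications usually branch on the overall label and keep the scores
# around for confidence thresholds.
label = mock_response["Sentiment"]
confidence = mock_response["SentimentScore"][label.capitalize()]
print(label, confidence)
```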

&lt;p&gt;&lt;strong&gt;Major Functionalities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Comprehend is characterized by a number of essential features that fulfill a particular need related to the analysis of the text.&lt;/p&gt;

&lt;p&gt;Sentiment analysis identifies positive, negative, neutral, and mixed emotion in text. It is typically applied to customer feedback systems, review analysis and social media monitoring.&lt;/p&gt;

&lt;p&gt;Entity recognition identifies real-world objects such as people, products, locations, dates, quantities, and organizations. Given the sentence "Apple released the iPhone in California", Comprehend recognizes Apple as an organization and California as a location.&lt;/p&gt;

&lt;p&gt;Key phrase extraction highlights the most relevant phrases in a text. This is handy for summarization, indexing, and search optimization.&lt;/p&gt;

&lt;p&gt;Language detection automatically identifies the language of the input text, which is particularly useful for global applications handling multilingual data.&lt;/p&gt;

&lt;p&gt;Topic modeling, available through asynchronous jobs, finds themes across large collections of unlabeled documents.&lt;/p&gt;

&lt;p&gt;All these enable applications to comprehend textual data in large scale in a profound way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Comprehend is popular across industries. Support platforms analyze tickets to detect recurring problems and gauge customer mood. E-commerce companies study product perception through reviews. Financial institutions scan reports and emails for risk indicators. Healthcare organizations analyze clinical notes, often using Comprehend Medical, a specialized variant of the service.&lt;/p&gt;

&lt;p&gt;Even in simple projects, Comprehend can power features such as feedback dashboards, automated tagging systems or sentiment-driven alerts.&lt;/p&gt;

&lt;p&gt;Next, let us look at the service in the AWS Console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Walkthrough in the AWS Console&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once signed in to AWS, type Comprehend into the search box and open the service. The console includes a real-time analysis area where you can paste sample text, analyze it instantly, and see the detected sentiment, entities, key phrases and language.&lt;/p&gt;

&lt;p&gt;This interactive experience helps users learn how the service interprets different kinds of text and build confidence before using it in applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developing Python applications with Amazon Comprehend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following Python example uses the AWS SDK to analyze sentiment and detect entities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;comprehend&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;comprehend&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Customer&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;was&lt;/span&gt; &lt;span class="n"&gt;very&lt;/span&gt; &lt;span class="n"&gt;good&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;though&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="n"&gt;took&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="nb"&gt;long&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;get&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;goods&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="n"&gt;sentiment_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;comprehend&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;detect&lt;/span&gt; &lt;span class="nf"&gt;sentiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;LanguageCode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;en&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;entities_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;comprehend&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;identify&lt;/span&gt; &lt;span class="nf"&gt;entities &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;LanguageCode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;en&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sentiment:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sentiment_response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Sentiment&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Entities:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;entity&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;entities&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Entities&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entity&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Text&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;entity&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Type&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This illustrates how easily Comprehend fits into applications. With just a few lines of code, your application can pull out insights that would once have required a complicated NLP pipeline.&lt;/p&gt;

&lt;p&gt;The response also includes confidence scores, which can be used in decision-making logic within applications.&lt;/p&gt;
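&lt;p&gt;As a sketch of that idea, here is a small routing function built on a detect_sentiment-style response. The response shape below mirrors Comprehend's documented output; the 0.7 threshold and the routing rules are illustrative choices, not AWS defaults.&lt;/p&gt;

```python
# Hypothetical routing logic built on a detect_sentiment-style response.
# The response shape mirrors Comprehend's documented output; the 0.7
# threshold and the "escalate" rule are illustrative, not AWS defaults.

def route_feedback(response, threshold=0.7):
    """Decide what to do with a piece of feedback based on sentiment scores."""
    sentiment = response['Sentiment']
    scores = response['SentimentScore']
    if sentiment == 'NEGATIVE' and scores['Negative'] >= threshold:
        return 'escalate-to-support'
    if sentiment == 'POSITIVE' and scores['Positive'] >= threshold:
        return 'log-as-praise'
    return 'queue-for-human-review'  # low confidence, mixed, or neutral

# Example response, shaped like a real detect_sentiment result:
sample = {
    'Sentiment': 'NEGATIVE',
    'SentimentScore': {'Positive': 0.02, 'Negative': 0.93,
                       'Neutral': 0.04, 'Mixed': 0.01},
}
print(route_feedback(sample))  # escalate-to-support
```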

&lt;p&gt;&lt;strong&gt;Batch Processing and Asynchronous Jobs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although the real-time APIs work well for small inputs, Amazon Comprehend also offers an asynchronous mode for processing large volumes of text. Batch jobs can analyze thousands of documents stored in S3, with the results delivered once processing completes.&lt;/p&gt;

&lt;p&gt;This makes Comprehend well suited to analytics workloads, historical data processing, and enterprise-scale text analysis.&lt;/p&gt;
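&lt;p&gt;A minimal sketch of how such a batch job is set up. The S3 URIs and IAM role ARN below are placeholders you would replace with your own; the parameter names follow the documented request shape of start_sentiment_detection_job.&lt;/p&gt;

```python
# Sketch of setting up an asynchronous Comprehend batch job. The bucket
# names and role ARN are placeholders; the parameter names match the
# start_sentiment_detection_job request shape.

def build_sentiment_job_params(input_uri, output_uri, role_arn):
    """Assemble the request for an asynchronous sentiment-detection job."""
    return {
        'InputDataConfig': {
            'S3Uri': input_uri,
            'InputFormat': 'ONE_DOC_PER_LINE',  # one document per line of each file
        },
        'OutputDataConfig': {'S3Uri': output_uri},
        'DataAccessRoleArn': role_arn,
        'LanguageCode': 'en',
    }

params = build_sentiment_job_params(
    's3://my-input-bucket/reviews/',                       # hypothetical bucket
    's3://my-output-bucket/results/',                      # hypothetical bucket
    'arn:aws:iam::123456789012:role/ComprehendS3Access',   # hypothetical role
)
# With credentials configured, the job would be started like this:
# import boto3
# comprehend = boto3.client('comprehend')
# job = comprehend.start_sentiment_detection_job(**params)
print(params['InputDataConfig']['InputFormat'])
```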

&lt;p&gt;&lt;strong&gt;Pricing and Cost Awareness&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Comprehend is priced on a pay-as-you-go basis, usually by the number of characters processed. The price differs according to the type of analysis being performed.&lt;/p&gt;

&lt;p&gt;The free tier allows enough usage to experiment with the service, learn it, and build small projects. When working with large amounts of text, however, developers should monitor usage to prevent unforeseen costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Which Cases Should we use Amazon Comprehend?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Comprehend is the right option when your application needs to interpret text rather than merely search for words. It is particularly applicable to sentiment analysis, entity extraction, text classification, and large-scale document analysis.&lt;/p&gt;

&lt;p&gt;If your application requires a very domain-specific understanding of language, you might need custom classification models or specialised services. For most general-purpose applications, however, Comprehend is more than capable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Amazon Comprehend demonstrates how powerful AI can be made easily available as simple APIs. By hiding the complexity of NLP, AWS lets developers concentrate on functionality rather than models.&lt;/p&gt;

&lt;p&gt;For beginners, understanding Amazon Comprehend is a valuable next skill on the way to creating intelligent, data-driven applications that understand human language.&lt;/p&gt;

&lt;p&gt;Next in this series, we will discuss Amazon Polly, the text-to-speech service that transforms text into voice and opens up possibilities for voice-enabled applications.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cloud</category>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Extracting Text from Documents Using Amazon Textract (AI series)</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Wed, 21 Jan 2026 15:19:50 +0000</pubDate>
      <link>https://dev.to/jeyy/extracting-text-from-documents-using-amazon-textract-ai-series-42eb</link>
      <guid>https://dev.to/jeyy/extracting-text-from-documents-using-amazon-textract-ai-series-42eb</guid>
      <description>&lt;p&gt;The last stop on our journey through Amazon Rekognition and the ease with which image analysis can be made by AWS even when you are just getting your legs wet.&lt;/p&gt;

&lt;p&gt;We are now going a notch further, into a problem that everybody encounters: extracting text and information from documents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need for Textract :
&lt;/h2&gt;

&lt;p&gt;Scanned PDFs, invoices, application forms, ID proofs and bank statements still flood the internet. Extracting information from these once meant either typing it in manually or running a series of complex OCR pipelines. Amazon Textract removes that aggravation with AI.&lt;/p&gt;

&lt;p&gt;Most documents are simply a mess. People can read a scanned invoice or form with ease, but machines find it difficult to tell what belongs to which field, what is a table, and what is mere fluff.&lt;/p&gt;

&lt;p&gt;Simple OCR applications can scoop up text, but they have no idea of context. They cannot tell with confidence whether something is a heading or a value, or whether a chunk is a paragraph or a table cell.&lt;/p&gt;

&lt;p&gt;Textract is not mere OCR; it also understands the layout of documents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Okay, So what exactly is Amazon Textract ?
&lt;/h2&gt;

&lt;p&gt;Textract is an AI service that automatically extracts text, key-value pairs, and tables from scanned documents and PDFs. It is built on machine learning models trained to read document layouts.&lt;/p&gt;

&lt;p&gt;Instead of spitting out raw text, Textract returns useful information such as form fields, table rows, and the links between labels and values.&lt;/p&gt;

&lt;p&gt;This is why it is such a win for businesses that depend on documents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works?&lt;/strong&gt;&lt;br&gt;
Given a document, Textract first performs OCR to identify the text. Deep-learning models then analyse layout patterns such as alignment, spacing, and grouping.&lt;/p&gt;

&lt;p&gt;From that, Textract picks out structured items such as form fields (e.g. Name: John Doe) and the rows and columns of a table. The final output is clean JSON, easily readable by any developer.&lt;/p&gt;

&lt;p&gt;As far as your app is concerned, it is simply an API call, but the magic behind it is much smarter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uses :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Textract can be relied upon in any business that handles a ton of documents. Banks pull information out of statements and loan files. Insurers process claim forms automatically. HR departments streamline resumes and onboarding documents.&lt;/p&gt;

&lt;p&gt;Student projects and even startups can use Textract to build document automation without creating an OCR stack.&lt;/p&gt;

&lt;p&gt;The simplest way to explore Textract is through the AWS Console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Log in to AWS and select Textract from the list of services. It has a demo where you can upload sample documents. After you upload a file, you can choose between text, forms, and tables.&lt;/p&gt;

&lt;p&gt;The console quickly displays the blocks of text, key-value pairs, and table structures it has found in the document. This visual verification lets even newcomers see exactly how Textract interprets the doc.&lt;/p&gt;

&lt;p&gt;The following Python snippet uses the AWS SDK to analyze a PDF stored in an S3 bucket and extract form data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;textract&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;textract&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;textract&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;analyze_document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;Document&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;S3Object&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Bucket&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-documents-bucket&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;application_form.pdf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;FeatureTypes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;FORMS&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TABLES&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;block&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Blocks&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;block&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;BlockType&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;KEY_VALUE_SET&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;block&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code asks Textract to read the document, identify its structure, and return the extracted data as blocks.&lt;/p&gt;

&lt;p&gt;In this case, the analyze_document API is used rather than simple text detection. This is what allows Textract to send back structured data instead of raw text.&lt;/p&gt;

&lt;p&gt;The response is made up of block types such as WORD, LINE, TABLE, CELL, and KEY_VALUE_SET. It can seem enormous at first, but once you learn how the IDs and relationships fit together, it is not very difficult.&lt;/p&gt;
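&lt;p&gt;Turning the raw Blocks list into usable key-value pairs is mostly a matter of following those relationship IDs. A minimal sketch (the block shapes mirror Textract's documented response; the sample data at the end is synthetic):&lt;/p&gt;

```python
# Resolve Textract KEY_VALUE_SET blocks into a key -> value dict.
# KEY blocks link to their VALUE block and to their CHILD WORD blocks;
# VALUE blocks link to their own CHILD WORD blocks.

def extract_key_values(blocks):
    """Walk the Blocks list and return form fields as {key_text: value_text}."""
    block_map = {b['Id']: b for b in blocks}

    def text_of(block):
        # Join the WORD children of a block into one string.
        words = []
        for rel in block.get('Relationships', []):
            if rel['Type'] == 'CHILD':
                for cid in rel['Ids']:
                    child = block_map[cid]
                    if child['BlockType'] == 'WORD':
                        words.append(child['Text'])
        return ' '.join(words)

    pairs = {}
    for b in blocks:
        if b['BlockType'] == 'KEY_VALUE_SET' and 'KEY' in b.get('EntityTypes', []):
            value_text = ''
            for rel in b.get('Relationships', []):
                if rel['Type'] == 'VALUE':
                    for vid in rel['Ids']:
                        value_text = text_of(block_map[vid])
            pairs[text_of(b)] = value_text
    return pairs

# Tiny synthetic response: one key ("Name") pointing at one value ("John").
sample_blocks = [
    {'Id': 'k1', 'BlockType': 'KEY_VALUE_SET', 'EntityTypes': ['KEY'],
     'Relationships': [{'Type': 'VALUE', 'Ids': ['v1']},
                       {'Type': 'CHILD', 'Ids': ['w1']}]},
    {'Id': 'v1', 'BlockType': 'KEY_VALUE_SET', 'EntityTypes': ['VALUE'],
     'Relationships': [{'Type': 'CHILD', 'Ids': ['w2']}]},
    {'Id': 'w1', 'BlockType': 'WORD', 'Text': 'Name'},
    {'Id': 'w2', 'BlockType': 'WORD', 'Text': 'John'},
]
print(extract_key_values(sample_blocks))  # {'Name': 'John'}
```

In a real application you would pass response['Blocks'] from analyze_document straight into this function.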

&lt;p&gt;&lt;strong&gt;Synchronous vs Asynchronous :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Textract provides two ways of running things. The synchronous APIs suit small documents and on-the-fly usage. The asynchronous APIs handle large PDFs and batches, storing the output in S3 when the job finishes.&lt;/p&gt;

&lt;p&gt;Most applications begin with the synchronous APIs and switch to asynchronous as they grow.&lt;/p&gt;
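&lt;p&gt;Switching to the asynchronous APIs usually means starting a job and polling its status. A small, generic poller makes the pattern explicit; start_document_analysis and get_document_analysis are the real Textract calls, while the poller itself and the fake status source below are illustrative.&lt;/p&gt;

```python
import time

# Generic polling loop for an asynchronous job. With Textract you would
# start a job with start_document_analysis and pass a function that calls
# get_document_analysis(JobId=...) and returns its JobStatus; the fake
# iterator below stands in for that here.

def wait_for_job(get_status, interval=0.0, max_polls=100):
    """Poll get_status() until it reports SUCCEEDED or FAILED."""
    for _ in range(max_polls):
        status = get_status()
        if status in ('SUCCEEDED', 'FAILED'):
            return status
        time.sleep(interval)
    raise TimeoutError('job did not finish in time')

# Fake status source: IN_PROGRESS twice, then SUCCEEDED.
statuses = iter(['IN_PROGRESS', 'IN_PROGRESS', 'SUCCEEDED'])
print(wait_for_job(lambda: next(statuses)))  # SUCCEEDED
```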

&lt;p&gt;&lt;strong&gt;Pricing :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Textract charges per page. The rate varies depending on whether you are extracting plain text or digging out forms and tables. For learning or small projects it remains cheap, particularly when you are only processing a few documents.&lt;/p&gt;

&lt;p&gt;It also has a free tier that allows some usage at no cost, so you can take a look around without spending money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to use ?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use Textract where you need structured data from documents, not just raw text. If you want to automate documents, process forms, or extract data from scanned files, Textract is a sound choice.&lt;/p&gt;

&lt;p&gt;If all you need is to extract plain text from images, plain OCR may work, but Textract truly shines when layout matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Textract demonstrates that AWS AI is not merely the cool stuff of sci-fi movies; it addresses real-life problems. By converting messy documents into clean data, it removes a massive burden from many business workflows.&lt;/p&gt;

&lt;p&gt;For a newcomer, Textract is a terrific demonstration of how AI services can be inserted into apps with minimal effort and huge returns.&lt;/p&gt;

&lt;p&gt;In the following post, we will explore Amazon Comprehend, the AI service that reads text, detects sentiment, and generates insights.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Getting Started with Amazon Rekognition(AI series in AWS)</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Tue, 20 Jan 2026 16:26:20 +0000</pubDate>
      <link>https://dev.to/jeyy/getting-started-with-amazon-rekognitionai-series-in-aws-51o1</link>
      <guid>https://dev.to/jeyy/getting-started-with-amazon-rekognitionai-series-in-aws-51o1</guid>
      <description>&lt;p&gt;Artificial Intelligence often feels intimidating to beginners, especially when terms like machine learning models, neural networks, and training datasets are thrown around. AWS simplifies this journey by offering ready-to-use AI services where you can build powerful features without having deep AI knowledge.&lt;/p&gt;

&lt;p&gt;In this first part of the series, we will explore Amazon Rekognition, one of the easiest AI services on AWS to get started with.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Amazon Rekognition?
&lt;/h2&gt;

&lt;p&gt;Amazon Rekognition is an AI service that allows applications to analyze images and videos. It can identify objects, scenes, text, faces, and even detect whether people are wearing protective equipment like helmets or masks.&lt;/p&gt;

&lt;p&gt;The important part is this: you do not need to build or train any machine learning model. AWS handles all of that behind the scenes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Amazon Rekognition is Beginner-Friendly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rekognition is a great starting point for beginners because it is fully managed and works with just an API call or a few clicks in the AWS Console. You upload an image, call the service, and receive structured results in JSON format.&lt;/p&gt;

&lt;p&gt;This makes it ideal for developers who want to understand how AI services integrate into real applications without learning complex ML theory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Rekognition is commonly used in applications such as face verification systems, content moderation platforms, document scanning apps, and security monitoring solutions. For example, a photo-upload application can automatically detect inappropriate content, or a company can verify employee identity during login using facial comparison.&lt;br&gt;
These use cases show how AI can be embedded into everyday software products.&lt;/p&gt;
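&lt;p&gt;For instance, a photo-upload app might gate uploads on Rekognition's moderation output. A minimal sketch, where the response shape follows the documented detect_moderation_labels output and the 80% threshold is an application choice, not an AWS default:&lt;/p&gt;

```python
# Hypothetical gate built on a detect_moderation_labels-style response.
# The ModerationLabels shape mirrors Rekognition's documented output; the
# confidence threshold is an illustrative application choice.

def is_image_safe(response, threshold=80.0):
    """Reject the image if any moderation label meets the confidence threshold."""
    for label in response['ModerationLabels']:
        if label['Confidence'] >= threshold:
            return False
    return True

clean = {'ModerationLabels': []}
flagged = {'ModerationLabels': [
    {'Name': 'Explicit Nudity', 'ParentName': '', 'Confidence': 97.2},
]}
print(is_image_safe(clean), is_image_safe(flagged))  # True False
```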
&lt;h2&gt;
  
  
  How Amazon Rekognition Works
&lt;/h2&gt;

&lt;p&gt;When you upload an image to Rekognition, the service processes it using pre-trained deep learning models. These models analyze visual patterns and return meaningful labels such as detected objects, emotions on faces, or extracted text.&lt;/p&gt;

&lt;p&gt;From a developer's perspective, it works like a simple request-response system. You send an image, and AWS sends back the analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Amazon Rekognition with Python (Example)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is a simple Python example using the AWS SDK (boto3) to detect labels in an image stored in an S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="n"&gt;rekognition&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rekognition&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rekognition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detect_labels&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;S3Object&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Bucket&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-image-bucket&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sample.jpg&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;MaxLabels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;MinConfidence&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;label&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Labels&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Confidence&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code sends an image to Rekognition and prints detected objects with confidence scores. Even if you are new to AWS SDKs, the structure is straightforward and readable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pricing Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Rekognition follows a pay-as-you-go model. You are charged based on the number of images or video minutes processed. For small experiments and learning purposes, the cost is usually minimal. AWS also provides a free tier for limited usage, which is sufficient for beginners to practice.&lt;/p&gt;

&lt;p&gt;Always check the official pricing page before using it in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Should You Use Rekognition?
&lt;/h2&gt;

&lt;p&gt;You should consider Amazon Rekognition when your application needs image or video understanding without investing time in training machine learning models. It is especially useful for startups, student projects, and rapid prototyping.&lt;/p&gt;

&lt;p&gt;If your requirement involves highly customized vision models, then services like SageMaker would be more suitable, which we will cover later in this series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Rekognition is one of the easiest ways to introduce AI into your applications. It allows beginners to build intelligent features using simple API calls while AWS handles the complexity behind the scenes.&lt;/p&gt;

&lt;p&gt;In the next part of this series, we will look at Amazon Textract, a service that extracts text and structured data from documents such as PDFs and scanned images.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>beginners</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Testing VPC connectivity - A quick hands-on</title>
      <dc:creator>Jeya Shri</dc:creator>
      <pubDate>Sat, 10 Jan 2026 14:30:00 +0000</pubDate>
      <link>https://dev.to/jeyy/testing-vpc-connectivity-a-quick-hands-on-4k0f</link>
      <guid>https://dev.to/jeyy/testing-vpc-connectivity-a-quick-hands-on-4k0f</guid>
      <description>&lt;h2&gt;
  
  
  Testing VPC Connectivity
&lt;/h2&gt;

&lt;p&gt;I recently completed a hands-on project focused on understanding and testing Virtual Private Cloud (VPC) connectivity in AWS. The goal of this exercise was to verify how different components inside a VPC communicate with each other and with the internet, while maintaining proper security boundaries. By testing each connection step by step, I was able to clearly see how AWS networking works in practice.&lt;/p&gt;

&lt;p&gt;This project helped me move beyond configuration and actually validate whether the architecture behaved the way it was designed to.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 1: Setting Up the VPC Basics
&lt;/h3&gt;

&lt;p&gt;I started by creating a custom VPC with both public and private subnets. To support proper traffic flow, I configured route tables for each subnet type, attached an internet gateway to the VPC, and set up a NAT gateway. This initial setup formed the backbone of the network and allowed me to understand how AWS separates public-facing resources from private ones while still enabling controlled communication.&lt;/p&gt;

&lt;p&gt;At this stage, the focus was on network segmentation and ensuring that each subnet had the correct routing rules associated with it.&lt;/p&gt;
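&lt;p&gt;The routing rules themselves are easy to reason about: a route table is a set of CIDR destinations, and the most specific match wins. Here is a small sketch of that longest-prefix-match behaviour; the table contents mirror the public-subnet setup above, while the gateway ID is a placeholder.&lt;/p&gt;

```python
import ipaddress

# Longest-prefix-match lookup, the rule AWS route tables follow. The table
# below mirrors a typical public-subnet route table; the gateway ID is a
# placeholder.

def next_hop(route_table, destination_ip):
    """Return the target of the most specific route matching destination_ip."""
    ip = ipaddress.ip_address(destination_ip)
    best = None
    for cidr, target in route_table.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    if best is None:
        raise ValueError('no matching route')
    return best[1]

public_routes = {
    '10.0.0.0/16': 'local',           # traffic within the VPC stays internal
    '0.0.0.0/0': 'igw-0123456789',    # everything else goes to the internet gateway
}
print(next_hop(public_routes, '10.0.1.25'))      # local
print(next_hop(public_routes, '93.184.216.34'))  # igw-0123456789
```

A private subnet's table would look the same except that 0.0.0.0/0 points at the NAT gateway instead.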




&lt;h3&gt;
  
  
  Step 2: Connecting to the Public EC2 Instance
&lt;/h3&gt;

&lt;p&gt;Once the VPC was ready, I launched a public EC2 instance inside the public subnet. I associated an Elastic IP with the instance and connected to it securely using SSH. This step helped me understand how public instances are accessed from the internet and how security groups and key-based authentication work together to allow secure inbound access.&lt;/p&gt;

&lt;p&gt;Successfully connecting to the instance confirmed that the internet gateway and route table for the public subnet were configured correctly.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 3: Testing Connectivity Between EC2 Instances
&lt;/h3&gt;

&lt;p&gt;Next, I launched a private EC2 instance within the private subnet. From the public EC2 instance, I tested connectivity to the private instance using its private IP address. This demonstrated how instances within the same VPC can communicate internally, even when they are placed in different subnets.&lt;/p&gt;

&lt;p&gt;This step highlighted the role of internal routing and security group rules in enabling private, internal communication without exposing resources to the public internet.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 4: Testing Internet Access from the Private Subnet
&lt;/h3&gt;

&lt;p&gt;Finally, I tested internet connectivity for the private EC2 instance. Since private instances do not have direct access to the internet, the NAT gateway played a crucial role here. By routing outbound traffic through the NAT gateway, the private instance was able to access the internet securely without having a public IP address.&lt;/p&gt;

&lt;p&gt;This confirmed how NAT gateways enable controlled outbound access while preventing any inbound connections from external networks.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Learnings
&lt;/h2&gt;

&lt;p&gt;This project gave me practical exposure to testing VPC connectivity rather than just configuring it. By validating communication between public and private instances and confirming controlled internet access, I gained a deeper understanding of how secure and scalable VPC architectures are designed in AWS. Testing each layer of connectivity helped reinforce how individual networking components work together as a complete system.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>vpc</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
