<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: fvgm-spec</title>
    <description>The latest articles on DEV Community by fvgm-spec (@fvgmspec).</description>
    <link>https://dev.to/fvgmspec</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F346351%2Fc4f77945-0d04-4452-a598-9b047738015e.png</url>
      <title>DEV Community: fvgm-spec</title>
      <link>https://dev.to/fvgmspec</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fvgmspec"/>
    <language>en</language>
    <item>
      <title>Building a basic path finder app with help of AI in a few minutes</title>
      <dc:creator>fvgm-spec</dc:creator>
      <pubDate>Sun, 19 Jan 2025 13:29:48 +0000</pubDate>
      <link>https://dev.to/fvgmspec/building-a-basic-path-finder-app-with-help-of-ai-in-a-few-minutes-1pkm</link>
      <guid>https://dev.to/fvgmspec/building-a-basic-path-finder-app-with-help-of-ai-in-a-few-minutes-1pkm</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github"&gt;GitHub Copilot Challenge &lt;/a&gt;: Fresh Starts&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;If you care about protecting our natural environment, it is useful to have an application always at hand that shows you the closest paths to recycling containers. I live in a little country in Latin America between Brazil and Argentina (you may guess which one it is); here, programs for protecting the environment by recycling paper, glass, plastic, and other recyclable waste are not nearly as developed as in countries such as Germany or elsewhere in northern Europe. &lt;/p&gt;

&lt;p&gt;People who really care about this issue, at least here in my country, do as many small things as they can to help mitigate climate change, such as dropping recyclable waste at the designated collection points. Having an efficient way to do that would benefit everybody, so this was my idea: ask AI to help me build an app that makes it efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u0xpno082h1f6cipbqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u0xpno082h1f6cipbqs.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Repo
&lt;/h2&gt;

&lt;p&gt;To review what I did, follow the link to &lt;a href="https://github.com/fvgm-spec/path-finder/tree/main" rel="noopener noreferrer"&gt;this repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Copilot Experience
&lt;/h2&gt;

&lt;p&gt;The first challenge I faced was getting started with GitHub Copilot in VS Code. It turns out to be as easy as creating the directory where you are going to build your code, pressing Ctrl + I, and there you go...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhkn3e37zci16okq2ajm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhkn3e37zci16okq2ajm.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I received the first answer to my prompt, I knew I was on the right "path": I wanted to build a &lt;code&gt;path-finder&lt;/code&gt; app that would help me find the most efficient routes near my house for taking out my recycling.&lt;/p&gt;

&lt;p&gt;As I wanted to build an ASAP (as simple as possible) app in Python, I asked Copilot for exactly that:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio4r07ddqztu85xipya0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio4r07ddqztu85xipya0.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I got the suggestions from the AI, I started applying them to my code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzisr2k4gztgdup6nh0sy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzisr2k4gztgdup6nh0sy.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result of my prompt was clear: a path-finder app, simple enough that I could immediately start improving it using the AI models provided by GitHub Copilot. &lt;/p&gt;
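&lt;p&gt;The core of an app like this can be sketched in a few lines of Python: compute the great-circle distance from home to each container and keep the nearest one. The coordinates and container names below are invented for illustration, not data from the real app.&lt;/p&gt;

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def closest_container(home, containers):
    # Return (name, distance_km) of the container nearest to home.
    return min(
        ((name, haversine_km(*home, lat, lon)) for name, (lat, lon) in containers.items()),
        key=lambda pair: pair[1],
    )

# Hypothetical container locations (lat, lon) -- not real data.
containers = {
    "Plaza container": (-34.9050, -56.1910),
    "Park container": (-34.9011, -56.1645),
}
home = (-34.9033, -56.1882)
name, dist = closest_container(home, containers)
print(name, round(dist, 2), "km")
```

&lt;p&gt;A real app would load actual container locations (for example from OpenStreetMap or a municipal dataset) instead of the hard-coded dictionary above.&lt;/p&gt;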

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50pdzokk9tm66edkzgh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50pdzokk9tm66edkzgh7.png" alt="Image description" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Models
&lt;/h2&gt;

&lt;p&gt;I used the &lt;code&gt;Claude 3.5 Sonnet&lt;/code&gt; model from Anthropic, as I had previously used it with other AI tools, such as claude.ai. It is reliable for helping with code and provides very complete code suggestions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This demo using GitHub Copilot clearly shows how a simple project can be started from scratch with nothing more than an idea in mind, and then improved using creativity and GitHub Copilot's features.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>Designing Cost-Optimized AWS Architectures: A Comprehensive Guide</title>
      <dc:creator>fvgm-spec</dc:creator>
      <pubDate>Thu, 02 Nov 2023 16:49:57 +0000</pubDate>
      <link>https://dev.to/fvgmspec/designing-cost-optimized-aws-architectures-a-comprehensive-guide-2gl0</link>
      <guid>https://dev.to/fvgmspec/designing-cost-optimized-aws-architectures-a-comprehensive-guide-2gl0</guid>
      <description>&lt;p&gt;In this tutorial, we will delve into the art and science of designing cost-optimized AWS architectures. Cost optimization is a crucial aspect of AWS cloud computing, and this guide will provide a blend of theoretical knowledge and practical exercises to help you create architectures that not only meet your performance requirements but also optimize your AWS bill.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction to Cost Optimization in AWS&lt;/li&gt;
&lt;li&gt;Understanding AWS Pricing Models&lt;/li&gt;
&lt;li&gt;Design Principles for Cost Optimization&lt;/li&gt;
&lt;li&gt;Architecting for Scalability and Elasticity&lt;/li&gt;
&lt;li&gt;Use Case: Cost Optimizing Client's Architecture&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction to Cost Optimization in AWS
&lt;/h2&gt;

&lt;p&gt;Understanding the importance of cost optimization is the first step towards building cost-effective AWS architectures. We'll introduce the concept and highlight its significance in cloud computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Understanding AWS Pricing Models
&lt;/h2&gt;

&lt;p&gt;Before diving into cost optimization, it's crucial to understand AWS pricing models. AWS, like most cloud providers, offers a pay-as-you-go approach that lets users pay only for the individual services they need, without long-term contracts or complex licensing. Users pay only for the services they consume, and once they stop using them, there are no additional costs or termination fees.&lt;/p&gt;

&lt;p&gt;The image below shows the basic principles of the AWS pricing model: every user is charged only for the resources they consume, with no upfront fees and no minimum commitment or long-term contracts.&lt;/p&gt;
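&lt;p&gt;A toy Python sketch makes the pay-as-you-go idea concrete: the bill is just usage multiplied by a per-unit rate, summed over the resources actually consumed. The rates below are invented for illustration and are not actual AWS prices.&lt;/p&gt;

```python
# Hypothetical per-unit rates -- NOT actual AWS prices.
rates = {
    "compute_hours": 0.05,      # $ per instance-hour
    "storage_gb_month": 0.02,   # $ per GB-month
    "requests_per_1k": 0.0004,  # $ per 1,000 requests
}

def monthly_bill(usage):
    # Pay-as-you-go: sum cost over only the resources actually consumed;
    # no upfront fee, no minimum commitment.
    return sum(rates[item] * amount for item, amount in usage.items())

usage = {"compute_hours": 300, "storage_gb_month": 50}
print(round(monthly_bill(usage), 2))  # 300*0.05 + 50*0.02 = 16.0
```

&lt;p&gt;Stop consuming a resource and its term simply drops out of the sum; there is no termination fee to model.&lt;/p&gt;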

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo9j9qbexwdrf2y6fqd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo9j9qbexwdrf2y6fqd1.png" alt="was pricing model" width="447" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Design Principles for Cost Optimization
&lt;/h2&gt;

&lt;p&gt;One of the pillars of the &lt;a href="https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-lens-whitepapers.sort-order=desc&amp;amp;wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-guidance-whitepapers.sort-order=desc" rel="noopener noreferrer"&gt;AWS Well-Architected Framework&lt;/a&gt; is Cost Optimization. The design principles of this pillar include:&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Adopting an efficient consumption model&lt;/strong&gt; design principle is directly related to the &lt;a href="https://aws.amazon.com/pricing/?aws-products-pricing.sort-by=item.additionalFields.productNameLowercase&amp;amp;aws-products-pricing.sort-order=asc&amp;amp;awsf.Free%20Tier%20Type=*all&amp;amp;awsf.tech-category=*all" rel="noopener noreferrer"&gt;pay-as-you-go&lt;/a&gt; pricing model previously described: users pay only for the computing resources they consume, increasing or decreasing usage depending on business requirements.&lt;/p&gt;

&lt;p&gt;Another notable design principle is &lt;strong&gt;Measuring overall efficiency&lt;/strong&gt;: measure the output of your workloads and the costs associated with delivering it, then use the results of these analyses to understand the gains you make from increasing output, adding functionality, and reducing cost.&lt;/p&gt;

&lt;p&gt;Another principle is &lt;strong&gt;Stop spending money on undifferentiated heavy lifting&lt;/strong&gt;: AWS does the heavy lifting of data center operations such as racking, stacking, and powering servers, and with managed services it also removes the operational burden of managing operating systems and applications.&lt;/p&gt;

&lt;p&gt;Last but not least, AWS offers the opportunity to &lt;strong&gt;Analyze and attribute expenditure&lt;/strong&gt; by making it easy to accurately identify the cost and usage of workloads, which in turn allows transparent attribution of IT costs to revenue streams and individual workload owners. &lt;/p&gt;

&lt;p&gt;If you feel like exploring further, you can always review &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/design-principles.html" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Architecting for Scalability and Elasticity
&lt;/h2&gt;

&lt;p&gt;Scaling your architecture based on demand is a fundamental part of cost optimization. Solution architects should design robust and scalable applications with one must-have building principle always in mind: "build today with tomorrow in mind". This means apps need to cater to current scale requirements as well as the anticipated growth of the solution, whether that growth is organic or the result of a merger, an acquisition, or a similar scenario.&lt;/p&gt;

&lt;p&gt;This topic is closely related to the &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/welcome.html" rel="noopener noreferrer"&gt;AWS Reliability Pillar&lt;/a&gt;, which states that an application designed under this principle should achieve reliability in the context of scale, modularity, and constructed based on multiple and loosely coupled building blocks (functions).&lt;/p&gt;

&lt;p&gt;The image below shows how a highly scalable and reliable application should be built using a microservices architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxxn1u6y75eci14sgb8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxxn1u6y75eci14sgb8p.png" alt="Scalable Modular Applications" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/architecture/architecting-for-reliable-scalability/" rel="noopener noreferrer"&gt;Scalable modular applications&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Use Case: Cost Optimizing Client's Architecture
&lt;/h2&gt;

&lt;p&gt;So let's assume we are working for a client facing a storage issue: they need to migrate a warehouse comprised mainly of images and scanned documents, and they need a solution that guarantees elasticity, scalability, and durability.&lt;/p&gt;

&lt;p&gt;Their amount of data goes from 60 to terabytes, so they want to shift to cloud storage; their first task is to determine the best service to use, and their second, how they'd go about transferring these objects to the cloud. &lt;/p&gt;

&lt;p&gt;We can offer our client a few options that cover the main prerequisites they mentioned (elasticity, scalability, and durability): perhaps Amazon RDS, DynamoDB, or Amazon Aurora?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk6ph4t23bvj1zhqvk1i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk6ph4t23bvj1zhqvk1i.png" alt="storage_options" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far so good. Now the requirement is to determine the nature of the objects to be stored: whether they are structured or unstructured, and of various sizes and formats. Our client needs to store all sorts of image formats, including TIFF, JPEG, and PNG; in this case Amazon S3 comes up as an ideal option, as it provides highly durable, highly available, elastic object storage. If you are at all familiar with this versatile and commonly used AWS service, chances are you have seen this table before:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko5cnw65iuubazoucff9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko5cnw65iuubazoucff9.png" alt="s3_storage_classes" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another remarkable feature is that we won't need to provision Amazon S3 storage up front: the service scales upward to accommodate any future volume of data. That means our client can store and retrieve any type of object in Amazon S3 whenever they want, whether images, files, videos, documents, or archives. It scales quickly and dynamically, so you don't need to pre-provision space for the objects you store. &lt;/p&gt;

&lt;p&gt;There are &lt;a href="https://aws.plainenglish.io/aws-s3-different-types-of-storage-types-available-in-s3-3550e0b87580" rel="noopener noreferrer"&gt;three storage classes for Amazon S3&lt;/a&gt;, as seen in the table above, and each provides a different level of availability, performance, and cost, which makes it a highly flexible solution. &lt;/p&gt;

&lt;p&gt;By comparison, choosing a relational database such as the RDS service for storing these objects would be unlikely to be efficient, and the cost might outweigh the benefit of using a relational database in the cloud. &lt;/p&gt;

&lt;p&gt;Now that the implementation is almost done, we can start looking at ways to optimize our storage further, starting with the &lt;strong&gt;Amazon S3 Standard class&lt;/strong&gt;, the most widely used class (used for what is called "hot data"). It provides the best combination of durability and availability of the storage classes, and in fact it's hard to find a use case that doesn't suit this class at all!&lt;/p&gt;

&lt;p&gt;The first point was determining the right class to support our data; now we should look at how to import it. As AWS Solutions Architects, we may think about using a device called &lt;a href="https://medium.com/programmingnotes/what-is-aws-snowball-2a83576d07a4" rel="noopener noreferrer"&gt;AWS Snowball&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnx6s99re4h831uzlbxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnx6s99re4h831uzlbxw.png" alt="snowballs" width="437" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reasons for choosing it, how the device works, and its main features are a little out of the scope of this article; if you want to explore further, just check the very first source of truth, the &lt;a href="https://aws.amazon.com/snowball/" rel="noopener noreferrer"&gt;AWS Official Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Infrequent Access storage class&lt;/strong&gt; is the second storage class we'll analyze. If the Standard class is best suited for hot data (data that is accessed very frequently), then Infrequent Access is ideal for warm data: data that is not accessed as frequently as hot data. We still need the durability, but fast retrieval is not mission-critical. So if the requirement is to retrieve assets regularly, the Standard class would probably end up being more efficient than Infrequent Access.&lt;/p&gt;

&lt;p&gt;The third storage class is &lt;strong&gt;Amazon Glacier&lt;/strong&gt;, which is essentially (as its name suggests) the one referred to as "cold storage", for cold data. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymb5ockyl6mqd42rfovz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymb5ockyl6mqd42rfovz.png" alt="glacier" width="459" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Glacier suits archives, tape library replacements, and anything where we may need to keep a version of a backup or an archive to meet compliance requirements. &lt;/p&gt;

&lt;p&gt;So while accessing data in the Standard class is very fast and highly available, Infrequent Access means somewhat lower availability, and with Glacier it can take two to five hours for a requested archive to be retrieved. Glacier is therefore perfect for anything without a time constraint; objects stored there have a relatively long retrieval time.&lt;/p&gt;

&lt;p&gt;Right at this point is when &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html" rel="noopener noreferrer"&gt;Lifecycle Rules&lt;/a&gt; come into play, shifting assets from the Standard or Infrequent Access storage classes to Amazon Glacier. If we have older backups and tape archives, we might want to consider importing those directly into Glacier, and also setting up rules to shift older assets to Glacier after a period of time.&lt;/p&gt;

&lt;p&gt;There is another cool Amazon S3 tool that can help: the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/configure-analytics-storage-class.html" rel="noopener noreferrer"&gt;Storage Class Analysis Tool&lt;/a&gt;, which observes your data access patterns over time and gathers information to help you improve the lifecycle management of your Standard and Standard-IA storage classes. It watches the access patterns of your filtered data sets for 30 days or longer before giving you a result; the analysis then keeps running and updates the result as access patterns change, which is a really useful feature.  &lt;/p&gt;

&lt;h2&gt;
  
  
  6. Conclusion
&lt;/h2&gt;

&lt;p&gt;In this comprehensive guide on designing cost-optimized AWS architectures, we've covered the principles, practices, and practical exercises to create architectures that not only meet your performance needs but also save on costs.&lt;/p&gt;

&lt;p&gt;As you continue your journey with AWS, remember that cost optimization is an integral part of building efficient and sustainable cloud solutions. Happy architecting!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/s3/storage-classes/" rel="noopener noreferrer"&gt;Amazon S3 Storage Classes&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/architecture/architecting-for-reliable-scalability/" rel="noopener noreferrer"&gt;Architecting for Reliable Scalability&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-lens-whitepapers.sort-order=desc&amp;amp;wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-guidance-whitepapers.sort-order=desc" rel="noopener noreferrer"&gt;AWS Well-Architected&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>awscostoptimization</category>
      <category>aws</category>
    </item>
    <item>
      <title>Amazon OpenSearch Service Essentials</title>
      <dc:creator>fvgm-spec</dc:creator>
      <pubDate>Wed, 20 Sep 2023 00:27:22 +0000</pubDate>
      <link>https://dev.to/fvgmspec/amazon-opensearch-service-essentials-56pg</link>
      <guid>https://dev.to/fvgmspec/amazon-opensearch-service-essentials-56pg</guid>
      <description>&lt;p&gt;In this comprehensive tutorial, we will dive deep into Amazon OpenSearch Service, a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters for search and analytics in the AWS Cloud. Whether you're new to Elasticsearch or want to harness its power in the AWS cloud, this guide will provide a blend of theoretical knowledge and hands-on exercises to get you started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction to Amazon OpenSearch Service&lt;/li&gt;
&lt;li&gt;Main Features of Amazon OpenSearch&lt;/li&gt;
&lt;li&gt;Creating an Amazon OpenSearch Service Domain&lt;/li&gt;
&lt;li&gt;Visualizing and Analyzing Data with OpenSearch Dashboards&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction to Amazon OpenSearch Service
&lt;/h2&gt;

&lt;p&gt;Amazon OpenSearch Service is a fully managed service that simplifies the deployment, operation, and scaling of OpenSearch clusters. OpenSearch is a community-driven, open-source search and analytics suite derived from open-source Elasticsearch 7.10.2 and Kibana 7.10.2; it allows you to search, analyze, and visualize data in real time, making it ideal for log and event data analysis, full-text search, and more.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Main Features of Amazon OpenSearch
&lt;/h2&gt;

&lt;p&gt;OpenSearch Service includes the following features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scale&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Numerous configurations of CPU, memory, and storage capacity known as instance types, including cost-effective Graviton instances&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Up to 3 PB of attached storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost-effective UltraWarm and cold storage for read-only data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Identity and Access Management (IAM) access control&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easy integration with Amazon VPC and VPC security groups&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Encryption of data at rest and node-to-node encryption&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon Cognito, HTTP basic, or SAML authentication for OpenSearch Dashboards&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Index-level, document-level, and field-level security&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit logs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dashboards multi-tenancy&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Numerous geographical locations for your resources, known as Regions and Availability Zones&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Node allocation across two or three Availability Zones in the same AWS Region, known as Multi-AZ&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dedicated master nodes to offload cluster management tasks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automated snapshots to back up and restore OpenSearch Service domains&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Flexibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SQL support for integration with business intelligence (BI) applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Custom packages to improve search results&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Integration with popular services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data visualization using OpenSearch Dashboards&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integration with Amazon CloudWatch for monitoring OpenSearch Service domain metrics and setting alarms&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integration with AWS CloudTrail for auditing configuration API calls to OpenSearch Service domains&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integration with Amazon S3, Amazon Kinesis, and Amazon DynamoDB for loading streaming data into OpenSearch Service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alerts from Amazon SNS when your data exceeds certain thresholds&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Creating an Amazon OpenSearch Service domain
&lt;/h2&gt;

&lt;p&gt;An OpenSearch Service domain, which is the same thing as an OpenSearch cluster, can be created using the OpenSearch Service console or with the AWS CLI's &lt;code&gt;create-domain&lt;/code&gt; command. Domains are clusters with the settings, instance types, instance counts, and storage resources you specify.&lt;/p&gt;
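&lt;p&gt;Programmatically, the same creation step can be sketched with boto3. Every value below (domain name, engine version, instance type, volume size) is a made-up example, and the actual API call is left commented out since it requires AWS credentials:&lt;/p&gt;

```python
# Parameters for a small test domain. All values are hypothetical examples;
# adjust the engine version, instance type, and storage to your needs.
domain_params = {
    "DomainName": "my-test-domain",       # hypothetical name
    "EngineVersion": "OpenSearch_2.11",   # example version
    "ClusterConfig": {
        "InstanceType": "t3.small.search",
        "InstanceCount": 1,
    },
    "EBSOptions": {
        "EBSEnabled": True,
        "VolumeType": "gp3",
        "VolumeSize": 10,  # GiB
    },
}

# With credentials configured, the call would be:
# import boto3
# client = boto3.client("opensearch")
# response = client.create_domain(**domain_params)
print(domain_params["DomainName"])
```

&lt;p&gt;The console walkthrough below covers the same settings interactively, which is the easier route for a first test domain.&lt;/p&gt;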

&lt;h3&gt;
  
  
  Practical Exercise: Creating an Amazon OpenSearch Domain
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log in to the Amazon OpenSearch Console, and click "Create Domain"&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu92yxy8n76m6fyoflgf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu92yxy8n76m6fyoflgf3.png" alt="Login OpenSearch Console" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Configure your domain with settings like version, instance types, and storage.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this step you will need to enter the name of your domain, following the suggested naming convention, and set a master username and password for it. Since the data we will store in the domain is not sensitive at all and this is a testing exercise, setting the network access to Public will be enough.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzx76m0dn38objo4sgs8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzx76m0dn38objo4sgs8.png" alt="Setting up Domain" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxagd0ncqu0vw23yfpd06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxagd0ncqu0vw23yfpd06.png" alt="Setting master username and password" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The domain takes around 10 to 15 minutes to become available, so the next step will be to prepare some sample data to upload to your OpenSearch domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zbq6xsrwl79prrk7lbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zbq6xsrwl79prrk7lbx.png" alt="Domain created" width="800" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Uploading data to your OpenSearch Domain.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data is loaded into OpenSearch domains as JSON documents, and you can do it either from the command line using &lt;a href="https://curl.se/" rel="noopener noreferrer"&gt;cURL&lt;/a&gt; or through the OpenSearch UI. As a Data Engineer I'm used to working with CLIs, but I ran into some issues sending requests to my recently created OpenSearch endpoint, so this time I preferred to use the UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk77nbhflq4yr9o3uup6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk77nbhflq4yr9o3uup6.png" alt="Uploading documents through CLI" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;
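&lt;p&gt;If you do want to try the command-line route, a cURL request like the following would index a single document. This is a sketch for illustration: the endpoint and credentials below are placeholders, so replace them with your own domain values.&lt;/p&gt;

```shell
# Placeholder endpoint and credentials: replace with your own domain values.
ENDPOINT="https://search-my-domain.us-east-1.es.amazonaws.com"

# Write the JSON document to a file (tee avoids shell redirection operators):
printf '%s' '{"director": "Burton, Tim", "genre": ["Comedy", "Sci-Fi"], "year": 1996, "title": "Mars Attacks!"}' | tee movie.json

# PUT it as document 1 of the "movies" index (the index is created on first write).
curl -s -XPUT -u 'master_username:master_password' \
  "$ENDPOINT/movies/_doc/1" \
  -H 'Content-Type: application/json' -d @movie.json \
  || echo "request failed: set ENDPOINT to your real domain endpoint"
```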

&lt;p&gt;In order to load data using the UI, you will need to log into your domain by clicking the OpenSearch Dashboards URL in the Amazon OpenSearch Service Domains console (it is simply your domain endpoint + &lt;code&gt;/_dashboards&lt;/code&gt;). You will then be prompted for the &lt;code&gt;master_username&lt;/code&gt; and &lt;code&gt;master_password&lt;/code&gt; you set when creating the domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yma7ur1hplvye706xat.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yma7ur1hplvye706xat.png" alt="OpenSearch UI" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once there, click on &lt;code&gt;Dev tools&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgg9jid2ygzicww3pufef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgg9jid2ygzicww3pufef.png" alt="OpenSearch Dashboards" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;Dev tools&lt;/code&gt; console, you will need to perform a &lt;code&gt;PUT&lt;/code&gt; request against the API in order to upload some data to your OpenSearch domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;PUT movies/_doc/1
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"director"&lt;/span&gt;: &lt;span class="s2"&gt;"Burton, Tim"&lt;/span&gt;,
  &lt;span class="s2"&gt;"genre"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Comedy"&lt;/span&gt;,&lt;span class="s2"&gt;"Sci-Fi"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
  &lt;span class="s2"&gt;"year"&lt;/span&gt;: 1996,
  &lt;span class="s2"&gt;"actor"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Jack Nicholson"&lt;/span&gt;,&lt;span class="s2"&gt;"Pierce Brosnan"&lt;/span&gt;,&lt;span class="s2"&gt;"Sarah Jessica Parker"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
  &lt;span class="s2"&gt;"title"&lt;/span&gt;: &lt;span class="s2"&gt;"Mars Attacks!"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then &lt;code&gt;Click to send request&lt;/code&gt;, and your first index will be created with the name &lt;strong&gt;movies&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15ldq5s5jycbt45x0m1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15ldq5s5jycbt45x0m1g.png" alt="Creating Index" width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Querying data from OpenSearch UI.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once documents are successfully loaded as indexes in your OpenSearch domain, you can verify that the data was loaded by querying it. You can find more sample data to load into your domain in the &lt;a href="https://github.com/fvgm-spec/cloud_experiments/tree/main/aws-opensearch/queries" rel="noopener noreferrer"&gt;repo&lt;/a&gt; that I created as a companion to this tutorial.&lt;/p&gt;

&lt;p&gt;So far I have loaded more data to the endpoint by uploading the following document through the API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;POST movies/_doc/3
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"director"&lt;/span&gt;: &lt;span class="s2"&gt;"Baird, Stuart"&lt;/span&gt;, 
    &lt;span class="s2"&gt;"genre"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Action"&lt;/span&gt;, &lt;span class="s2"&gt;"Crime"&lt;/span&gt;, &lt;span class="s2"&gt;"Thriller"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;, 
    &lt;span class="s2"&gt;"year"&lt;/span&gt;: 1998, 
    &lt;span class="s2"&gt;"actor"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Downey Jr., Robert"&lt;/span&gt;, &lt;span class="s2"&gt;"Jones, Tommy Lee"&lt;/span&gt;, &lt;span class="s2"&gt;"Snipes, Wesley"&lt;/span&gt;, &lt;span class="s2"&gt;"Pantoliano, Joe"&lt;/span&gt;, &lt;span class="s2"&gt;"Jacob, Ir&lt;/span&gt;&lt;span class="se"&gt;\u&lt;/span&gt;&lt;span class="s2"&gt;00e8ne"&lt;/span&gt;, &lt;span class="s2"&gt;"Nelligan, Kate"&lt;/span&gt;, &lt;span class="s2"&gt;"Roebuck, Daniel"&lt;/span&gt;, &lt;span class="s2"&gt;"Malahide, Patrick"&lt;/span&gt;, &lt;span class="s2"&gt;"Richardson, LaTanya"&lt;/span&gt;, &lt;span class="s2"&gt;"Wood, Tom"&lt;/span&gt;, &lt;span class="s2"&gt;"Kosik, Thomas"&lt;/span&gt;, &lt;span class="s2"&gt;"Stellate, Nick"&lt;/span&gt;, &lt;span class="s2"&gt;"Minkoff, Robert"&lt;/span&gt;, &lt;span class="s2"&gt;"Brown, Spitfire"&lt;/span&gt;, &lt;span class="s2"&gt;"Foster, Reese"&lt;/span&gt;, &lt;span class="s2"&gt;"Spielbauer, Bruce"&lt;/span&gt;, &lt;span class="s2"&gt;"Mukherji, Kevin"&lt;/span&gt;, &lt;span class="s2"&gt;"Cray, Ed"&lt;/span&gt;, &lt;span class="s2"&gt;"Fordham, David"&lt;/span&gt;, &lt;span class="s2"&gt;"Jett, Charlie"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;, 
    &lt;span class="s2"&gt;"title"&lt;/span&gt;: &lt;span class="s2"&gt;"U.S. Marshals"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find how to load indexes to OpenSearch in the &lt;a href="https://opensearch.org/docs/1.2/opensearch/index-data/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Now you can query your data by passing a simple query to the API containing a string from the recently uploaded data, similar to the &lt;strong&gt;LIKE&lt;/strong&gt; statement in SQL: &lt;code&gt;GET movies/_search?q=U.S.&amp;amp;pretty=true&lt;/code&gt;. You will get the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpt0l053sy9jfvpnfz1p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpt0l053sy9jfvpnfz1p.png" alt="Querying data" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;
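&lt;p&gt;The same kind of search can also be sent from the command line with an explicit Query DSL body, which gives you more control than the URI &lt;code&gt;q=&lt;/code&gt; parameter. A sketch, again with placeholder endpoint and credentials, matching documents whose title contains "Marshals" (from the &lt;em&gt;U.S. Marshals&lt;/em&gt; document uploaded earlier):&lt;/p&gt;

```shell
# Placeholder endpoint and credentials: replace with your own domain values.
ENDPOINT="https://search-my-domain.us-east-1.es.amazonaws.com"

# A Query DSL body using a full-text "match" query on the title field:
printf '%s' '{"query": {"match": {"title": "Marshals"}}}' | tee query.json

curl -s -u 'master_username:master_password' \
  "$ENDPOINT/movies/_search" \
  -H 'Content-Type: application/json' -d @query.json \
  || echo "request failed: set ENDPOINT to your real domain endpoint"
```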




&lt;h2&gt;
  
  
  4. Visualizing and Analyzing Data with OpenSearch Dashboards
&lt;/h2&gt;

&lt;p&gt;OpenSearch Dashboards is an open-source data visualization tool designed to work with OpenSearch Service domains. It gives you data visualization tools to improve and automate business intelligence and to support data-driven decision-making and strategic planning.&lt;/p&gt;

&lt;p&gt;You can access OpenSearch Dashboards from the OpenSearch Domains UI in the AWS Console, or by navigating directly to&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;{your_domain_endpoint}/_dashboards/app/home#/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Once there you can add some sample data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd8oqfwlk248dzqky3gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd8oqfwlk248dzqky3gw.png" alt="Adding sample data" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There you have a couple of sample data sources to choose from:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffp61lzda8f1hynya9m5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffp61lzda8f1hynya9m5y.png" alt="Sample data" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is the Dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr430pmpu0xi2qrv5nar0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr430pmpu0xi2qrv5nar0.png" alt="Sample Dashboard" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data you are visualizing through the dashboard can also be queried in the &lt;code&gt;Dev Tools&lt;/code&gt; console. Suppose you need to search &lt;strong&gt;opensearch_dashboards_sample_data_flights&lt;/strong&gt; for all flights related to &lt;code&gt;Warsaw&lt;/code&gt;; then we will need to query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;GET opensearch_dashboards_sample_data_flights/_search?q&lt;span class="o"&gt;=&lt;/span&gt;Warsaw&amp;amp;pretty&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to get all flights with &lt;code&gt;OriginCityName&lt;/code&gt; = Warsaw:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tsn8ec8j5h9enqqlbrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tsn8ec8j5h9enqqlbrg.png" alt="Sample query" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Conclusion
&lt;/h2&gt;

&lt;p&gt;In this Amazon OpenSearch Service article, we've covered everything from fundamental concepts to practical exercises for creating an OpenSearch Service domain, uploading and querying data in it, and also visualizing data through OpenSearch Dashboards. Amazon OpenSearch Service is a powerful tool for a variety of use cases, from log analysis to real-time data exploration.&lt;/p&gt;

&lt;p&gt;With this knowledge, you can confidently leverage Amazon OpenSearch Service to search, analyze, and visualize your data, unlocking valuable insights for your organization. Happy searching!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amazon DynamoDB Unleashed: Complete tutorial for beginners</title>
      <dc:creator>fvgm-spec</dc:creator>
      <pubDate>Tue, 05 Sep 2023 14:33:17 +0000</pubDate>
      <link>https://dev.to/fvgmspec/amazon-dynamodb-unleashed-complete-tutorial-for-beginners-49d4</link>
      <guid>https://dev.to/fvgmspec/amazon-dynamodb-unleashed-complete-tutorial-for-beginners-49d4</guid>
      <description>&lt;p&gt;In this tutorial, we'll dive deep into Amazon DynamoDB, a fast and fully managed NoSQL database service designed for seamless scalability and low-latency performance. You'll gain a solid understanding of both the theoretical concepts and practical aspects of working with DynamoDB, allowing you to leverage its power for your data storage needs. &lt;/p&gt;

&lt;p&gt;DynamoDB is the preferred choice when it comes to applications that need low-latency data access. Let's start to learn and practice some of the fundamentals on this amazing fully managed NoSQL database service that AWS offers. If you’re into building scalable, serverless and high-performance applications, this is right for you!&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What is Amazon DynamoDB&lt;/li&gt;
&lt;li&gt;Main Components&lt;/li&gt;
&lt;li&gt;Creating DynamoDB Tables&lt;/li&gt;
&lt;li&gt;Working with Data&lt;/li&gt;
&lt;li&gt;Data Model and Schema&lt;/li&gt;
&lt;li&gt;Why DynamoDB?&lt;/li&gt;
&lt;li&gt;DynamoDB vs. other DB Services&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;Additional Resources&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. What is Amazon DynamoDB
&lt;/h2&gt;

&lt;p&gt;Amazon DynamoDB is a fully managed NoSQL database service offered by Amazon Web Services (AWS). It is designed for developers who need a fast, scalable, and highly available database for modern applications. In this section, we'll explore why DynamoDB is a popular choice and its key features. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjnjz6j5h3uklvsewegs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjnjz6j5h3uklvsewegs.png" alt="DynamoDB Console" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For those of you who are new to this terminology, NoSQL databases are non-tabular databases that store data differently than relational tables. They come in a variety of types based on their data model; DynamoDB works on key-value pairs and document-style data structures.&lt;/p&gt;

&lt;p&gt;DynamoDB only requires a primary key and doesn't require a schema to create tables. It can store any amount of data and serve any amount of traffic, so you can expect good performance even as it scales up. Its API is small and simple to learn, following a key-value model to store and access data and to perform advanced data retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Main Components
&lt;/h2&gt;

&lt;p&gt;DynamoDB comprises three fundamental units:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attributes:&lt;/strong&gt; This is the simplest element in DynamoDB that stores data without any further division. Each attribute has a name and a value. DynamoDB supports various data types for attributes, including strings, numbers, binary data, lists, and maps. Attributes are used to store the actual data in your items.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Items:&lt;/strong&gt; Items are individual data records within a DynamoDB table. Each item is uniquely identified by a primary key, which can consist of one or two attributes: a partition key (mandatory) and an optional sort key. Items can also have additional attributes that provide data for each record.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tables:&lt;/strong&gt; Tables are the highest-level data structures in DynamoDB. They are where you store your data. Each table consists of items, and each item represents an individual data record. Tables are schema-less, meaning that items within a table do not need to have the same attributes, allowing flexibility in your data modeling.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Managed Service:&lt;/strong&gt; DynamoDB is a fully managed service, which means AWS takes care of the operational aspects like server provisioning, scaling, and maintenance. This allows developers to focus on building applications instead of managing infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; DynamoDB can automatically scale to handle high-traffic workloads without the need for manual intervention. It can handle millions of requests per second, making it suitable for applications with variable workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; It offers single-digit millisecond latency for read and write operations, making it an excellent choice for applications that require low-latency access to data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High Availability:&lt;/strong&gt; DynamoDB is designed for high availability with built-in data replication and automatic failover across multiple Availability Zones. Your data is always accessible, even in the event of hardware failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; It provides robust security features, including encryption at rest and in transit, fine-grained access control with AWS Identity and Access Management (IAM), and VPC (Virtual Private Cloud) integration for network isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serverless Triggers:&lt;/strong&gt; You can integrate DynamoDB with AWS Lambda to create serverless workflows that respond to changes in your data, enabling real-time processing and automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global Reach:&lt;/strong&gt; DynamoDB offers multi-region and multi-master capabilities, allowing you to deploy databases globally with low-latency access for users worldwide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pay-as-You-Go Pricing:&lt;/strong&gt; DynamoDB uses a pay-as-you-go pricing model, where you only pay for the read and write capacity you consume and the storage you use, with no upfront costs or long-term commitments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;DynamoDB is well-suited for a wide range of use cases, including:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-time applications&lt;/strong&gt; that require low-latency access to data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internet of Things (IoT) applications&lt;/strong&gt; for managing device data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gaming applications&lt;/strong&gt; for user profiles, leaderboards, and in-game items.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session management and user authentication&lt;/strong&gt; in web and mobile apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content management systems and catalogs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ad tech platforms&lt;/strong&gt; for tracking user behavior and ad impressions.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. Creating DynamoDB Tables: (Practical Exercise) Creating a Sample Data Model
&lt;/h2&gt;

&lt;p&gt;Let's go through the steps to create a DynamoDB table using the AWS Management Console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Log in to the AWS Management Console&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Navigate to the DynamoDB Dashboard&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Click "Create table"&lt;/strong&gt; and specify the table name and primary key attributes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure the provisioned throughput or choose on-demand capacity&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create the table&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create a sample data model for a hypothetical e-commerce application using DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7wrfowpxtc1u7mil7hy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7wrfowpxtc1u7mil7hy.png" alt="Creating your first table" width="800" height="599"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"TableName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EcommerceProducts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"KeySchema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"AttributeName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProductID"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"KeyType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"HASH"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"AttributeName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Category"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"KeyType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RANGE"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AttributeDefinitions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"AttributeName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProductID"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"AttributeType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"N"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"AttributeName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Category"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"AttributeType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"S"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProvisionedThroughput"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ReadCapacityUnits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"WriteCapacityUnits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu622q7otr3bzd7v62i24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu622q7otr3bzd7v62i24.png" alt="Table created" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This schema defines a DynamoDB table for storing e-commerce product data.&lt;/p&gt;
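&lt;p&gt;The same table can be created from the command line. The following sketch assumes the AWS CLI is installed and configured with credentials; it feeds the exact JSON definition shown above to &lt;code&gt;create-table&lt;/code&gt; via &lt;code&gt;--cli-input-json&lt;/code&gt;, and is guarded so it degrades gracefully where no credentials are configured.&lt;/p&gt;

```shell
# Save the table definition shown above to a file (tee avoids redirection operators):
printf '%s' '{"TableName": "EcommerceProducts", "KeySchema": [{"AttributeName": "ProductID", "KeyType": "HASH"}, {"AttributeName": "Category", "KeyType": "RANGE"}], "AttributeDefinitions": [{"AttributeName": "ProductID", "AttributeType": "N"}, {"AttributeName": "Category", "AttributeType": "S"}], "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5}}' | tee table.json

# Create the table from the saved definition; guarded for environments
# where the AWS CLI is missing or not configured.
aws dynamodb create-table --cli-input-json file://table.json \
  || echo "aws CLI not available or not configured"
```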

&lt;h2&gt;
  
  
  4. Working with Data
&lt;/h2&gt;

&lt;p&gt;Now let's populate our EcommerceProducts table with some sample data. Once the table is created, click on the table, then on the &lt;code&gt;Explore table items&lt;/code&gt; button in the top right, and then on the &lt;code&gt;Create item&lt;/code&gt; button. The easy way is to add the items manually by assigning each Name, Value and Type, but I prefer the &lt;code&gt;JSON View&lt;/code&gt;: paste the following values into that field, and then click &lt;code&gt;Create item&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5h4ii5flzfz5tw81q4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5h4ii5flzfz5tw81q4h.png" alt="Creating Item" width="800" height="338"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProductID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"101"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Category"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Electronics"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Price"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"699.99"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProductName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Smartphone"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3i85nc38xj3k3jf6t2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3i85nc38xj3k3jf6t2k.png" alt="Items Created" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Easy peasy, isn't it?&lt;/p&gt;

&lt;p&gt;You can use the AWS SDKs or AWS CLI to interact with DynamoDB programmatically. Here's an example of adding data to our "EcommerceProducts" table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws dynamodb put-item &lt;span class="nt"&gt;--table-name&lt;/span&gt; EcommerceProducts &lt;span class="nt"&gt;--item&lt;/span&gt; &lt;span class="s1"&gt;'{
  "ProductID": {"N": "101"},
  "Category": {"S": "Electronics"},
  "ProductName": {"S": "Smartphone"}
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will add the item to the table, just as we did before in the AWS Console.&lt;/p&gt;
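&lt;p&gt;To confirm the write, you can read the item back by its full primary key (both the partition key and the sort key are required). A sketch assuming the same table and configured AWS credentials:&lt;/p&gt;

```shell
# The primary key of the item we just wrote (partition key + sort key):
printf '%s' '{"ProductID": {"N": "101"}, "Category": {"S": "Electronics"}}' | tee key.json

# Fetch the item; guarded for environments where the AWS CLI is
# missing or not configured.
aws dynamodb get-item --table-name EcommerceProducts --key file://key.json \
  || echo "aws CLI not available or not configured"
```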

&lt;h2&gt;
  
  
  5. Data Model and Schema
&lt;/h2&gt;

&lt;p&gt;DynamoDB is schema-less. What does schema-less mean? It means that DynamoDB doesn't require a schema to create a table, allowing you to define your data structure dynamically. It uses the concepts of items (records) and attributes (fields) to store and retrieve data.&lt;/p&gt;
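&lt;p&gt;One way to see what schema-lessness buys you: two items in the same table can carry completely different attribute sets, as long as each supplies the primary key. A sketch with hypothetical data for the EcommerceProducts table from earlier, guarded so it degrades gracefully without configured credentials:&lt;/p&gt;

```shell
# Two items with different non-key attributes (only ProductID and Category,
# the primary key attributes, are required on every item):
printf '%s' '{"ProductID": {"N": "101"}, "Category": {"S": "Electronics"}, "Warranty": {"S": "2 years"}}' | tee phone.json
printf '%s' '{"ProductID": {"N": "202"}, "Category": {"S": "Books"}, "Author": {"S": "Jane Doe"}}' | tee book.json

# Both can be written to the same table without any schema change:
for f in phone.json book.json; do
  aws dynamodb put-item --table-name EcommerceProducts --item "file://$f" \
    || echo "aws CLI not available or not configured"
done
```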

&lt;p&gt;The structure of a DynamoDB table also comprises primary keys, partition keys, sort keys and partitions; you can dive deeper into these in the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html" rel="noopener noreferrer"&gt;AWS official documentation&lt;/a&gt;. Basically, keys work the same way they do in JSON documents. But wait... so DynamoDB is basically comprised of JSON documents? Yes, that's exactly what happens.&lt;/p&gt;

&lt;p&gt;In the following JSON structure for a sample DynamoDB table called &lt;code&gt;Employees&lt;/code&gt;, you can find items and attributes. Items are similar to rows or records in relational database systems; each item represents a different employee in the table. Each item is composed of one or more attributes, so in this sample table each employee has attributes such as EmployeeID, Company, Username and so on.&lt;/p&gt;

&lt;p&gt;Employees table&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'EmployeeID':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;101&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Company':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Rose-Hill'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Username':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'ahenry'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Name':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Antonio&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Henry'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Sex':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'M'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'EmployeeID':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;102&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Company':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Lowe&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Johnson&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Flynn'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Username':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'fwashington'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Name':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Frank&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Washington'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Sex':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'M'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Address':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                 &lt;/span&gt;&lt;span class="err"&gt;'Street':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mi"&gt;470&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;David&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Ports&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Apt.&lt;/span&gt;&lt;span class="mi"&gt;281&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                 &lt;/span&gt;&lt;span class="err"&gt;'City':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Chapmanchester'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; 
                 &lt;/span&gt;&lt;span class="err"&gt;'ZIPCode':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'MP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12525&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'EmployeeID':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;103&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Company':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Cook-Crawford'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Username':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'lhawkins'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Name':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Lisa&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Hawkins'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Sex':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'F'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Mail':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'hyoung@hotmail.com'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'Birthdate':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;datetime.date(&lt;/span&gt;&lt;span class="mi"&gt;2002&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the following features of the Primary Key in the Employees table:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each item in the table has a primary key, just as in the relational model: a unique identifier that distinguishes the item from every other item in the table. In the Employees table, the primary key consists of a single attribute (EmployeeID).&lt;/li&gt;
&lt;li&gt;Some items have a nested attribute (Address). DynamoDB supports nested attributes up to 32 levels deep.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's understand a little more about how primary keys and sort keys work in DynamoDB by loading a table called Music, which holds some of my favourite artists:&lt;/p&gt;

&lt;p&gt;Music table&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"50"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Metallica"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"...And Justice For All"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SongTitle"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Dyers Eve"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Genre"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Metal"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AC/DC"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"4"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Let There Be Rock"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SongTitle"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Whole Lotta Rosie"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Genre"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Rock"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"132"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Soundgarden"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"7"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Superunknown"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SongTitle"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Black Hole Sun"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Genre"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Rock"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the primary key for the table Music consists of two attributes (ArtistID and ArtistName). Each item in the table must have these two attributes. The combination of ArtistID and ArtistName distinguishes each item in the table from all of the others.&lt;/p&gt;

&lt;p&gt;DynamoDB also supports two different kinds of primary keys:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One of them is the &lt;strong&gt;Partition key&lt;/strong&gt;: a simple primary key composed of a single attribute, the partition key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a table that has only a partition key, no two items can have the same partition key value.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The other one is the &lt;strong&gt;composite primary key&lt;/strong&gt;, which is the combination of the Partition key and the Sort key. This type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DynamoDB uses the partition key value as input to an internal hash function. The output from the hash function determines the partition (physical storage internal to DynamoDB) in which the item will be stored. All items with the same partition key value are stored together, in sorted order by sort key value.&lt;/p&gt;
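&lt;p&gt;The partitioning behaviour just described can be sketched with an ordinary hash function. DynamoDB's internal hash function is not public, so the &lt;code&gt;md5&lt;/code&gt;-based mapping and the fixed partition count below are purely illustrative:&lt;/p&gt;

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative only; DynamoDB manages partition count itself

def partition_for(partition_key: str) -> int:
    """Map a partition key value to one of NUM_PARTITIONS storage partitions."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# All items sharing a partition key value land in the same partition...
assert partition_for("132") == partition_for("132")
# ...while different key values may land anywhere.
print(partition_for("50"), partition_for("1"), partition_for("132"))
```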

&lt;p&gt;In a table that has a partition key and a sort key, it's possible for multiple items to have the same partition key value. However, those items must have different sort key values. &lt;/p&gt;

&lt;p&gt;In our &lt;code&gt;Music&lt;/code&gt; table, this would happen if we added another item with the same &lt;code&gt;ArtistID&lt;/code&gt; as Soundgarden. In that case the composite primary key is the combination of &lt;code&gt;ArtistID&lt;/code&gt; and &lt;code&gt;SongTitle&lt;/code&gt;, and both items with &lt;code&gt;ArtistID&lt;/code&gt; 132 will be stored in the same partition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"132"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"N"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AlbumName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Badmotorfinger"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ArtistName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Soundgarden"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Genre"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Rock"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SongTitle"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Face Pollution"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
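&lt;p&gt;The uniqueness rule for a composite primary key can be modelled as a Python dict keyed by the (partition key, sort key) pair; this is only a sketch, assuming &lt;code&gt;ArtistID&lt;/code&gt; as the partition key and &lt;code&gt;SongTitle&lt;/code&gt; as the sort key, as in the example above:&lt;/p&gt;

```python
# Composite primary key sketch: the (partition key, sort key) pair must be
# unique, so a plain dict keyed by that tuple models the constraint.
music = {}

def put_item(artist_id, song_title, **attrs):
    key = (artist_id, song_title)  # partition key + sort key
    music[key] = {"ArtistID": artist_id, "SongTitle": song_title, **attrs}

put_item("132", "Black Hole Sun", AlbumName="Superunknown")
put_item("132", "Face Pollution", AlbumName="Badmotorfinger")  # same partition key, new sort key
put_item("132", "Face Pollution", Genre="Rock")                # same full key: overwrites

print(len(music))  # 2 items: the duplicate composite key replaced the earlier item
```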



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08pzipl4qjkfnjqlcfeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08pzipl4qjkfnjqlcfeq.png" alt="Items from Music table" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can read and learn more from the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.TablesItemsAttributes" rel="noopener noreferrer"&gt;DynamoDB Core Components&lt;/a&gt; in AWS official documentation. &lt;/p&gt;

&lt;h2&gt;
  
  
  6. Why DynamoDB?
&lt;/h2&gt;

&lt;p&gt;When data no longer fits on a single MySQL server or when a single machine can no longer handle the query load, some strategy for sharding and replication is required. The pitch behind most NoSQL databases is that because they were designed from the ground up to be distributed and to handle large data volumes, they provide some combination of benefits that a simple relational database can't easily offer. &lt;/p&gt;

&lt;p&gt;So NoSQL databases allow us to model data close to what the application requires. The relational way of thinking usually forces every domain model into a structure of tables and columns, which has led to a plethora of artifacts that try to solve the impedance mismatch problem. &lt;/p&gt;

&lt;p&gt;What you'd need in this case is a database that can scale to that volume and accept many different data types. DynamoDB is easy to set up and scales smoothly, which is why companies such as Expedia use it as their primary database, leveraging its features to deliver steady latency and a stable application experience to their customers. &lt;/p&gt;

&lt;h2&gt;
  
  
  7. DynamoDB vs other DB Services
&lt;/h2&gt;

&lt;p&gt;Moving on, let's compare DynamoDB with its sister AWS database services, such as Amazon RDS. Each of these services differs in terms of storage, maintenance and scaling features, and it can sometimes be difficult to determine which best fits your needs. &lt;/p&gt;

&lt;p&gt;In fact, in many situations, multiple databases are a part of a single solution. So let's go through an overview of each of them to help inform your decision further. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj04s284zrtgt02bw08a0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj04s284zrtgt02bw08a0.png" alt="DynamoDB vs other DB Services" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we've delved into the world of Amazon DynamoDB, from its data model and schema to creating tables, and working with data.&lt;/p&gt;

&lt;p&gt;As you continue your journey with DynamoDB, consider exploring advanced topics such as best practices for optimizing query performance and features like Global Tables and security controls.&lt;/p&gt;

&lt;p&gt;Now equipped with a solid foundation, you can confidently harness the scalability and speed of Amazon DynamoDB for your data storage needs. &lt;/p&gt;

&lt;h2&gt;
  
  
  9. Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html" rel="noopener noreferrer"&gt;AWS DynamoDB Documentation&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.youtube.com/watch?v=k0fcbRj_pZE" rel="noopener noreferrer"&gt;AWS DynamoDB Tutorial&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.youtube.com/watch?v=XvD2FrS5yYM" rel="noopener noreferrer"&gt;AWS DynamoDB Schema Design&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy data modeling and querying!&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>serverless</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AWS Storage Services Tutorial: Amazon S3 Basics</title>
      <dc:creator>fvgm-spec</dc:creator>
      <pubDate>Mon, 28 Aug 2023 13:57:55 +0000</pubDate>
      <link>https://dev.to/fvgmspec/aws-storage-services-tutorial-amazon-s3-basics-4cm6</link>
      <guid>https://dev.to/fvgmspec/aws-storage-services-tutorial-amazon-s3-basics-4cm6</guid>
      <description>&lt;h1&gt;
  
  
  AWS Storage Services Tutorial: Amazon S3 Basics
&lt;/h1&gt;

&lt;p&gt;In this tutorial, we will delve into one of the foundational AWS storage services: Amazon S3 (Simple Storage Service). Amazon S3 is a highly scalable, durable, and secure object storage service that is designed to store and retrieve vast amounts of data from anywhere on the web. This tutorial will provide you with both theoretical insights and practical hands-on exercises to get you started with Amazon S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction to Amazon S3&lt;/li&gt;
&lt;li&gt;Creating an S3 Bucket&lt;/li&gt;
&lt;li&gt;Uploading and Managing Objects&lt;/li&gt;
&lt;li&gt;Pros and Cons of Both Approaches&lt;/li&gt;
&lt;li&gt;Access Control and Permissions&lt;/li&gt;
&lt;li&gt;Data Management Features&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction to Amazon S3
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/s3/?did=ap_card&amp;amp;trk=ap_card" rel="noopener noreferrer"&gt;Amazon S3&lt;/a&gt; is an object storage service that provides developers and IT teams with a simple and scalable storage solution. It's designed to store and retrieve vast amounts of data, such as images, videos, backups, and logs. Each piece of data is stored as an object, consisting of the actual data, a unique key (filename), and metadata.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Creating an S3 Bucket
&lt;/h2&gt;

&lt;p&gt;An S3 bucket is a container for storing objects. Your first S3 bucket can be created using the AWS CLI or through the AWS Console (user interface), so we will walk through both approaches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating bucket from console
&lt;/h3&gt;

&lt;p&gt;Creating a bucket from the Console is a very straightforward process: log in to the AWS Console and search for S3 in the search bar; once there, click the &lt;code&gt;Create bucket&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkda3o4rho05opnjapzep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkda3o4rho05opnjapzep.png" alt="S3 bucket creation from console" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following screen will appear. These options correspond to your bucket name, the AWS Region where you wish to store your data, and some other configurations that are generally left at their defaults unless you need customized, advanced settings. Then click &lt;code&gt;Create bucket&lt;/code&gt; at the end of the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx01umjt2hllwfqda412r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx01umjt2hllwfqda412r.png" alt="Configuring your bucket" width="800" height="757"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once these settings are done, you will have set your first bucket in the S3 storage service.&lt;/p&gt;

&lt;p&gt;Now let's do the same but using AWS CLI!&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating bucket from AWS CLI
&lt;/h3&gt;

&lt;p&gt;This step assumes that you have configured the AWS CLI locally; if not, you can follow this link to set up your &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html" rel="noopener noreferrer"&gt;AWS Command Line Interface&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can run the following command in your favorite command line interface (CLI). Replace &lt;code&gt;your-unique-bucket-name&lt;/code&gt; with a globally unique name and &lt;code&gt;your-preferred-region&lt;/code&gt; with the AWS Region of your choice; note that for Regions other than &lt;code&gt;us-east-1&lt;/code&gt; you must also pass &lt;code&gt;--create-bucket-configuration LocationConstraint=your-preferred-region&lt;/code&gt;. For the bucket name, this link will guide you on the naming rules: &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html" rel="noopener noreferrer"&gt;Bucket naming rules&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api create-bucket &lt;span class="nt"&gt;--bucket&lt;/span&gt; your-unique-bucket-name &lt;span class="nt"&gt;--region&lt;/span&gt; your-preferred-region
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will receive the following output in your command line:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej4d74fqzdyd01aa3oxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej4d74fqzdyd01aa3oxc.png" alt="Creating bucket from CLI" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Uploading and Managing Objects
&lt;/h2&gt;

&lt;p&gt;Uploading data to your bucket can also be done from the AWS Console or from the CLI, so let's show how to do it in both cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Uploading data from AWS Console
&lt;/h3&gt;

&lt;p&gt;Once you've created your bucket, you can fill it with data coming from multiple sources (local flat files, SQL connections, API requests, among others) and in different formats (text, images, etc.). You just need to open the bucket and click the &lt;code&gt;Upload&lt;/code&gt; button. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg95xzx09ogw05ps6mee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg95xzx09ogw05ps6mee.png" alt="Uploading from console" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There you can add files or folders by clicking the corresponding button, or by dragging and dropping any object into the selected area. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjar5jif43l1l3c7d7ujq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjar5jif43l1l3c7d7ujq.png" alt="Uploading files" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click on the &lt;code&gt;Upload&lt;/code&gt; button at the end of the page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Uploading files from AWS CLI
&lt;/h3&gt;

&lt;p&gt;Now, let's upload a file to your S3 bucket using the CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3 &lt;span class="nb"&gt;cp &lt;/span&gt;your-file.txt s3://your-unique-bucket-name/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I just uploaded an Excel file that I had stored locally by running the command above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0b33ibnxhpei6tc4ap0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0b33ibnxhpei6tc4ap0.png" alt="Uploading file using CLI" width="782" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c6wn1bagmjp3ob9dlhf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c6wn1bagmjp3ob9dlhf.png" alt="File loded to bucket" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have previously written a tutorial that covers the same data-loading actions from the CLI, but using an incredibly useful Python library called &lt;a href="https://aws-sdk-pandas.readthedocs.io/en/stable/api.html#amazon-s3" rel="noopener noreferrer"&gt;awswrangler&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can read it and put it into action from my &lt;a href="https://data-prof.hashnode.dev/interacting-with-amazon-s3-using-aws-data-wrangler-awswrangler-sdk-for-pandas-a-comprehensive-guide" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Pros and Cons: Creating S3 Buckets using AWS CLI vs. AWS Console
&lt;/h2&gt;

&lt;p&gt;When it comes to creating Amazon S3 buckets, you have the choice between using the AWS Command Line Interface (CLI) or the AWS Management Console (User Interface). Each option comes with its own set of advantages and disadvantages.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CLI
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automation and Scripting:&lt;/strong&gt; The CLI allows for automation of bucket creation and management tasks through scripts, making it easy to repeat tasks and maintain consistency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency:&lt;/strong&gt; For users comfortable with command-line interfaces, the CLI can be faster and more efficient, especially for bulk operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration:&lt;/strong&gt; It's easy to integrate AWS CLI commands into your custom applications or scripts for seamless integration with other processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Learning Curve:&lt;/strong&gt; For users unfamiliar with command-line interfaces, there might be a learning curve to understand and use CLI commands effectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Potential Errors:&lt;/strong&gt; Mistakes in commands could lead to unintended actions, potentially affecting your resources or data if not used carefully.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Console (User Interface)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visual Interface:&lt;/strong&gt; The UI provides a visual representation of your resources, making it easier for users who prefer a graphical interface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ease of Use:&lt;/strong&gt; Creating buckets through the UI is straightforward, requiring minimal technical expertise or familiarity with command-line tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validation:&lt;/strong&gt; The UI often includes validation checks and prompts, reducing the likelihood of errors during the bucket creation process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manual Process:&lt;/strong&gt; Creating buckets through the UI can be time-consuming, especially for repetitive tasks or when managing multiple buckets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited Automation:&lt;/strong&gt; While the UI can be used to perform manual tasks, it lacks the automation capabilities that the CLI provides, which might be necessary for certain use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited Scripting:&lt;/strong&gt; If you need to integrate bucket creation into your custom scripts or applications, the UI might not be as flexible as the CLI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, choosing between the AWS CLI and the AWS Console for creating S3 buckets depends on your familiarity with command-line interfaces, the level of automation you require, and your preference for a visual or scripted approach. Both options have their merits, so consider your workflow and needs before deciding which one to use.&lt;/p&gt;
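&lt;p&gt;To make the automation advantage concrete, here is a minimal sketch of creating several buckets in one scripted pass, something tedious to do by hand in the Console. The prefix and region below are assumptions for illustration:&lt;/p&gt;

```python
def bucket_names(prefix: str, count: int) -> list:
    """Generate numbered bucket names from a prefix."""
    return [f"{prefix}-{i}" for i in range(1, count + 1)]

def create_buckets(prefix: str, count: int, region: str = "us-east-2") -> None:
    """Create the buckets via boto3 (requires AWS credentials)."""
    import boto3  # deferred import so bucket_names() works without boto3
    s3 = boto3.client("s3", region_name=region)
    for name in bucket_names(prefix, count):
        s3.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={"LocationConstraint": region},
        )
```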

&lt;h2&gt;
  
  
  5. Access Control and Permissions
&lt;/h2&gt;

&lt;p&gt;S3 offers fine-grained access control. You can control who can access your bucket and objects. By default, all newly created buckets and objects are private.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rpm02aokpge5ewjnzml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rpm02aokpge5ewjnzml.png" alt="Bucket permissions" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Making an object public can also be done from the AWS Console by editing the Block public access settings on the &lt;code&gt;Permissions&lt;/code&gt; page shown above; here I will show how to do it from the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api put-object-acl &lt;span class="nt"&gt;--bucket&lt;/span&gt; your-unique-bucket-name &lt;span class="nt"&gt;--key&lt;/span&gt; your-file.txt &lt;span class="nt"&gt;--acl&lt;/span&gt; public-read
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz7bbwxor9l7gwbp8h6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz7bbwxor9l7gwbp8h6y.png" alt="Setting public access" width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Data Management Features
&lt;/h2&gt;

&lt;p&gt;Amazon S3 provides various features for managing your data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Versioning&lt;/strong&gt;: Enable versioning on your bucket to keep multiple versions of an object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lifecycle Policies&lt;/strong&gt;: Define rules to automatically transition objects to different storage classes or delete them after a specified period.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Region Replication&lt;/strong&gt;: Replicate objects across different regions for disaster recovery.&lt;/li&gt;
&lt;/ul&gt;
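&lt;p&gt;The first two features from the list can be sketched with boto3. The prefix and the 30-day/365-day thresholds below are assumptions for illustration:&lt;/p&gt;

```python
GLACIER_AFTER_DAYS = 30   # assumed threshold for the transition
EXPIRE_AFTER_DAYS = 365   # assumed threshold for expiration

def lifecycle_rule(prefix: str) -> dict:
    """Lifecycle rule: move objects under `prefix` to Glacier, then expire them."""
    return {
        "ID": f"archive-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": GLACIER_AFTER_DAYS, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": EXPIRE_AFTER_DAYS},
    }

def enable_data_management(bucket: str) -> None:
    """Turn on versioning and attach the lifecycle rule (requires AWS credentials)."""
    import boto3  # deferred import so lifecycle_rule() works without boto3
    s3 = boto3.client("s3")
    s3.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
    )
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [lifecycle_rule("raw-data/")]},
    )
```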

&lt;h2&gt;
  
  
  7. Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we've covered the basics of Amazon S3, from creating buckets to uploading objects and managing access. Amazon S3's scalability, durability, and rich feature set make it a fundamental service for various cloud-based applications.&lt;/p&gt;

&lt;p&gt;To explore further, you can learn about advanced features such as S3 event notifications, data encryption, and using S3 with other AWS services.&lt;/p&gt;

&lt;p&gt;Remember, this is just the beginning of your journey into AWS storage services. Happy exploring!&lt;/p&gt;

&lt;p&gt;Feel free to check the &lt;a href="https://docs.aws.amazon.com/s3/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; for more in-depth information and advanced features.&lt;/p&gt;

&lt;p&gt;Stay tuned for more tutorials on AWS storage services!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Interacting with Amazon S3 using AWS Data Wrangler (awswrangler) SDK for Pandas: A Comprehensive Guide</title>
      <dc:creator>fvgm-spec</dc:creator>
      <pubDate>Sun, 20 Aug 2023 19:09:24 +0000</pubDate>
      <link>https://dev.to/fvgmspec/interacting-with-amazon-s3-using-aws-data-wrangler-awswrangler-sdk-for-pandas-a-comprehensive-guide-bgl</link>
      <guid>https://dev.to/fvgmspec/interacting-with-amazon-s3-using-aws-data-wrangler-awswrangler-sdk-for-pandas-a-comprehensive-guide-bgl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon S3 is a widely used cloud storage service for storing and retrieving data. AWS Data Wrangler (awswrangler) is a Python library that simplifies the process of interacting with various AWS services, including Amazon S3, especially in combination with Pandas DataFrames. In this article, I will guide you through the process of effectively using the awswrangler library to interact with Amazon S3, focusing on data manipulation with Pandas DataFrames.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Introduction to AWS Data Wrangler&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is AWS Data Wrangler?&lt;/li&gt;
&lt;li&gt;Key Features and Benefits&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set Up Your AWS Account&lt;/li&gt;
&lt;li&gt;Install Required Libraries&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Connecting to Amazon S3&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configuring AWS Credentials&lt;/li&gt;
&lt;li&gt;Creating a Connection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Uploading and Downloading Data&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loading Data to S3&lt;/li&gt;
&lt;li&gt;Reading Data from S3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leveraging awswrangler for S3 Data Operations&lt;/li&gt;
&lt;li&gt;Further Learning Resources&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction to AWS Data Wrangler
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is AWS Data Wrangler?
&lt;/h3&gt;

&lt;p&gt;AWS Data Wrangler is a Python library that simplifies the process of interacting with various AWS services, built on top of some useful data tools and open-source projects such as &lt;a href="https://github.com/pandas-dev/pandas" rel="noopener noreferrer"&gt;Pandas&lt;/a&gt;, &lt;a href="https://github.com/apache/arrow" rel="noopener noreferrer"&gt;Apache Arrow&lt;/a&gt; and &lt;a href="https://github.com/boto/boto3" rel="noopener noreferrer"&gt;Boto3&lt;/a&gt;. It offers streamlined functions to connect to, retrieve, transform, and load data from AWS services, with a strong focus on Amazon S3.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features and Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Seamless Integration: Integrate AWS services with Pandas DataFrames using familiar methods.&lt;/li&gt;
&lt;li&gt;Efficient Data Manipulation: Perform data transformations efficiently using optimized Pandas methods.&lt;/li&gt;
&lt;li&gt;Simplified Connection: Easily configure AWS credentials and establish connections to AWS services.&lt;/li&gt;
&lt;li&gt;Error Handling: Built-in error handling and logging mechanisms for improved reliability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Prerequisites
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Set Up Your AWS Account
&lt;/h3&gt;

&lt;p&gt;Before you start, ensure you have an AWS account setup with the necessary IAM (Identity and Access Management) user credentials with S3 access permissions, and have the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; configured locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Required Libraries
&lt;/h3&gt;

&lt;p&gt;It is a generally well-known good practice to work in isolated environments, especially when you are trying out new Python libraries, so if you are a conda user, you should first create a conda environment where you will then install awswrangler. &lt;/p&gt;

&lt;p&gt;First create your conda environment by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda create &lt;span class="nt"&gt;-n&lt;/span&gt; data-wrangling &lt;span class="nv"&gt;python&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then activate the environment running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda activate data-wrangling
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs04w8obsj1tarut57106.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs04w8obsj1tarut57106.png" alt="Activating conda environment" width="673" height="847"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it is time to install the required libraries inside the environment using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;awswrangler pandas boto3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Connecting to Amazon S3
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating S3 bucket
&lt;/h3&gt;

&lt;p&gt;In order to create the S3 bucket we will use the AWS CLI, and if you followed the previous guidelines on setting up your AWS account, your access keys should be stored in your &lt;em&gt;%USERPROFILE%\.aws&lt;/em&gt; directory (on Windows, that resolves to &lt;em&gt;C:\Users\your-user\.aws&lt;/em&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcb2z2sxz6ihfcahhe77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcb2z2sxz6ihfcahhe77.png" alt="Image description" width="711" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then you will be able to create the bucket from the command line by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api create-bucket &lt;span class="nt"&gt;--bucket&lt;/span&gt; aws-sdk-pandas72023 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-2 &lt;span class="nt"&gt;--create-bucket-configuration&lt;/span&gt; &lt;span class="nv"&gt;LocationConstraint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have called the bucket &lt;em&gt;aws-sdk-pandas72023&lt;/em&gt;, but you can name it whatever you like as long as it follows the S3 &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html" rel="noopener noreferrer"&gt;bucket naming rules&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then you will receive the following output in the command line:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpoz4rfc258yfr2y9yuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpoz4rfc258yfr2y9yuq.png" alt="Bucket creation output" width="548" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And you will be able to see the newly created bucket in your AWS Console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnv80edymh8evpwz3v9ba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnv80edymh8evpwz3v9ba.png" alt="Image description" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Connection
&lt;/h3&gt;

&lt;p&gt;The awswrangler library &lt;a href="https://aws-sdk-pandas.readthedocs.io/en/stable/tutorials/002%20-%20Sessions.html" rel="noopener noreferrer"&gt;internally handles Sessions&lt;/a&gt; and AWS credentials using boto3 in order to connect to your bucket with your AWS credentials, so besides importing awswrangler you should also:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All the packages you would need to import are the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Importing required libraries
import awswrangler as wr
import yfinance as yf
import boto3
import pandas as pd
import datetime as dt
from datetime import date, timedelta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also git clone the &lt;a href="https://github.com/fvgm-spec/cloud-experiments/blob/main/experiments/notebooks/aws-sdk-pandas/aws-sdk-pandas.ipynb" rel="noopener noreferrer"&gt;repository&lt;/a&gt; that has the code used in this tutorial.&lt;/p&gt;

&lt;p&gt;If you have cloned the repository, you will have noticed that we are using the library &lt;em&gt;yfinance&lt;/em&gt; to extract stocks data from the API and store it in a pandas dataframe, so we can write the extracted dataframes to the previously created S3 bucket using &lt;em&gt;awswrangler&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxti6l9fyis40atfwaaof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxti6l9fyis40atfwaaof.png" alt="Get stock data function" width="673" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The awswrangler API is able to read and write data in a wide variety of file formats and to interact with numerous AWS services; refer to &lt;a href="https://aws-sdk-pandas.readthedocs.io/en/stable/tutorials.html" rel="noopener noreferrer"&gt;this list&lt;/a&gt; for more information.  &lt;/p&gt;

&lt;h2&gt;
  
  
  4. Loading and Downloading Data from S3 buckets
&lt;/h2&gt;

&lt;p&gt;Covering all the services available in the API to read and write data from and to AWS would make this tutorial quite long, so we will focus on one of the most commonly used storage services in data projects: S3.&lt;/p&gt;

&lt;h3&gt;
  
  
  Uploading Data to S3
&lt;/h3&gt;

&lt;p&gt;In the repository shared above I have written a function that writes the dataframe extracted with the &lt;em&gt;get_data_from_api&lt;/em&gt; function to the S3 bucket previously created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;write_data_to_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Parameters:
    ----------
    mode(str): Available write modes are &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;append&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;overwrite&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; and &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;overwrite_partitions&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3://&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/raw-data/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;file_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;#Sending dataframe of corresponding ticker to bucket
&lt;/span&gt;    &lt;span class="n"&gt;wr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mode&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So let's put &lt;em&gt;get_data_from_api&lt;/em&gt; into action by passing it the symbol of the &lt;a href="https://finance.yahoo.com/quote/NVDA?p=NVDA" rel="noopener noreferrer"&gt;NVDA stock&lt;/a&gt;, and then we will load the result to the bucket using &lt;em&gt;write_data_to_bucket&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefdu2dz0d7a2nbz557pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefdu2dz0d7a2nbz557pv.png" alt="NVIDIA Corporation" width="784" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we go to the S3 bucket, we'll notice that there will be a new folder inside the bucket with the name of the stock, and a CSV file inside of it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r30uxhtpnyj8kyene7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r30uxhtpnyj8kyene7y.png" alt="Folder created in S3" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also pass multiple dataframes to the function so they will be created in the bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kezvicue44vze7j9ggu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kezvicue44vze7j9ggu.png" alt="Downloading multiple dataframes" width="646" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Reading Data from S3
&lt;/h3&gt;

&lt;p&gt;Reading data from an S3 bucket using awswrangler is a very straightforward task, as you only need to pass the S3 path where your files are stored and the &lt;code&gt;path_suffix&lt;/code&gt; corresponding to the method you are using, in this case &lt;code&gt;read_csv&lt;/code&gt;. Find more parameters available for this method in the awswrangler &lt;a href="https://aws-sdk-pandas.readthedocs.io/en/stable/stubs/awswrangler.s3.read_csv.html#awswrangler.s3.read_csv" rel="noopener noreferrer"&gt;API reference&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In my tutorial I have written a function that takes the name of the folder where the CSV file was stored when we wrote the data coming from the API to the S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_csv_from_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;folder_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
   &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;wr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3://&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/rawdata/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;folder_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
   &lt;span class="n"&gt;path_suffix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpulvahrk2y9a39ujx4f7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpulvahrk2y9a39ujx4f7.png" alt="Output from read_csv_from_bucket function" width="706" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk04qhxjo1yj4wq26m87k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk04qhxjo1yj4wq26m87k.png" alt="Data stored in S3 bucket" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Conclusion
&lt;/h2&gt;

&lt;p&gt;There are a lot of methods available on the API Reference page in order to interact with S3 and multiple other AWS services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsi5i9mwnv03x2vozbix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsi5i9mwnv03x2vozbix.png" alt="API Reference" width="800" height="751"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I encourage you to keep testing the ones you find useful and to integrate them with your ETLs and pipelines.&lt;/p&gt;

&lt;p&gt;AWS Data Wrangler simplifies the interaction with Amazon S3, providing a seamless experience for performing data operations using Pandas DataFrames. This tutorial covered the fundamental concepts of connecting to S3, uploading, downloading, and transforming data, as well as advanced interactions. By leveraging AWS Data Wrangler, you can streamline your data workflows and focus on deriving insights from your data.&lt;/p&gt;

&lt;p&gt;Cheers and happy coding!!&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources for Further Learning
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Data Wrangler Documentation: &lt;a href="https://aws-data-wrangler.readthedocs.io/" rel="noopener noreferrer"&gt;https://aws-data-wrangler.readthedocs.io/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AWS Data Wrangler GitHub Repository: &lt;a href="https://github.com/awslabs/aws-data-wrangler" rel="noopener noreferrer"&gt;https://github.com/awslabs/aws-data-wrangler&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AWS SDK for Python (Boto3) Documentation: &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html" rel="noopener noreferrer"&gt;https://boto3.amazonaws.com/v1/documentation/api/latest/index.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>s3</category>
      <category>datawrangling</category>
      <category>pandas</category>
      <category>python</category>
    </item>
    <item>
      <title>AWS in Plain English</title>
      <dc:creator>fvgm-spec</dc:creator>
      <pubDate>Mon, 11 Apr 2022 11:58:32 +0000</pubDate>
      <link>https://dev.to/fvgmspec/aws-in-plain-english-5a15</link>
      <guid>https://dev.to/fvgmspec/aws-in-plain-english-5a15</guid>
      <description>&lt;p&gt;This is supposed to be my very first article regarding my learning path on AWS ecosystem. &lt;/p&gt;

&lt;p&gt;This time I will write about a useful tool I found in one of my readings: &lt;em&gt;AWS in Plain English&lt;/em&gt;. Of course, it is not complete documentation of each AWS service like we can find in the official docs, but it is a compiled guide that tells us, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What we should have called that service (EC2 = Amazon Virtual Servers)&lt;/li&gt;
&lt;li&gt;How we usually use this service (it's self-explanatory)&lt;/li&gt;
&lt;li&gt;It's like: a short description of what this service is similar to among tools we have surely used in the past (VPC = VLAN networks).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwhgvcdii00obrng3j2e.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwhgvcdii00obrng3j2e.PNG" alt="Image description" width="800" height="477"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;As a summary, it has a lot of short descriptions of AWS services that we already know and some others that we maybe don't. &lt;/p&gt;

&lt;p&gt;For the link visit &lt;a href="https://expeditedsecurity.com/aws-in-plain-english/" rel="noopener noreferrer"&gt;https://expeditedsecurity.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
