<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Renato Losio 💭💥</title>
    <description>The latest articles on DEV Community by Renato Losio 💭💥 (@cloudiamo).</description>
    <link>https://dev.to/cloudiamo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F645092%2Ff53a2f43-2a40-4fa6-9ac7-e08485737182.png</url>
      <title>DEV Community: Renato Losio 💭💥</title>
      <link>https://dev.to/cloudiamo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cloudiamo"/>
    <language>en</language>
    <item>
      <title>You Know What About Me? Decoding My Digital Trail Across Major Platforms</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Sun, 08 Jun 2025 16:04:42 +0000</pubDate>
      <link>https://dev.to/cloudiamo/you-know-what-about-me-decoding-my-digital-trail-across-major-platforms-3mbd</link>
      <guid>https://dev.to/cloudiamo/you-know-what-about-me-decoding-my-digital-trail-across-major-platforms-3mbd</guid>
      <description>&lt;p&gt;&lt;em&gt;Under European rules, users can request personal data from platforms, but few do, and the results are often hard to use. I accessed and parsed data from TikTok, Amazon, Google, and Instagram, uncovering surprising insights and useful tips.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We often hear the phrase "if you're not paying for the product, you are the product." Yet despite our concerns about data privacy and corporate surveillance, very few of us know what data these platforms collect about us. Even fewer take advantage of our legal right to access this information.&lt;/p&gt;

&lt;p&gt;While European regulations are sometimes criticized for focusing on mundane issues like standardizing charging cables, there's one European rule that has quietly spread worldwide and genuinely empowers users: the right of access under Article 15 of the GDPR.&lt;/p&gt;

&lt;h2&gt;The Right You Often Don't Remember You Have&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Under Article 15 of the GDPR, individuals (data subjects) have the right to obtain a copy of their personal data held by a data controller, as well as information on how and why it is being processed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This isn't just a European privilege: most global services implement the same data takeout process for all users worldwide, regardless of location, because maintaining separate systems would be too complicated.&lt;/p&gt;

&lt;p&gt;The process is straightforward but asynchronous. Companies have up to one month to provide your data (extendable by two further months with good reason), and it must be provided in a structured, machine-readable format, typically CSV, JSON, or plain text files. Best of all, it's usually free of charge unless you make excessive requests.&lt;/p&gt;

&lt;p&gt;From Duolingo to Dropbox, from small German cloud services to tech giants like Google, virtually every digital service now offers some form of data takeout. You simply navigate to your account settings, request your data, and receive download links via email within days or weeks.&lt;/p&gt;

&lt;h2&gt;The Reality: Nobody's Looking&lt;/h2&gt;

&lt;p&gt;Despite this powerful right, actual usage remains remarkably low. In my experience working as a software architect implementing these systems, only a tiny fraction of users ever request their data. The reasons are twofold: users don't know about this option or find it cumbersome, while service providers have no incentive to promote it. After all, data egress from cloud services is expensive, and encouraging users to download their data represents a pure cost with no business benefit.&lt;/p&gt;

&lt;p&gt;But the results can be eye-opening. File sizes range from a few megabytes for simple services to multiple terabytes for users with extensive cloud storage or long platform histories.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;951825487 Dec  5 12:04  data-takeout-amazon-renato-losio-20241205.zip&lt;br&gt;
5195593 May  6 19:51  duolingo.zip&lt;br&gt;
2514113 Nov 30 12:55  TikTok_Data_1732739019.zip&lt;br&gt;
2426598 Apr  6 14:59  TikTok_Data_1743792620.zip&lt;br&gt;
6546575709 Mar 29  2018  gmail.zip&lt;br&gt;
1102106991 Mar 28  2018  google-20180328T154946Z-001.zip&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;Inside the Black Box: What TikTok Collects&lt;/h2&gt;

&lt;p&gt;TikTok, despite its controversial reputation, actually provides one of the most user-friendly data takeouts I've encountered. The folder structure is intuitive, with clearly labeled directories like "Activity" containing the most interesting information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztxudj4ybf9pc7n7z0xn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztxudj4ybf9pc7n7z0xn.png" alt="TikTok Data Takeout" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The login history alone reveals extensive tracking: every login is recorded with location data, network type (WiFi vs. mobile), carrier information, and device details. But the real revelation lies in the watch history data.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Date: 2024-10-06 05:43:41&lt;br&gt;
IP: 109.42.113.107&lt;br&gt;
Device Model: iPhone13,3&lt;br&gt;
Device System: iOS 17.5.1&lt;br&gt;
Network Type: 4G&lt;br&gt;
Carrier:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For one account I analyzed, the numbers were staggering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;188,000 videos shown through the feed&lt;/li&gt;
&lt;li&gt;43,000 videos watched to completion (about 25%)&lt;/li&gt;
&lt;li&gt;An average of over 500 videos per day&lt;/li&gt;
&lt;li&gt;Peak days reaching nearly 2,000 videos&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4qpdqb1onzh2gmewau3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4qpdqb1onzh2gmewau3.png" alt="TikTok Usage" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;
(Note: data may have delays of up to several days)&lt;br&gt;
Videos shared since account registration: 166&lt;br&gt;
Videos watched to the end since account registration: 43494&lt;br&gt;
Videos commented on since account registration: 1375&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Using simple data analysis tools (even ChatGPT for non-developers), you can uncover viewing patterns by hour and day of the week, identify inactive periods, and calculate session durations. Most surprisingly, the claim that TikTok's algorithm needs only 260 videos to get someone hooked proved conservative: at the watch rates above, that threshold can be reached in less than 17 minutes of usage.&lt;/p&gt;
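
&lt;p&gt;As a sketch of what this analysis can look like: assuming the watch history is a text export with one &lt;code&gt;Date: YYYY-MM-DD HH:MM:SS&lt;/code&gt; line per video (the format shown in the snippet above; the file name is hypothetical), a grep-and-awk one-liner is enough to build an hour-of-day histogram:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;grep '^Date:' "Watch History.txt" |&lt;br&gt;
  awk '{ split($3, t, ":"); count[t[1]]++ }&lt;br&gt;
       END { for (h in count) printf "%s:00  %d\n", h, count[h] }' |&lt;br&gt;
  sort&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;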

&lt;h2&gt;Google's Omniscient Eye&lt;/h2&gt;

&lt;p&gt;Google's data takeout is perhaps the most comprehensive, covering its vast ecosystem of services. For location data alone, the scale is breathtaking. A typical Android user's location history contains hundreds of thousands of data points spanning years.&lt;/p&gt;

&lt;p&gt;In one analysis of 10 years of location data, I found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;332,000 location data points&lt;/li&gt;
&lt;li&gt;Roughly 100+ location pings per day&lt;/li&gt;
&lt;li&gt;Millisecond-accurate timestamps&lt;/li&gt;
&lt;li&gt;Activity inference (in vehicle, walking, stationary)&lt;/li&gt;
&lt;li&gt;Complete movement patterns mapping every journey&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This data is so detailed that you could reconstruct someone's entire decade of movement, identify their home and work locations, track their travel patterns, and even infer their lifestyle habits. The precision is both impressive and unsettling.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
    "timestampMs" : "1505486494021",&lt;br&gt;
    "latitudeE7" : 525277253,&lt;br&gt;
    "longitudeE7" : 133817500,&lt;br&gt;
    "accuracy" : 20,&lt;br&gt;
    "altitude" : 75,&lt;br&gt;
    "activity" : [ {&lt;br&gt;
      "timestampMs" : "1505486658517",&lt;br&gt;
      "activity" : [ {&lt;br&gt;
        "type" : "STILL",&lt;br&gt;
        "confidence" : 71&lt;br&gt;
      }, {&lt;br&gt;
        "type" : "UNKNOWN",&lt;br&gt;
        "confidence" : 18&lt;br&gt;
      }, {&lt;br&gt;
        "type" : "IN_VEHICLE",&lt;br&gt;
        "confidence" : 12&lt;br&gt;
      } ]&lt;br&gt;
    } ]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;
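
&lt;p&gt;The &lt;code&gt;latitudeE7&lt;/code&gt; and &lt;code&gt;longitudeE7&lt;/code&gt; fields are simply degrees multiplied by 10^7, and the timestamps are epoch milliseconds. Decoding the record above takes one line of shell:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;awk 'BEGIN { printf "%.7f, %.7f\n", 525277253 / 1e7, 133817500 / 1e7 }'&lt;br&gt;
# prints: 52.5277253, 13.3817500 (a spot in Berlin)&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;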

&lt;h2&gt;Amazon's Data Labyrinth&lt;/h2&gt;

&lt;p&gt;Amazon's data takeout presents a stark contrast to TikTok's user-friendly approach. The folder structure is bewildering, with dozens of cryptically named directories requiring careful exploration to understand their contents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2rhbe1kei0xqbbg14ln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2rhbe1kei0xqbbg14ln.png" alt="Amazon Data Takeout" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hidden within this maze are surprising insights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete lists of advertising audiences you've been placed in&lt;/li&gt;
&lt;li&gt;Prime Video location tracking showing which countries you've streamed from&lt;/li&gt;
&lt;li&gt;Every PDF invoice from decades of purchases&lt;/li&gt;
&lt;li&gt;Detailed records of all product searches and browsing history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For one user's Prime Video data over a two-year period, location tracking revealed that 75% of their time was spent in Germany, 18% in Italy, and the remainder in various other locations, creating an accurate picture of their international movements based solely on streaming activity.&lt;/p&gt;

&lt;h2&gt;Beyond Complaint: Taking Action&lt;/h2&gt;

&lt;p&gt;The power of data takeouts extends beyond satisfying curiosity. This information enables:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better Privacy Awareness&lt;/strong&gt;: Understanding exactly what data companies collect helps inform decisions about privacy settings and service usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parental Conversations&lt;/strong&gt;: Having concrete data about social media usage patterns provides a factual foundation for discussions with children about screen time and digital habits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personal Insights&lt;/strong&gt;: Identifying usage patterns, behavioral trends, and even recovering lost information (like old receipts buried in years of purchase history).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Corporate Accountability&lt;/strong&gt;: Companies can only be held accountable for data practices when users understand what data is being collected.&lt;/p&gt;

&lt;h2&gt;Practical Tips for Data Exploration&lt;/h2&gt;

&lt;p&gt;For those interested in exploring their digital footprint:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start Simple&lt;/strong&gt;: Begin with smaller services before tackling comprehensive platforms like Google or Amazon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sanitize Sensitive Data&lt;/strong&gt;: When using AI tools for analysis, remove personal identifiers and URLs to protect privacy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Focus on Patterns&lt;/strong&gt;: Look for trends and usage patterns rather than processing every data point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Available Tools&lt;/strong&gt;: Non-developers can leverage ChatGPT or similar tools to analyze data patterns without coding skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expect Variety&lt;/strong&gt;: There's no standard format across platforms, but most use JSON, CSV, or plain text files that are reasonably accessible.&lt;/p&gt;
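
&lt;p&gt;A quick way to see which formats you received after extracting an archive is to count files by extension (the &lt;code&gt;takeout&lt;/code&gt; folder name is a placeholder):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;find takeout -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;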

&lt;h2&gt;The Path Forward&lt;/h2&gt;

&lt;p&gt;Privacy begins with awareness. We cannot meaningfully discuss data rights, corporate responsibility, or digital privacy without first understanding what information we're sharing. The tools exist (legally mandated and technically accessible) to pull back the curtain on our digital footprints.&lt;/p&gt;

&lt;p&gt;Rather than simply complaining about data issues, we can take concrete action. Request your data, explore what's there, and use that knowledge to make informed decisions about your digital life. The companies won't advertise this option, but they're legally required to provide it when asked.&lt;/p&gt;

&lt;p&gt;Your data belongs to you. The first step in taking control is knowing what's there to control.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The author &lt;a href="https://www.youtube.com/watch?v=qogbIif1AlE" rel="noopener noreferrer"&gt;presented this session at re:publica 25&lt;/a&gt;, demonstrating live data analysis from major platforms while protecting personal privacy through data sanitization techniques. The first draft of this article was generated from the session audio using Amazon Transcribe and Claude Sonnet 4.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>data</category>
      <category>socialmedia</category>
    </item>
    <item>
      <title>Around the World in 15 Buckets</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Sun, 15 Dec 2024 15:48:49 +0000</pubDate>
      <link>https://dev.to/aws-heroes/around-the-world-in-15-buckets-o88</link>
      <guid>https://dev.to/aws-heroes/around-the-world-in-15-buckets-o88</guid>
      <description>&lt;p&gt;Today, we’ll embark on a little journey around the world using S3. We’ll copy various objects across AWS regions worldwide, aiming to answer two simple questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Does the &lt;a href="https://aws.amazon.com/s3/storage-classes/" rel="noopener noreferrer"&gt;storage class&lt;/a&gt; affect the speed?&lt;/li&gt;
&lt;li&gt;How does the size of the object influence the transfer time?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Verne’s Journey&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine wagering your entire fortune on a wild, globe-trotting adventure. That’s exactly what Phileas Fogg, a meticulous and mysterious Englishman, does in Jules Verne’s &lt;em&gt;&lt;a href="https://en.wikipedia.org/wiki/Around_the_World_in_Eighty_Days" rel="noopener noreferrer"&gt;Around the World in 80 Days&lt;/a&gt;&lt;/em&gt;, racing against time and overcoming a few unexpected obstacles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrobyi53vpnhzkc1hqe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrobyi53vpnhzkc1hqe3.png" alt="Around the world - Verne" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While Fogg sets out to prove he can circumnavigate the planet in just 80 days, we want to prove we can circle the globe through AWS data centers in just a few seconds—copying S3 objects worldwide in the fastest way possible. Fogg begins in London, traveling through Europe, Asia, the United States, and back to Ireland before returning to London. Can we achieve something similar with Amazon S3? AWS doesn’t (yet) have 80 regions, but we don’t need them. Instead of using all the available regions, we’ll stick to fifteen, just enough to follow Fogg’s route.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51um5ynkff4da4ri47bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51um5ynkff4da4ri47bu.png" alt="Around the world - Amazon S3" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ruling Out S3 Replication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First off, do we need to implement the logic ourselves? Can’t we just use &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html" rel="noopener noreferrer"&gt;S3 replication&lt;/a&gt;? As a lazy cloud architect, that would definitely be my favorite approach. It’s the first thing I’d consider to send my objects worldwide in 80 buckets.&lt;/p&gt;

&lt;p&gt;However, replication doesn’t cascade, and it’s an asynchronous operation that can take up to 24–48 hours. You can reduce the replication time to a 15-minute SLA by paying a bit more, but even that’s still not the fastest way to move objects globally. I want to be as fast as possible when moving my data, so I’ve decided to implement this journey myself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Simple S3 Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we need a bucket in every region we plan to use. Here’s the list of regions for our experiment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;verne_buckets=("eu-west-2" "eu-west-3" "eu-south-1" "il-central-1" "me-south-1" "me-central-1" "ap-south-1" "ap-southeast-1" "ap-east-1" "ap-northeast-1" "us-west-1" "us-west-2" "us-east-2" "eu-west-1" "eu-west-2")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s just a subset—we could add a few more (like Northern Virginia), but doing so wouldn’t significantly affect the results. If you want to replicate this experiment yourself, keep in mind that some newer regions aren’t enabled by default, so you’ll need to take care of that first.&lt;/p&gt;

&lt;p&gt;Another hurdle is that you can’t use the same bucket name in different regions, which means all our bucket names need to be unique. How can we manage that? The simplest—though not the most elegant—approach is to create a reasonably unique prefix and append the region code to the name, for example &lt;code&gt;verne-demo-us-east-1&lt;/code&gt;. Once we have an array of the regions we’re using, we can use the AWS CLI to create those buckets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3prefix="verne-demo"

for region in "${verne_buckets[@]}"

do

   bucket_name="$s3prefix-$region"

   echo "aws s3api create-bucket --bucket $bucket_name --region $region --create-bucket-configuration LocationConstraint=$region"

done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then upload files of different sizes to our very first region, London—the city where Verne’s journey begins, and so does ours. In theory, we could upload objects up to 5TB (the maximum size of an object on Amazon S3). However, to keep the test simple—and the storage and transfer costs negligible—we’ll stick to the following items and sizes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nzg7r3h2tng8199zn7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nzg7r3h2tng8199zn7a.png" alt="Objects uploaded - S3 London" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;
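
&lt;p&gt;As a sketch, the test objects can be generated locally and the upload commands printed for review, following the same echo pattern as above (object names match the tables below; the bucket name assumes the &lt;code&gt;verne-demo&lt;/code&gt; prefix):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# generate zero-filled test objects and print the upload commands
# for the first stop of the journey, London (eu-west-2)
for size in 1K 1M 10M 100M 500M
do
  dd if=/dev/zero of="verne-$size" bs="$size" count=1
  echo "aws s3 cp verne-$size s3://verne-demo-eu-west-2/verne-$size"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;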

&lt;p&gt;How can we now copy data between buckets around the world? The most obvious and elegant way is to go serverless, using AWS Lambda and Amazon S3 event notifications. When an object lands in a bucket, you can trigger a Lambda function to copy it to another bucket, and so on.&lt;/p&gt;

&lt;p&gt;But I’m a lazy developer, and I just need to answer a few simple questions. I don’t want to deploy Lambda functions everywhere and deal with a lot of setup. Thankfully, with the AWS CLI and AWS CloudShell, the task is straightforward and the overhead minimal. When you &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html" rel="noopener noreferrer"&gt;copy an object between two regions using the CLI&lt;/a&gt;, the binary doesn’t pass through your terminal—it’s handled directly between AWS data centers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (( i=0; i&amp;lt;$array_length-1; i++ ))

do

aws s3api copy-object --copy-source $s3prefix-${verne_buckets[$i]}/$objectname --key $objectname --bucket $s3prefix-${verne_buckets[$((i+1))]} --region ${verne_buckets[$((i+1))]} --storage-class $testclass

done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;On Our Way&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s run our simple tests. You’ll immediately notice that using the S3 copy-object operation makes our journey impressively fast. You can circle the globe in just 23 seconds! However, as the object size increases, transfer times start to scale linearly. For example, compare the times for 100 MB versus 500 MB. At smaller sizes, the overhead of the commands becomes more noticeable.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;s3api copy-object&lt;/th&gt;
&lt;th&gt;s3 cp (multipart)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;verne-1K&lt;/td&gt;
&lt;td&gt;23s&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;verne-1M&lt;/td&gt;
&lt;td&gt;31s&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;verne-10M&lt;/td&gt;
&lt;td&gt;56s&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;verne-100M&lt;/td&gt;
&lt;td&gt;5m25s&lt;/td&gt;
&lt;td&gt;3m40s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;verne-500M&lt;/td&gt;
&lt;td&gt;25m42s&lt;/td&gt;
&lt;td&gt;5m27s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What happens when you switch to the multipart scenario? The growth is no longer linear: with the default client configuration and no tuning at all, you can copy a 500 MB object across 15 buckets in about 5 minutes.&lt;/p&gt;
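
&lt;p&gt;For context, "default client configuration" means the AWS CLI's standard transfer settings. The values below are the documented defaults (nothing was changed for this test), and they are also the knobs you could tune for further gains:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AWS CLI S3 transfer defaults: multipart starts above 8 MB,
# in 8 MB chunks, with up to 10 concurrent requests
aws configure set default.s3.multipart_threshold 8MB
aws configure set default.s3.multipart_chunksize 8MB
aws configure set default.s3.max_concurrent_requests 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;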

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;storage class&lt;/th&gt;
&lt;th&gt;s3api copy-object 1M&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;STANDARD&lt;/td&gt;
&lt;td&gt;30.4s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;STANDARD-IA&lt;/td&gt;
&lt;td&gt;31.0s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ONEZONE-IA (*)&lt;/td&gt;
&lt;td&gt;31.2s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GLACIER-IR&lt;/td&gt;
&lt;td&gt;30.9s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;REDUCED_REDUNDANCY (*)&lt;/td&gt;
&lt;td&gt;29.6s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;With multiple storage classes available, does the storage class matter? Two storage classes—the deprecated S3 Reduced Redundancy and S3 One Zone-Infrequent Access—don’t use three separate AZs. Would that offer any speed advantage? The short answer is no. All storage classes that provide immediate access to the object, including Glacier Instant Retrieval, behave similarly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This isn’t a benchmark or even a proper test. It’s just a fun experiment to validate some ideas about Amazon S3: the storage class of your objects doesn’t affect the time needed to copy them across regions, and the transfer time grows linearly with the object size—unless you use multipart upload.&lt;/p&gt;

&lt;p&gt;Also, Amazon S3 and AWS networks are impressively fast.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>s3</category>
    </item>
    <item>
      <title>S3 Lifecycle or Intelligent-Tiering? Object Size Always Matters</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Fri, 13 Dec 2024 12:02:41 +0000</pubDate>
      <link>https://dev.to/aws-heroes/s3-lifecycle-or-intelligent-tiering-object-size-always-matters-2mpb</link>
      <guid>https://dev.to/aws-heroes/s3-lifecycle-or-intelligent-tiering-object-size-always-matters-2mpb</guid>
      <description>&lt;p&gt;As a cloud architect and storage expert, I often hear the question: Which storage class should I use on Amazon S3? The answer is simple: as always, it depends. Today, we’ll explore some anti-patterns you might overlook when transitioning data between storage classes, whether using lifecycle rules or a managed class like S3 Intelligent-Tiering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Short Journey Back in Time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon S3 was &lt;a href="https://www.allthingsdistributed.com/2006/03/s3.html" rel="noopener noreferrer"&gt;announced almost 19 years ago&lt;/a&gt;. It’s one of the oldest services on the platform, second only to Amazon SQS. What we see today as Amazon S3 vastly differs from what was available many years ago.&lt;/p&gt;

&lt;p&gt;Even just a few years ago, there was no S3 Storage Lens, no conditional writes (&lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/" rel="noopener noreferrer"&gt;introduced earlier this year&lt;/a&gt;), and no S3 Intelligent-Tiering storage class. There were no S3 triggers and no replication across buckets or regions. When I started working with S3 (around 2010), it was a fairly limited service but, in some ways, simpler than it is today.&lt;/p&gt;

&lt;p&gt;At the same time, we tend to forget how expensive it was. As part of the &lt;a href="https://aws.amazon.com/about-aws/whats-new/2006/03/13/announcing-amazon-s3---simple-storage-service/" rel="noopener noreferrer"&gt;announcement 19 years ago&lt;/a&gt;, Jeff Barr wrote:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“1 GB of data for 1 month costs&lt;/em&gt; &lt;strong&gt;&lt;em&gt;just&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;15 cents.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The same storage class today costs just 2.3 cents (or less) per GB in the US East region. With additional storage classes designed to optimize costs, it can go below 0.1 cents per GB.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Storage Class&lt;/th&gt;
&lt;th&gt;Storage Price per GB&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;td&gt;2.3 cents (or less)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard Infrequent Access&lt;/td&gt;
&lt;td&gt;1.25 cents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Glacier Instant Retrieval&lt;/td&gt;
&lt;td&gt;0.4 cents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Glacier Deep Archive&lt;/td&gt;
&lt;td&gt;Less than 0.1 cents&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;AWS keeps iterating, adding new features and options. As a cloud architect, your job is to stay up to date. If you deployed and optimized costs on S3 five years ago, it’s time to revisit your setup. There’s a good chance your deployment is no longer optimized, and you might be leaving money on the table. Keep iterating!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Choose a Storage Class&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Choosing the right storage class depends on the scenario, use case, and associated costs. The price isn’t just about storage. Costs also depend on how you access your data, how you upload it, and how you move it around. So, deciding on the best storage class isn’t always straightforward.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpquflg7h3m7k9yc60ttl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpquflg7h3m7k9yc60ttl.png" alt="Amazon S3 Storage Classes" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hot storage refers to data that is frequently accessed and requires fast, consistent response times. On the other hand, cold storage is designed for data accessed infrequently and without the urgency required for hot data. The challenge? The same data often transitions from hot to cold over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html" rel="noopener noreferrer"&gt;Different classes&lt;/a&gt; come with varying retrieval costs, latency, availability, and minimum time commitments. Some classes store every copy of the data in a single Availability Zone, while others replicate data across three different ones. Different requirements mean different use cases. So, the big question is: how do you choose your storage class?&lt;/p&gt;

&lt;p&gt;There are some common patterns. Typically, new data is “warmer.” Over time, the likelihood of accessing that data decreases. This isn’t unique to AWS or object storage. For example, &lt;a href="https://www.infoq.com/articles/dropbox-magic-pocket-exabyte-storage" rel="noopener noreferrer"&gt;Dropbox has shown&lt;/a&gt; that even in backup solutions, most retrieved data is newly uploaded:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“As we have observed that 90% of retrievals are for data uploaded in the last year and 80% of retrievals happen within the first 100 days”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jk6c9wbxhlyys3kp68a.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jk6c9wbxhlyys3kp68a.webp" alt="InfoQ - Dropbox retrivals" width="617" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ideally, you want recent data in hotter storage classes and older data in colder ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lifecycle Rules Help&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How can you move data across storage classes? On S3, the easiest way is with &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html" rel="noopener noreferrer"&gt;S3 Lifecycle&lt;/a&gt;. These rules are incredibly powerful—you can decide which storage class to use, the size of objects to move, and when to move them. For example, I can move my data to a cheaper storage class after 45 days because I know I will not need to access it anymore. I can also specify that only large files are moved to colder storage classes, while smaller files (like thumbnails) stay in the Standard class, avoiding retrieval fees that could outweigh the storage savings.&lt;/p&gt;
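
&lt;p&gt;A minimal sketch of such a rule (the bucket name is a placeholder, and &lt;code&gt;ObjectSizeGreaterThan&lt;/code&gt; is expressed in bytes):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# hypothetical rule: after 45 days, move objects larger than 128 KB
# to Glacier Instant Retrieval; smaller ones stay in Standard
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [ {
      "ID": "cold-after-45-days",
      "Status": "Enabled",
      "Filter": { "ObjectSizeGreaterThan": 131072 },
      "Transitions": [ { "Days": 45, "StorageClass": "GLACIER_IR" } ]
    } ]
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;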

&lt;p&gt;Why do I focus on size? Object size matters for lifecycle rules and, as we will see shortly, for S3 Intelligent-Tiering as well. You pay for operations (transitions) by the number of objects, while storage is charged by size. What does that mean in practice? Let’s break it down with a simple example.&lt;/p&gt;

&lt;p&gt;Imagine I have 1 PB of data to move from Standard Infrequent Access (Standard-IA) to Glacier Instant Retrieval (Glacier-IR). How much does the transition cost? How does it impact my overall S3 costs? And how long does it take to recoup my transition expenses?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;1GB&lt;/strong&gt; size objects&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;1MB&lt;/strong&gt; size objects&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;200KB&lt;/strong&gt; size objects&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Transition to GIR:&lt;/td&gt;
&lt;td&gt;1M&lt;/td&gt;
&lt;td&gt;1000M&lt;/td&gt;
&lt;td&gt;5000M&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost of transitions (one off):&lt;/td&gt;
&lt;td&gt;20 USD&lt;/td&gt;
&lt;td&gt;20000 USD&lt;/td&gt;
&lt;td&gt;100000 USD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage saving (monthly):&lt;/td&gt;
&lt;td&gt;17000 USD&lt;/td&gt;
&lt;td&gt;17000 USD&lt;/td&gt;
&lt;td&gt;17000 USD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Recoup time:&lt;/td&gt;
&lt;td&gt;&amp;lt; 1 day&lt;/td&gt;
&lt;td&gt;1+ month&lt;/td&gt;
&lt;td&gt;6 months&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If my 1 PB consists of 1 GB objects, I’ll pay about $20 for the transition, while monthly storage savings would be approximately $17,000. However, as object sizes decrease, transition costs become significant. You might still choose to move the data, but it’s important to understand your upfront costs.&lt;/p&gt;
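&lt;p&gt;The recoup times in the table are easy to reproduce. A minimal sketch, assuming a lifecycle transition price of $0.02 per 1,000 requests (based on public Glacier Instant Retrieval pricing, worth re-checking) and taking the $17,000 monthly saving from the table above:&lt;/p&gt;

```python
# Recoup time for a one-off lifecycle transition of 1 PB.
# Assumptions: $0.02 per 1,000 transition requests (public Glacier
# Instant Retrieval pricing, worth double-checking) and the table's
# estimated $17,000/month storage saving for the whole petabyte.
TRANSITION_COST_PER_1000 = 0.02
MONTHLY_SAVING = 17000.0

def recoup_days(num_objects):
    """Days of storage savings needed to pay back the transition cost."""
    transition_cost = num_objects / 1000 * TRANSITION_COST_PER_1000
    return transition_cost / (MONTHLY_SAVING / 30)

# 1 PB as 1 GB, 1 MB, and 200 KB objects respectively.
for label, objects in [("1GB", 1_000_000),
                       ("1MB", 1_000_000_000),
                       ("200KB", 5_000_000_000)]:
    print(label, round(recoup_days(objects), 1), "days")
```

&lt;p&gt;This yields roughly 0.04, 35, and 176 days, matching the “less than 1 day”, “1+ month”, and “6 months” recoup times in the table.&lt;/p&gt;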

&lt;p&gt;The best way to estimate these costs is to use &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_basics_metrics_recommendations.html" rel="noopener noreferrer"&gt;S3 Storage Lens&lt;/a&gt;. Ideally, you want most small objects in the Standard class and larger objects in colder classes. Below is an example of a distribution for an almost 20 PB bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2vbl9qxffyr9a7zfd3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2vbl9qxffyr9a7zfd3f.png" alt="S3 Storage Lens" width="622" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A more detailed example was shared by Canva last year on the AWS Storage Blog: &lt;a href="https://aws.amazon.com/blogs/storage/how-canva-saves-over-3-million-annually-in-amazon-s3-costs/" rel="noopener noreferrer"&gt;How Canva saves over $3 million annually in Amazon S3 costs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What about S3 Intelligent-Tiering?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why develop your own logic when Amazon provides a managed class that handles this for you, without charging transition or retrieval fees? S3 Intelligent-Tiering is indeed the ideal choice for most use cases and the default choice for many workloads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m9ba6005xa4h9woprw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m9ba6005xa4h9woprw3.png" alt="Amazon S3 Intelligent-Tiering" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, it’s still important to understand how the S3 Intelligent-Tiering class works under the hood and why object size remains a key factor. Let’s start with the AWS &lt;a href="https://aws.amazon.com/s3/storage-classes/intelligent-tiering/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“For a&lt;/em&gt; &lt;strong&gt;&lt;em&gt;small monthly object monitoring and automation charge&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;S3 Intelligent-Tiering moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier for savings of 40%; and after 90 days of no access they’re moved to the Archive Instant Access tier with savings of 68%.&lt;/em&gt; &lt;strong&gt;&lt;em&gt;If the objects are accessed later S3 Intelligent-Tiering moves the objects back to the Frequent Access tier&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The bold emphasis is mine. Now, how much is the “small monthly object monitoring and automation charge”? What are the implications of moving objects back? Let’s revisit the example of 1 PB of data. How much would this small management fee cost?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;strong&gt;1GB&lt;/strong&gt; objects&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;1MB&lt;/strong&gt; objects&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;200KB&lt;/strong&gt; objects&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1M&lt;/td&gt;
&lt;td&gt;1000M&lt;/td&gt;
&lt;td&gt;5000M&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2.5 USD&lt;/td&gt;
&lt;td&gt;2500 USD&lt;/td&gt;
&lt;td&gt;12500 USD&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Once more, the answers depend on the average size of your objects. The management fee might be negligible, or it might cost you more each month than the storage itself.&lt;/p&gt;
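&lt;p&gt;For reference, the fee row in the table above follows directly from the monitoring rate of $0.0025 per 1,000 objects per month, an assumption based on public S3 Intelligent-Tiering pricing that is worth double-checking against the current price list:&lt;/p&gt;

```python
# Monthly S3 Intelligent-Tiering monitoring and automation charge,
# assuming the published rate of $0.0025 per 1,000 objects per month.
MONITORING_FEE_PER_1000 = 0.0025

def monthly_monitoring_fee(num_objects):
    return num_objects / 1000 * MONITORING_FEE_PER_1000

# 1 PB as 1 GB, 1 MB, and 200 KB objects respectively.
print(monthly_monitoring_fee(1_000_000))      # roughly 2.5 USD
print(monthly_monitoring_fee(1_000_000_000))  # roughly 2,500 USD
print(monthly_monitoring_fee(5_000_000_000))  # roughly 12,500 USD
```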

&lt;p&gt;With S3 Intelligent-Tiering, there are no retrieval fees, but a retrieved object is moved back to the most expensive Frequent Access tier (priced like the Standard class) for at least 30 days and then spends another 60 days in the Infrequent Access tier. Retrieving an object means you pay no retrieval fee, but you do pay more for its storage over the next 90 days. How much is that extra overhead?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;1GB&lt;/strong&gt; objects&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;1MB&lt;/strong&gt; objects&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;200KB&lt;/strong&gt; objects&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Intelligent-Tiering&lt;/td&gt;
&lt;td&gt;48.5 USD&lt;/td&gt;
&lt;td&gt;48.5 USD&lt;/td&gt;
&lt;td&gt;48.5 USD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GIR (once)&lt;/td&gt;
&lt;td&gt;30 USD&lt;/td&gt;
&lt;td&gt;30 USD&lt;/td&gt;
&lt;td&gt;30 USD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GIR (10x)&lt;/td&gt;
&lt;td&gt;300 USD&lt;/td&gt;
&lt;td&gt;300 USD&lt;/td&gt;
&lt;td&gt;300 USD&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here, the key metric is the number of times you expect to retrieve that specific (colder) object after the first retrieval. Should you not use the managed storage class? Far from it. S3 Intelligent-Tiering is an amazing option, but—as with any managed service—it’s critical to understand the cost implications. Your retrieval patterns and the average size of your objects matter.&lt;/p&gt;
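&lt;p&gt;The comparison in the table above can be sketched with a quick calculation. The prices are assumptions based on public us-east-1 list prices (Standard at $0.023 per GB-month, Infrequent Access at $0.0125 per GB-month, a $0.03 per GB retrieval fee for Glacier Instant Retrieval), for 1 TB of retrieved data counted here as 1,000 GB; the Intelligent-Tiering overhead is computed gross, without netting out the colder tier’s storage price:&lt;/p&gt;

```python
# Cost of retrieving 1 TB (counted as 1,000 GB) of cold data.
# Assumed prices (public us-east-1 list prices, worth re-checking):
STANDARD_PER_GB_MONTH = 0.023   # Frequent Access tier
IA_PER_GB_MONTH = 0.0125        # Infrequent Access tier
GIR_RETRIEVAL_PER_GB = 0.03     # Glacier Instant Retrieval fee

def intelligent_tiering_overhead(gb):
    # After an access, the data sits 30 days in Frequent Access,
    # then 60 days in Infrequent Access: one month at the Standard
    # price plus two months at the Infrequent Access price.
    return gb * (1 * STANDARD_PER_GB_MONTH + 2 * IA_PER_GB_MONTH)

def gir_retrieval_fee(gb, times):
    # Glacier Instant Retrieval charges per GB retrieved, every time.
    return gb * GIR_RETRIEVAL_PER_GB * times

print(round(intelligent_tiering_overhead(1000)))  # roughly 48 USD
print(round(gir_retrieval_fee(1000, 1)))          # roughly 30 USD
print(round(gir_retrieval_fee(1000, 10)))         # roughly 300 USD
```

&lt;p&gt;Once the same cold terabyte is retrieved about twice, the Glacier Instant Retrieval fees overtake the one-off Intelligent-Tiering storage overhead.&lt;/p&gt;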

&lt;p&gt;Whether you build your own logic with Lifecycle Rules or delegate it to the S3 Intelligent-Tiering class, the average size of your objects on S3 is a significant factor in your storage costs. Move larger objects first; that’s where most of the storage savings lie.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>costoptimization</category>
      <category>s3</category>
    </item>
    <item>
      <title>Amazon Aurora is Now 60 Times Faster than RDS for MySQL. Really.</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Wed, 26 Jul 2023 16:52:06 +0000</pubDate>
      <link>https://dev.to/aws-heroes/amazon-aurora-is-now-60-times-faster-than-rds-for-mysql-really-3c0e</link>
      <guid>https://dev.to/aws-heroes/amazon-aurora-is-now-60-times-faster-than-rds-for-mysql-really-3c0e</guid>
      <description>&lt;p&gt;Today I will perform &lt;del&gt;a simple test&lt;/del&gt; a proper benchmark, comparing Aurora MySQL and RDS for MySQL. &lt;/p&gt;

&lt;p&gt;Unlike traditional benchmarks that are lengthy and complex, this evaluation will be concise, comprising only about 200 words and taking just a couple of minutes to review the results.&lt;/p&gt;

&lt;p&gt;First, let’s create two instances, using the AWS CLI. To ensure a fair comparison, we will create two instances using default values. Both instances will have the same class and size, db.t4g.medium, and will be located in the same AWS region, Frankfurt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Aurora

$ aws rds create-db-cluster --db-cluster-identifier benchmark --engine aurora-mysql --engine-version 8.0 --master-username renato --master-user-password ******** --db-subnet-group-name renato --vpc-security-group-ids sg-030c9f25422a13fea
$ aws rds create-db-instance --db-instance-identifier benchmark --db-cluster-identifier benchmark --engine aurora-mysql --db-instance-class db.t4g.medium

# RDS

$ aws rds create-db-instance --db-instance-identifier benchmark-rds --engine mysql --db-instance-class db.t4g.medium --master-username renato --master-user-password ******** --allocated-storage 100 --vpc-security-group-ids sg-030c9f25422a13fea

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The benchmark process will involve creating a database, a table, and a MySQL procedure on both endpoints. This procedure will simply insert 100K records into the table, generating random values to introduce some data variety.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE renato;
USE renato;

CREATE TABLE renato
(id bigint(20) NOT NULL AUTO_INCREMENT,
datetime TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP,
value float DEFAULT NULL,
PRIMARY KEY (id));

DELIMITER $$
CREATE PROCEDURE load_renato()
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i &amp;lt; 100000 DO
    INSERT INTO renato (datetime,value) VALUES (
      FROM_UNIXTIME(UNIX_TIMESTAMP('2023-07-01 01:00:00')+FLOOR(RAND()*31536000)),
      ROUND(RAND()*100,2));
    SET i = i + 1;
  END WHILE;
END$$
DELIMITER ;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we will call the procedure on both databases. Remember: default configuration, same region, same class, same size. No networking is involved. No magic. No tricks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Aurora

mysql&amp;gt; call load_renato;
Query OK, 1 row affected (5,41 sec)

mysql&amp;gt; select count(*) from renato;
+----------+
| count(*) |
+----------+
| 100000 |
+----------+

# RDS

mysql&amp;gt; call load_renato;
Query OK, 1 row affected (5 min 20,41 sec)

mysql&amp;gt; select count(*) from renato;
+----------+
| count(*) |
+----------+
| 100000 |
+----------+

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The results are striking: Amazon Aurora outperforms RDS significantly, completing the task in just 5.4 seconds compared to RDS’s 5 minutes and 20 seconds. This remarkable 60x difference clearly demonstrates the superiority of Aurora in this specific scenario.&lt;/p&gt;

&lt;p&gt;WOW, you are kidding me!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsukblwjimf81qjwaboud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsukblwjimf81qjwaboud.png" alt="Write IOPS, very different range" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just remember this simple test every time you read “50% faster”, “47% faster and costing up to 43% less” or “127% better IOPS”. This benchmark is just a reminder that real-world tests are crucial to understanding the true capabilities of a database.&lt;/p&gt;

&lt;p&gt;As they used to say, there are three kinds of lies: lies, damned lies, and &lt;del&gt;statistics&lt;/del&gt; database benchmarks.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>performance</category>
      <category>mysql</category>
      <category>database</category>
    </item>
    <item>
      <title>What Do Cloud Services and Jam Have in Common?</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Tue, 25 Apr 2023 10:38:23 +0000</pubDate>
      <link>https://dev.to/aws-heroes/what-do-cloud-services-and-jam-have-in-common-1d7e</link>
      <guid>https://dev.to/aws-heroes/what-do-cloud-services-and-jam-have-in-common-1d7e</guid>
      <description>&lt;p&gt;In this short video I talk about the similarities between cloud services and jam. Yes, jam. &lt;/p&gt;

&lt;p&gt;I explain how cloud services offer a variety of options for users to choose from, but too many options can lead to decision paralysis. Yes, the famous jam experiment.  &lt;/p&gt;

</description>
      <category>jokes</category>
      <category>beginners</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>Unlocking the Secrets of the Magic Number 35</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Tue, 14 Mar 2023 15:56:29 +0000</pubDate>
      <link>https://dev.to/aws-heroes/unlocking-the-secrets-of-the-magic-number-35-4lk0</link>
      <guid>https://dev.to/aws-heroes/unlocking-the-secrets-of-the-magic-number-35-4lk0</guid>
      <description>&lt;p&gt;When you run a database on RDS, regardless of the engine you use, you can set the automatic backup retention period to any number between 0 and 35 days. Running a MySQL 8.0? You can do a point in time up to 35 days in the past. Not 40 days, not 50 days, 35 days is the maximum. Oracle? 35 days. SQL Server? &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html" rel="noopener noreferrer"&gt;The continuous backups have a maximum retention of 35 days&lt;/a&gt;.   &lt;/p&gt;

&lt;p&gt;But why 35? What is so special about the number 35?&lt;/p&gt;

&lt;p&gt;This is a hard limit designed for automated backups and point-in-time recovery, not for archiving data for the long term. And 35 days has been the maximum for over 10 years, since AWS increased the backup retention period from a maximum of 8 days in 2012. Jeff Barr &lt;a href="https://aws.amazon.com/blogs/aws/relational-database-service-increased-snapshot-retention-period/" rel="noopener noreferrer"&gt;wrote at that time&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;After fielding a number of customer requests, we have increased the maximum retention period from 8 days to 35 days. A number of our customers have asked for this new limit in order to retain at least one month’s worth of backups for compliance purposes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f1cj9uelm5cxsxpjb8t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f1cj9uelm5cxsxpjb8t.png" alt="RDS goes from 8 to 35 days" width="440" height="550"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Source: &lt;a href="https://aws.amazon.com/blogs/aws/relational-database-service-increased-snapshot-retention-period/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/relational-database-service-increased-snapshot-retention-period/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AWS might have the &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/two-pizza-teams.html" rel="noopener noreferrer"&gt;two-pizza team rule&lt;/a&gt;, but the number quickly became the limit in other services too, not just RDS.&lt;/p&gt;

&lt;p&gt;Point-in-time recovery for DynamoDB? You can restore that table to any point in time during the last 35 days. What about Aurora? You cannot disable automated backups on Aurora, but you can choose any number as long as it is between 1 and 35. &lt;a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore.html" rel="noopener noreferrer"&gt;DocumentDB&lt;/a&gt;? It continuously backs up your data to S3 for 1 to 35 days. The number is &lt;em&gt;now a de-facto standard,&lt;/em&gt; not only on AWS. &lt;/p&gt;

&lt;p&gt;What about other cloud providers? &lt;a href="https://learn.microsoft.com/en-us/azure/backup/tutorial-sql-backup" rel="noopener noreferrer"&gt;Azure has 35 days&lt;/a&gt;: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Log backups can occur as often as every 15 minutes and can be retained for up to 35 days.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What about Google? AlloyDB recently &lt;a href="https://cloud.google.com/alloydb/docs/backup/overview" rel="noopener noreferrer"&gt;announced&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Continuous backups have a retention period of between 1 and 35 days, depending upon the configuration that you choose when setting up this feature. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And other services on GCP and Azure match the 35 days too. But why is everyone on 35? Why not 30? Or 33 days? We can only guess:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once a service uses 35, it is easier to converge on one value. For all the services of the same provider at least. Even Azure confirmed that when they increased the value for an existing service: &lt;em&gt;“This change reflects our goal of providing a consistent user experience across service tiers”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;To retain at least one month’s worth of backups for compliance purposes, you need to consider the longest month (31 days) and add a buffer for a (long) weekend that would cover an 8x5 monitoring and support scenario. Not everyone is on 24x7. &lt;/li&gt;
&lt;li&gt;35 is a &lt;a href="https://en.wikipedia.org/wiki/Tetrahedral_number" rel="noopener noreferrer"&gt;tetrahedral &lt;/a&gt;number. I am sure it matters.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Do you have a better explanation for the magic 35? Let me know below or on &lt;a href="https://www.linkedin.com/in/rlosio/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>A new CRT HTTP Client, a Reference Architecture for Deployment Pipelines and the VPC Resource Map</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Tue, 28 Feb 2023 10:31:37 +0000</pubDate>
      <link>https://dev.to/aws-heroes/a-new-crt-http-client-a-reference-architecture-for-deployment-pipelines-and-the-vpc-resource-map-5677</link>
      <guid>https://dev.to/aws-heroes/a-new-crt-http-client-a-reference-architecture-for-deployment-pipelines-and-the-vpc-resource-map-5677</guid>
      <description>&lt;p&gt;As an editor for InfoQ, every month I follow and (try to) cover the most interesting news in the cloud space. From VPC Resource Map to the new CRT HTTP Client in AWS SDK for Java 2.x, below are the announcements and news in the AWS space that caught my attention in February. You can read the full articles on InfoQ.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Adds VPC Resource Map to Simplify Management of Virtual Networks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The VPC section of the AWS Management Console now provides &lt;a href="https://www.infoq.com/news/2023/02/vpc-resource-map/?itm_source=infoq&amp;amp;itm_campaign=user_page&amp;amp;itm_medium=link" rel="noopener noreferrer"&gt;visualization of VPC resources&lt;/a&gt;, such as the relationships between a VPC and its subnets, routing tables, and gateways. The map displays existing VPC resources and their routing on a single page, allowing a better understanding of the networking layout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Publishes Reference Architecture and Implementations for Deployment Pipelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS recently released a &lt;a href="https://www.infoq.com/news/2023/02/aws-deployment-pipelines/?itm_source=infoq&amp;amp;itm_campaign=user_page&amp;amp;itm_medium=link" rel="noopener noreferrer"&gt;reference architecture&lt;/a&gt; and a set of reference implementations for deployment pipelines. The recommended architectural patterns are based on best practices and lessons collected at Amazon and in customer projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faster Startup Time and Lower Memory Usage: New CRT HTTP Client in AWS SDK for Java&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS recently announced the &lt;a href="https://www.infoq.com/news/2023/02/aws-sdk-java-crt-client/?itm_source=infoq&amp;amp;itm_campaign=user_page&amp;amp;itm_medium=link" rel="noopener noreferrer"&gt;general availability of the Common Runtime (CRT) HTTP Client in the AWS SDK for Java 2.x&lt;/a&gt;. The new asynchronous client provides faster SDK startup time and a smaller memory footprint improving Lambda serverless workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Patches Undocumented APIs Bypassing CloudTrail Event Logging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS recently &lt;a href="https://www.infoq.com/news/2023/02/aws-undocumented-apis/?itm_source=infoq&amp;amp;itm_campaign=user_page&amp;amp;itm_medium=link" rel="noopener noreferrer"&gt;patched undocumented IAM APIs&lt;/a&gt; that bypassed CloudTrail logging. The vulnerability allowed a malicious user to perform reconnaissance activities on IAM without recording events in CloudTrail or being detected by Amazon GuardDuty.&lt;/p&gt;

&lt;p&gt;To read my other articles on InfoQ, &lt;a href="https://www.infoq.com/profile/Renato-Losio/" rel="noopener noreferrer"&gt;follow me&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>inspiration</category>
      <category>gratitude</category>
      <category>quotes</category>
    </item>
    <item>
      <title>My Journey Through the AWS Blog: Lessons Learned and Random Statistics</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Thu, 26 Jan 2023 20:41:03 +0000</pubDate>
      <link>https://dev.to/aws-heroes/my-journey-through-the-aws-blog-lessons-learned-and-random-statistics-31o2</link>
      <guid>https://dev.to/aws-heroes/my-journey-through-the-aws-blog-lessons-learned-and-random-statistics-31o2</guid>
      <description>&lt;p&gt;Last week I checked the (primary) &lt;a href="https://aws.amazon.com/blogs/aws/" rel="noopener noreferrer"&gt;AWS blog&lt;/a&gt;. I went through all the articles from the &lt;a href="https://aws.amazon.com/blogs/aws/welcome/" rel="noopener noreferrer"&gt;welcome post in 2004&lt;/a&gt; until the very last one. I did not read all of them. I just collected some data using Python and a Bash script.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Random Numbers
&lt;/h2&gt;

&lt;p&gt;Let’s start with some random numbers. As of today, there are &lt;strong&gt;3925 posts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The year with the most posts? &lt;strong&gt;2015&lt;/strong&gt;, with 320 posts, almost one a day. AWS then realized more “secondary” blogs were needed. You now have more content to read, but spread across multiple blogs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2F1_wmL0OVAEj-Z295848PN24yLgGwOUDzMf3BAOweltXWIyGTLJgSseZmZ95VZsXYrGtzssOB71TxxfdM33KRkV4jTMiaE3rNuJQ0ANj9a9OpzkvQNoOBdU6By9C_UaXTCHLm4eKnVlfuZdXdpoRaDG_aJSakScee97mXBaqvzO5Q41yEDkuDQkYLnnBF-g" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2F1_wmL0OVAEj-Z295848PN24yLgGwOUDzMf3BAOweltXWIyGTLJgSseZmZ95VZsXYrGtzssOB71TxxfdM33KRkV4jTMiaE3rNuJQ0ANj9a9OpzkvQNoOBdU6By9C_UaXTCHLm4eKnVlfuZdXdpoRaDG_aJSakScee97mXBaqvzO5Q41yEDkuDQkYLnnBF-g" alt="posts versus years" width="596" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Top month? Well, &lt;strong&gt;November&lt;/strong&gt; of course, 571 posts.  The re:Invent effect, with significant spikes in the last few years. Quieter month? &lt;strong&gt;February&lt;/strong&gt; with 229.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FYqUc6IZgKvWI9oKgZapRCU1hvWb5FO_VNrs97g67eWtpexSACU4Ffhl5Eed6vbvDi6yeTp6bhE1-lpbK6XrBuhXnUmzVplTy2-JnCE-qUyc9IEjxrT8Uqj8lMlPEs9tXXrmzCH5m_BClPmbIT0NFIVBAmCxpl0IgpxfbZlSetJxi5V2vfkb3M3aaCwX65g" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FYqUc6IZgKvWI9oKgZapRCU1hvWb5FO_VNrs97g67eWtpexSACU4Ffhl5Eed6vbvDi6yeTp6bhE1-lpbK6XrBuhXnUmzVplTy2-JnCE-qUyc9IEjxrT8Uqj8lMlPEs9tXXrmzCH5m_BClPmbIT0NFIVBAmCxpl0IgpxfbZlSetJxi5V2vfkb3M3aaCwX65g" alt="posts versus months" width="591" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Plotting the month-by-month numbers over the years, we can see how the role of the primary AWS blog has shifted and how it is impacted by announcements in Vegas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FReeLa3PNtcLTGgPwAJT0avAHpNkJJfwNaPGNke7YIyahrJaH5x1iGamrPTsA5Hr08ON9QthUBYPaWDqmFjm9CssUXUs14PLIavXEd270D8ModYOenqCzCZUUYATx7NplxPfCcubucagWgh4crKaJEkVv9GsL_H-5ZdKcODCurYR6imNzwr809RZBKUDRAQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FReeLa3PNtcLTGgPwAJT0avAHpNkJJfwNaPGNke7YIyahrJaH5x1iGamrPTsA5Hr08ON9QthUBYPaWDqmFjm9CssUXUs14PLIavXEd270D8ModYOenqCzCZUUYATx7NplxPfCcubucagWgh4crKaJEkVv9GsL_H-5ZdKcODCurYR6imNzwr809RZBKUDRAQ" alt="monthly trends" width="912" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The top day ever? A re:Invent day (&lt;a href="https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-tuesday-december-3rd/" rel="noopener noreferrer"&gt;December 3rd, 2019&lt;/a&gt;) with &lt;strong&gt;27 articles.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Top author? &lt;strong&gt;Jeff Barr&lt;/strong&gt;, of course. He appears as the author of 80% of the posts (3136), but this figure includes a small number of guest posts, as well as posts in earlier years that came from other authors but were published by Jeff himself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frugality or Complexity?
&lt;/h2&gt;

&lt;p&gt;I love discussing useless numbers, but you might wonder: what was I looking for?&lt;/p&gt;

&lt;p&gt;Triggered by Jeff Barr’s recent &lt;em&gt;“&lt;/em&gt;&lt;a href="https://www.linkedin.com/pulse/leadership-principles-aws-news-bloggers-jeff-barr/?trackingId=%2FdaawTaySzC%2FrtXvk4gavQ%3D%3D" rel="noopener noreferrer"&gt;&lt;em&gt;Leadership Principles for AWS News Bloggers&lt;/em&gt;&lt;/a&gt;&lt;em&gt;”,&lt;/em&gt; I was curious about how much longer (and usually more complex) posts have become over the years.&lt;/p&gt;

&lt;p&gt;As an &lt;a href="https://www.infoq.com/profile/Renato-Losio/#news" rel="noopener noreferrer"&gt;InfoQ editor&lt;/a&gt; and as a cloud architect, I struggle to keep myself up-to-date even with just the biggest announcements and the main AWS blog alone. Jeff Barr writes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Be frugal with your own time and with that of your readers (…) Keep posts concise and don’t spend the readers’ time unnecessarily. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without any complex analysis, I just checked the length of each page in bytes, subtracting the fixed size of the page’s look and feel. With a few exceptions, including a &lt;a href="https://aws.amazon.com/blogs/aws/transcript_of_j/" rel="noopener noreferrer"&gt;&lt;em&gt;Transcript of Jeff Barr’s Web 2.0 Interview&lt;/em&gt;&lt;/a&gt;, all the longest articles are very recent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FcL5msgVDU0H381rfchxv0KYL9YPc2l-mVTMBn22xE8zCNUJq9qN-cp0pr5VrUDCHgjiGpfalm5nZ6OuLvhF4TjZ0TmlpvWB5K1v03aUtqiNCv98C3yRk52jgtx6eg4wr3aDx_cKGQZSRLDDYMioGTZp2NA4RvtcZYwwCCPuVP9vMRiBkQ6KDCfyP3pTtPQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FcL5msgVDU0H381rfchxv0KYL9YPc2l-mVTMBn22xE8zCNUJq9qN-cp0pr5VrUDCHgjiGpfalm5nZ6OuLvhF4TjZ0TmlpvWB5K1v03aUtqiNCv98C3yRk52jgtx6eg4wr3aDx_cKGQZSRLDDYMioGTZp2NA4RvtcZYwwCCPuVP9vMRiBkQ6KDCfyP3pTtPQ" alt="length of the posts" width="584" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Posts are getting longer&lt;/strong&gt;. Are the authors not frugal enough? The platform gets more and more updates and features, increasing the complexity of the announcements too.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3 and Route53
&lt;/h2&gt;

&lt;p&gt;Let’s compare a couple of articles, from the early times and from recent posts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-route-53-application-recovery-controller/" rel="noopener noreferrer"&gt;Introducing Amazon Route 53 Application Recovery Controller&lt;/a&gt; &lt;em&gt;(2021)&lt;/em&gt;&lt;br&gt;&lt;br&gt;
Almost &lt;strong&gt;3000 words&lt;/strong&gt; and over 30 screenshots and diagrams are required to describe how the new feature of Route53 works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-route-53-the-aws-domain-name-service/" rel="noopener noreferrer"&gt;Amazon Route 53 – The AWS Domain Name Service&lt;/a&gt; &lt;em&gt;(2010)&lt;/em&gt;&lt;br&gt;&lt;br&gt;
About &lt;strong&gt;600 words&lt;/strong&gt;, not a single screenshot, and a rudimentary Route53 logo were enough to announce the service and its main features.&lt;/p&gt;

&lt;p&gt;What about an even shorter post?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon_s3/" rel="noopener noreferrer"&gt;Amazon S3&lt;/a&gt; &lt;em&gt;(2006)&lt;/em&gt;&lt;br&gt;&lt;br&gt;
Just over &lt;strong&gt;200 words&lt;/strong&gt;, one of the shortest posts ever, was enough for arguably one of the most important announcements in the history of AWS, Amazon S3. Jeff Barr had (literally) a plane to catch:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I’ve got to catch a plane to Silicon Valley in a few minutes, or I’d write a lot more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-s3-encrypts-new-objects-by-default/" rel="noopener noreferrer"&gt;Amazon S3 Encrypts New Objects By Default&lt;/a&gt; (2023)&lt;br&gt;&lt;br&gt;
By comparison, the most recent announcement about S3, the change in the default state of a checkbox, required four times as many words.&lt;/p&gt;

&lt;p&gt;I wonder how long an &lt;a href="https://aws.amazon.com/blogs/aws/s3_roundup/" rel="noopener noreferrer"&gt;S3 roundup&lt;/a&gt; would be 17 years later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Long gone are the days when deploying a web application relied solely on EC2, RDS, and S3. AWS now claims &lt;em&gt;“over 200 fully featured services from data centers globally”&lt;/em&gt;, each with multiple configurations, pricing models, and deployment options.&lt;/p&gt;

&lt;p&gt;There’s never been a better time to be a developer. But there’s never been a harder time to make the correct choices. Even following just the primary AWS blog requires a lot of effort, and it is only one of the many blogs offered by the cloud provider.&lt;/p&gt;

&lt;p&gt;There is nothing wrong with the authors; I am sure they try to be as frugal as possible. But it is getting harder to keep ourselves up to date.&lt;/p&gt;

&lt;p&gt;Enough, we are at just over 700 words already.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://cloudiamo.com/2023/01/26/my-journey-through-the-aws-blog-lessons-learned-and-random-statistics/" rel="noopener noreferrer"&gt;My Journey Through the AWS Blog: Lessons Learned and Random Statistics&lt;/a&gt; appeared first on &lt;a href="https://cloudiamo.com" rel="noopener noreferrer"&gt;cloudiamo.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>productivity</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>The Best AWS re:Invent Ever? Looking Back at Eleven Editions</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Fri, 13 Jan 2023 11:00:39 +0000</pubDate>
      <link>https://dev.to/aws-heroes/the-best-aws-reinvent-ever-looking-back-at-eleven-editions-2mpg</link>
      <guid>https://dev.to/aws-heroes/the-best-aws-reinvent-ever-looking-back-at-eleven-editions-2mpg</guid>
<description>&lt;p&gt;Not a year goes by without my reading many &lt;em&gt;“This was the best re:Invent ever!”&lt;/em&gt; comments, posts, or articles in the weeks after the AWS conference in Las Vegas. While that may be true in terms of fun, the number of announcements, or personal networking, I tend to think we are biased toward the latest edition.&lt;/p&gt;

&lt;p&gt;What was the biggest re:Invent announcement ever? Keep in mind that crucial services like EC2, EBS, RDS, S3, SQS, or IAM were available well before the first edition. Even DynamoDB was announced at the beginning of 2012, before any AWS global conference.&lt;/p&gt;

&lt;p&gt;Let’s look back and see which services have stood the test of time, starting from the first edition in 2012, highlighting one announcement per year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2012&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the very first re:Invent, AWS previewed the data warehouse service &lt;strong&gt;Amazon Redshift&lt;/strong&gt;. Here is the original &lt;a href="https://aws.amazon.com/blogs/aws/amazon-redshift-the-new-aws-data-warehouse/" rel="noopener noreferrer"&gt;post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2013&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS CloudTrail&lt;/strong&gt; was announced to address audit requirements. Maybe not an exciting service, but I cannot think of another one so widespread and used almost ten years later. A &lt;a href="https://aws.amazon.com/about-aws/whats-new/2013/11/13/announcing-aws-cloudtrail/" rel="noopener noreferrer"&gt;few lines&lt;/a&gt; were enough for the announcement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffownvqgump810cj225ei.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffownvqgump810cj225ei.jpg" alt="Las Vegas" width="800" height="536"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by David Vives on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2014&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, there were &lt;a href="https://aws.amazon.com/blogs/developer/aws-reinvent-2014-recap-2/" rel="noopener noreferrer"&gt;Amazon Aurora&lt;/a&gt;, ECS, AWS Config, and AWS Key Management Service. Who cares? Nothing else matters: this was the year of &lt;strong&gt;AWS Lambda&lt;/strong&gt;. You can go back in time &lt;a href="https://aws.amazon.com/about-aws/whats-new/2014/11/13/introducing-aws-lambda/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2015&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kinesis Firehose? QuickSight? AWS Mobile Hub? Maybe Amazon Inspector? I find it hard to highlight a &lt;a href="https://aws.amazon.com/blogs/developer/aws-reinvent-2015-and-more/" rel="noopener noreferrer"&gt;single one that stands out&lt;/a&gt;. I vote for &lt;strong&gt;AWS Snowball&lt;/strong&gt;, the rugged appliance for efficient data storage and transfer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2016&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the year of the first so-called Amazon AI services, including the Rekognition image recognition service and the Amazon Lex conversational service. But if I have to choose &lt;a href="https://press.aboutamazon.com/2016/11/aws-launches-amazon-athena" rel="noopener noreferrer"&gt;one announcement&lt;/a&gt;, I would say &lt;strong&gt;Amazon Athena&lt;/strong&gt;, today still more popular than the Snowmobile truck that appeared on stage. &lt;a href="https://www.geekwire.com/2016/use-amazons-snowball-snowballs-unleashes-45-foot-truck-model/" rel="noopener noreferrer"&gt;Really&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2017&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There were Alexa for Business, the Serverless Application Repository, and AWS Cloud9. Aurora Serverless (now v1) and Amazon Neptune were interesting, but it does not matter. This was the year of &lt;a href="https://medium.com/slalom-technology/aws-re-invent-2017-recap-ab343983751" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, &lt;strong&gt;EKS, and Fargate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2018&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Quantum Ledger Database? Amazon Managed Blockchain? Global Accelerator? I have to highlight &lt;strong&gt;S3 Intelligent Tiering&lt;/strong&gt;. Yes, not a new service, but a &lt;a href="https://aws.amazon.com/about-aws/whats-new/2018/11/s3-intelligent-tiering/" rel="noopener noreferrer"&gt;paradigm shift&lt;/a&gt; for the object storage service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci5gek233wkgbi2pz6dr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci5gek233wkgbi2pz6dr.jpg" alt="Las Vegas" width="800" height="531"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Grant Cai on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2019&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It starts to get harder to pick one announcement. There are too many, and it is too early to assess their long-term impact. Tempted by Amazon SageMaker Studio or Amazon EventBridge, I vote for the &lt;a href="https://aws.amazon.com/blogs/aws/aws-now-available-from-a-local-zone-in-los-angeles/" rel="noopener noreferrer"&gt;new type of infrastructure deployment&lt;/a&gt;, the &lt;strong&gt;Local Zone&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2020&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My very first &lt;a href="https://www.infoq.com/news/2019/12/aws-reinvent-2019/" rel="noopener noreferrer"&gt;recap of the conference for InfoQ&lt;/a&gt;, a subdued, virtual-only but free edition. As an AWS Data Hero and a MySQL enthusiast, I am biased to highlight the preview of Aurora Serverless v2. I had higher hopes for AWS Fault Injection Simulator, but I will &lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/12/introducing-aws-cloudshell/" rel="noopener noreferrer"&gt;single out&lt;/a&gt; (yes, no kidding) &lt;strong&gt;CloudShell&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2021&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Back in person in Las Vegas, this was the year of the &lt;strong&gt;Graviton3&lt;/strong&gt; chip, with &lt;a href="https://www.infoq.com/news/2021/12/recap-reinvent-2021/" rel="noopener noreferrer"&gt;more instances still to come&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2022&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are &lt;a href="https://www.infoq.com/news/2022/12/recap-reinvent-2022/" rel="noopener noreferrer"&gt;finally at 2022&lt;/a&gt;. What will be the key announcement that lasts long term? Who knows: SimSpace Weaver, &lt;a href="https://www.infoq.com/news/2022/12/aws-clean-rooms/" rel="noopener noreferrer"&gt;Clean Rooms&lt;/a&gt;, CodeCatalyst, Amazon Omics? My first feeling is that nothing will match the early editions, but I vote for &lt;strong&gt;Security Lake&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Was 2022 the best re:Invent ever? What is your favorite edition or announcement? Let me know your thoughts and &lt;a href="https://www.linkedin.com/in/rlosio/" rel="noopener noreferrer"&gt;follow me&lt;/a&gt; for more posts.&lt;/p&gt;

&lt;p&gt;The article appeared first on &lt;a href="https://cloudiamo.com" rel="noopener noreferrer"&gt;cloudiamo.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>AWSome December, from OpenSearch Serverless to Application Composer</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Sun, 01 Jan 2023 09:30:00 +0000</pubDate>
      <link>https://dev.to/aws-heroes/awsome-december-from-opensearch-serverless-to-application-composer-2he2</link>
      <guid>https://dev.to/aws-heroes/awsome-december-from-opensearch-serverless-to-application-composer-2he2</guid>
      <description>&lt;p&gt;As an editor for InfoQ, every month I follow and (try to) cover the most interesting news in the cloud space. From Blue/Green Deployments for RDS to Clean Rooms, below are the announcements and news in the AWS space that caught my attention in December. You can read the full articles on &lt;a href="https://www.infoq.com/profile/Renato-Losio/#news" rel="noopener noreferrer"&gt;InfoQ&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Announces GA of DocumentDB Elastic Clusters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the recent re:Invent conference, AWS announced the general availability of DocumentDB Elastic Clusters, a service that manages elasticity and sharding for MongoDB-compatible workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Previews VPC Lattice for Service-to-Service Communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To simplify networking for service-to-service communication, AWS recently announced the preview of Amazon VPC Lattice. The new capability of Virtual Private Cloud (VPC) abstracts network complexity and creates a logical application layer network that connects clients and services across different VPCs and accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Previews Application Composer to Visualize and Create Serverless Workloads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the recent re:Invent conference, AWS announced the preview of Application Composer, a visual designer to build serverless applications from multiple AWS services. The new option helps create the architecture by dragging, grouping, and connecting services in a visual canvas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Announces Clean Rooms for Secure Collaboration with Analytics Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During the recent re:Invent conference, AWS announced the preview of Clean Rooms for analytics data. The new service provides safe environments where multiple customers can securely share and analyze data with control of how the data is used, reducing the risk of sharing personal data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Announces Preview of OpenSearch Serverless&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS recently announced the preview of OpenSearch Serverless, a new option of the OpenSearch service that automatically provisions and scales the resources for data ingestion and query responses. The minimum capacity required for the serverless option raised some concerns in the community.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Announces Blue/Green Deployments for MySQL on Aurora and RDS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the beginning of the re:Invent conference, AWS announced the general availability of RDS Blue/Green Deployments, a new feature for Aurora with MySQL compatibility, RDS for MySQL, and RDS for MariaDB to perform blue/green database updates.&lt;/p&gt;

&lt;p&gt;To read my other articles on InfoQ, &lt;a href="https://www.infoq.com/profile/Renato-Losio/#news" rel="noopener noreferrer"&gt;follow me&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>indonesia</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>Blue/Green Deployments for RDS: How Fast is a Switchover? </title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Tue, 13 Dec 2022 11:46:23 +0000</pubDate>
      <link>https://dev.to/aws-heroes/bluegreen-deployments-for-rds-how-fast-is-a-switchover-48bf</link>
      <guid>https://dev.to/aws-heroes/bluegreen-deployments-for-rds-how-fast-is-a-switchover-48bf</guid>
      <description>&lt;p&gt;AWS recently announced the general availability of &lt;a href="https://www.infoq.com/news/2022/12/aws-rds-aurora-blue-green/" rel="noopener noreferrer"&gt;RDS Blue/Green Deployments&lt;/a&gt;, a new feature for RDS and Aurora to perform blue/green database updates. One of the aspects that caught my eye is how fast a switchover is. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In as fast as a minute, you can promote the staging environment to be the new production environment with no data loss.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s give it a try!  How can we check how long a switchover really takes? We will perform the simplest test, using only a MySQL client and a while loop in Bash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while true; do mysql -s -N -h renato-cluster.***.eu-west-1.rds.amazonaws.com -u renato 
-e "select now()"; sleep 1; done

(...)
2022-12-09 07:58:55
2022-12-09 07:58:56
2022-12-09 07:58:57
2022-12-09 07:58:58
(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We ask the RDS server for the current date and time every second and check what happens when we trigger a switchover. We will compare the results with a reboot and with a reboot with failover. To perform the test, we will use an m6g.large Multi-AZ instance with some production traffic, a cluster where the &lt;em&gt;Seconds_Behind_Master&lt;/em&gt; value of a replica hardly goes above a single digit.&lt;/p&gt;
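&lt;p&gt;The observed downtime is simply the gap in the timestamp log: the difference between the last value printed before the interruption and the first one printed after it. As a minimal sketch (assuming GNU &lt;code&gt;date&lt;/code&gt;, with two illustrative timestamps rather than live output), the gap can be computed like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Downtime = seconds between the last timestamp before the gap
# and the first one after it (example values, not live output)
before="2022-12-09 08:00:03"
after="2022-12-09 08:00:23"
gap=$(( $(date -d "$after" +%s) - $(date -d "$before" +%s) ))
echo "downtime: ${gap} seconds"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;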

&lt;h2&gt;
  
  
  Rebooting an instance
&lt;/h2&gt;

&lt;p&gt;Using the CLI, we can perform a reboot of an RDS instance without forcing a failover of the database server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws rds reboot-db-instance --db-instance-identifier renato-cluster --no-force-failover

(...)
2022-12-09 08:00:01
2022-12-09 08:00:02
2022-12-09 08:00:03
2022-12-09 08:00:23
2022-12-09 08:00:24
2022-12-09 08:00:25
(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is just about &lt;strong&gt;20 seconds&lt;/strong&gt; for a reboot of an RDS instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rebooting with failover
&lt;/h2&gt;

&lt;p&gt;We can now repeat the test forcing a failover:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws rds reboot-db-instance --db-instance-identifier renato-cluster --force-failover

(...)
2022-12-09 08:01:11
2022-12-09 08:01:12
2022-12-09 08:01:13
2022-12-09 08:03:28
2022-12-09 08:03:29
2022-12-09 08:03:30
(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time it takes longer, including the time required to switch the CNAME of our database to the new IP address: &lt;strong&gt;2 minutes and 15 seconds&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Switching a blue/green deployment
&lt;/h2&gt;

&lt;p&gt;Finally, we can trigger a switchover in our blue/green deployment calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws rds switchover-blue-green-deployment --blue-green-deployment-identifier renato-cluster-bg

(...)
2022-12-09 08:06:59
2022-12-09 08:07:00
2022-12-09 08:07:01
2022-12-09 08:08:07
2022-12-09 08:08:08
2022-12-09 08:08:09
(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The switchover time of the RDS Blue/Green Deployments is just about a minute: 1 minute and 6 seconds. &lt;/p&gt;

&lt;h2&gt;
  
  
  Recap
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Reboot: 20 seconds&lt;/li&gt;
&lt;li&gt;Reboot with failover: 135 seconds&lt;/li&gt;
&lt;li&gt;Switchover: 66 seconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The switchover of an RDS for MySQL instance is “as fast as a minute” and significantly faster than a reboot with failover of a Multi-AZ instance. Of course, the switchover can take significantly longer depending on the write load of your production database. This was only a basic test to validate the minimum downtime of a switchover.&lt;/p&gt;

&lt;p&gt;Questions? Comments? &lt;a href="https://www.linkedin.com/in/rlosio/" rel="noopener noreferrer"&gt;Contact or follow me&lt;/a&gt;.&lt;br&gt;
Originally published on &lt;a href="https://cloudiamo.com/" rel="noopener noreferrer"&gt;cloudiamo.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>aws</category>
    </item>
    <item>
      <title>My 7 Favorite Announcements from AWS re:Invent 2022</title>
      <dc:creator>Renato Losio 💭💥</dc:creator>
      <pubDate>Mon, 05 Dec 2022 14:27:01 +0000</pubDate>
      <link>https://dev.to/aws-heroes/my-7-favourite-announcements-from-aws-reinvent-2022-4ao0</link>
      <guid>https://dev.to/aws-heroes/my-7-favourite-announcements-from-aws-reinvent-2022-4ao0</guid>
      <description>&lt;p&gt;Were you overwhelmed by 100s of announcements at re:Invent last week? Here is the ultimate list of &lt;strong&gt;the 7 most important ones&lt;/strong&gt; 💥.&lt;/p&gt;

&lt;p&gt;Just kidding, this is only my very personal selection (as a cloud architect and an AWS Data Hero) of the announcements that I found curious, useful, or a significant step forward 🙌. &lt;/p&gt;

&lt;p&gt;You can find my full recap of re:Invent 2022 on &lt;a href="https://www.infoq.com/news/2022/12/recap-reinvent-2022/" rel="noopener noreferrer"&gt;InfoQ&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Let me know what you think and if I missed something obvious!&lt;/p&gt;

&lt;h4&gt;
  
  
  RDS Blue/Green Deployments
&lt;/h4&gt;

&lt;p&gt;At the beginning of the conference, AWS &lt;a href="https://aws.amazon.com/blogs/aws/new-fully-managed-blue-green-deployments-in-amazon-aurora-and-amazon-rds/" rel="noopener noreferrer"&gt;announced the general availability of RDS Blue/Green Deployments&lt;/a&gt;, a new feature for Aurora with MySQL compatibility, RDS for MySQL, and RDS for MariaDB to perform blue/green database updates. I love MySQL and I love RDS; this has to be at the top of my list.&lt;/p&gt;

&lt;h4&gt;
  
  
  Lambda SnapStart
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html" rel="noopener noreferrer"&gt;Lambda SnapStart&lt;/a&gt; is now available for Java. It has been one of the announcements &lt;a href="https://lumigo.io/blog/aws-lambda-cold-starts-are-about-to-get-faster/" rel="noopener noreferrer"&gt;best received&lt;/a&gt; by developers, as it addresses one of the limitations of implementing serverless Java applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  SimSpace Weaver
&lt;/h4&gt;

&lt;p&gt;The new &lt;a href="https://aws.amazon.com/blogs/aws/new-aws-simspace-weaver-build-large-scale-spatial-simulations-in-the-cloud/" rel="noopener noreferrer"&gt;SimSpace Weaver&lt;/a&gt; service targets a niche market running large-scale spatial simulations, freeing developers from the compute and memory limits of a single piece of hardware.&lt;/p&gt;

&lt;h4&gt;
  
  
  Aurora zero-ETL integration with Redshift
&lt;/h4&gt;

&lt;p&gt;Enabling near real-time analytics and machine learning, Aurora's &lt;a href="https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-aurora-zero-etl-integration-redshift/" rel="noopener noreferrer"&gt;"zero-ETL" integration with Redshift&lt;/a&gt; is now available in preview.&lt;/p&gt;

&lt;h4&gt;
  
  
  OpenSearch Serverless
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://twitter.com/realchrisebert/status/1599199583984226305" rel="noopener noreferrer"&gt;Controversially priced&lt;/a&gt;, OpenSearch Serverless manages the provisioning and scaling of the resources to deliver data ingestion and query responses using Elasticsearch-compatible APIs. Not everybody is convinced that calling every &lt;a href="https://twitter.com/ben11kehoe/status/1597637965496299520" rel="noopener noreferrer"&gt;autoscaling&lt;/a&gt; service “serverless” is a good idea.&lt;/p&gt;

&lt;h4&gt;
  
  
  DataZone
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/datazone/" rel="noopener noreferrer"&gt;DataZone&lt;/a&gt; is a management service to share, search, and discover data at scale across organizational boundaries. All the data in DataZone is governed by access and use policies that the organization can define.   &lt;/p&gt;

&lt;h4&gt;
  
  
  CloudWatch Internet Monitor
&lt;/h4&gt;

&lt;p&gt;Monitoring data about internet traffic before it reaches the application layer, &lt;a href="https://aws.amazon.com/blogs/aws/cloudwatch-internet-monitor-end-to-end-visibility-into-internet-performance-for-your-applications/" rel="noopener noreferrer"&gt;Internet Monitor&lt;/a&gt; uses the connectivity data that AWS captures from its global network to determine a baseline of performance and availability.&lt;/p&gt;

&lt;h4&gt;
  
  
  Clean Rooms
&lt;/h4&gt;

&lt;p&gt;At the end of the first keynote, Adam Selipsky announced &lt;a href="https://aws.amazon.com/clean-rooms/" rel="noopener noreferrer"&gt;Clean Rooms&lt;/a&gt;. With a &lt;a href="https://twitter.com/fayecloudguru/status/1597657403033780224" rel="noopener noreferrer"&gt;questionable name&lt;/a&gt; for a company that recently acquired iRobot, the new service (in preview) helps collaborate with other companies on AWS without sharing or revealing underlying data.&lt;/p&gt;

&lt;p&gt;This is just a very personal and small selection of the top announcements of AWS re:Invent 2022. You can find my full recap of the conference on &lt;a href="https://www.infoq.com/news/2022/12/recap-reinvent-2022/" rel="noopener noreferrer"&gt;InfoQ&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;What were your favorite announcements? I would love to know what you think 💚.&lt;/p&gt;

</description>
      <category>certification</category>
      <category>career</category>
      <category>networking</category>
    </item>
  </channel>
</rss>
