<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dzhuneyt</title>
    <description>The latest articles on DEV Community by Dzhuneyt (@dzhuneyt).</description>
    <link>https://dev.to/dzhuneyt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F165112%2F19cac2dd-5692-4739-bbc9-159e0be251ec.jpg</url>
      <title>DEV Community: Dzhuneyt</title>
      <link>https://dev.to/dzhuneyt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dzhuneyt"/>
    <language>en</language>
    <item>
      <title>Synology - Container Manager - Run a Docker Compose Project on CRON schedule</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Fri, 22 Mar 2024 13:42:42 +0000</pubDate>
      <link>https://dev.to/dzhuneyt/synology-container-manager-run-a-docker-compose-project-on-cron-schedule-16n4</link>
      <guid>https://dev.to/dzhuneyt/synology-container-manager-run-a-docker-compose-project-on-cron-schedule-16n4</guid>
      <description>&lt;p&gt;I have a Synology NAS at home that I use for various purposes. One of the things I do with it is run a Docker Compose&lt;br&gt;
project that contains a few services.&lt;/p&gt;

&lt;p&gt;I have a few services that I want to run on a schedule. For example, a backup service that&lt;br&gt;
runs every night, or a media optimizer container that takes my movie library and converts it to playback formats that&lt;br&gt;
are more suitable for older smart TVs.&lt;/p&gt;

&lt;p&gt;In this post, I will show you how to run a Docker Compose project on a schedule using the Synology Container Manager and&lt;br&gt;
the Synology Task Scheduler.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You will need to use Google Chrome or another browser that allows you to inspect Network requests.&lt;/p&gt;
&lt;h3&gt;
  
  
  Getting the Docker Compose project ID
&lt;/h3&gt;

&lt;p&gt;Inspecting the Network requests is the only way to retrieve the unique ID that Synology uses to identify the Docker&lt;br&gt;
Compose project.&lt;/p&gt;

&lt;p&gt;We will need this ID later, so let's first explore how to find it.&lt;/p&gt;

&lt;p&gt;This guide assumes you have already created a "Project" inside "Container Manager" ("project" is the fancy name DSM&lt;br&gt;
uses for Docker Compose stacks). Open "Container Manager" and navigate to the project you want to run on a schedule. If it is&lt;br&gt;
already running, stop it.&lt;/p&gt;

&lt;p&gt;Before starting it again, open your browser's Network inspector. In Chrome on a Mac, press &lt;code&gt;Cmd + Option + I&lt;/code&gt;&lt;br&gt;
and click the "Network" tab. Other browsers have similar developer tools; check their documentation if you haven't&lt;br&gt;
used them before.&lt;/p&gt;

&lt;p&gt;Hit the "Clear network log" button at the left of this floating window if you want to reduce clutter. Finally, start the&lt;br&gt;
project by clicking the "Start" button in the Container Manager and observe the Network requests. You should find one&lt;br&gt;
that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SYNO.Docker.Project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open it, and inside the "Payload" tab, you will find a JSON object that contains the ID of the project. It should be&lt;br&gt;
something long and random-looking, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;9fb91ca5-d817-42f8-8ddc-2acdf4d94494
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cbrmngmsy7hceqg1ms6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cbrmngmsy7hceqg1ms6.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take note of this ID, because you will need it in the next step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scheduling a CRON job via Synology Task Scheduler
&lt;/h3&gt;

&lt;p&gt;Open the Control Panel of your Synology NAS and navigate to the "Task Scheduler" app. Click "Create" -&amp;gt;&lt;br&gt;
"Scheduled Task" -&amp;gt; "User-defined script".&lt;/p&gt;

&lt;p&gt;Name your task based on your preferences, e.g. "Run Backup service on schedule".&lt;/p&gt;

&lt;p&gt;In the "User" dropdown, select "root"; otherwise Synology will not have the necessary permissions to run the Docker&lt;br&gt;
Compose project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrz1yfa5xjqqhr56jruf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrz1yfa5xjqqhr56jruf.png" alt="Image description" width="800" height="876"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the "Schedule" tab pick the schedule that suits your needs.&lt;/p&gt;

&lt;p&gt;In the "Task Settings" tab, inside the "User-defined script" text box, paste the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;synowebapi &lt;span class="nt"&gt;--exec&lt;/span&gt; &lt;span class="nv"&gt;api&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SYNO.Docker.Project &lt;span class="nv"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"start_stream"&lt;/span&gt; &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"9fb91ca5-d817-42f8-8ddc-2acdf4d94494"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsifp3o39sx6rgotelib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsifp3o39sx6rgotelib.png" alt="Image description" width="800" height="884"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Replace the &lt;code&gt;id&lt;/code&gt; value with the ID you found in the previous step.&lt;/p&gt;
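&lt;p&gt;As a sketch, you can also wrap the call in a small script, which makes the project ID easier to update later. The ID below is the example value from above, and the log path is a hypothetical choice:&lt;/p&gt;

```shell
#!/bin/bash
# Start a Container Manager project by its internal ID, via Synology's built-in
# synowebapi CLI. PROJECT_ID is the example value from above; replace it with yours.
PROJECT_ID="9fb91ca5-d817-42f8-8ddc-2acdf4d94494"
LOG="/volume1/scripts/compose-schedule.log"  # hypothetical log location

if command -v synowebapi >/dev/null 2>&1; then
  synowebapi --exec api=SYNO.Docker.Project method="start_stream" version=1 \
    id="$PROJECT_ID" >>"$LOG" 2>&1
else
  # synowebapi only exists on DSM itself; outside the NAS this is a no-op.
  echo "synowebapi not found; run this on the Synology NAS" >&2
fi
```

&lt;p&gt;Paste the script into the same "User-defined script" text box; logging the output makes failed runs easier to debug.&lt;/p&gt;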

&lt;p&gt;That's it! Synology will now start the project based on the schedule that you provided.&lt;/p&gt;

</description>
      <category>synology</category>
      <category>nas</category>
      <category>docker</category>
    </item>
    <item>
      <title>Procrastination Trick - Create Large Pull Requests</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Fri, 24 Nov 2023 10:50:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/procrastination-trick-create-large-pull-requests-1m7h</link>
      <guid>https://dev.to/aws-builders/procrastination-trick-create-large-pull-requests-1m7h</guid>
      <description>&lt;p&gt;I know this is an unpopular opinion: Large PRs are a good way to mask procrastination.&lt;/p&gt;

&lt;p&gt;Change my mind. No, seriously. I've yet to see a solid example of why bundling many changes into one PR is better than introducing those changes in smaller PRs.&lt;/p&gt;




&lt;p&gt;Is your team suffering from the "Large PRs" syndrome? In my opinion, large PRs are harmful in many ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower Code Quality: Due to the complexity and volume of changes in large PRs, there's a higher likelihood of compromising on code quality just to get the PR merged.&lt;/li&gt;
&lt;li&gt;Integration Issues: Large PRs often touch multiple parts of the system, which can lead to unexpected integration issues, especially if the changes aren’t properly isolated or modularized.&lt;/li&gt;
&lt;li&gt;Increased Merge Conflicts: The longer a PR remains open, the higher the chance of merge conflicts with other changes being merged into the same codebase. Resolving these conflicts can be time-consuming and error-prone.&lt;/li&gt;
&lt;li&gt;Knowledge Silos: When a single developer or a small group works on a large set of changes without regular integration, it can lead to knowledge silos, where only a few people understand certain parts of the codebase.&lt;/li&gt;
&lt;li&gt;Demotivation and Overwhelm: Reviewers might feel overwhelmed or demotivated by the sheer size of the PR, leading to less effective reviews or reluctance to review at all.&lt;/li&gt;
&lt;li&gt;Blocking Other Work: Large PRs can block other work from being merged, especially if they involve core parts of the system that other developers need to work on.&lt;/li&gt;
&lt;li&gt;Rollback Complexity: If a problem is discovered after merging a large PR, rolling back the changes can be complex and risky, especially if multiple features or fixes are entangled.&lt;/li&gt;
&lt;li&gt;Delaying user value: Large PRs delay delivering small increments of value to the end users, which is at the core of agile software development practices.&lt;/li&gt;
&lt;li&gt;Hiding procrastination: Lazy engineers can use large PRs to hide the actual time it took to work on individual parts of the code. Because the PR is so big, it's difficult to spot periods of procrastination somewhere in the middle. That's why - from an organization's perspective - it's important to discourage large PRs.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;What size are the Pull Requests in your organization? Can you relate to any of these?&lt;/p&gt;

</description>
      <category>engineering</category>
      <category>codereview</category>
      <category>programming</category>
    </item>
    <item>
      <title>Cut Cloud Costs by Migrating to On-Premises. Is It That Simple?</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Tue, 14 Nov 2023 14:38:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/cut-cloud-costs-by-migrating-to-on-premises-is-it-that-simple-2a9d</link>
      <guid>https://dev.to/aws-builders/cut-cloud-costs-by-migrating-to-on-premises-is-it-that-simple-2a9d</guid>
      <description>&lt;p&gt;Let’s imagine you are a Netflix-sized company, and you are paying AWS $1 million per month on Cloud bills. Your app is composed of a bunch of Lambdas behind an API Gateway.&lt;/p&gt;

&lt;p&gt;You think $1 million per month is too much to pay for infrastructure.&lt;/p&gt;

&lt;p&gt;After spending some engineering effort, you realize that if you re-architect your app into a monolithic container and launch it in a Kubernetes cluster in your on-premises infrastructure, you can run the same app for $100,000 per month — a 10x savings.&lt;/p&gt;

&lt;p&gt;Now, you might ask yourself — why is AWS charging you 10 times more? Couldn’t they apply the same optimization that you did, and cascade the cost savings to you as a customer?&lt;/p&gt;

&lt;p&gt;The answer is — YES. They could do that.&lt;/p&gt;

&lt;p&gt;But they would have to do the same for EVERY SINGLE CUSTOMER of AWS.&lt;/p&gt;

&lt;p&gt;Your application and its infrastructure are always unique, to some degree. There is an infinite number of configurations and fine-tuned optimizations that would be needed for every unique customer.&lt;/p&gt;

&lt;p&gt;What cloud providers like AWS do is optimize for the common denominators that all customers are interested in — scalability, ease and speed of redeployment, high availability — and, finally, costs. Cost is one of the factors, but certainly not the most important one for every AWS customer. Maybe it's the most important factor for your organization (and that's fine). But some organizations value the ease of redeployment and scalability more. Every organization is unique and has slightly different needs.&lt;/p&gt;

&lt;p&gt;That’s why you can migrate to On-Premises, fine-tune and optimize better for the factors that are more important to you, whereas AWS will always try to find the middle ground and optimize for the weighted average of all AWS customers’ needs. Not for your organization’s individual needs.&lt;/p&gt;

&lt;p&gt;Does your organization fall under the “weighted average customer requirements” segment, or are you more on one end of the spectrum, putting more weight on saving costs?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>economy</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Migrating from the Cloud to On-Premises infrastructure. Is it worth it?</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Tue, 14 Nov 2023 10:27:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/should-you-stick-to-cloud-native-or-migrate-to-on-premises-3epd</link>
      <guid>https://dev.to/aws-builders/should-you-stick-to-cloud-native-or-migrate-to-on-premises-3epd</guid>
      <description>&lt;p&gt;When a major company shares a case study about saving $1 million annually by moving from AWS cloud to an On-Premises setup, it always grabs the attention of the DevOps community.&lt;/p&gt;

&lt;p&gt;It sparks the years-old debates about cloud-native vs. on-premises and serverless vs. serverful.&lt;/p&gt;

&lt;p&gt;E.g. using on-demand Lambdas and DynamoDB vs. always-on Docker containers and PostgreSQL/MySQL, deployed to Kubernetes.&lt;/p&gt;




&lt;p&gt;Now, it’s easy to jump on the hype train of the next article that suits your ideology better.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maybe you have a long engineering background in developing traditional applications using containers, monoliths, and SQL databases — then you might be more biased and inclined to support such a Cloud to On-Premises migration.&lt;/li&gt;
&lt;li&gt;But if you are coming from the Serverless world and prefer the simplicity of NOT managing servers and predictable pricing (pricing that goes hand in hand with usage) — you might be a bit more supportive of the Cloud-native infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And there's no right or wrong answer. It's just engineering perspectives.&lt;/p&gt;

&lt;p&gt;A common problem, especially in large organizations, is that such decisions are usually based on exactly that: engineering perspectives, rather than business use cases.&lt;/p&gt;

&lt;p&gt;E.g. can the app benefit from the flexible scalability and predictable pricing of being cloud-native? Maybe you should pick that option then. Does it need to be on 24/7 and can’t tolerate cold starts? Maybe an always-on server is the better option here.&lt;/p&gt;

&lt;p&gt;And remember — the grass is always greener on the other side. But once you step into the weeds, you quickly realize that serverless and serverful are both far from perfect options, same as cloud-native and on-premises.&lt;/p&gt;

&lt;p&gt;In an ideal world, you would combine the strengths of both sides. So the next time you are thinking of architecture for your app — think about how you can leverage both.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Security: What is "Privilege Escalation"?</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Thu, 27 Jul 2023 16:12:15 +0000</pubDate>
      <link>https://dev.to/dzhuneyt/what-is-privilege-escalation-4hmj</link>
      <guid>https://dev.to/dzhuneyt/what-is-privilege-escalation-4hmj</guid>
      <description>&lt;p&gt;Privilege escalation is a common term in the Security industry.&lt;/p&gt;

&lt;p&gt;Let's illustrate what it means through an example.&lt;/p&gt;

&lt;p&gt;Imagine having a key to your house and you give it temporarily to a plumber, so that they can fix something while you are on vacation.&lt;/p&gt;

&lt;p&gt;Your intent is to give the plumber temporary access to your house. But the plumber visits a locksmith and makes a copy of the key, essentially evading the restriction that they can only access your house within a limited timeframe.&lt;/p&gt;

&lt;p&gt;The same concept applies to software security. It's particularly relevant in Cloud security, when you give some service access to your Cloud account (e.g. temporary access to assume an IAM role within your AWS account).&lt;/p&gt;

&lt;p&gt;If the service is limited to just accessing resources, but not creating new ones, everything is fine and security works as intended. But if that limited access allows the service to create new IAM roles (essentially generating new keys at the locksmith), the service can later access your Cloud resources without your permission. That is Privilege Escalation.&lt;/p&gt;
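&lt;p&gt;To make the Cloud example concrete, here is a hypothetical sketch using the AWS CLI: a principal that was only meant to read resources, but also holds &lt;code&gt;iam:CreateRole&lt;/code&gt; and &lt;code&gt;iam:AttachRolePolicy&lt;/code&gt;, can mint itself a far broader role. The role name and trust policy file below are invented:&lt;/p&gt;

```shell
# Hypothetical escalation path, for illustration only. The role name and the
# trust policy file are invented; AdministratorAccess is a real managed policy.
ROLE_NAME="escalated-admin-role"

if command -v aws >/dev/null 2>&1; then
  # 1. Create a new role that the over-permissioned principal controls.
  aws iam create-role --role-name "$ROLE_NAME" \
    --assume-role-policy-document file://attacker-trust-policy.json
  # 2. Attach a policy far broader than the original, limited grant.
  aws iam attach-role-policy --role-name "$ROLE_NAME" \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
else
  echo "aws CLI not installed; commands shown for illustration only" >&2
fi
```

&lt;p&gt;This is why least-privilege grants should exclude IAM write permissions unless they are strictly needed.&lt;/p&gt;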

</description>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS EFS Elastic vs Burstable throughput (benchmark)</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Thu, 29 Dec 2022 23:19:06 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-efs-elastic-vs-burstable-throughput-benchmark-47b8</link>
      <guid>https://dev.to/aws-builders/aws-efs-elastic-vs-burstable-throughput-benchmark-47b8</guid>
      <description>&lt;p&gt;With the recent &lt;a href="https://aws.amazon.com/blogs/aws/new-announcing-amazon-efs-elastic-throughput/" rel="noopener noreferrer"&gt;announcement of AWS EFS Elastic Throughput mode&lt;/a&gt;, I was curious to understand if it's actually any better than the Burstable throughput mode, which I've used as the file storage for a few of my WordPress sites.&lt;/p&gt;

&lt;p&gt;I was encountering a few hiccups here and there during WordPress version or plugin upgrades, because of the way the Burstable throughput mode works. Basically, as long as your app is not doing any IO operations, the EFS accumulates burst credits, which are then consumed during periods of reads/writes, up to a certain limit; once the burst credits are depleted, EFS read/write operations become painfully slow (at least in my experience).&lt;/p&gt;

&lt;p&gt;The announcement of Elastic throughput mode promises that you no longer have to worry about unpredictability of reads/writes to the file system and you should get a pretty consistent performance when using EFS, without resorting to Provisioned throughput mode, which can be pretty &lt;a href="https://aws.amazon.com/efs/pricing/" rel="noopener noreferrer"&gt;expensive&lt;/a&gt; due to over-provisioning during prolonged periods of low IO activity.&lt;/p&gt;

&lt;p&gt;What better way to evaluate and compare two options than creating a benchmark that does it programmatically for me. Here are the results.&lt;/p&gt;

&lt;h1&gt;
  
  
  Benchmark results
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Writing 10 files, 1KB each&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Elastic&lt;/th&gt;
&lt;th&gt;Bursting&lt;/th&gt;
&lt;th&gt;Time difference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;95.2ms&lt;/td&gt;
&lt;td&gt;100.6ms&lt;/td&gt;
&lt;td&gt;-5.37%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Writing 10 files, 1MB each&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Elastic&lt;/th&gt;
&lt;th&gt;Bursting&lt;/th&gt;
&lt;th&gt;Time difference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;366.2ms&lt;/td&gt;
&lt;td&gt;369.8ms&lt;/td&gt;
&lt;td&gt;-0.97%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Writing 10 files, 100MB each&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Elastic&lt;/th&gt;
&lt;th&gt;Bursting&lt;/th&gt;
&lt;th&gt;Time difference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;12.161s&lt;/td&gt;
&lt;td&gt;17.081s&lt;/td&gt;
&lt;td&gt;-28.80%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h1&gt;
  
  
  Results breakdown
&lt;/h1&gt;

&lt;p&gt;Elastic throughput seems to complete the write operations faster in all of the benchmarks above, compared to Bursting throughput mode.&lt;/p&gt;

&lt;p&gt;For simple sporadic file writes there is not much of a difference, but Elastic throughput really starts to show its benefits at larger file sizes. Writing 10 files of 100MB can easily save your app 5 seconds of waiting time; savings you can potentially propagate to your end users to improve user experience.&lt;/p&gt;

&lt;p&gt;Long story short, I am definitely switching my existing EFS file systems to Elastic throughput mode after these results. The pricing is pretty much the same and there's nothing that stops me from doing the switch at this point.&lt;/p&gt;

&lt;p&gt;Of course, don't take my word for it. Do your own due diligence and benchmarks before making a similar switch.&lt;/p&gt;

&lt;h1&gt;
  
  
  Considerations
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;The tests were done using an identical Lambda with EFS file system attached&lt;/li&gt;
&lt;li&gt;The two Lambdas ran exactly the same code&lt;/li&gt;
&lt;li&gt;The numbers above are adjusted to exclude potential side effects like Lambda cold starts, network latency and variability in any surrounding code inside the Lambda runtime. Timestamps are only snapshotted just before and right after the filesystem IO.&lt;/li&gt;
&lt;li&gt;Tests are repeated 5 times with a sleep time of 10 seconds in between each run, to give plenty of time for both EFS throughput modes to pick up the pace and trigger any internal caching or warming mechanisms that EFS might have. The results of all 5 tests are averaged to come up with the numbers in the benchmark.&lt;/li&gt;
&lt;li&gt;Code used to benchmark is available in a &lt;a href="https://github.com/awesome-cdk/experiment-efs-elastic-throughput" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
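&lt;p&gt;The timing approach above can be sketched locally like this (the real benchmark ran inside Lambda against an EFS mount; here &lt;code&gt;EFS_MOUNT&lt;/code&gt; is an assumed mount point that falls back to a local directory, and the file size is scaled down):&lt;/p&gt;

```shell
# Time only the file writes, mirroring the "snapshot timestamps just before and
# right after the filesystem IO" approach. EFS_MOUNT is an assumed mount point.
MOUNT="${EFS_MOUNT:-/tmp/efs-bench}"
mkdir -p "$MOUNT"

start=$(date +%s%N)   # nanosecond timestamp just before the IO
for i in $(seq 1 10); do
  dd if=/dev/zero of="$MOUNT/bench_$i.bin" bs=1M count=1 status=none
done
end=$(date +%s%N)     # timestamp right after the IO

echo "Wrote 10 x 1MB files in $(( (end - start) / 1000000 )) ms"
rm -f "$MOUNT"/bench_*.bin
```

&lt;p&gt;Scaling &lt;code&gt;bs&lt;/code&gt;/&lt;code&gt;count&lt;/code&gt; up reproduces the 1KB/1MB/100MB scenarios from the tables above.&lt;/p&gt;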

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Hope you found this benchmark useful. Looking forward to reading your findings in the comments below. You can also catch me at my &lt;a href="https://aws-cdk.com" rel="noopener noreferrer"&gt;AWS CDK blog&lt;/a&gt;, where you can learn more about corner cases like this or find interesting AWS CDK constructs you can use for your app infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>efs</category>
      <category>benchmark</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
