<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dmytro Sirant</title>
    <description>The latest articles on DEV Community by Dmytro Sirant (@sirantd).</description>
    <link>https://dev.to/sirantd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1149557%2F9245c761-a8bd-4500-a640-c71bf480fbfb.jpeg</url>
      <title>DEV Community: Dmytro Sirant</title>
      <link>https://dev.to/sirantd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sirantd"/>
    <language>en</language>
    <item>
      <title>It’s AI coding era but naming is still hard</title>
      <dc:creator>Dmytro Sirant</dc:creator>
      <pubDate>Mon, 13 Apr 2026 04:29:50 +0000</pubDate>
      <link>https://dev.to/sirantd/its-ai-coding-era-but-naming-is-still-hard-3i6h</link>
      <guid>https://dev.to/sirantd/its-ai-coding-era-but-naming-is-still-hard-3i6h</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“There are only two hard things in computer science: cache invalidation and naming things.”&lt;br&gt;
~ Phil Karlton&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s been about a year since I started using AI daily.&lt;/p&gt;

&lt;p&gt;In my core role as an SRE/DevOps engineer, AI behaves like a very fast and very obedient assistant. It helps me collect data across services via MCP, generate Terraform modules, work with Helm charts, and automate routine tasks.&lt;/p&gt;

&lt;p&gt;There is no “wow” factor here. Just speed.&lt;/p&gt;

&lt;p&gt;I don’t expect creativity.&lt;br&gt;
I don’t want initiative.&lt;br&gt;
I want predictable output.&lt;/p&gt;

&lt;p&gt;In these scenarios, I already know what the result should look like. I have examples, constraints, and a clear definition of done. AI just accelerates execution. Control stays on my side.&lt;/p&gt;




&lt;p&gt;Then I switch context.&lt;/p&gt;

&lt;p&gt;I start using AI for software development. And everything changes.&lt;/p&gt;

&lt;p&gt;As an AWS Community Builder, I have access to tools like Kiro and Kiro CLI - agentic AI development environments backed by Anthropic (and other) models. These tools are powerful.&lt;/p&gt;

&lt;p&gt;But they expose a different problem.&lt;/p&gt;

&lt;p&gt;Not in AI. In us.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real bottleneck
&lt;/h2&gt;

&lt;p&gt;When working with AI on anything non-trivial, the bottleneck is not code.&lt;/p&gt;

&lt;p&gt;It’s how you describe what you want.&lt;/p&gt;

&lt;p&gt;Not syntax.&lt;br&gt;
Not frameworks.&lt;br&gt;
Not even architecture.&lt;/p&gt;

&lt;p&gt;Language.&lt;/p&gt;

&lt;p&gt;If your instructions are vague, inconsistent, or overloaded with assumptions - the output will reflect that.&lt;/p&gt;

&lt;p&gt;And unlike humans, AI won’t ask clarifying questions by default. It will generate something that looks correct but is fundamentally wrong.&lt;/p&gt;

&lt;p&gt;This is where things start breaking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Naming breaks everything
&lt;/h2&gt;

&lt;p&gt;One example from practice.&lt;/p&gt;

&lt;p&gt;I had to modify an old project that hadn’t been maintained for ~8 years. The goal was simple: update dependencies, remove legacy functionality, and add new features.&lt;/p&gt;

&lt;p&gt;The problem was not the code.&lt;/p&gt;

&lt;p&gt;The problem was naming.&lt;/p&gt;

&lt;p&gt;The same concept - a media file - was called:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;track&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;item&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;product&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The name depended on the part of the codebase (mobile apps, backend, frontend), each of which had been developed by a different freelancer at a different time.&lt;/p&gt;

&lt;p&gt;From a human perspective, this is annoying.&lt;/p&gt;

&lt;p&gt;From an AI perspective, this is chaos.&lt;/p&gt;

&lt;p&gt;The model tries to infer relationships. It guesses. Those guesses propagate into generated code, queries, and logic.&lt;/p&gt;

&lt;p&gt;Before doing any actual work, I had to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Recall the real domain model&lt;/li&gt;
&lt;li&gt;Standardise terminology&lt;/li&gt;
&lt;li&gt;Refactor naming across the codebase&lt;/li&gt;
&lt;/ol&gt;
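&lt;p&gt;A minimal sketch of what that standardisation step can look like in code (all names here are hypothetical, not from the actual project):&lt;/p&gt;

```python
from dataclasses import dataclass

# One canonical name for the concept previously called
# "track", "item", and "product" in different subsystems.
@dataclass
class MediaFile:
    id: str
    title: str

# Temporary aliases keep old call sites working during the
# migration; they are removed once the refactor is complete.
Track = MediaFile
Item = MediaFile
Product = MediaFile

def rename_key(record: dict, old: str, new: str) -> dict:
    """Return a copy of record with key `old` renamed to `new`."""
    migrated = dict(record)
    if old in migrated:
        migrated[new] = migrated.pop(old)
    return migrated

# A backend payload that used "product" becomes canonical:
payload = rename_key({"product": "song.mp3"}, "product", "media_file")
```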

&lt;p&gt;Only after that did AI become useful again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-native constraint (and advantage)
&lt;/h2&gt;

&lt;p&gt;As a non-native English speaker, I hit another limitation.&lt;/p&gt;

&lt;p&gt;My vocabulary is narrower. I often use simpler words. Sometimes these words are technically correct, but semantically ambiguous.&lt;/p&gt;

&lt;p&gt;For humans, this is manageable. For AI, it creates drift.&lt;/p&gt;

&lt;p&gt;At the same time, this constraint forces something useful:&lt;/p&gt;

&lt;p&gt;More precision. Less noise.&lt;/p&gt;

&lt;p&gt;And with AI, precision matters more than expressiveness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don’t start with code
&lt;/h2&gt;

&lt;p&gt;Most people approach AI like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Here’s the idea — build it.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the fastest way to get something wrong.&lt;/p&gt;

&lt;p&gt;The approach that actually works is different:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define terminology&lt;/li&gt;
&lt;li&gt;Lock naming conventions&lt;/li&gt;
&lt;li&gt;Describe entities and relationships&lt;/li&gt;
&lt;li&gt;Let AI ask questions&lt;/li&gt;
&lt;li&gt;Refine understanding&lt;/li&gt;
&lt;li&gt;Only then generate code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This feels slower. It isn’t. AI can write code in seconds. Refactoring unclear code takes hours.&lt;/p&gt;

&lt;p&gt;The key shift is simple:&lt;/p&gt;

&lt;p&gt;Don’t ask AI to build. Ask it to interrogate you.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Ask me everything you need to fully define this system.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This forces you to externalise assumptions — things that are obvious to you but invisible to AI.&lt;/p&gt;

&lt;p&gt;This is not new. This is what business analysts have always done. What changes now is the leverage. AI handles a large part of implementation. The bottleneck moves to understanding and definition.&lt;/p&gt;

&lt;p&gt;Which means:&lt;/p&gt;

&lt;p&gt;People who can clearly describe problems gain more influence than people who can just implement solutions. Not because they write code. Because they define it. AI doesn’t remove complexity. It shifts it.&lt;/p&gt;

&lt;p&gt;From code → to thinking&lt;/p&gt;

&lt;p&gt;From implementation → to communication&lt;/p&gt;

&lt;p&gt;The developers who benefit the most are not the fastest coders. They are the ones who can clearly describe what needs to be built.&lt;/p&gt;

&lt;p&gt;Naming things is still hard. Now it’s also critical.&lt;/p&gt;




&lt;p&gt;I’m looking forward to your comments, whether you agree or not!&lt;br&gt;
Let’s connect on &lt;a href="https://www.linkedin.com/in/dmytro-sirant/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>How I Overlooked the Problem and Shot Myself in the Foot</title>
      <dc:creator>Dmytro Sirant</dc:creator>
      <pubDate>Fri, 07 Nov 2025 02:54:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-overlooked-the-problem-and-shot-myself-in-the-foot-2ok8</link>
      <guid>https://dev.to/aws-builders/how-i-overlooked-the-problem-and-shot-myself-in-the-foot-2ok8</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Migration Setup&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As part of my work as an AWS consultant this year, I’ve been doing migrations from IAM Users to SSO (yes, I know, but better late than never). There’s a checklist I follow during such migrations, and one of the last stages is to keep IAM Users disabled (keys and web access) for about a month, just to make sure everything keeps working without them. This approach proved helpful in the case of EKS clusters before AWS introduced the ability to manage cluster access via the API instead of the legacy aws-auth configmap.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Missed Detail&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;But there was one issue I kept overlooking until it finally caught up with me. Long story short: the migration from IAM to SSO was completed, and after the planned cooldown, IAM users were deleted. Some time later, I decided to upgrade the IaC with a new version of the &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-eks" rel="noopener noreferrer"&gt;terraform-aws-eks&lt;/a&gt; module. The terraform plan showed expected changes, but during terraform apply I got an error stating that my SSO account had no permission to update the KMS key alias (a minor change due to improved naming conventions).&lt;/p&gt;

&lt;p&gt;A quick check showed that the KMS key created by the previous version of the module had a neat, least-privilege key policy: &lt;code&gt;kms:PutKeyPolicy&lt;/code&gt; permission was granted only to the IAM user I’d used to create the EKS cluster with KMS envelope encryption. Ironically, that IAM user had been disabled for a month and deleted only a week earlier.&lt;/p&gt;
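&lt;p&gt;In hindsight, a key policy statement like the following (the account ID and SSO permission-set role name are purely illustrative) would have avoided the lock-out, by granting key administration to the SSO role instead of a single IAM user:&lt;/p&gt;

```
{
  "Sid": "AllowKeyAdministrationViaSSO",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdminAccess_example"
  },
  "Action": [
    "kms:PutKeyPolicy",
    "kms:DescribeKey",
    "kms:UpdateAlias"
  ],
  "Resource": "*"
}
```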

&lt;h3&gt;
  
  
  &lt;strong&gt;False Sense of Victory&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;My first thought was that it wasn’t a big deal — I’d just remove the current KMS key object from the Terraform state, let it create a new key, and associate it with the cluster. Sounded good. Even better, terraform plan and terraform apply completed without errors. Problem solved!&lt;/p&gt;
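&lt;p&gt;The state surgery itself was a one-liner. The exact resource address depends on the module version, so the address below is illustrative; list the state first and use what it reports:&lt;/p&gt;

```
# Find the exact address of the KMS key in the state
terraform state list | grep kms
# Make Terraform forget the old key so it plans a new one
# (address is illustrative; use the one from the listing above)
terraform state rm 'module.eks.module.kms.aws_kms_key.this[0]'
```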

&lt;p&gt;Or so I thought...&lt;/p&gt;

&lt;p&gt;After a few small tweaks, I ran another change and noticed that Terraform tried to update the EKS cluster resource again. The only difference was the KMS envelope encryption key association. Multiple runs, same behaviour — Terraform applied the change successfully, yet it wasn’t reflected in the cluster settings (it still used the old KMS key I had no access to).&lt;/p&gt;

&lt;p&gt;A quick check of the documentation confirmed that the envelope encryption key can’t be changed after creation. Fair enough. But why didn’t Terraform respect that? I’m not sure yet whether the AWS API isn’t returning a proper response code, or the Terraform AWS provider doesn’t handle it correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Recovery Attempt&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;So, I needed to recover access to the KMS key somehow. How do you get maximum access permissions to an AWS account? Correct — use the root login. Unfortunately, even with root permissions, I couldn’t update the KMS key policy. Great for security, but I still needed to regain access to the key. Time to check whether anyone else had faced the same issue, and within a few minutes the answer was clear — contact AWS Support.&lt;/p&gt;

&lt;p&gt;I opened a support case, explained the problem, and provided the ARN of the KMS key along with the policy I wanted to attach. What could be simpler? The support response surprised me. I had to follow a specific set of instructions:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To recover your unmanageable keys, please follow these steps: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create 1 IAM user for every affected key using the following naming convention: kms_key_recovery_{Key ID}. For example, if the Key ID is “17e51010-cc0f-2268-bccd-2699f10c133a”, then the corresponding recovery user would be “kms_key_recovery_17e51010-cc0f-2268-bccd-2699f10c133a”. Create and attach the following IAM policy to the newly created users: &lt;code&gt;{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:ListAliases" ], "Resource": "*" } ] }&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Once the recovery users have been created, please respond to this case with a list of the ARNs of the keys for which you have made recovery users and the respective users ARNs. &lt;/li&gt;
&lt;li&gt;We will then contact you via the phone number listed in your account Contact Information to verify the One-Time Password listed above. Once verified, we will engage our internal KMS team to initiate recovery and update your case when complete. Your key recovery users will have the ability to modify the key policies of their respective keys. If the process was abandoned, we will inform you which users in your account still have access.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because it’s not a synchronous operation and the KMS key policy can be updated at any time within a 12-hour window, I was worried the cluster might enter a degraded state before I could apply the proper policy. Luckily, that didn’t happen. The AWS team just granted the required permissions to the key as an additional rule without dropping the existing ones. The next day, I received an update confirming the recovery procedure was complete, and I was able to attach the correct policy to the key.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Lessons Learnt&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now there’s a new step in my IAM-to-SSO migration checklist: &lt;br&gt;
[ ] &lt;strong&gt;update KMS key policies before deleting IAM users.&lt;/strong&gt;&lt;/p&gt;
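&lt;p&gt;That checklist item is easy to automate. Below is a small, hypothetical helper (the function name is mine, not from any library) that flags KMS key policies still referencing IAM users, so they can be fixed before the users are deleted:&lt;/p&gt;

```python
import json

def iam_user_principals(key_policy: str) -> list:
    """Return IAM user ARNs referenced as principals in a KMS key policy.

    A non-empty result means the policy must be updated before those
    IAM users are deleted, or the key becomes unmanageable.
    """
    policy = json.loads(key_policy)
    users = []
    for statement in policy.get("Statement", []):
        principal = statement.get("Principal", {})
        arns = principal.get("AWS", []) if isinstance(principal, dict) else []
        if isinstance(arns, str):
            arns = [arns]
        users.extend(arn for arn in arns if ":user/" in arn)
    return users

policy_doc = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/deployer"},
        "Action": "kms:PutKeyPolicy",
        "Resource": "*",
    }],
})
# Flags the soon-to-be-deleted IAM user before it is too late
print(iam_user_principals(policy_doc))
```

&lt;p&gt;In practice, you would feed it the output of &lt;code&gt;aws kms get-key-policy&lt;/code&gt; for each key in the account.&lt;/p&gt;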




&lt;p&gt;I’m looking forward to your comments. Let’s connect on &lt;a href="https://www.linkedin.com/in/dmytro-sirant/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kms</category>
      <category>security</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>AWS and Docker Hub Limits: Smart Strategies for April 2025 Changes</title>
      <dc:creator>Dmytro Sirant</dc:creator>
      <pubDate>Wed, 19 Mar 2025 09:01:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-and-docker-hub-limits-smart-strategies-for-april-2025-changes-1514</link>
      <guid>https://dev.to/aws-builders/aws-and-docker-hub-limits-smart-strategies-for-april-2025-changes-1514</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Docker Hub introduced rate limits on image pulls back on November 2, 2020, which caused a lot of headaches for those using it in their environments. The following limits were introduced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anonymous Users: Limited to 100 pulls per 6 hours&lt;/li&gt;
&lt;li&gt;Authenticated Users: Limited to 200 pulls per 6 hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many of us migrated to private repositories, quay.io, ghcr.io, registry.gitlab.com, and others. For some, it was enough to use an authentication token to pull from Docker Hub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Starting from April 1, 2025, there are new limits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Personal (authenticated): Limited to 100 pulls per hour&lt;/li&gt;
&lt;li&gt;Unauthenticated Users: Limited to 10 pulls per IPv4 address or IPv6 /64 subnet per hour&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Solution&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s see what AWS offers to solve the rate-limit problem, with a few extra perks along the way.&lt;/p&gt;

&lt;p&gt;In November 2021, &lt;a href="https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-ecr-cache-repositories/" rel="noopener noreferrer"&gt;AWS introduced the pull-through cache feature for Amazon Elastic Container Registry (Amazon ECR)&lt;/a&gt;, which can help us with the following problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;caching public and private images (using an authentication token) in your private ECR registry&lt;/li&gt;
&lt;li&gt;faster pulls from your private ECR to your local services (ECS, EKS, Lambdas, etc.)&lt;/li&gt;
&lt;li&gt;lifecycle policies to keep only the required number of the latest tags&lt;/li&gt;
&lt;li&gt;security scanning of images during pull&lt;/li&gt;
&lt;li&gt;a single place to update your token on rotation or expiration (e.g. GitLab does not allow you to create tokens with an expiration date longer than one year; just imagine going through all the credentials in all your K8s clusters once a year to update tokens)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, let’s dive into the implementation of this feature. To make it work, you need to create:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;a &lt;strong&gt;pull-through cache rule&lt;/strong&gt;, where you define the prefix you want to use in your registry and the upstream registry to which all requests are forwarded&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Manager credentials&lt;/strong&gt; used to authenticate to the upstream registry&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;a pull-through cache repository template&lt;/strong&gt; if you want a lifecycle policy, security scans, and some other features, which are optional at this stage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you can see, you are not creating the repository itself. It will be created automatically during the first pull-through cache request using default settings or a template if you have assigned one.&lt;/p&gt;
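&lt;p&gt;For reference, the pieces above map to plain Terraform roughly like this (a sketch with illustrative names and credentials; note that ECR requires the Secrets Manager secret name to start with &lt;code&gt;ecr-pullthroughcache/&lt;/code&gt;):&lt;/p&gt;

```
resource "aws_secretsmanager_secret" "dockerhub" {
  # ECR requires this exact name prefix for upstream credentials
  name = "ecr-pullthroughcache/dockerhub"
}

resource "aws_secretsmanager_secret_version" "dockerhub" {
  secret_id = aws_secretsmanager_secret.dockerhub.id
  secret_string = jsonencode({
    username    = "user"
    accessToken = "token"
  })
}

resource "aws_ecr_pull_through_cache_rule" "dockerhub" {
  ecr_repository_prefix = "dockerhub"
  upstream_registry_url = "registry-1.docker.io"
  credential_arn        = aws_secretsmanager_secret.dockerhub.arn
}
```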

&lt;p&gt;If I create a rule with the prefix &lt;code&gt;dockerhub&lt;/code&gt; in the &lt;code&gt;us-east-1&lt;/code&gt; region that forwards pulls to &lt;code&gt;registry-1.docker.io&lt;/code&gt;, I can use the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# direct pull from Docker Hub
docker pull timberio/vector:0.45.0-alpine
# pull through ECR
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/dockerhub/timberio/vector:0.45.0-alpine

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, I only prefixed the original image with my registry URL and &lt;code&gt;dockerhub&lt;/code&gt;. That is the only change you need to make in your manifests later to pull images through ECR, which is pretty straightforward.&lt;/p&gt;
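&lt;p&gt;That rewrite is mechanical enough to script. Here is a tiny, hypothetical helper (not part of any published tooling) that maps an upstream Docker Hub reference to its pull-through equivalent:&lt;/p&gt;

```python
def to_pull_through(image: str, registry: str, prefix: str) -> str:
    """Map a Docker Hub image reference to its ECR pull-through path.

    Official images (no namespace, e.g. "alpine:3.21") live under
    "library/" on Docker Hub, so that prefix is added explicitly.
    """
    name = image.split(":")[0]
    if "/" not in name:
        image = f"library/{image}"
    return f"{registry}/{prefix}/{image}"

ecr = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
# Same image as in the docker pull example above
print(to_pull_through("timberio/vector:0.45.0-alpine", ecr, "dockerhub"))
print(to_pull_through("alpine:3.21", ecr, "dockerhub"))
```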

&lt;h2&gt;
  
  
  &lt;strong&gt;Automation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Sounds easy, but why not do it with Terraform (and a Terragrunt wrapper)? I created a simple Terraform module that creates multiple pull-through rules from a basic YAML file with the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dockerhub:
  registry: registry-1.docker.io
  username: user
  accessToken: token
gitlab:
  registry: registry.gitlab.com
  username: user
  accessToken: token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The module will create two rules with the prefixes &lt;code&gt;dockerhub&lt;/code&gt; and &lt;code&gt;gitlab&lt;/code&gt;, Secrets Manager credentials with &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;accessToken&lt;/code&gt;, and a basic lifecycle policy that keeps only the 3 latest images in the cache (it would make sense to extend the YAML with per-registry settings, but I have no time for that now).&lt;br&gt;
Because I’m using Terragrunt, I can encrypt this file with SOPS and commit it to the repository safely, so my &lt;code&gt;terragrunt.hcl&lt;/code&gt; looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  source = "${get_parent_terragrunt_dir()}/modules/ecr-pullthrough"
}

include "root" {
  path = find_in_parent_folders("root.hcl")
  expose = true
}

locals {
  registries = yamldecode(sops_decrypt_file("${get_terragrunt_dir()}/secrets.enc.yaml"))
}

inputs = {
  registries = local.registries
  tags = merge(include.root.locals.default_tags)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;You can find my module on Github &lt;a href="https://github.com/opsworks-co/ecr-pull-through" rel="noopener noreferrer"&gt;https://github.com/opsworks-co/ecr-pull-through&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are using &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-eks" rel="noopener noreferrer"&gt;https://github.com/terraform-aws-modules/terraform-aws-eks&lt;/a&gt; module to create your cluster and using in-built role it is creating for the worker nodes, don’t forget to extend it with a new policy to allow &lt;code&gt;ecr:BatchImportUpstreamImage&lt;/code&gt; because by default it attaches managed policy &lt;code&gt;arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly&lt;/code&gt; which doesn’t allow pull-through cache.&lt;/p&gt;
&lt;/blockquote&gt;
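&lt;p&gt;A minimal extra policy for the node role could look like this (a sketch; the region, account ID, and prefix are illustrative, and &lt;code&gt;ecr:CreateRepository&lt;/code&gt; only matters if cached repositories are created on first pull):&lt;/p&gt;

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchImportUpstreamImage",
        "ecr:CreateRepository"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/dockerhub/*"
    }
  ]
}
```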




&lt;p&gt;I’m looking forward to your comments, suggestions, and improvements!&lt;br&gt;
Follow me on &lt;a href="https://sirantd.com" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, connect on &lt;a href="https://www.linkedin.com/in/dmytro-sirant/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>ecr</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
