<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexy Grabov</title>
    <description>The latest articles on DEV Community by Alexy Grabov (@coreoxide).</description>
    <link>https://dev.to/coreoxide</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1698876%2F991fc618-2ed0-4534-b6fb-f840dcbd21b3.jpeg</url>
      <title>DEV Community: Alexy Grabov</title>
      <link>https://dev.to/coreoxide</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/coreoxide"/>
    <language>en</language>
    <item>
      <title>AWS who? Meet AAS</title>
      <dc:creator>Alexy Grabov</dc:creator>
      <pubDate>Thu, 26 Feb 2026 16:14:32 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-who-meet-aas-3b69</link>
      <guid>https://dev.to/aws-builders/aws-who-meet-aas-3b69</guid>
      <description>&lt;p&gt;Yes, I know that predicting the downfall of SaaS and SaaS providers is all the rage right now, and no - this is not an AWS doomsday prophecy. AWS still holds about &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fwww.statista.com%2Fchart%2F18819%2Fworldwide-market-share-of-leading-cloud-infrastructure-service-providers%2F%23%3A~%3Atext%3DAfter%2520having%2520established%2520itself%2520as%2Cthree%2520months%2520ended%2520December%252031." rel="noopener noreferrer"&gt;30% of the cloud market&lt;/a&gt; (although Microsoft is really closing the gap with 20% market share) and remains, at least in my personal opinion, a market leader in terms of stability, features and innovation. AWS is not the best at everything, but is the best at most things, and that how it holds a third of the market (and &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fwww.statista.com%2Fchart%2F18819%2Fworldwide-market-share-of-leading-cloud-infrastructure-service-providers%2F%23%3A~%3Atext%3DAfter%2520having%2520established%2520itself%2520as%2Cthree%2520months%2520ended%2520December%252031." rel="noopener noreferrer"&gt;half of Amazon's profits&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Do you remember when AWS was pushing serverless compute? Seems so long ago, a time when we were all still spinning VMs on our vCenters. They were offering S3, when companies were still piling on &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fwww.dell.com%2Fcommunity%2Fen%2Fconversations%2Fvnx%2Femc-vnx-series%2F647f103ef4ccf8a8de4b147b" rel="noopener noreferrer"&gt;VNX&lt;/a&gt; storage racks. I know, I was there, at &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FEMC_Corporation" rel="noopener noreferrer"&gt;EMC²&lt;/a&gt;, when it was happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti61qtbwjynvxrs6vjj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti61qtbwjynvxrs6vjj9.png" alt="Agents working inside AWS' cloud" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  In the Beginning There Was Cloud
&lt;/h2&gt;

&lt;p&gt;That kind of forward-thinking approach put AWS in the lead, and allowed it to maintain its lead over the years. A few years ago: cloud is all anyone talks about. SaaS companies rise. Server racks are a thing of the past. Everybody is developing a cloud-native something. Consuming and managing resources made simple, accessible, cheap.&lt;/p&gt;

&lt;p&gt;Unfortunately for AWS, this business model was disrupted recently. AI came barging into everyone's life - and is threatening AWS' dominance. I &lt;a href="https://medium.com/r/?url=https%3A%2F%2Falexy-grabov.medium.com%2Fsaas-is-dead-long-live-saas-fbd6a58512e1" rel="noopener noreferrer"&gt;predicted the SaaSpocalypse&lt;/a&gt; almost a year ago, without even realizing it. I wrote how we would shift from consuming software to consuming the underlying service this software provides - using agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  AAS
&lt;/h2&gt;

&lt;p&gt;Being the innovative company that it is, AWS noticed the same paradigm shift that I did. AWS is telling us what it's focusing on as a company. All we have to do is listen (and be a little observant).&lt;/p&gt;

&lt;p&gt;So, let's look at the facts.&lt;/p&gt;

&lt;p&gt;AWS has been expanding its &lt;a href="https://aws.amazon.com/bedrock/agentcore/" rel="noopener noreferrer"&gt;Bedrock AgentCore&lt;/a&gt; offering, packing it with features and trying to make it into a one-stop shop for all of your agentic needs. Want to &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fdocs.aws.amazon.com%2Fbedrock-agentcore%2Flatest%2Fdevguide%2Fagents-tools-runtime.html" rel="noopener noreferrer"&gt;create an agent&lt;/a&gt;? Easy. Need to &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fdocs.aws.amazon.com%2Fbedrock-agentcore%2Flatest%2Fdevguide%2Fpolicy.html" rel="noopener noreferrer"&gt;safeguard it&lt;/a&gt;? They've got you there as well. Need some tools for your agent? No problem - a &lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.amazon.com%2Fblogs%2Fmachine-learning%2Ftransform-your-mcp-architecture-unite-mcp-servers-through-agentcore-gateway%2F" rel="noopener noreferrer"&gt;full MCP suite&lt;/a&gt; is available for you.&lt;/p&gt;

&lt;p&gt;On top of providing a toolbox for agent builders and administrators, AWS has some agents of its own. Perhaps you've heard about the &lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.amazon.com%2Fblogs%2Faws%2Faws-devops-agent-helps-you-accelerate-incident-response-and-improve-system-reliability-preview%2F" rel="noopener noreferrer"&gt;DevOps Agent&lt;/a&gt; and &lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.amazon.com%2Fsecurity-agent%2F" rel="noopener noreferrer"&gt;Security Agent&lt;/a&gt;. They are Agent-as-a-Service offerings (or service-as-a-service, if you will). Instead of you building an agent, AWS provides a working one, tailored to perform a specific task. Enterprise customers are moving away from leasing static infrastructure or basic software tools, and are instead leasing automated digital labor. Who needs Wiz when you have a security-auditing agent which, according to a customer quoted by AWS, is &lt;em&gt;"reducing the typical testing duration by more than 90%"&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo1887mteoca1gqfo9qr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo1887mteoca1gqfo9qr.png" alt="AWS Security Agent will perform penetration tests on your applications. Autonomously." width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another honorable mention goes to the &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2Fawslabs%2Fagent-plugins" rel="noopener noreferrer"&gt;agent plugins&lt;/a&gt;, "an open source repository of curated plugins that bring packaged AWS expertise directly into AI coding assistants", as &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fwww.linkedin.com%2Fin%2Ftheagenticguy%2F" rel="noopener noreferrer"&gt;Laith Al-Saadoon&lt;/a&gt;, Principal AI Engineer at AWS, put it.&lt;br&gt;
The pattern is clear. AWS wants to be your one-stop shop for any and all agentic needs, just as it has become a one-stop shop for anything cloud related. Thus, I propose a new conceptual name for the provider - AAS: Amazon Agentic Services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agentic Tides
&lt;/h2&gt;

&lt;p&gt;The AI tsunami did not spare AWS. Even though they are still actively developing "legacy" cloud-native services, the shift in focus is evident. Even a new feature like &lt;a href="https://medium.com/r/?url=http%3A%2F%2Falexy-grabov.medium.com%2Flambda-durable-functions-hands-on-technical-overview-cb26d0c6a159" rel="noopener noreferrer"&gt;Lambda Durable Functions&lt;/a&gt; is clearly a direct response to the rise of specialized durable execution engines such as Temporal, Azure Durable Functions, Cadence, and Cloudflare Workflows. These competing platforms have explicitly targeted the orchestration of agentic workflows.&lt;br&gt;
AWS wants to be a Swiss army knife when it comes to building anything agent/LLM related. Any model, any complementary service, on demand.&lt;/p&gt;

&lt;p&gt;We can assume that AWS will continue expanding its AI-related portfolio, with even more ready-to-go agentic services and more tools for builders, striving to become an AI platform by spending &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fwww.cnbc.com%2F2026%2F02%2F05%2Fwhy-amazons-ceo-is-confident-with-200-billion-spending-plan.html" rel="noopener noreferrer"&gt;200 billion dollars&lt;/a&gt; on AI infrastructure. To put that scale in perspective, Amazon will be spending approximately $548 million every single day, or roughly $23 million every hour, procuring, powering, and deploying artificial intelligence infrastructure for AWS data centers globally.&lt;/p&gt;

&lt;p&gt;Unlike Wall Street, AWS leadership views this monumental investment not as a speculative risk, but as an absolute, &lt;strong&gt;existential&lt;/strong&gt; necessity.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>agents</category>
      <category>ai</category>
      <category>bedrock</category>
    </item>
    <item>
      <title>How to Train Your ̶D̶r̶a̶g̶o̶n̶ ̶ Model</title>
      <dc:creator>Alexy Grabov</dc:creator>
      <pubDate>Mon, 05 May 2025 10:12:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-train-your-dragon-model-43lf</link>
      <guid>https://dev.to/aws-builders/how-to-train-your-dragon-model-43lf</guid>
      <description>&lt;p&gt;In one of my &lt;a href="https://medium.com/@alexy-grabov/saas-is-dead-long-live-saas-fbd6a58512e1" rel="noopener noreferrer"&gt;previous posts&lt;/a&gt;, we’ve discussed how we can utilize the AWS Bedrock suite of services to provision and run Agents that are 100% serverless.&lt;/p&gt;

&lt;p&gt;We were able to achieve it because AWS was kind enough to host some of the most popular models for us, granting access on a pay-per-usage model. This might work for rapid prototyping or PoCs, but using this payment model in production will have 3 major disadvantages.&lt;/p&gt;

&lt;p&gt;First, you will hit quota limits pretty fast. Since the model is shared, AWS limits the number of requests you can make in a given time slot. For example, the Cross-Region InvokeModel tokens-per-minute quota for Anthropic Claude 3.5 Sonnet V2 is 800,000. That might sound like a lot, but note that this limit covers a whole minute, for all requests (i.e. it is shared by all sessions you might open to that model in a region). I’ve even hit this limit during my PoC!&lt;/p&gt;
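&lt;p&gt;A quick back-of-the-envelope check makes it easy to see how fast a shared quota runs out. This is just a sketch - the 800,000 figure is the quota mentioned above, and your account's actual limits may differ:&lt;/p&gt;

```python
# Will a chat workload stay under a shared tokens-per-minute (TPM) quota?
# 800,000 TPM is the Cross-Region InvokeModel quota for Claude 3.5 Sonnet V2
# cited above; check Service Quotas for your account's real limits.

CLAUDE_35_SONNET_V2_TPM = 800_000

def exceeds_quota(sessions, requests_per_minute, avg_tokens_per_request,
                  quota_tpm=CLAUDE_35_SONNET_V2_TPM):
    """True if the combined traffic of all sessions overshoots the shared quota."""
    projected_tpm = sessions * requests_per_minute * avg_tokens_per_request
    return projected_tpm > quota_tpm
```

&lt;p&gt;Ten concurrent sessions, each sending two 50,000-token requests per minute, already overshoot the limit - which is how even a modest PoC can hit it.&lt;/p&gt;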

&lt;p&gt;Second is pricing. The pricier models, like the various Claude Sonnet versions, can run up quite a bill. You are charged per 1,000 input/output tokens (batching is a bit cheaper) and for caching. I’ve tried conversing with my Claude 3.5 Sonnet V2 based agent, and as the context grew I reached $0.1 per interaction (question/answer).&lt;/p&gt;
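&lt;p&gt;To see how a chat creeps toward that number, you can model the cost per exchange. The per-1,000-token prices below are illustrative placeholders, not actual Bedrock list prices:&lt;/p&gt;

```python
# Rough per-interaction cost model. Prices are hypothetical placeholders;
# always check the Bedrock pricing page for your model and region.

PRICE_PER_1K_INPUT_USD = 0.003
PRICE_PER_1K_OUTPUT_USD = 0.015

def interaction_cost_usd(input_tokens, output_tokens):
    """Estimated USD cost of a single question/answer exchange."""
    input_cost = (input_tokens / 1000.0) * PRICE_PER_1K_INPUT_USD
    output_cost = (output_tokens / 1000.0) * PRICE_PER_1K_OUTPUT_USD
    return input_cost + output_cost
```

&lt;p&gt;Because every turn resends the growing context, the input token count climbs with each exchange; at these placeholder prices, a 30,000-token context with a 600-token answer already costs about $0.10.&lt;/p&gt;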

&lt;p&gt;Third is speed. This is not true for all models, but it is for the large, multi-purpose ones. They can sometimes take a good 30 seconds to think, reason and generate responses. That’s understandable, as those models are very large and shared. For real production loads, that’s probably not acceptable.&lt;/p&gt;

&lt;p&gt;Now, we understand that not everyone can just host GPT-4.5 on their laptop, and paying AWS to host a dedicated huge LLM just for you is not going to be cheap. The solution? Creating your own models.&lt;/p&gt;

&lt;p&gt;It might sound like something that only AI engineers can do, but using AWS Model Distillation in the Bedrock suite — this process can be quick, cheap and very easy.&lt;/p&gt;

&lt;p&gt;In this blog we’ll learn how to distill, host, and query your very own LLM using AWS Bedrock &amp;amp; Provisioned Throughput.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Model Distillation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcb6je5lge84l41qlqn5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcb6je5lge84l41qlqn5p.png" alt="Model Distillation Flow" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-distillation.html" rel="noopener noreferrer"&gt;Model distillation&lt;/a&gt; is a machine learning technique where knowledge from a large, complex model (teacher) is transferred to a smaller, simpler model (student). The student model learns by mimicking the outputs or internal representations of the teacher, aiming to achieve similar performance with reduced computational resources. This approach enables deployment of efficient models suitable for resource-constrained environments without significantly sacrificing accuracy — for a very specific use case.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Quick side note here — I personally think this is what this whole “AI” frenzy will eventually come down to — small, fast, specialized models that run everywhere, and can collaborate with other models if they need to do something outside of their expertise.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Luckily for us, AWS Bedrock offers a very easy, fully managed process that allows you to train your own models to suit your specific business flow. The resulting model will (hopefully) mimic the larger, slower, expensive model that trained it, for the specific task it was trained on. This is also the place to note that the trainer and trainee models must be of the same model family. You can’t use AWS’ Nova Pro to train AI21 Labs’ Jamba 1.5 Mini, for example.&lt;/p&gt;

&lt;p&gt;For the purposes of this blog, we will use Nova Pro to train a Nova Micro, because I already used Claude 3.5 for my AI real-estate agents and have some conversation history we can use to train Nova Micro.&lt;/p&gt;

&lt;p&gt;The training dataset needs to be in a very specific &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/distillation-data-prep-option-1.html" rel="noopener noreferrer"&gt;JSONL format&lt;/a&gt;, which can take some time to prepare. I used another model, Gemini 2.5 Pro, ironically, to structure the prompts from my chat history in a format that Bedrock Model Distillation can read.&lt;/p&gt;
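&lt;p&gt;For illustration, here is roughly what building one line of that dataset from chat history looks like. The "bedrock-conversation-2024" schema name follows my reading of the AWS data-preparation docs - verify it against the current guide before submitting a job:&lt;/p&gt;

```python
import json

# Build one training record in the conversational JSONL layout that Bedrock
# Model Distillation expects. Schema name per the AWS docs at time of writing;
# prompt-only records (no assistant turn) are also supported.

def to_training_record(user_text, assistant_text,
                       system_prompt="You are a helpful real-estate assistant."):
    record = {
        "schemaVersion": "bedrock-conversation-2024",
        "system": [{"text": system_prompt}],
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
            {"role": "assistant", "content": [{"text": assistant_text}]},
        ],
    }
    return json.dumps(record)

# The .jsonl file gets exactly one record per line:
# with open("dataset.jsonl", "w") as f:
#     for question, answer in chat_history:
#         f.write(to_training_record(question, answer) + "\n")
```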

&lt;h2&gt;
  
  
  Other Training Methods
&lt;/h2&gt;

&lt;p&gt;A quick side note here about &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html" rel="noopener noreferrer"&gt;Fine-tuning and Continued Pre-training&lt;/a&gt;, which are two additional training methods that AWS Bedrock offers. Fine-tuning is somewhat similar to Distillation, but focuses on a specific task. You have something that the model does, and you want to make it better at doing exactly that. Model distillation is about focus — strip the dead weight from a bigger, more capable model, and only keep what you need from it, instead of training the larger model to be better at doing something very specific.&lt;/p&gt;

&lt;p&gt;Continued pre-training is a broader training option, used when you need a model to become familiar with your industry, company and professional jargon. This method is most relevant when embedding an Agent into your company and you want the base model (say, GPT-4.1) to “know” what your employees are talking about when they use industry or company specific lingo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Distilled Model
&lt;/h2&gt;

&lt;p&gt;Distilling a model using the AWS Bedrock console is very easy. Going to the “Custom Models” tab reveals all the available jobs that you can run to customize models. We are going to select “Model Distillation”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vgckgxpsf20olke97ns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vgckgxpsf20olke97ns.png" alt="Creating Distillation Job" width="800" height="1053"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, we will select the teacher model, the student model, and our dataset to be used for training. As mentioned before, the teacher and student models must be of the same model family, i.e. you must use models from the same vendor.&lt;/p&gt;
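&lt;p&gt;If you prefer repeatable setups over console clicks, the same job can be defined from code. This sketch only builds the request parameters - the field names follow my reading of the boto3 create_model_customization_job API for distillation jobs, and every ARN, model ID and S3 URI is a placeholder:&lt;/p&gt;

```python
# Request parameters for a distillation job, meant to be passed to
# boto3.client("bedrock").create_model_customization_job(**params).
# All identifiers below are placeholders; verify field names and model IDs
# against the current Bedrock documentation before running.

distillation_job_params = {
    "jobName": "nova-micro-distillation-demo",
    "customModelName": "real-estate-nova-micro",
    "roleArn": "arn:aws:iam::123456789012:role/BedrockDistillationRole",
    "baseModelIdentifier": "amazon.nova-micro-v1:0",  # the student
    "customizationType": "DISTILLATION",
    "customizationConfig": {
        "distillationConfig": {
            "teacherModelConfig": {
                "teacherModelIdentifier": "amazon.nova-pro-v1:0",  # the teacher
            }
        }
    },
    "trainingDataConfig": {"s3Uri": "s3://my-training-bucket/dataset.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://my-training-bucket/distillation-output/"},
}

# import boto3
# boto3.client("bedrock").create_model_customization_job(**distillation_job_params)
```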

&lt;p&gt;The distillation job can run for quite a while, depending on your dataset, but at the end of it you should have a distilled model in your Models tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ia8gsr0k2ngjskjgy40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ia8gsr0k2ngjskjgy40.png" alt="Model Distillation Job Run" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzakmgww97ahlqmtiyn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzakmgww97ahlqmtiyn1.png" alt="Model Distillation Job Overview" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! It’s a model!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8kyy8mbeu2cwrz72bcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8kyy8mbeu2cwrz72bcr.png" alt="Distilled Custom Model" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Your Model
&lt;/h2&gt;

&lt;p&gt;Now, this is a bit of a “gotcha”. To run a model, you need to purchase &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html" rel="noopener noreferrer"&gt;Provisioned Throughput&lt;/a&gt;. That means that AWS will provision some resources, 100% dedicated for your model — and it may cost you a &lt;a href="https://aws.amazon.com/bedrock/pricing/" rel="noopener noreferrer"&gt;very pretty penny&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie1quphpofabp7xdf75l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie1quphpofabp7xdf75l.png" alt="Provisioned Throughput" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had issues purchasing Provisioned Throughput myself, and even reached out through internal channels for AWS Builders and got some help from a Principal AI Architect on the Bedrock team (thanks guys!). Because purchasing Provisioned Throughput can cost you literally hundreds of thousands of dollars per month for some of the larger models — it’s not the go-to solution for most individuals — even most companies.&lt;/p&gt;

&lt;p&gt;A smaller Nova Micro did not break the bank, but pricing quotes are not to be taken lightly here. Only those with consistently steady, predictable, production-grade throughput should opt for purchasing reserved throughput over on-demand &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/inference-how.html" rel="noopener noreferrer"&gt;Inference&lt;/a&gt;.&lt;/p&gt;
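&lt;p&gt;A rough break-even calculation shows why. Provisioned Throughput is a fixed hourly price for dedicated model units, while on-demand billing is per token - the numbers below are illustrative placeholders, not real Bedrock quotes:&lt;/p&gt;

```python
# Break-even sketch: monthly token volume above which one dedicated
# Provisioned Throughput unit beats on-demand, per-token billing.
# All prices are illustrative placeholders.

HOURS_PER_MONTH = 730

def breakeven_tokens_per_month(provisioned_usd_per_hour, ondemand_usd_per_1k_tokens):
    """Token volume per month where the fixed hourly cost equals on-demand cost."""
    monthly_fixed_usd = provisioned_usd_per_hour * HOURS_PER_MONTH
    return (monthly_fixed_usd / ondemand_usd_per_1k_tokens) * 1000
```

&lt;p&gt;At a hypothetical $20/hour for one unit against $0.004 per 1,000 on-demand tokens, you would need to push roughly 3.65 billion tokens a month before the dedicated unit pays off - exactly why only steady, production-grade workloads should opt in.&lt;/p&gt;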

&lt;p&gt;However, we are experimenting here, and we need to check out the result of our distillation job, so… let’s do it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing Model Results
&lt;/h2&gt;

&lt;p&gt;Once we have purchased Provisioned Throughput, we can run our custom model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4grq8scg45fkzmmvnc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4grq8scg45fkzmmvnc7.png" alt="Model Comparison" width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will use the “Chat” option, and compare the AWS-hosted Nova Pro model vs. our distilled Nova Micro-based model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxxr3bl4df8ujre23un9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxxr3bl4df8ujre23un9.png" alt="Model Comparison" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, our custom Nova Micro model was roughly 1.7 times faster and provided more accurate information: our distilled model took 1527ms, while Nova Pro took 2623ms. Also, it used ILS as the main currency (but still converted to USD, for some reason) because the prices in its training data were in ILS. The only thing I’m not sure about is the fact that Nova Micro said its information is updated for the year 2023, when my listing information was very recent. Post in the comments if you know why that happened.&lt;/p&gt;

&lt;p&gt;I am not going to use this model for my real-estate agents, for now. Provisioned Throughput is still very expensive to just leave running and forget about. However, I hear that AWS is working on a new version, internally dubbed &lt;em&gt;“Provisioned Throughput 2”&lt;/em&gt;, which will be available in Q3 of 2025. I don’t know exactly what’s going to be different there, but it will “make Provisioned Throughput a viable option for more of our customers”.&lt;/p&gt;

&lt;p&gt;I’ll be sure to check that out once it’s available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;In this short technical blog, we have created a custom, one-of-a-kind language model. It’s still very pricey to run 24/7, but if you have a production-grade, predictable, long-running GenAI backed workload — this can really save you a pretty penny, while offering a lot more throughput.&lt;/p&gt;

&lt;p&gt;P.S.&lt;br&gt;
Did I mention this whole project was done &lt;a href="https://dev.to/aws-builders/who-needs-software-for-development-anyway-1hmf"&gt;using only the web browser&lt;/a&gt;?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>bedrock</category>
      <category>training</category>
    </item>
    <item>
      <title>Who Needs Software for Development Anyway?</title>
      <dc:creator>Alexy Grabov</dc:creator>
      <pubDate>Fri, 17 Jan 2025 11:40:49 +0000</pubDate>
      <link>https://dev.to/aws-builders/who-needs-software-for-development-anyway-1hmf</link>
      <guid>https://dev.to/aws-builders/who-needs-software-for-development-anyway-1hmf</guid>
      <description>&lt;p&gt;We already know that we don't need servers to run software, right? We're all about serverless and IaC here. But what about the software and supporting applications that we need to actually ship that software out of the door?&lt;/p&gt;

&lt;p&gt;With the rise of AI-driven online code generation and AWS pushing its browser-based Console experience to new levels - such as &lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.amazon.com%2Fblogs%2Faws%2Fannouncing-a-visual-update-to-the-aws-management-console-preview%2F" rel="noopener noreferrer"&gt;improving the UI&lt;/a&gt; and &lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.amazon.com%2Fblogs%2Fcompute%2Fintroducing-an-enhanced-in-console-editing-experience-for-aws-lambda%2F" rel="noopener noreferrer"&gt;incorporating a VSCode-like experience directly into the Lambda Code&lt;/a&gt; tab - has there ever been a better time to move to developing 100% using just your browser?&lt;/p&gt;

&lt;p&gt;Perhaps. But what AWS giveth, it can also taketh away - and it has. The fact that some development-friendly tools, like &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fx.com%2Fjeffbarr%2Fstatus%2F1818461689920344321" rel="noopener noreferrer"&gt;Cloud9, CodeCommit&lt;/a&gt; and &lt;a href="https://medium.com/r/?url=https%3A%2F%2Frepost.aws%2Fquestions%2FQUzfvPNaF6T3CVMRta-qcziA%2Fcode-star-vs-code-catalyst" rel="noopener noreferrer"&gt;CodeStar&lt;/a&gt;, were axed by AWS without notice can indicate that this might not be a priority for them. Yes, they are sort-of being integrated into &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fcodecatalyst.aws%2Fexplore%2Ffaq" rel="noopener noreferrer"&gt;CodeCatalyst&lt;/a&gt;, which is more managed and should be easier to use, but to me it feels more focused on DevOps (or, maybe, Platform Engineering) teams and not on individual developers. AWS even states it's an "Integrated DevOps Service".&lt;/p&gt;

&lt;p&gt;We will explore two of the available options, the native CodeCatalyst and its popular rival GitHub, and discuss their pros and cons so that you can choose the option (or a combination of options) that works best for you.&lt;/p&gt;

&lt;p&gt;Quick side note about &lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.amazon.com%2Fproton%2F" rel="noopener noreferrer"&gt;AWS Proton&lt;/a&gt;, which I feel like people sometimes compare to CodeCatalyst. While some features do seem similar, AWS Proton is really more of an infrastructure to deploy &amp;amp; run your code as a service on AWS - which might be exactly what you want - and less a suite for project development.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AWS Way
&lt;/h2&gt;

&lt;p&gt;Since it feels like AWS wants to pull you away from the Code&amp;lt;*&amp;gt; service pack, let's take a look at what they offer as an alternative. AWS CodeCatalyst is a fairly new service, &lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.amazon.com%2Fabout-aws%2Fwhats-new%2F2023%2F04%2Fgeneral-availability-amazon-codecatalyst%2F" rel="noopener noreferrer"&gt;about 2 years of age&lt;/a&gt;, and it offers a complete way to manage your codebase, build and deploy your applications (or just infrastructure) on one or more AWS accounts, and in one or more environments (i.e. dev, staging and production).&lt;/p&gt;

&lt;p&gt;We should first take note that AWS treats this as an external service. Just look at the URL - &lt;em&gt;"codecatalyst.aws"&lt;/em&gt;. This is because you connect your existing resources, such as GitHub repositories and AWS accounts, to it. It's just managing them.&lt;/p&gt;

&lt;p&gt;So the first thing you need to do is create a new project in CodeCatalyst. We are all about serverless here, so let's use the serverless application blueprint for the sake of this demo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64h21kyturnr5rygv5u0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64h21kyturnr5rygv5u0.png" alt="Creating a New Project" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Creating a New Project&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;You can configure some options for your selected project, like which language you want to develop in. For more complicated blueprints that have databases, for example, you have more robust configuration options, like schemas or even the type of provisioned database you want to use.&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g21g7pt2q0dghp3jxo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g21g7pt2q0dghp3jxo1.png" alt="Project Customization Options" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Project Customization Options&lt;/em&gt;
  &lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Like I said, this is an external service to AWS, and when you link an AWS account to be used as a resource - all you will see in your AWS account is some information about it being used in a CodeCatalyst "space". All of the orchestration is done from the CodeCatalyst console.&lt;/p&gt;

&lt;p&gt;I'm not sure why that is. Maybe AWS thinks that DevOps engineers are afraid of the Console?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohfa6j675azqdvjf5j2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohfa6j675azqdvjf5j2g.png" alt="The View From Your AWS Account" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;The View From Your AWS Account&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;After your project is created, you can access your repositories, browse them, create pull requests and review them, merge, track branches and even - God forbid - clone them to your local code editor. All the usual things you might expect from a Git tracker. You can also set up remote development environments if you're into that kind of thing. Strangely, Cloud9 is still one of the options here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyxswfapracjlo7rpom9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyxswfapracjlo7rpom9.png" alt="Your Git Project" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Your Git Project&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlm6nwd7kbw8s94aspzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlm6nwd7kbw8s94aspzm.png" alt="Code Editor" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Code Editor&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;Now, once your pull request has been approved and merged to your main branch, it's time to build, test and deploy!&lt;br&gt;
AWS CodeCatalyst supports drag-and-drop, editable workflow steps that you can arrange to look much like the ones you are probably familiar with from the Blue Ocean view in Jenkins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01tm1ftj6cf3vrnk5vit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01tm1ftj6cf3vrnk5vit.png" alt="Workflows" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Workflows&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;It allows you to build, test and deploy (in stages) to your environments, test between deployments and even automate actions to be carried out in case of failure at any of the stages. The AWS integration here is remarkable and the level of fine-grained control offered is amazing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F589b6es2ppbdfvtw44b4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F589b6es2ppbdfvtw44b4.png" alt="Editing Workflows" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Editing Workflows&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;For example, you can track per-commit changes to your deployments and roll back or sync specific deployment environments to other (successful) deployments. I mean, for Ops personnel - this is like learning magic spells.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic6u7cp3i19biu63fkh3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic6u7cp3i19biu63fkh3.png" alt="Change Tracking" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Change Tracking&lt;/em&gt;
  &lt;/p&gt;

&lt;h2&gt;
  
  
  The GitHub Way
&lt;/h2&gt;

&lt;p&gt;Now let's take a look at what your life might look like if you choose &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2F" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; - but would still like to deploy your infrastructure on AWS. When working in GitHub you can choose any number of targets for your deployment, but for an apples-to-apples comparison, we will choose AWS (and also because I am an AWS Serverless Builder).&lt;/p&gt;

&lt;p&gt;I would wager that the vast majority of you know what a GitHub project looks like. This familiar interface packs all of the actions that you might want from a Git provider, but without any of the AWS-specific features of CodeCatalyst, like per-role access management or pull request approval. That might be an upside or a downside for you, depending on how deeply you are integrated into AWS and how comfortable you are with managing your collaborators' roles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq428sj6txquufdq3t8du.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq428sj6txquufdq3t8du.png" alt="My GitHub Project" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;My GitHub Project&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;The GitHub code editor (immediately accessible by changing the &lt;em&gt;".com"&lt;/em&gt; to &lt;em&gt;".dev"&lt;/em&gt; in your browser URL, in case you didn't know) is miles, leagues ahead of what AWS has to offer. It is a full, working version of &lt;a href="https://vscode.dev/" rel="noopener noreferrer"&gt;&lt;em&gt;vscode.dev&lt;/em&gt;&lt;/a&gt;, which is pretty much the same as &lt;a href="https://github.dev/github/dev" rel="noopener noreferrer"&gt;&lt;em&gt;github.dev&lt;/em&gt;&lt;/a&gt; these days, I hear. It lets you install supported extensions, do some code completion, run your tests - and it even has a shell! You can't install the Copilot extension for that sweet AI-assisted programming, though.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5ytg5srk40xc9g3stuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5ytg5srk40xc9g3stuq.png" alt="GitHub Code Editor" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;GitHub Code Editor&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;Overall, this is the best experience for a web-based IDE that I've seen, and the closest to an actual IDE running locally on your machine.&lt;/p&gt;

&lt;p&gt;There is also an option for remote and collaborative development environments, just like in AWS CodeCatalyst, called &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2Ffeatures%2Fcodespaces" rel="noopener noreferrer"&gt;Codespaces&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As for the Ops side of things, GitHub has Actions and Security tabs to help with that. The Security tab includes built-in scanners, which are ready-to-use tools you can run to check your repository for potential risks. It's a great feature because it simplifies security and puts it front and center. Be honest, though - how often do you think about security when you're in the middle of developing?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkayr6rf867k6mb4op6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkayr6rf867k6mb4op6j.png" alt="GitHub Security" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;GitHub Security&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;As for the Actions, these are runnable, YAML-based workflows that you can trigger on events. There isn't a fancy UI for creating or editing them, but GitHub does provide a simple text editor with syntax highlighting and some starter templates. The runtime interface is somewhat similar to what you get in CodeCatalyst.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jqtf9chg7p8h1gad0k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jqtf9chg7p8h1gad0k1.png" alt="Workflow YAML" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;Workflow YAML&lt;/em&gt;
  &lt;/p&gt;

&lt;p&gt;You can build, test, and deploy with GitHub Actions as well, but here you're missing the seamless AWS integration. You'll need to manage your deployments, stages, and AWS secrets for your accounts manually (there's a "Secrets" tab for that in GitHub). You'll also have to handle any failed deployments, either by coding in the logic or addressing them manually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsii1d3kl0nhb60eqyhhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsii1d3kl0nhb60eqyhhz.png" alt="GitHub Pipeline" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;&lt;em&gt;GitHub Pipeline&lt;/em&gt;
  &lt;/p&gt;
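&lt;p&gt;To make the Actions side concrete, here is a rough sketch of a deploy workflow of the kind described above. The workflow name, branch, region and deploy script are placeholders you would adapt to your own project; &lt;code&gt;aws-actions/configure-aws-credentials&lt;/code&gt; is the official action for wiring in the secrets mentioned earlier:&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml - an illustrative sketch, not a drop-in file
name: deploy-to-aws
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # replace with your own build/deploy commands (e.g. cdk deploy, sam deploy)
      - run: ./deploy.sh
```

&lt;p&gt;Note how the failure handling, staging and rollback logic that CodeCatalyst gives you out of the box would all have to live inside that final script.&lt;/p&gt;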

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you are building your project on AWS, and do not have a full-fledged DevOps group, you can't go wrong with using CodeCatalyst for your deployments. The integration and the level of abstraction they achieved is something special, and other cloud providers should really take notes here.&lt;/p&gt;

&lt;p&gt;With that being said, it just can't compare to GitHub when it comes to coding. GitHub's web-based VSCode offering is so much more advanced and can even accommodate small to medium-sized teams, if you ask me.&lt;/p&gt;

&lt;p&gt;Luckily, you can connect your GitHub project as a source repository to AWS CodeCatalyst and enjoy the best of both worlds. You can even use some of the built-in security scanners in GitHub and some of the pre-made Actions, while CodeCatalyst will manage your AWS account resources, environments and deployments. It's a setup that, in my opinion, is hard to beat, and it should serve you well whether you're building a small project or launching a startup on AWS.&lt;/p&gt;




&lt;p&gt;As always, thanks and love to my beautiful wife and talented DevOps Architect &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fwww.linkedin.com%2Fin%2Fyafit-tupman%2F" rel="noopener noreferrer"&gt;Yafit Tupman&lt;/a&gt;, who helps me navigate the strange and scary world of modern build platforms.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>devtools</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Resource Names Validation and Generation</title>
      <dc:creator>Alexy Grabov</dc:creator>
      <pubDate>Fri, 28 Jun 2024 15:04:11 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-resource-names-validation-and-generation-2ai3</link>
      <guid>https://dev.to/aws-builders/aws-resource-names-validation-and-generation-2ai3</guid>
<description>&lt;p&gt;Have you ever wondered how you can validate AWS resource definitions (names, ARNs, patterns) at runtime? Well, if you have, you probably know that you can’t.&lt;/p&gt;

&lt;p&gt;In this blogpost we’ll cover the current solutions and their limitations, and introduce a new open-source package that actually can perform those validations for you automatically. It can also generate said patterns for your testing and mocking needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflows Without Validation or Generation
&lt;/h2&gt;

&lt;p&gt;Let me tell you a short story and see if it sounds familiar to you.&lt;/p&gt;

&lt;p&gt;You make changes to your CDK code, you deploy. A CloudFormation template is synthesized, which takes a good minute. Then, AWS starts creating the resources you requested. It runs for a few minutes — and fails. Your Lambda function name is too long.&lt;/p&gt;

&lt;p&gt;This, of course, is not the only workflow that might be affected by validations that are performed too late. Imagine your code receives a string during a business flow, either as user input or from another application it interacts with. It’s supposed to represent the ARN of a resource — but it doesn’t. Your code tries to “access” this resource using a boto3 client — and fails. Now you need to debug. Is this resource really missing? Was it deleted? Did AWS fail to find it? Did the boto3 client expect to receive it in a different format?&lt;/p&gt;

&lt;p&gt;Let's also consider testing. Often when you need to test your logic flows, you might need to somehow get your hands on “real” AWS resource ARNs or paths. You might need them to check your internal validations and error flows, or just use them as return values to your mocks. What most developers do in those cases is just go to their development AWS environment, find the suitable resource — and copy names or parameters to their test code.&lt;/p&gt;
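&lt;p&gt;The result is usually a test fixture full of strings copied verbatim from a live account. A toy sketch of what that tends to look like (the ARN and the parser under test are invented for illustration):&lt;/p&gt;

```python
# A typical hand-rolled fixture: this ARN was copied once from a dev account
# and will silently rot if the resource is ever renamed or deleted.
COPIED_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-dev-function"

def extract_function_name(arn: str) -> str:
    # toy logic under test: the function name is the last ':'-separated segment
    return arn.split(":")[-1]

def test_extract_function_name():
    assert extract_function_name(COPIED_LAMBDA_ARN) == "my-dev-function"

test_extract_function_name()
```

&lt;p&gt;It works, but the test is now silently coupled to the state of a real environment.&lt;/p&gt;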

&lt;h2&gt;
  
  
  Resource Schemas and Constraints Sources
&lt;/h2&gt;

&lt;p&gt;AWS is usually really good with documentation, and this case is no different. You can actually find &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-type-schemas.html"&gt;CloudFormation schemas&lt;/a&gt; publicly published, but I really doubt anyone actually reads those.&lt;/p&gt;

&lt;p&gt;A more convenient way would be to search for any constraints or validation patterns in docs.aws.amazon.com, and indeed we can have a look at this example for a &lt;a href="https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html#API_CreateFunction_RequestBody"&gt;Lambda Function create body&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94mbahag46jk4rqe9ymu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94mbahag46jk4rqe9ymu.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s not bad, as far as documentation goes. But who has the time or patience to read it, or search for it every time it’s required?&lt;/p&gt;
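&lt;p&gt;You could, of course, encode such documented constraints by hand. A minimal sketch for a bare Lambda function name, assuming the documented limits of 1–64 characters drawn from letters, digits, hyphens and underscores (the full documented pattern also allows ARN and partial-ARN forms):&lt;/p&gt;

```python
import re

# Simplified hand-rolled check for a *bare* Lambda function name, per the
# documented constraints: 1-64 characters of letters, digits, '-' and '_'.
FUNCTION_NAME_RE = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

def is_valid_function_name(name: str) -> bool:
    return bool(FUNCTION_NAME_RE.match(name))

print(is_valid_function_name("my-handler"))  # a valid bare name
print(is_valid_function_name("x" * 65))      # too long
```

&lt;p&gt;Now multiply this by every resource type and every constrained field you touch, and it becomes clear why nobody maintains these checks by hand.&lt;/p&gt;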

&lt;h2&gt;
  
  
  Current Solutions
&lt;/h2&gt;

&lt;p&gt;You probably know that in software engineering you’re rarely the first ever to hit a certain issue. Someone has probably already dealt with this exact same thing, there are already 10 threads on StackOverflow and Reddit discussing it, other engineers have suggested solutions, etc. Just pick a solution you like and copy-paste.&lt;/p&gt;

&lt;p&gt;In this case, unfortunately, I was unable to find a suitable solution to some of the problems I was facing.&lt;/p&gt;

&lt;p&gt;First, AWS itself has recently &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/03/aws-cloudformation-new-validation-checks-stack-operations/"&gt;acknowledged my pain&lt;/a&gt; and has baked some of the validation functionality into AWS CloudFormation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS CloudFormation improves its deployment experience to validate customer stack operation upfront for invalid resource property errors.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "StackId": "arn:aws:cloudformation:us-west-2:123456789012:stack/MyStack/50d6e750-5a71-11e6-afc7-50d5ca9f1234",
  "EventId": "6ba1a560-5a71-11e6-bf4a-500c28168c4b",
  "StackName": "MyStack",
  "LogicalResourceId": "MyS3Bucket",
  "ResourceType": "AWS::S3::Bucket",
  "Timestamp": "2024-03-14T19:57:18.129Z",
  "ResourceStatus": "CREATE_FAILED",
  "ResourceStatusReason": "Property validation failure: [unexpected property PropertyName1]",
  "ResourceProperties": {
    "BucketName": "my-bucket",
    "PropertyName1": "invalid-value",
    "AccessControl": "PublicRead"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What this means is that your deployments will fail much sooner, when CDK generates the CloudFormation template, instead of during the later deployment stage, without creating any actual resources on AWS. Great news!&lt;/p&gt;

&lt;p&gt;If you like your validation a little bit more hard-core, you might want to look at &lt;a href="https://github.com/aws-cloudformation/cfn-lint?tab=readme-ov-file"&gt;AWS CloudFormation Linter&lt;/a&gt;. Remember all those CloudFormation schemas nobody reads? Well, those folks have read them (or, at least, parsed them) and have created a linter based on them.&lt;/p&gt;

&lt;p&gt;It can run as a standalone linter before CloudFormation tries to process the template synthesized from your CDK code, and it also supports custom rules. This allows you to add custom validations for conventions that might be specific to your organization.&lt;/p&gt;

&lt;p&gt;Very cool, but it still does not solve our runtime and testing challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Runtime Resource Property Validator and Generator
&lt;/h2&gt;

&lt;p&gt;To solve our runtime validation and testing problems, we need a solution that runs on demand, not in the linting or deployment stages.&lt;/p&gt;

&lt;p&gt;Exactly for those purposes, I have developed the aws_resource_validator package.&lt;/p&gt;

&lt;p&gt;It contains auto-generated classes from &lt;a href="https://github.com/boto/botocore/tree/develop/botocore/data"&gt;this botocore dataclasses repository&lt;/a&gt; (special shout-out to fellow AWS Community Builder &lt;a href="https://www.linkedin.com/in/michael-kirchner-at/"&gt;Michael Kirchner&lt;/a&gt; who made me aware of it). Each respective class represents an AWS service. Within each service there are resources which can be accessed using CamelCase and snake_case names. Each resource has some informative fields about its own limitations and expected pattern. It can also validate itself — and generate a string conforming to all validations and patterns for your testing needs.&lt;/p&gt;

&lt;p&gt;So a typical resource looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lambda
  - name (or Name)
    - .validate()
    - .generate()
    - pattern
    - min_length
    - max_length
  - arn (or Arn)
    - .validate()
    - .generate()
    - pattern
    - min_length
    - max_length
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a usage example might look like this: &lt;em&gt;(note the mixed usage of camel &amp;amp; snake cases)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_resource_validator.class_definitions import Acm, class_registry

# Use a type hint so that you can use `class_registry` with full class definitions
acm: Acm = class_registry.Acm

print(acm.Arn.pattern)
print(acm.Arn.type)
print(acm.arn.validate("example-arn"))
print(acm.Arn.generate())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s that simple!&lt;/p&gt;

&lt;p&gt;All you have to do is install the package from PyPI and you’re good to go:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install aws_resource_validator&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;By the way, those auto-generated classes I’ve mentioned? They are pretty cool. They are deducted from the JSON files in the botocore repository, their members are generated and then written, as Python-readable code, into a file. It’s actually Python code that writes Python code. No AI, but still makes you wonder when you’re going to be replaced, right?&lt;/p&gt;
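&lt;p&gt;If you haven't seen this technique before, here is a toy sketch of the idea (the field names and class are invented for illustration): a dict shaped like a botocore JSON entry is rendered into Python source, which is then executed to produce a real class.&lt;/p&gt;

```python
# Toy code generation: render a class definition as a string from data, then
# exec() it - the same basic idea as generating files from the botocore JSON.
spec = {"class_name": "LambdaName", "min_length": 1, "max_length": 64}

source = (
    f"class {spec['class_name']}:\n"
    f"    min_length = {spec['min_length']}\n"
    f"    max_length = {spec['max_length']}\n"
)

namespace = {}
exec(source, namespace)  # the real package writes the source to a file instead
LambdaName = namespace["LambdaName"]
print(LambdaName.max_length)  # 64
```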

&lt;p&gt;Anyway, if you want to check out the logic behind this package, you can check out my &lt;a href="https://github.com/CoreOxide/aws_resource_validator"&gt;GitHub repository&lt;/a&gt; — it’s open source. Special thanks to my beautiful wife and talented Principal DevOps engineer &lt;a href="https://www.linkedin.com/in/yafit-tupman-25014262/"&gt;Yafit Tupman&lt;/a&gt; who added GitHub Action based pipelines and overall repository productization, so that you can be sure each release is tested and uploaded to PyPI automatically.&lt;/p&gt;

&lt;p&gt;Feel free to open bugs, report issues or contribute code to this project if you find it useful or interesting.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Common Use Case Example
&lt;/h2&gt;

&lt;p&gt;Let’s take a quick look at a concrete usage example. We will consider a typical Lambda function where a 3rd party service passes items for it to process. For the purposes of the example, we will assume those items represent ARNs of IoT Job templates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from http import HTTPStatus
from typing import Any, Dict, List

from aws_lambda_context import LambdaContext
from aws_resource_validator.class_definitions import JobTemplateArn, class_registry

def handler(event: Dict[str, Any], context: LambdaContext) -&amp;gt; Dict[str, Any]:
    arn_list: List[str] = event['body']  # we would parse and validate the event first, of course
    job_template_arn: JobTemplateArn = class_registry.JobTemplateArn
    if not all(job_template_arn.validate(template_arn) for template_arn in arn_list):
        return {'statusCode': HTTPStatus.BAD_REQUEST, 'headers': {'Content-Type': 'application/json'}, 'body': 'Your ARNs are invalid'}
    # some boto3 calls here
    return {'statusCode': HTTPStatus.OK, 'headers': {'Content-Type': 'application/json'}, 'body': None}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, not only did we avoid any boto3 calls (where we would otherwise have discovered the error) — we can also report back to the calling service to inform it of an issue in its code.&lt;/p&gt;

&lt;p&gt;Now, let’s look at a usage example inside a test. We will test the same handler code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from typing import Any, Dict, List

from aws_resource_validator.class_definitions import JobTemplateArn, class_registry

def test_handler():
    job_template_arn: JobTemplateArn = class_registry.JobTemplateArn
    arns_list: List[str] = [job_template_arn.generate() for _ in range(10)]
    event = {'body': arns_list}
    assert handler(event, None)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With just 2 lines of code we have generated a list of 10 valid ARNs that represent IoT job templates to be used in our test. No mocks necessary!&lt;/p&gt;

&lt;h2&gt;
  
  
  Recap
&lt;/h2&gt;

&lt;p&gt;We have discussed validators for AWS resource naming conventions and why they are important. We have also covered the different times at which said validators can run — at code runtime, during the linting stage, or during CloudFormation template generation.&lt;/p&gt;

&lt;p&gt;We also briefly mentioned the usefulness of generated resource names during testing and how the aws_resource_validator package can help you with that as well.&lt;/p&gt;

&lt;p&gt;Thanks for reading and I hope you learned something new :)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>validation</category>
      <category>python</category>
      <category>boto</category>
    </item>
  </channel>
</rss>
