<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oleg Pustovit</title>
    <description>The latest articles on DEV Community by Oleg Pustovit (@opustovit).</description>
    <link>https://dev.to/opustovit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3024003%2F4544a8c0-636f-4bef-b3a9-8349d5272794.jpg</url>
      <title>DEV Community: Oleg Pustovit</title>
      <link>https://dev.to/opustovit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/opustovit"/>
    <language>en</language>
    <item>
      <title>AWS Serverless: Still the Boring Correct Choice</title>
      <dc:creator>Oleg Pustovit</dc:creator>
      <pubDate>Fri, 16 Jan 2026 12:26:40 +0000</pubDate>
      <link>https://dev.to/opustovit/aws-serverless-still-the-boring-correct-choice-m11</link>
      <guid>https://dev.to/opustovit/aws-serverless-still-the-boring-correct-choice-m11</guid>
      <description>&lt;p&gt;In the last 6 months, I've helped &lt;strong&gt;3 AI startups migrate from Vercel or Cloudflare to AWS Lambda&lt;/strong&gt;. The pattern is the same: they start on a platform with great DX. Then the wall shows up: background jobs, retries, queues, cron, and eventually a "this endpoint needs 2-8 GB RAM for 4-10 minutes" workload — and they land on AWS.&lt;/p&gt;

&lt;p&gt;To be fair: Vercel and Cloudflare captured developer attention for good reasons. Vercel ships Next.js fast — previews, simple deploys, great DX. Workers are great for edge use-cases: low latency, fast cold starts, global distribution. Both solve real problems.&lt;/p&gt;

&lt;p&gt;Where things get harder is when the app grows a backend shape: queues, retries, scheduled jobs, heavier compute, private networking. Vercel still relies on third-party partners for queuing (like Upstash or Inngest), so adoption means piecing together vendors. Workers are fantastic for edge latency, but you feel the constraints fast (memory limits, no native binary support, file system restrictions), whereas Lambda is built with "bigger" invocations in mind (more memory, longer max runtime) and sits next to SQS, DynamoDB, and EventBridge on the same network.&lt;/p&gt;

&lt;p&gt;For request-based apps calling LLMs, AWS Lambda tends to cover what startups actually need: compute, queues, persistence, scheduling in one network. &lt;strong&gt;Pay-per-use, no infra to manage, often near $0&lt;/strong&gt; for small workloads. The tooling improved too — &lt;a href="https://sst.dev/docs/" rel="noopener noreferrer"&gt;SST&lt;/a&gt; made deployment much easier. But the hype moved on before anyone noticed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hype Died, but did Serverless?
&lt;/h2&gt;

&lt;p&gt;The biggest criticism of serverless, especially on AWS, is that setting up the infrastructure is complicated, from defining IAM policies to actually creating all of the AWS resources and wiring them together. It has a learning curve, and tools like &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html" rel="noopener noreferrer"&gt;SAM&lt;/a&gt; simplify it, but they are often brittle or buggy. SAM was a great start — it built the hype and community around serverless — but it wasn't as straightforward as modern development tools. At orgs where I introduced it to engineers used to Docker containers, Docker remained the faster workflow compared to CloudFormation wrappers. &lt;strong&gt;SST&lt;/strong&gt; fixed this, but by then developers had already moved to Vercel or Cloudflare.&lt;/p&gt;

&lt;p&gt;Another big problem is the &lt;strong&gt;cold start&lt;/strong&gt;: the time required to spin up the compute resource, load the runtime, and then execute the code. Serverless shouldn't be viewed as a short-running server process, but as a different computing paradigm whose underlying constraints have to be factored into your design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qvpotq3f99d85rxrnmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qvpotq3f99d85rxrnmj.png" alt="Cold start timeline" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/blog/aws-lambda-migration" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt;, a CI/CD platform, went the other direction in 2024: ECS to Lambda for async jobs. Spiky traffic made always-on containers expensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  When NOT to use Serverless
&lt;/h2&gt;

&lt;p&gt;Of course, serverless is not universal. Know when to reach for something else.&lt;/p&gt;

&lt;p&gt;In 2025, &lt;a href="https://www.infoq.com/news/2025/12/unkey-serverless/" rel="noopener noreferrer"&gt;Unkey moved away from serverless&lt;/a&gt; after performance struggles. Their pattern: high-volume workloads with tight coupling between components. As traffic grew, pay-per-invocation stopped making economic sense. This mirrors the &lt;a href="https://www.infoq.com/news/2023/05/prime-ec2-ecs-saves-costs/" rel="noopener noreferrer"&gt;Prime Video case from 2023&lt;/a&gt; — both had architectures where serverless overhead exceeded the benefits. The lesson isn't that serverless failed; it's that &lt;strong&gt;serverless has a sweet spot&lt;/strong&gt;, and high-throughput tightly-coupled systems aren't in it.&lt;/p&gt;

&lt;p&gt;When to reach for something else:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Long-running processes&lt;/strong&gt;. Applications like AI agent orchestrators would not work on Lambda due to the hard 15-minute timeout. In this case, switch to &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html" rel="noopener noreferrer"&gt;Fargate&lt;/a&gt; or a regular EC2 instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictable high traffic or constant load&lt;/strong&gt;. You would gain more benefit from using containers in this case. Serverless is way better for bursty or unpredictable traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU workloads&lt;/strong&gt;. Lambda does not support GPUs: for machine learning inference that requires CUDA, you have to use either &lt;strong&gt;EC2&lt;/strong&gt; or &lt;strong&gt;SageMaker&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-throughput media pipelines&lt;/strong&gt;. Orchestrating many state transitions per second through &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html" rel="noopener noreferrer"&gt;Step Functions&lt;/a&gt; gets expensive fast. The Prime Video case is typical — they triggered a transition for &lt;strong&gt;every single video chunk&lt;/strong&gt;, hitting massive limits and costs. Use containers for stream processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your team is already efficient elsewhere&lt;/strong&gt;. If you have existing infrastructure — Kubernetes, for example — and the team knows it well, don't force serverless. It takes time for an org to adopt an unfamiliar paradigm. For greenfield projects and validation, serverless is great. For teams already shipping on K8s, keep shipping.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legacy dependencies that need a full OS&lt;/strong&gt;. Some applications depend on libraries that are hard to package for Lambda. At times you just need a VM to run the thing. Serverless is problematic when you're fighting runtime constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unsupported programming languages&lt;/strong&gt;. Don't experiment with languages Lambda doesn't officially support. Custom runtimes add overhead that's rarely worth it. Stick to Node.js, Python, Go, Java, .NET — the supported options.&lt;/li&gt;
&lt;/ol&gt;
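&lt;p&gt;The Step Functions math in point 4 is worth sketching. A rough cost model — the per-transition price here is an assumption (roughly $25 per million state transitions for Standard workflows); check current AWS pricing before relying on the numbers:&lt;/p&gt;

```typescript
// Rough cost model for AWS Step Functions Standard workflows.
// PRICE_PER_TRANSITION is an assumption (about $25 per million state
// transitions); verify against current AWS pricing.
const PRICE_PER_TRANSITION = 25 / 1_000_000; // USD per transition

function monthlyTransitionCost(transitionsPerSecond: number): number {
  const secondsPerMonth = 30 * 24 * 60 * 60; // ~2.59M seconds
  return transitionsPerSecond * secondsPerMonth * PRICE_PER_TRANSITION;
}

// One transition per video chunk at 1,000 chunks/second:
const monthly = monthlyTransitionCost(1000); // ≈ $64,800/month, just for orchestration
```

&lt;p&gt;At that scale the orchestration alone dwarfs the compute bill, which is why chunk-per-transition designs get reworked into containers.&lt;/p&gt;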

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlgao3mev6m0e32ph207.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlgao3mev6m0e32ph207.png" alt="When to Use Serverless" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For request-based apps with variable traffic, especially AI-integrated APIs, serverless fits well.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;If you already have AWS basics, building serverless there makes sense. Here's the stack and how to use it effectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsn4topww4k6j5v8doqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsn4topww4k6j5v8doqs.png" alt="The Serverless Stack" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Presentation Layer
&lt;/h3&gt;

&lt;p&gt;For the presentation layer, use a CDN and object storage for static assets. That's typically &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html" rel="noopener noreferrer"&gt;CloudFront&lt;/a&gt; + &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html" rel="noopener noreferrer"&gt;S3&lt;/a&gt;&lt;/strong&gt;, as you get the benefits of edge caching on AWS infrastructure. S3 is useful because you can simply build your HTML and CSS artifacts and upload them to object storage. This decouples your frontend and web assets from your server, but brings an architectural limitation: you can only do static exports. Fine for blogs, but you lose the Server-Side Rendering (SSR) capabilities needed for dynamic SEO or personalized content.&lt;/p&gt;

&lt;p&gt;When you have the CDN in place, it's worth thinking about how you would coordinate request execution. You can use an &lt;strong&gt;Application Load Balancer&lt;/strong&gt; to forward requests to Lambda, but I'd recommend &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html" rel="noopener noreferrer"&gt;API Gateway&lt;/a&gt;&lt;/strong&gt; for most cases. It handles request routing, rate limiting, and authorization out of the box. Getting IAM permissions right is critical, but once configured, your requests flow directly to Lambda.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compute
&lt;/h3&gt;

&lt;p&gt;The next component is your compute layer — where business logic lives. For serverless execution, use &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt;&lt;/strong&gt;. It runs your code without provisioning servers, with usage-based pricing: you pay per 100ms of execution. Lambda is designed for event-driven workloads and short-lived compute (up to 15 minutes); anything longer, reach for Fargate. For prototypes, web apps, and AI-integrated APIs, Lambda is a natural starting point — call LLMs, build UI wrappers, handle business logic, all without managing servers.&lt;/p&gt;

&lt;p&gt;When deploying Lambda, you have two options: the native runtime or a custom Docker image. Native is recommended for faster cold starts. Cold starts are real: treat Lambda as an event-driven runtime, not a "tiny server". Keep the handler small with simple initialization, and be intentional about concurrency and warmup when latency becomes a problem.&lt;/p&gt;
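&lt;p&gt;The standard cold-start mitigation is to pay for expensive setup once at module scope, so warm invocations reuse it. A minimal sketch — &lt;code&gt;loadConfig&lt;/code&gt; and the event shape are illustrative stand-ins, not a real AWS API:&lt;/p&gt;

```typescript
// Module scope runs once per cold start; warm invocations reuse it.
// loadConfig() stands in for expensive setup (SDK clients, config fetches).
let initCount = 0;

function loadConfig(): { greeting: string } {
  initCount += 1; // in a real function: open connections, parse config, etc.
  return { greeting: "hello" };
}

const config = loadConfig(); // paid once per cold start, not per request

export const handler = async (event: { name?: string }) => {
  // Per-invocation work stays small; heavy setup lives above.
  return {
    statusCode: 200,
    body: `${config.greeting}, ${event.name ?? "world"}`,
  };
};
```

&lt;p&gt;Two warm invocations share the same &lt;code&gt;config&lt;/code&gt;; only a new execution environment re-runs the module scope.&lt;/p&gt;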

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenhtfyaa9ks0cmu2h4sb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenhtfyaa9ks0cmu2h4sb.png" alt="Lambda Deployment Options" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For complex configurations, use &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html" rel="noopener noreferrer"&gt;Lambda Layers&lt;/a&gt; to package dependencies separately from your function code. Layers let you include binaries, libraries, or custom runtimes while keeping cold starts fast. Use Docker as a last resort, when you need full control over the OS environment or dependencies that won't fit in layers. The tradeoff: slower cold starts and CI/CD complexity. On GitHub Actions, you need a Docker build pipeline instead of just dropping code to S3 and calling the update API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background processing
&lt;/h3&gt;

&lt;p&gt;For async work, use &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html" rel="noopener noreferrer"&gt;SQS&lt;/a&gt;&lt;/strong&gt;. Lambda's event source integration handles batching, scaling, and polling for you.&lt;/p&gt;

&lt;p&gt;Years back, I worked with an enterprise architect on a startup backend. He proposed SQS for our messaging layer. At the time, this seemed odd — SQS wasn't easy to run locally. You couldn't reproduce the infrastructure the way you could with RabbitMQ. But what I gained from that experience was understanding that sometimes you should explore managed services and accept the tradeoff: you lose local reproducibility, but you stop dealing with memory and compute constraints entirely.&lt;/p&gt;

&lt;p&gt;To this day, if the messaging architecture is simple, I go with SQS and Lambda combined with event source mapping. You don't have to write the consumer yourself — the integration handles all of that. And that consumer code is often problematic to test anyway.&lt;/p&gt;
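&lt;p&gt;The only code left to write is the batch handler itself. A sketch of the shape Lambda's SQS integration expects, with partial-batch failure reporting so only failed messages get retried — the event types follow the documented SQS event format, and &lt;code&gt;process&lt;/code&gt; is a placeholder for your logic:&lt;/p&gt;

```typescript
// SQS batch handler sketch. Lambda polls the queue and invokes this with a
// batch of records; returning batchItemFailures tells the event source
// mapping to retry only the listed messages (requires the
// ReportBatchItemFailures setting on the mapping).
type SQSRecord = { messageId: string; body: string };
type SQSEvent = { Records: SQSRecord[] };

async function process(body: string): Promise<void> {
  const payload = JSON.parse(body); // throws on malformed messages
  if (!payload.ok) throw new Error("downstream rejected payload");
}

export const handler = async (event: SQSEvent) => {
  const batchItemFailures: { itemIdentifier: string }[] = [];
  for (const record of event.Records) {
    try {
      await process(record.body);
    } catch {
      // Failed messages return to the queue; the rest are deleted.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
};
```

&lt;p&gt;Everything else — polling, batching, scaling the number of concurrent consumers — is the event source mapping's job.&lt;/p&gt;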

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiql0zadb879acb44r3qz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiql0zadb879acb44r3qz.png" alt="SQS + Lambda Event Flow" width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a clickstream startup, we faced this exact pattern: process event data from high-traffic e-commerce sites, unknown traffic patterns, weeks to launch. Lambda workers pulled from SQS with event source batching, processing multiple events per invocation. CDK handled deployment. The system scaled on its own.&lt;/p&gt;

&lt;p&gt;An EKS equivalent would have meant provisioning a cluster, configuring autoscaling, setting up observability, managing node health. We skipped all of that and shipped.&lt;/p&gt;

&lt;h3&gt;
  
  
  Persistence
&lt;/h3&gt;

&lt;p&gt;For persistence, use &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html" rel="noopener noreferrer"&gt;DynamoDB&lt;/a&gt;&lt;/strong&gt;, but don't treat it like a relational database. Its power comes from partition keys, sort keys, and secondary indexes, so invest time understanding the data model. Think of it as an advanced key-value store with sorting capability. Optimize your queries when you hit scale; for prototypes, just build. For deeper learning, Alex DeBrie's &lt;a href="https://www.dynamodbguide.com/" rel="noopener noreferrer"&gt;DynamoDB Guide&lt;/a&gt; covers single-table design and access patterns.&lt;/p&gt;
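&lt;p&gt;A taste of what "key-value store with sorting" means in practice: composite keys encode your access patterns up front. A hedged sketch of single-table key builders — the entity names are illustrative, not a prescribed schema:&lt;/p&gt;

```typescript
// Single-table design sketch: composite keys encode the access patterns.
// The partition key (pk) groups related items; the sort key (sk) orders
// them, so one Query can fetch "an org and everything inside it".
const orgKey = (orgId: string) => ({
  pk: `ORG#${orgId}`,
  sk: `ORG#${orgId}`, // the org item itself, first in its partition
});

const userKey = (orgId: string, userId: string) => ({
  pk: `ORG#${orgId}`,   // same partition as the org...
  sk: `USER#${userId}`, // ...so Query(pk, sk begins_with "USER#") lists users
});
```

&lt;p&gt;"All users in org 42" becomes a single Query on &lt;code&gt;pk = "ORG#42"&lt;/code&gt; with &lt;code&gt;begins_with(sk, "USER#")&lt;/code&gt; — no joins, no scans.&lt;/p&gt;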

&lt;p&gt;At a B2B marketing startup I worked on, the main data tier was MongoDB, collecting events from large e-commerce stores. But the application also had domain tables storing data for the dashboard: organizations, users, authentication, settings. Originally they lived on RDS, which was overkill. At the start there were 10-15 enterprise clients, and paying for a dedicated RDS instance for that load made no sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RDS Cost:&lt;/strong&gt; &lt;code&gt;~$35.00&lt;/code&gt; / month for db.t3.small, &lt;strong&gt;DynamoDB cost after migration:&lt;/strong&gt; &lt;code&gt;~$0.00 - $2.00&lt;/code&gt; / month (mostly storage costs) for the same workload.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls9jhrckyje33edx6ha7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls9jhrckyje33edx6ha7.png" alt="Cost Comparison" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On launch we stored that data in DynamoDB; organizations, users, auth, and settings each had their own table. Later, DynamoDB also took on the more data-intensive parts: session tracking (using TTL attributes) and debugging logs. The pattern worked for low-traffic tables because of zero maintenance and pay-per-request pricing.&lt;/p&gt;
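&lt;p&gt;DynamoDB's TTL feature just reads a numeric attribute holding the expiry as a Unix epoch in seconds; expired items are then deleted in the background (typically within about 48 hours), at no extra cost. A sketch of writing a session item — the attribute name &lt;code&gt;expiresAt&lt;/code&gt; is whatever you configure on the table:&lt;/p&gt;

```typescript
// DynamoDB TTL sketch: the table is configured with a TTL attribute name,
// and each item stores its expiry as a Unix timestamp in SECONDS (not
// milliseconds — a common mistake that pushes expiry millennia away).
function sessionItem(sessionId: string, ttlHours: number, nowMs = Date.now()) {
  return {
    pk: `SESSION#${sessionId}`,
    expiresAt: Math.floor(nowMs / 1000) + ttlHours * 3600, // epoch seconds
  };
}

// A 24-hour session written at a fixed timestamp:
const item = sessionItem("abc", 24, 1_700_000_000_000);
```
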

&lt;h3&gt;
  
  
  Observability
&lt;/h3&gt;

&lt;p&gt;For observability, &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html" rel="noopener noreferrer"&gt;CloudWatch&lt;/a&gt;&lt;/strong&gt; shows your errors and aggregations. Metrics and alarms work out of the box, and logs appear automatically without configuration. Later you can instrument with OpenTelemetry or connect other services, but for a basic serverless application, CloudWatch is more than enough.&lt;/p&gt;

&lt;p&gt;For years, I found the CloudWatch UI and Logs Insights sluggish compared to Grafana. But now I wire the AWS SDK into Claude Code and let the AI pull logs and analyze issues. The stable CLI and REST API make log processing trivial.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to be successful with AWS serverless
&lt;/h2&gt;

&lt;p&gt;Build applications without technology bias. A few years ago, Docker containers and microservice orchestration were popular, which created misconceptions about serverless. Aim for simplicity: reduce your problem to the simplest actions, refine your data model, and design your system as a transactional request-based application. That's what makes serverless work.&lt;/p&gt;

&lt;p&gt;Start with an &lt;strong&gt;Infrastructure as Code&lt;/strong&gt; tool like Terraform, AWS CDK, or the increasingly popular SST. You define how infrastructure gets created, then deploy that stack to your AWS account. I personally use Terraform because I want full control over my infrastructure. But for getting started quickly with pre-built blocks, SST is the better choice since productivity matters early on.&lt;/p&gt;

&lt;p&gt;Previously, AWS was less approachable since deploying with CloudFormation or SAM was painful. CloudFormation itself is stable and battle-tested: CDK and SST (before v3) both sit on top of it, but the raw DX isn't great. That's why picking the right abstraction layer matters: you get CloudFormation's reliability without writing YAML by hand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pick your IaC tool carefully
&lt;/h3&gt;

&lt;p&gt;In 2026, Lambda deployment has vastly improved. For getting deep expertise in AWS, I'd recommend learning a few alternatives: start with CloudFormation and CDK to understand AWS-native infrastructure, then explore Terraform.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Advantages&lt;/th&gt;
&lt;th&gt;Disadvantages&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SST&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rethought DX for serverless, hot-reload, efficient resource usage&lt;/td&gt;
&lt;td&gt;New, smaller ecosystem&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full control, predictable plan/apply, scales to EKS and complex infra&lt;/td&gt;
&lt;td&gt;HCL learning curve&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CDK&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native TypeScript/Python, easy to code&lt;/td&gt;
&lt;td&gt;CloudFormation underneath, can be brittle&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frutwxqx06w4qvtb7kd7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frutwxqx06w4qvtb7kd7y.png" alt="IaC Tools Spectrum" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the startup teams I've consulted, Terraform is typically the go-to infrastructure-as-code solution because of its plan/apply workflow: you preview exactly what will change before applying it. It's been reliable in practice.&lt;/p&gt;

&lt;p&gt;For developer experience and prototyping, SST fits well. A few years ago, serverless meant wrestling CloudFormation stacks. SST changed that, so you can hot-reload Lambda functions and iterate fast without managing infrastructure YAML. For getting started, &lt;strong&gt;SST is a solid default&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Setting up Lambda + API Gateway + DynamoDB with &lt;a href="https://sst.dev/docs/" rel="noopener noreferrer"&gt;SST v3&lt;/a&gt; is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;/// &amp;lt;reference path="./.sst/platform/config.d.ts" /&amp;gt;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nf"&gt;$config&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nf"&gt;app&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;my-api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;removal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;stage&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;production&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;retain&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;remove&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;home&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;sst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Dynamo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;table&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;pk&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;sk&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;primaryIndex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;hashKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;rangeKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;api&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;sst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ApiGatewayV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST /&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;functions/handler.main&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;link&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;table&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With coding agents like &lt;a href="https://github.com/anthropics/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; or &lt;a href="https://opencode.ai/" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;, getting this stack running takes minutes. Point the tool at your project, describe what you need: "set up Next.js with Lambda, SQS, and API Gateway using SST", and it figures out the configuration, writes the infrastructure code, and deploys it for you. The entire setup is under 100 lines of code. The barrier to serverless dropped from "learn CloudFormation" to "describe what you want."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5hih5qf5inrwhiwa2ij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5hih5qf5inrwhiwa2ij.png" alt="SST + OpenNext Demo" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cloudflare Workers is popular but still maturing for backend use cases. Lambda remains the more common choice for serverless backends.&lt;/p&gt;

&lt;p&gt;What about Vercel? It provides Next.js with serverless functions, but you can't build background execution logic or advanced infrastructure like queue services. The serverless environment is limited to Node.js API routes. It's popular among beginners because React and Node.js are familiar, but you're locked into Vercel as a vendor. Enterprises and startups still use AWS, and even modern AI applications run on AWS Bedrock. As a full-stack developer, investing in AWS serverless gives you more flexibility and portability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not Vercel or Cloudflare?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn67a7pox5iymeodeg3wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn67a7pox5iymeodeg3wd.png" alt="Platform Comparison" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Vercel with Next.js
&lt;/h3&gt;

&lt;p&gt;Vercel is a good service for having everything set up for you. You write code, push it to GitHub, and it gets configured and deployed without any effort. It supports previews and permissions, simple environment variable configuration, and serves your frontend from a CDN — all without touching infrastructure code. This is powerful for getting your software out, and that's why it got popular: not only because they develop Next.js, but because Next.js integrates well with Vercel, and it's frictionless.&lt;/p&gt;

&lt;p&gt;Vercel works for prototypes and UI-driven apps. If you're in the React ecosystem, you can move fast. I've built several apps on Vercel, mostly AI-integrated tools that need a quick frontend. Last time I created a poster generator with custom typography — the app called an LLM to generate a JSON schema, then rendered the poster. Vercel handled that perfectly: simple UI, one API route, done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgws1vzzgf5lbtux7p1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgws1vzzgf5lbtux7p1g.png" alt="Poster Generator App" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my consulting work, I've seen two patterns:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern 1: Vercel as frontend layer.&lt;/strong&gt; One social network startup runs their infrastructure on Kubernetes but still uses Vercel for the web app. Why? The implementation stays in sync with their React Native mobile app, and Vercel's API routes connect cleanly to their backend. They get the benefits of both: React ecosystem on the frontend, scalable backend on K8s.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern 2: Vercel + AI pipeline.&lt;/strong&gt; An AI startup I'm working with uses Next.js as the frontend layer connecting to their document processing pipeline. The LLM-driven backend handles research on internal documents; Next.js just renders results. You'll find tons of templates for this pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguq15wjpgaf87efuv7rm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguq15wjpgaf87efuv7rm.png" alt="Vercel + Backend Infrastructure" width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Vercel's limitation is the backend. They &lt;a href="https://vercel.com/changelog/vercel-queues-is-now-in-limited-beta" rel="noopener noreferrer"&gt;announced queues in 2025&lt;/a&gt;, but it's still in limited beta. For background jobs today, you need external services like &lt;a href="https://www.inngest.com/" rel="noopener noreferrer"&gt;Inngest&lt;/a&gt; or &lt;a href="https://upstash.com/docs/qstash/overall/getstarted" rel="noopener noreferrer"&gt;QStash&lt;/a&gt;. And you're locked into their platform; Fluid Compute is Vercel-proprietary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I've seen this limitation create absurd workarounds&lt;/strong&gt;. One project I consulted on — a news aggregator built on Netlify — needed scheduled background jobs. Their solution: GitHub Actions calling a Netlify serverless function on a cron. It had no retries, no timeouts, and when the function failed, nobody knew until users complained. We reworked it to AWS: EventBridge scheduled rule triggering a Lambda with built-in retries, CloudWatch alarms, and dead-letter queues. The hacky setup became infrastructure that worked.&lt;/p&gt;
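
&lt;p&gt;As a rough sketch of that rework (assuming SST v3's &lt;code&gt;Cron&lt;/code&gt; component; names like &lt;code&gt;FetchNews&lt;/code&gt; and &lt;code&gt;src/fetch.handler&lt;/code&gt; are illustrative, not the project's real ones), the entire scheduled job fits in a few lines of infrastructure code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// sst.config.ts -- illustrative sketch, not the project's actual config.
// EventBridge schedules the Lambda; failed async invocations can then be
// retried and routed to a dead-letter queue, with CloudWatch alarms on top.
export default $config({
  app(input) {
    return { name: "news-aggregator", home: "aws" };
  },
  async run() {
    new sst.aws.Cron("FetchNews", {
      schedule: "rate(1 hour)",
      function: {
        handler: "src/fetch.handler",
        timeout: "5 minutes",
      },
    });
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;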

&lt;p&gt;For a frontend layer that connects to backend services, Vercel works. For a complete backend, you'll outgrow it.&lt;/p&gt;

&lt;p&gt;If you want Next.js without vendor lock-in, look at &lt;a href="https://opennext.js.org/" rel="noopener noreferrer"&gt;OpenNext&lt;/a&gt;. It's an open-source adapter that deploys Next.js to AWS Lambda, and SST uses it under the hood. You get App Router, Server Components, ISR, image optimization — most Next.js features work. The deployment is one line: &lt;code&gt;new sst.aws.Nextjs("Web")&lt;/code&gt;. NHS England, Udacity, and Gymshark run production workloads on it. The main gotcha is middleware: it runs on the server, not at the edge, so cached requests skip it. For most apps, that's fine. If you want Next.js but need AWS infrastructure underneath, &lt;strong&gt;OpenNext&lt;/strong&gt; is the escape hatch.&lt;/p&gt;
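
&lt;p&gt;For a sense of how small the surface area is, a minimal &lt;code&gt;sst.config.ts&lt;/code&gt; for such a deployment might look like this (a sketch; the app name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// sst.config.ts -- minimal sketch of an OpenNext deployment via SST v3.
export default $config({
  app(input) {
    return { name: "my-next-app", home: "aws" };
  },
  async run() {
    // Deploys the Next.js app to Lambda + CloudFront through OpenNext.
    new sst.aws.Nextjs("Web");
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;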

&lt;h3&gt;
  
  
  Cloudflare Workers
&lt;/h3&gt;

&lt;p&gt;Cloudflare is good at edge computing and ships genuinely innovative technology. Workers run in V8 isolates, a smart design that gives you near-instant cold starts. Cloudflare excels at CDN and DNS, and offers a compelling way to get a project started.&lt;/p&gt;

&lt;p&gt;I use Cloudflare for CDN and frontend hosting. The UI is clean, the CLI is simple, and deployment is quick. For static sites and edge caching, it's easier than AWS CloudFront.&lt;/p&gt;

&lt;p&gt;But Workers use a different runtime model; they are not full Node.js. That's a feature for edge latency (cold starts under 5 ms), but a constraint if you expect full Node compatibility or heavier workloads: many npm packages simply don't work. The 128 MB of memory per isolate and the 5-minute CPU time limit (CPU time, not wall clock) make sense at the edge, but they're restrictive compared to Lambda's multi-GB memory options and 15-minute max runtime. I played with deploying WebAssembly apps in Rust and Go, and the developer experience wasn't there yet.&lt;/p&gt;

&lt;p&gt;I wouldn't build a startup on Cloudflare Workers yet. For edge routing and authentication, it's fine. For a full backend, it falls behind AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Firebase
&lt;/h3&gt;

&lt;p&gt;At one startup, we had the infrastructure partially on AWS: the AI agent ran in the background there, while the frontend was React with Firebase Functions calling Firestore. Firebase did a great job as a prototyping tool; it let us build a complex frontend backed by a database quickly. But the problems stacked up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Our data was fragmented, living partly outside AWS, which is generally considered bad practice.&lt;/li&gt;
&lt;li&gt;React calling Firestore directly created tight vendor lock-in.&lt;/li&gt;
&lt;li&gt;Firebase feels disjointed from the rest of Google Cloud; it's its own island.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We spent two months migrating to AWS, using equivalent resources to keep networking and IAM policies consistent across the whole application.&lt;/p&gt;

&lt;p&gt;The one exception: I typically choose Firebase for Google authentication. It's the easiest way to get Google auth working — pluggable, no client configuration needed. For that specific use case, Firebase is a solid default. Otherwise, I go straight to AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why I default to AWS
&lt;/h3&gt;

&lt;p&gt;For startups expecting growth, here's why AWS makes sense.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ilij9yla8kqc08qovy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ilij9yla8kqc08qovy2.png" alt="AWS One Network" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Industry-proven.&lt;/strong&gt; Large companies run production workloads on Lambda. Capital One runs &lt;a href="https://aws.amazon.com/solutions/case-studies/capital-one-lambda-ecs-case-study/" rel="noopener noreferrer"&gt;tens of thousands of Lambda functions&lt;/a&gt; after going all-in on serverless. Thomson Reuters processes &lt;a href="https://aws.amazon.com/lambda/resources/customer-case-studies/" rel="noopener noreferrer"&gt;4,000 events per second&lt;/a&gt; for usage analytics on Lambda. The failure modes are well-documented; the solutions exist.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure flexibility.&lt;/strong&gt; You can optimize costs, swap components, migrate from Lambda to Fargate — all within one network. With Vercel plus external services, you're stitching together pieces that don't guarantee coherent infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;One network space.&lt;/strong&gt; Your Lambda talks to DynamoDB talks to SQS without leaving AWS. No cross-provider latency, no credential juggling, no surprise egress fees.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low cost to start.&lt;/strong&gt; Some argue serverless is overkill and you should just rent a $5/month VPS. But a VPS costs money from day one, while Lambda's free tier includes &lt;a href="https://aws.amazon.com/lambda/pricing/" rel="noopener noreferrer"&gt;1 million requests and 400,000 GB-seconds per month&lt;/a&gt; permanently, DynamoDB gives you 25 GB free, and API Gateway offers 1 million HTTP calls free for 12 months. A low-traffic project can run for near $0, and for prototypes with variable traffic, serverless is often cheaper than fixed infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI-ready.&lt;/strong&gt; AWS is investing heavily in AI, and &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html" rel="noopener noreferrer"&gt;Bedrock&lt;/a&gt; gives you access to Anthropic models (Claude and others) within AWS networking, so your Lambda calls Claude without leaving the network. If you qualify as a startup, they offer generous credits for large inference workloads. For AI-integrated apps, the whole stack stays in one place.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
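
&lt;p&gt;On the cost point, the free-tier math is easy to sanity-check yourself. A quick sketch with illustrative numbers (not a billing calculator; real bills add request charges and other services):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Lambda compute bills in GB-seconds: invocations x duration (s) x memory (GB).
function lambdaGBSeconds(invocations, durationMs, memoryMb) {
  return invocations * (durationMs / 1000) * (memoryMb / 1024);
}

// 1M requests a month at 200 ms on a 512 MB function:
// 1,000,000 * 0.2 * 0.5 = 100,000 GB-seconds, well inside the
// permanent 400,000 GB-second free tier.
console.log(lambdaGBSeconds(1_000_000, 200, 512));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;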

&lt;p&gt;Learn the alternatives. When you need to scale, start with AWS serverless.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to get started with it in 2026
&lt;/h2&gt;

&lt;p&gt;Start by building a complete backend within serverless constraints. Design around cold start limitations and use SQS and &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html" rel="noopener noreferrer"&gt;EventBridge&lt;/a&gt; for background execution. This stack works well for AI apps that call LLM inference APIs — not for AI agents that need to run for hours, but for request-based AI features. Whether you're a beginner or an advanced full-stack developer, serverless is worth the investment. Understand the limitations first, build after. The serverless stack rewards this discipline.&lt;/p&gt;

&lt;p&gt;One caveat: serverless requires your team to think differently. At an ad tech startup, I watched a team struggle with a Lambda-based bidding system. The architecture was designed as serverless for the maintenance overhead it would avoid; in theory, it made the ad tech we were building much easier to extend and change. But the backend engineers came from Docker and long-running servers. They understood request-response, yet the tooling around AWS serverless (CloudWatch, S3, the whole stack) felt alienating compared to containerized apps built on FastAPI or Django; that familiar workflow simply wasn't available. The deadline slipped three months, which caused real problems. We had to switch to an ECS cluster with containers, which was suboptimal for the bursty nature of ad bidding. The architecture wasn't wrong; the team-stack fit was. If your engineers aren't familiar with serverless, budget time for learning or pick what they know.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with SST, hit your first bottleneck, then reevaluate.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The serverless stack isn't going anywhere. &lt;strong&gt;Master the constraints, and you'll ship faster than teams managing their own infrastructure.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/lambda/pricing/" rel="noopener noreferrer"&gt;AWS Lambda Pricing &amp;amp; Free Tier&lt;/a&gt; — Detailed pricing information including the generous free tier (1M requests/month permanently).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.dynamodbguide.com/" rel="noopener noreferrer"&gt;DynamoDB Guide - Alex DeBrie&lt;/a&gt; — The definitive resource for DynamoDB data modeling, covering single-table design and access patterns.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sst.dev/docs/" rel="noopener noreferrer"&gt;SST Documentation&lt;/a&gt; — Official docs for SST v3, the modern serverless framework with hot-reload and TypeScript support.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://opennext.js.org/" rel="noopener noreferrer"&gt;OpenNext&lt;/a&gt; — Open-source adapter for deploying Next.js to AWS Lambda without vendor lock-in.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://spacelift.io/blog/aws-lambda-migration" rel="noopener noreferrer"&gt;Spacelift: AWS Lambda Migration (2024)&lt;/a&gt; — Case study of migrating from ECS to Lambda for async workloads with spiky traffic.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.infoq.com/news/2025/12/unkey-serverless/" rel="noopener noreferrer"&gt;Unkey: Moving Away from Serverless (2025)&lt;/a&gt; — Counter-example showing when high-volume tightly-coupled systems outgrow serverless.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.infoq.com/news/2023/05/prime-ec2-ecs-saves-costs/" rel="noopener noreferrer"&gt;Prime Video Serverless to Monolith (2023)&lt;/a&gt; — The infamous case where Step Functions costs drove a move to ECS for video analysis.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://opencode.ai/" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt; — Open-source AI coding agent by SST, provider-agnostic and privacy-focused.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.inngest.com/" rel="noopener noreferrer"&gt;Inngest&lt;/a&gt; — Durable workflow engine for background jobs, often used with Vercel.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://upstash.com/docs/qstash/overall/getstarted" rel="noopener noreferrer"&gt;QStash&lt;/a&gt; — Serverless message queue from Upstash, an alternative for Vercel deployments.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>aws</category>
      <category>lambda</category>
      <category>serverless</category>
    </item>
    <item>
      <title>In 2025, Apple still makes it hard to play your own MP3s, so I wrote my own app</title>
      <dc:creator>Oleg Pustovit</dc:creator>
      <pubDate>Thu, 22 May 2025 11:08:24 +0000</pubDate>
      <link>https://dev.to/opustovit/in-2025-apple-still-makes-it-hard-to-play-your-own-mp3s-so-i-wrote-my-own-app-7eh</link>
      <guid>https://dev.to/opustovit/in-2025-apple-still-makes-it-hard-to-play-your-own-mp3s-so-i-wrote-my-own-app-7eh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;In 2025, playing your own &lt;strong&gt;music on an iPhone is surprisingly hard&lt;/strong&gt;, unless you pay Apple or navigate a maze of limitations. So I built my own player from scratch, with &lt;strong&gt;full text search&lt;/strong&gt;, &lt;strong&gt;iCloud support&lt;/strong&gt;, and a &lt;strong&gt;local-first experience&lt;/strong&gt;. &lt;a href="https://github.com/nexo-tech/music-app" rel="noopener noreferrer"&gt;GitHub link&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why I Built My Own Audio Player
&lt;/h2&gt;

&lt;p&gt;Like many people, I've picked up too many subscriptions, some through Apple (iCloud, Apple Music), others lost across random platforms (like Netflix, which I forgot I was still paying for). I actually used Apple Music regularly (and Spotify before it), but streaming turned out to be more convenience than necessity. With a curated local library, I didn't lose much, just the lock-in.&lt;/p&gt;

&lt;p&gt;Initially, I thought I'd just keep using iCloud Music Library for cross-device music synchronization, but once I cancelled the Apple Music subscription, the sync stopped working. Turns out this feature is &lt;strong&gt;behind a paywall&lt;/strong&gt;. You can technically get it back via &lt;a href="https://support.apple.com/en-us/108935" rel="noopener noreferrer"&gt;&lt;em&gt;iTunes Match&lt;/em&gt; ($24.99/year)&lt;/a&gt;. Match just stores 256-kbps AAC copies online; your original files stay put unless you choose to replace them. On a modern Mac, you do all this in the Music app. Without either subscription, cloud sync is gone, and you're back to cable or Wi-Fi syncing.&lt;/p&gt;

&lt;p&gt;Frustrated with the lack of options, I went the &lt;strong&gt;builder route&lt;/strong&gt;. If I bought a computing device (an iPhone, in this case), what stops me from building exactly what I need with code? In this article, I want to share my full journey, frustrations included, toward basic music player functionality: loading audio files, organizing them, and playing them back. Mostly, though, I wanted to remind myself: &lt;em&gt;this is still a general-purpose computer, and I should be able to make it do what I want&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Apple (and Others) Offer Today
&lt;/h2&gt;

&lt;p&gt;Before writing my own app, I explored the official and third-party options for offline music playback.&lt;/p&gt;

&lt;h3&gt;
  
  
  Apple's Built-in Apps
&lt;/h3&gt;

&lt;p&gt;Apple technically lets you play music directly from iCloud via the Files app, but its functionality is not designed for music listening. It &lt;strong&gt;lacks essential features&lt;/strong&gt; such as playlist management, metadata sorting, or playback queues. While it supports music playback, it's very limited and overall &lt;a href="https://discussions.apple.com/thread/252762868" rel="noopener noreferrer"&gt;&lt;strong&gt;not a good user experience&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Third-Party Apps
&lt;/h3&gt;

&lt;p&gt;I went to the App Store looking for apps that solve my problem. There are many, but most rely on &lt;strong&gt;subscription-based pricing&lt;/strong&gt;, a questionable model for an app that simply plays files users already own. One app I did like was &lt;a href="https://apps.apple.com/us/app/doppler-mp3-flac-player/id1468459747" rel="noopener noreferrer"&gt;Doppler&lt;/a&gt;. I played with it during the trial, but its UX is built around managing albums, the search wasn't great, and importing from iCloud was slow and hard to use across a large number of nested folders. The upside: it has a one-time-payment pricing model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Going Builder Mode: My Technical Journey
&lt;/h2&gt;

&lt;p&gt;So I decided to build my own ideal music player, one that solves my pain points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flexible full-text search across iCloud folders, so I can select and import a folder with music or specific files quickly.&lt;/li&gt;
&lt;li&gt;Music management at least on par with the official Music app: queue, playlist management, sorting by albums, etc.&lt;/li&gt;
&lt;li&gt;Familiar and friendly interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Trying React Native First
&lt;/h3&gt;

&lt;p&gt;Initially, I avoided Swift because of my previous experience with it. A few years back, I liked the syntax (felt closer to TypeScript) and appreciated the Rust-like memory safety, but without native &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; at that time, writing concurrent code compared to Go or JS/TS felt clunky and boilerplate-heavy. That experience left me frustrated, so when I revisited this project, I initially reached for something more familiar.&lt;/p&gt;

&lt;p&gt;So I went with React Native and Expo, hoping to reuse my web development experience and plug in a player UI from existing templates. Building the playback UI was straightforward; there are numerous open-source examples and tutorial videos on building good-looking music players. I picked an existing &lt;a href="https://github.com/CodeWithGionatha-Labs/music-player" rel="noopener noreferrer"&gt;template project by Gionatha Sturba&lt;/a&gt; because it looked like it had every feature my app needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k2jx4sm0eifvzutoc63.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k2jx4sm0eifvzutoc63.webp" alt="Attempting to build an app with React Native/Expo" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Accessing the file system and syncing cloud files hit major roadblocks: libraries like &lt;a href="https://docs.expo.dev/versions/latest/sdk/filesystem/" rel="noopener noreferrer"&gt;&lt;code&gt;expo-filesystem&lt;/code&gt;&lt;/a&gt; supported basic file picking, but recursive traversal over deeply nested iCloud directories &lt;strong&gt;often failed or even caused app crashes&lt;/strong&gt;. This made it clear that a &lt;strong&gt;JavaScript-based approach introduced more complexity&lt;/strong&gt; than just working with Apple's native APIs, even if it meant a steeper learning curve.&lt;/p&gt;

&lt;p&gt;iOS sandboxing prevents apps from reading files without explicit user permission, which meant React Native couldn't access external folders reliably. Switching to Swift gave me more control over iCloud file access and sandboxed permissions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Switching to SwiftUI
&lt;/h3&gt;

&lt;p&gt;I went with &lt;strong&gt;SwiftUI&lt;/strong&gt; instead of UIKit or storyboards because I wanted a &lt;strong&gt;clean and declarative UI&lt;/strong&gt; layer that would stay out of the way while I focused on domain logic and data synchronization. With modern features like async/await and integration with &lt;strong&gt;Swift Actors&lt;/strong&gt;, I found it easier to manage data flow and concurrency. SwiftUI also definitely made it easier to structure the app into isolated ViewModel components, which in turn helped me get better results from LLMs like OpenAI o1 and DeepSeek. LLMs could produce pure UI code or data binding code without introducing messy interdependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  App Architecture and Data Model
&lt;/h2&gt;

&lt;p&gt;Let's go over the architecture of the app I've created: I used SQLite for persistent data storage and approached the app architecture as a simple server application. I avoided CoreData because I needed tight control over schema, raw queries, and especially full-text search. SQLite's built-in FTS5 support let me add fast fuzzy search without pulling in heavy external search engines or building my own indexing layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Three Main Screens
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The app consists of three screens/modes:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Library import.&lt;/strong&gt; This is where you add your iCloud library folder. The app scans every folder for audio files and inserts each path into a SQLite database, giving you full flexibility in searching and adding folders and subfolders. Apple's native file picker is very clunky: you cannot select multiple keyword-searched directories plus a handful of individual files in one go. It simply isn't designed for that.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Library management.&lt;/strong&gt; This is where you can manage the added songs and organize playlists. For the most part, I've reflected the way Apple did that in their Music app, and it was good enough for my needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Player and playback.&lt;/strong&gt; This part of the application manages queue management (repeat, shuffle), etc., and play, stop, and next song functionality.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A simple user flow diagram is shown here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnexo.sh%2Fposts%2Fwhy-i-built-a-native-mp3-player-in-swiftui%2Fuser-flow-diagram.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnexo.sh%2Fposts%2Fwhy-i-built-a-native-mp3-player-in-swiftui%2Fuser-flow-diagram.svg" alt="User flow diagram" width="1564" height="934"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User flow in practice:&lt;/strong&gt; When the app launches with an empty library, it lands on the Sync tab, showing a big "Add iCloud Source" button. Pick a folder there, and the Sync screen displays a progress bar while it walks the tree. As soon as indexing finishes, it switches you to the Library tab, whose first screen lists &lt;strong&gt;Playlists / Artists / Albums / Songs&lt;/strong&gt;. Dive into any list, tap a track, and a Mini-Player pops up along the bottom; tap that mini-bar to open the full-screen Player with shuffle, repeat, queue reorder, and volume. Swipe or tap the close icon, and you're straight back to the Library while playback continues. Any time you need more music, jump back to Sync, hit the "+" in the nav bar, select another folder, and the import service merges new songs in the background, no restart required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backend-Like Logic Layer
&lt;/h3&gt;

&lt;p&gt;Having a web/cloud background and having shipped a lot of server code at startups, I went with a &lt;strong&gt;backend-like architecture&lt;/strong&gt; for the mobile app. The whole domain/logic layer is separated from the &lt;strong&gt;View and ViewModel layer&lt;/strong&gt;, because I had to nail the &lt;strong&gt;cloud syncing and metadata parsing&lt;/strong&gt; aspects of the app while keeping clean data access to the SQLite DB. &lt;em&gt;Here's an approximate diagram of the layered architecture I used&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnexo.sh%2Fposts%2Fwhy-i-built-a-native-mp3-player-in-swiftui%2Flayers.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnexo.sh%2Fposts%2Fwhy-i-built-a-native-mp3-player-in-swiftui%2Flayers.svg" alt="Layered architecture diagram" width="1304" height="951"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How the layers talk:&lt;/strong&gt; SQLite sits at the bottom, storing raw song rows and FTS indexes. Then repositories wrap the database and expose async APIs. On top of those live my &lt;strong&gt;domain actors&lt;/strong&gt;, Swift actors that own all business rules (import, search, queue logic) so state mutations stay &lt;strong&gt;thread-safe&lt;/strong&gt;. ViewModels subscribe to the actors, transform the data into UI-ready structs, and SwiftUI views simply render whatever they get. Nothing crosses layers directly, keeping iCloud sync, playback, and UI &lt;strong&gt;nicely decoupled&lt;/strong&gt;.&lt;/p&gt;
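
&lt;p&gt;In code, the actor layer boils down to something like this (a simplified sketch; the real types carry more fields and talk to the repositories):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;// Simplified sketch of a domain actor that owns queue state.
// Because it's an actor, all mutations are serialized and thread-safe.
struct Song {
    let id: Int64
    let title: String
}

actor PlaybackQueue {
    private var songs: [Song] = []
    private var index = 0

    func replace(with newSongs: [Song], startAt start: Int = 0) {
        songs = newSongs
        index = start
    }

    // Advance to the next track, wrapping around for repeat-all.
    func next() -&gt; Song? {
        guard !songs.isEmpty else { return nil }
        index = (index + 1) % songs.count
        return songs[index]
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A ViewModel simply &lt;code&gt;await&lt;/code&gt;s these methods and republishes the results for SwiftUI to render.&lt;/p&gt;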

&lt;h2&gt;
  
  
  Implementing Full Text Search with SQLite
&lt;/h2&gt;

&lt;p&gt;As I previously mentioned, it's fortunate that the SQLite build on iOS ships with FTS capabilities: starting around iOS 11, it's available out of the box &lt;strong&gt;without extra setup&lt;/strong&gt;. This made it easy to integrate fuzzy search into my music library &lt;strong&gt;without any third-party dependencies&lt;/strong&gt;. I used the SQLite.swift library for regular queries (it works as a query builder with compile-time safety); for FTS queries, however, I had to drop down to raw SQL statements.&lt;/p&gt;

&lt;p&gt;SQLite's &lt;a href="https://sqlite.org/fts5.html" rel="noopener noreferrer"&gt;FTS5&lt;/a&gt; extension ended up being one of the most valuable pieces of the architecture. It let me query across file names and metadata like artist, album, and title without extra indexing infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up the FTS Tables
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;Swift actor / repo&lt;/th&gt;
&lt;th&gt;FTS5 table&lt;/th&gt;
&lt;th&gt;Columns that get indexed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Library songs&lt;/td&gt;
&lt;td&gt;&lt;code&gt;SQLiteSongRepository&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;songs_fts&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;artist&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;album&lt;/code&gt;, &lt;code&gt;albumArtist&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Source-browser paths&lt;/td&gt;
&lt;td&gt;&lt;code&gt;SQLiteSourcePathSearchRepository&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;source_paths_fts&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;fullPath&lt;/code&gt;, &lt;code&gt;fileName&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I used two FTS5 tables: one for indexed songs (artist/title/album) and one for file paths during folder import. Both tables live next to the primary rows in plain-old B-tree tables (&lt;code&gt;songs&lt;/code&gt;, &lt;code&gt;source_paths&lt;/code&gt;). FTS is &lt;strong&gt;read-only for the UI&lt;/strong&gt;; all writes happen inside the repositories so nothing slips through the cracks.&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating the search index
&lt;/h4&gt;

&lt;p&gt;SQLite's built-in FTS5 makes quick searches easy. Here's a simple table definition I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"""
CREATE VIRTUAL TABLE IF NOT EXISTS songs_fts USING fts5(
  songId UNINDEXED,
  artist, title, album, albumArtist,
  tokenize='unicode61'
);
"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I used the &lt;code&gt;unicode61&lt;/code&gt; tokenizer to ensure a wide variety of characters are handled. Non-searchable columns are flagged with &lt;code&gt;UNINDEXED&lt;/code&gt;, so they don't bloat the term dictionary.&lt;/p&gt;

&lt;h4&gt;
  
  
  Updating data reliably
&lt;/h4&gt;

&lt;p&gt;To keep things simple and safe, I wrapped updates and inserts in transactions. This ensures the search index never gets out of sync, even if the app crashes or gets interrupted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;upsertSong&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="nv"&gt;song&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Song&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;throws&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transaction&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// insert or update main song data&lt;/span&gt;
        &lt;span class="c1"&gt;// insert or update search index data&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Querying with Fuzzy Search
&lt;/h3&gt;

&lt;p&gt;For user-friendly search, I add wildcard support automatically. If you type "lumine," it searches for "lumine*" internally, giving instant results even with partial queries.&lt;/p&gt;
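
&lt;p&gt;The rewrite itself is a one-liner per token. A hypothetical helper (my real sanitization also strips FTS5 operators from the input):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;// Hypothetical helper: turn raw input into an FTS5 prefix query,
// so "lumi" still matches "Luminescence". Each token gets a trailing "*".
func ftsPrefixQuery(_ raw: String) -&gt; String {
    raw.split(separator: " ")
        .map { "\($0)*" }
        .joined(separator: " ")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;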

&lt;p&gt;I also leverage SQLite's built-in smart ranking (&lt;code&gt;bm25&lt;/code&gt;) to return more relevant results without extra complexity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;songs&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;songs_fts&lt;/span&gt; &lt;span class="n"&gt;fts&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;songId&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;songs_fts&lt;/span&gt; &lt;span class="k"&gt;MATCH&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;bm25&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;songs_fts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="k"&gt;OFFSET&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
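&lt;p&gt;In app code, the wildcard is appended before the query string is bound to the &lt;code&gt;MATCH&lt;/code&gt; parameter. A minimal sketch of that step (the helper name and the quoting strategy are my own illustration, not the app's actual source):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;/// Turns raw user input into an FTS5 prefix query:
/// "lumine ner" becomes "\"lumine\"* \"ner\"*"
func ftsQuery(from input: String) -&gt; String {
    input
        .split(separator: " ")
        .map { token in
            // Quote each token so characters FTS5 treats as operators
            // are matched literally, then append "*" so partial words
            // still produce hits.
            let escaped = token.replacingOccurrences(of: "\"", with: "\"\"")
            return "\"\(escaped)\"*"
        }
        .joined(separator: " ")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The resulting string is what gets bound to the &lt;code&gt;?&lt;/code&gt; placeholder in the &lt;code&gt;MATCH&lt;/code&gt; clause.&lt;/p&gt;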



&lt;p&gt;Overall, using raw SQLite gave me the flexibility I needed: predictable schema, local-first access, and powerful full-text search, without introducing any network dependencies or external services. This approach was ideal for an app designed to be private and offline-first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with iOS Files and Bookmarks
&lt;/h2&gt;

&lt;p&gt;On iOS, apps can store persistent bookmarks to file locations, but explicit &lt;strong&gt;security-scoped bookmarks&lt;/strong&gt; (created with the &lt;code&gt;.withSecurityScope&lt;/code&gt; option), which grant extended access to files outside the app's sandbox, are only available on &lt;strong&gt;macOS&lt;/strong&gt;. iOS apps can use regular bookmarks to remember file locations and request access again through the document picker, but that access isn't guaranteed to persist silently across reboots or permission resets. See &lt;a href="https://developer.apple.com/documentation/foundation/nsurl#Bookmarks-and-Security-Scope" rel="noopener noreferrer"&gt;Apple's bookmark documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To mitigate this, I implemented a fallback mechanism that copies files into the &lt;strong&gt;app's own sandboxed container&lt;/strong&gt;. This avoids the fragile lifecycle of bookmarks that can silently break when iOS resets their permissions. By copying files proactively in the background, while the bookmark is still valid, there's no risk of ending up with invalid audio-file references.&lt;/p&gt;
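&lt;p&gt;The fallback boils down to resolving the bookmark, opening its security scope, and copying the file while access is still live. A simplified sketch (real error handling and the destination layout are more involved):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import Foundation

func importIntoSandbox(bookmark: Data) throws -&gt; URL {
    var isStale = false
    let source = try URL(resolvingBookmarkData: bookmark,
                         bookmarkDataIsStale: &amp;isStale)

    // While the bookmark still resolves, copy the file into our own
    // container so playback never depends on external access again.
    guard source.startAccessingSecurityScopedResource() else {
        throw CocoaError(.fileReadNoPermission)
    }
    defer { source.stopAccessingSecurityScopedResource() }

    let docs = try FileManager.default.url(for: .documentDirectory,
                                           in: .userDomainMask,
                                           appropriateFor: nil,
                                           create: true)
    let destination = docs.appendingPathComponent(source.lastPathComponent)
    if !FileManager.default.fileExists(atPath: destination.path) {
        try FileManager.default.copyItem(at: source, to: destination)
    }
    return destination
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;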

&lt;p&gt;This approach also improves indexing speed. I can scan the folder structure once (while access is active), import only relevant audio files, and safely traverse deeply nested directories. But reliably playing back individual audio files from external locations, especially after device restarts, &lt;em&gt;remains an unsolved problem for me&lt;/em&gt;. This highlights how &lt;strong&gt;under-supported&lt;/strong&gt; this use case is, even for native apps, and how complex it still is to &lt;strong&gt;handle file access reliably on iOS&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Playback and UI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Metadata Parsing
&lt;/h3&gt;

&lt;p&gt;To parse metadata from audio files, I used Apple's &lt;strong&gt;AVFoundation framework&lt;/strong&gt;, specifically the &lt;strong&gt;AVURLAsset&lt;/strong&gt; class, which allows inspection of media file metadata such as title, album artist, etc. While common metadata parsing is handled by the native SDK, certain fields, like track numbers, have to be looked up manually in the ID3 tags. I relied on &lt;a href="https://github.com/TastemakerDesign/Warper/blob/2af8c07ad8422f4dc3a539177d3a76ee8502e632/plugins/flutter_media_metadata/ios/Classes/Id3MetadataRetriever.swift" rel="noopener noreferrer"&gt;GitHub search&lt;/a&gt; to find examples, since the official documentation lacked coverage for edge cases.&lt;/p&gt;
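&lt;p&gt;A basic read of the common metadata keys looks roughly like this (a sketch using the modern async loading API; the manual ID3 lookup mentioned above would go through &lt;code&gt;asset.loadMetadata(for: .id3Metadata)&lt;/code&gt; instead):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import AVFoundation

func basicMetadata(for fileURL: URL) async throws -&gt; (title: String?, artist: String?) {
    let asset = AVURLAsset(url: fileURL)
    let common = try await asset.load(.commonMetadata)

    // Pull a single string value out of the common key space.
    func value(for key: AVMetadataKey) async throws -&gt; String? {
        try await AVMetadataItem.metadataItems(from: common,
                                               withKey: key,
                                               keySpace: .common)
            .first?
            .load(.stringValue)
    }

    return (try await value(for: .commonKeyTitle),
            try await value(for: .commonKeyArtist))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;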

&lt;h3&gt;
  
  
  Audio Playback with AVFoundation
&lt;/h3&gt;

&lt;p&gt;After the library is indexed, implementing an audio player is fairly simple: you initialize an instance of &lt;code&gt;AVAudioPlayer&lt;/code&gt; and let the audio play. For quality-of-life features, such as controlling playback from Control Center, I implemented the &lt;code&gt;AVAudioPlayerDelegate&lt;/code&gt; protocol and hooked into Apple's &lt;code&gt;MPRemoteCommandCenter&lt;/code&gt;, which lets developers respond to system-level playback controls.&lt;/p&gt;
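&lt;p&gt;The wiring can be sketched like this (the class shape and names are mine, not from the app's source):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import AVFoundation
import MediaPlayer

final class Player: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer?

    func play(_ url: URL) throws {
        // A .playback session keeps audio running in the background
        // and surfaces the app in Control Center.
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        player = try AVAudioPlayer(contentsOf: url)
        player?.delegate = self   // end-of-track callbacks
        player?.play()

        // Respond to lock screen / Control Center buttons.
        let center = MPRemoteCommandCenter.shared()
        center.playCommand.addTarget { [weak self] _ in
            self?.player?.play()
            return .success
        }
        center.pauseCommand.addTarget { [weak self] _ in
            self?.player?.pause()
            return .success
        }
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer,
                                     successfully flag: Bool) {
        // advance to the next track here
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;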

&lt;h2&gt;
  
  
  Reflections: Apple, Developer Lock-In, and the Future
&lt;/h2&gt;

&lt;p&gt;Here's what stood out during development:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Bad
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Xcode's limitations remain frustrating.&lt;/strong&gt; Real-time SwiftUI previews are definitely a step forward, but the overall development experience still isn't on par with what Flutter offered five years ago: tight VSCode integration, real-time simulator reloads, and familiar debugging tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lack of editor flexibility.&lt;/strong&gt; Setting up Language Server Protocol (LSP) support for Swift in Neovim or VSCode requires extra tooling like &lt;a href="https://github.com/SolaWing/xcode-build-server" rel="noopener noreferrer"&gt;&lt;code&gt;xcode-build-server&lt;/code&gt;&lt;/a&gt;, and still doesn't fully match the developer experience of web-first ecosystems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some corners of Apple's SDK still live in Objective-C land.&lt;/strong&gt; Spotlight file search, for instance, is only exposed through &lt;code&gt;NSMetadataQuery&lt;/code&gt;, which uses Key-Value Observing (KVO) and string keys, no Swift-friendly wrapper yet. Documentation is often sparse, which steepens the learning curve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SwiftUI's declarative UI is great, but debugging iCloud interactions still requires manual mocks.&lt;/strong&gt; SwiftUI previews can't emulate full app behaviors involving iCloud entitlements, so you have to mock cloud interactions manually, a minor annoyance but notable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Good
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Async/await.&lt;/strong&gt; Finally, I can write I/O-bound concurrent code in a straightforward imperative style, with no callback pyramids. That's a big win, and I appreciate how easy it is to wrap even synchronous code in actors and &lt;code&gt;await&lt;/code&gt; it, much as you would in the JavaScript ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plethora of native libs.&lt;/strong&gt; You aren't limited to open-source bindings, as you are in the React Native/Flutter ecosystems, so you have much more freedom to build something "more serious" than a mobile-friendly replacement for your company or product website. Many of Apple's APIs come with examples, which made it easy to get started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SwiftUI&lt;/strong&gt; itself. The React-style approach to building UIs boosts productivity and leaves room for exploration. It's great that Apple adopted it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary: Building Should Be Easier
&lt;/h3&gt;

&lt;p&gt;After 1.5 weeks of hacking around, I had a piece of software that satisfies my needs exactly: a &lt;strong&gt;local/offline music player&lt;/strong&gt; that can import audio files from cloud storage.&lt;/p&gt;

&lt;p&gt;But developers quickly realize they can't simply deploy apps to their own devices and forget about them: apps only run for &lt;a href="https://developer.apple.com/support/compare-memberships" rel="noopener noreferrer"&gt;&lt;strong&gt;7 days without a dev certificate&lt;/strong&gt;&lt;/a&gt;, and after that you have to rebuild them, unless you pay Apple $99/year to enroll in the Developer Program.&lt;/p&gt;

&lt;p&gt;Even after the &lt;strong&gt;Digital Markets Act (DMA) in the EU&lt;/strong&gt;, sideloading still isn't fully open. EU users can now install apps from third-party marketplaces or directly from a developer's site, but only if that developer is enrolled in Apple's $99/year program and agrees to Apple's Alternative Terms. For personal/hobbyist use, this still doesn't remove the 7-day dev build limitation.&lt;/p&gt;

&lt;p&gt;Ultimately, this makes no sense. An innovative technology company actively puts roadblocks in the way of democratized application development. Even Progressive Web Applications (PWAs) &lt;a href="https://brainhub.eu/library/pwa-on-ios" rel="noopener noreferrer"&gt;face notable limitations on iOS&lt;/a&gt;: even after the iOS 16-18.x updates, PWAs still run inside Safari's sandbox. They get WebGL2 and web push, but they don't get Web Bluetooth/USB/NFC, Background Sync, or more than ~50MB of guaranteed storage. WebGL runs through a Metal shim, so real-world frame rates often trail native Metal apps; that's good enough for UI, but not for AAA 3D games.&lt;/p&gt;

&lt;p&gt;Nowadays, AI has reduced the complexity of modern software development, making the necessary knowledge accessible enough that anyone can tackle unfamiliar technologies. You can clearly see how web development has attracted non-technical people, who now have a way to build their ideas without specializing in a plethora of technologies. But when it comes to mobile apps, you simply have to play by artificial rules. &lt;em&gt;Even if you built it yourself, for yourself, Apple still gets the final say&lt;/em&gt; before you can run it for more than a week. The same company that once empowered independent developers now imposes &lt;strong&gt;tight restrictions that hinder personal app development&lt;/strong&gt; and distribution. AI has made it easier than ever to build new tools, unless you're building for iOS, where the gate is still locked.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://support.apple.com/en-us/HT204146" rel="noopener noreferrer"&gt;iTunes Match – Apple Support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.apple.com/documentation/foundation/nsurl#1664002" rel="noopener noreferrer"&gt;Security-Scoped Bookmarks – Apple Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sqlite.org/fts5.html" rel="noopener noreferrer"&gt;FTS5 – SQLite Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://apps.apple.com/us/app/doppler-music-player/id1500875779" rel="noopener noreferrer"&gt;Doppler Music Player – App Store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.expo.dev/versions/latest/sdk/filesystem/" rel="noopener noreferrer"&gt;Expo FileSystem Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.apple.com/programs/" rel="noopener noreferrer"&gt;Apple Developer Program Info (7-day builds)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discussions.apple.com/thread/252762868?sortBy=rank" rel="noopener noreferrer"&gt;Apple Community: Files App &amp;amp; MP3 Playback&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ios</category>
      <category>mobile</category>
      <category>swift</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Microservices Are a Tax Your Startup Probably Can’t Afford</title>
      <dc:creator>Oleg Pustovit</dc:creator>
      <pubDate>Wed, 07 May 2025 20:59:32 +0000</pubDate>
      <link>https://dev.to/opustovit/microservices-are-a-tax-your-startup-probably-cant-afford-2441</link>
      <guid>https://dev.to/opustovit/microservices-are-a-tax-your-startup-probably-cant-afford-2441</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Why splitting your codebase too early can quietly destroy your team’s velocity — and what to do instead&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a startup, &lt;strong&gt;your survival depends on how quickly you can iterate, ship features, and deliver value to end-users&lt;/strong&gt;. This is where the foundational architecture of your startup plays a big role; additionally, things like your tech stack and choice of programming language directly affect your team’s velocity. The wrong architecture, especially premature microservices, can substantially hurt productivity and contribute to missed goals in delivering software.&lt;/p&gt;

&lt;p&gt;I've had this experience working on greenfield projects for early-stage startups, where questionable software-architecture decisions led to half-finished services, &lt;em&gt;brittle, over-engineered, and broken local setups&lt;/em&gt;, and &lt;strong&gt;demoralized teams&lt;/strong&gt; struggling to maintain unnecessary complexity.&lt;/p&gt;

&lt;p&gt;Before diving into specific pitfalls, here’s what you’re actually signing up for when introducing microservices prematurely:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microservices Early On: What You’re Paying For&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pain Point&lt;/th&gt;
&lt;th&gt;Real-World Manifestation&lt;/th&gt;
&lt;th&gt;Developer Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Deployment Complexity&lt;/td&gt;
&lt;td&gt;Orchestrating 5+ services for a single feature&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Hours lost per release&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Local Dev Fragility&lt;/td&gt;
&lt;td&gt;Docker sprawl, broken scripts, platform-specific hacks&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Slow onboarding, frequent breakage&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CI/CD Duplication&lt;/td&gt;
&lt;td&gt;Multiple pipelines with duplicated logic&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Extra toil per service&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cross-Service Coupling&lt;/td&gt;
&lt;td&gt;"Decoupled" services tightly linked by shared state&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Slower changes, coordination tax&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Observability Overhead&lt;/td&gt;
&lt;td&gt;Distributed tracing, logging, monitoring&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Weeks to instrument properly&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test Suite Fragmentation&lt;/td&gt;
&lt;td&gt;Tests scattered across services&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Brittle tests, low confidence&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Let's unpack why microservices often backfire early on, where they genuinely help, and how to structure your startup's systems for speed and survival.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monoliths Are Not the Enemy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1owafa8e4dzeaxg9674.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1owafa8e4dzeaxg9674.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're building a SaaS product, even a simple wrapper around a SQL database can eventually accumulate a lot of internal complexity in its business logic; on top of that come the integrations and background tasks that transform one set of data into another.&lt;/p&gt;

&lt;p&gt;With time, and the occasional unnecessary feature, it's inevitable that your app will grow messy. The great thing about monoliths is: they still work. &lt;strong&gt;Monoliths, even when messy, keep your team focused on what matters most&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Staying alive&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Delivering customer value&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest advantage of monoliths is their simplicity of deployment. Generally, such projects are built on existing frameworks: Django for Python, ASP.NET for C#, Nest.js for Node.js apps, and so on. By sticking to a monolithic architecture, you get the biggest advantage over fancy microservices: the wide support of an open-source community and of project maintainers who primarily designed those frameworks to run as a single-process, monolithic app.&lt;/p&gt;

&lt;p&gt;At one real-estate startup where I led the front-end team, and occasionally consulted the backend team on technology choices, we had an interesting evolution of a Laravel-based app. What started as a small dashboard for real-estate agents to manage deals gradually grew into a much larger system.&lt;/p&gt;

&lt;p&gt;Over time, it evolved into a feature-rich suite that handled hundreds of gigabytes of documents and integrated with dozens of third-party services. Yet, it remained built on a fairly basic PHP stack running on Apache.&lt;/p&gt;

&lt;p&gt;The team leaned heavily on best practices recommended by the Laravel community. That discipline paid off, we were able to scale the application’s capabilities significantly while still meeting the business’s needs and expectations.&lt;/p&gt;

&lt;p&gt;Interestingly, we never needed to decouple the system into microservices or adopt more complex infrastructure patterns. We avoided a lot of accidental complexity that way. The simplicity of the architecture gave us leverage. This echoes what others have written — like Basecamp’s take on the &lt;a href="https://signalvnoise.com/svn3/the-majestic-monolith/" rel="noopener noreferrer"&gt;“Majestic Monolith”&lt;/a&gt;, which lays out why simplicity is a superpower early on.&lt;/p&gt;

&lt;p&gt;People often point out that it's hard to make monoliths scalable, but it's usually bad modularization &lt;em&gt;inside&lt;/em&gt; the monolith that causes such problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: A well-structured monolith keeps your team focused on shipping, not firefighting.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  But Isn’t Microservices “Best Practice”?
&lt;/h2&gt;

&lt;p&gt;A lot of engineers reach for microservices early, thinking they’re “the right way.” And sure — at scale, they can help. But in a startup, that same complexity turns into drag.&lt;/p&gt;

&lt;p&gt;Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains. Before that? You’re paying the price without getting the benefit: duplicated infra, fragile local setups, and slow iteration. For example, &lt;strong&gt;Segment&lt;/strong&gt; eventually &lt;a href="https://segment.com/blog/goodbye-microservices/" rel="noopener noreferrer"&gt;reversed their microservice split&lt;/a&gt; for this exact reason — too much cost, not enough value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: Microservices are a scaling tool — not a starting template.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Microservices Go Wrong (Especially Early On)
&lt;/h2&gt;

&lt;p&gt;In one early-stage team I advised, the decision to split services created more PM-engineering coordination overhead than technical gain. Architecture shaped not just code, but how we planned, estimated, and shipped. That organizational tax is easy to miss — until it’s too late.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xh9pc04c4f2pvcj91nr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xh9pc04c4f2pvcj91nr.png" alt=" " width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Diagram:&lt;/strong&gt; Coordination overhead grows linearly with services — and exponentially when you add product managers, deadlines, and misaligned timelines.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here are the most common anti-patterns that creep in early.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Arbitrary Service Boundaries
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqgfime75g7jhzx1gdq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqgfime75g7jhzx1gdq4.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In theory, you often see suggestions on splitting your applications by business logic domain — users service, products service, orders service, and so on. This often borrows from Domain-Driven Design or Clean Architecture concepts — which make sense at scale, but in early-stage products, they can ossify structure prematurely, before the product itself is stable or validated. You end up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shared databases&lt;/li&gt;
&lt;li&gt;Cross-service calls for simple workflows&lt;/li&gt;
&lt;li&gt;Coupling disguised as "separation"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At one project, I watched a team separate user management, authentication, and authorization into distinct services, which led to deployment complexity and coordination difficulties for every API operation they built.&lt;/p&gt;

&lt;p&gt;In reality, business logic doesn't map directly onto service boundaries. Premature separation can make the system more fragile and often makes it harder to introduce changes quickly.&lt;/p&gt;

&lt;p&gt;Instead, isolate bottlenecks surgically — based on real scaling pain, not theoretical elegance.&lt;/p&gt;

&lt;p&gt;When I’ve coached early-stage teams, we’ve sometimes used internal flags or deployment toggles to simulate future service splits — without the immediate operational burden. This gave product and engineering room to explore boundaries organically, before locking in premature infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: Don’t split by theory — split by actual bottlenecks.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Repository and Infrastructure Sprawl
&lt;/h3&gt;

&lt;p&gt;When working on an application, the following set of things typically matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code style consistency (linting)&lt;/li&gt;
&lt;li&gt;Testing infrastructure, including integration testing&lt;/li&gt;
&lt;li&gt;Local environment configuration&lt;/li&gt;
&lt;li&gt;Documentation&lt;/li&gt;
&lt;li&gt;CI/CD configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When dealing with microservices, you need to multiply those requirements by the number of services. If your project is structured as a monorepo, you can simplify your life by having a central CI/CD configuration (when working with GitHub Actions or GitLab CI). Some teams separate microservices into separate repositories, which makes it way harder to maintain the code consistency and the same set of configurations without extra effort or tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkmmq0mudzhtexo2xfhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkmmq0mudzhtexo2xfhz.png" alt=" " width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a three-person team, this is brutal. Context switching across repositories and tooling adds to the development time of every feature that ships.&lt;/p&gt;

&lt;h4&gt;
  
  
  Mitigating issues by using monorepos and a single programming language
&lt;/h4&gt;

&lt;p&gt;There are various ways to mitigate this problem. For early projects, the single most important one is keeping your code in a monorepo. This ensures there's a single version of the code that exists on prod, and it makes coordinating code reviews and collaborating much easier for smaller teams.&lt;/p&gt;

&lt;p&gt;For Node.js projects, I strongly recommend using a monorepo tool like &lt;code&gt;nx&lt;/code&gt; or &lt;code&gt;turborepo&lt;/code&gt;. Both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplify CI/CD config across subprojects&lt;/li&gt;
&lt;li&gt;Support dependency graph-based build caching&lt;/li&gt;
&lt;li&gt;Let you treat internal services as TypeScript libraries (via ES6 imports)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools save time otherwise spent writing glue code or reinventing orchestration. That said, they come with real tradeoffs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex dependency trees can grow fast&lt;/li&gt;
&lt;li&gt;CI performance tuning is non-trivial&lt;/li&gt;
&lt;li&gt;You may need faster tooling (like bun) to keep build times down&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To summarize: Tooling like &lt;code&gt;nx&lt;/code&gt; or &lt;code&gt;turborepo&lt;/code&gt; gives small teams monorepo velocity — if you’re willing to invest in keeping them clean.&lt;/p&gt;

&lt;p&gt;When developing &lt;code&gt;go&lt;/code&gt;-based microservices, a good idea early in development is to keep every module in a single Go workspace (a &lt;code&gt;go.work&lt;/code&gt; file, or &lt;code&gt;replace&lt;/code&gt; directives in &lt;code&gt;go.mod&lt;/code&gt;). Eventually, as the software scales, it's possible to effortlessly split the &lt;code&gt;go&lt;/code&gt; modules into separate repositories.&lt;/p&gt;
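&lt;p&gt;For example, a &lt;code&gt;go.work&lt;/code&gt; file at the repository root lets each module resolve its siblings locally (the module paths below are purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;// go.work: one workspace, many future services
go 1.22

use (
    ./services/api
    ./services/billing
    ./pkg/shared
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A &lt;code&gt;replace&lt;/code&gt; directive in a service's &lt;code&gt;go.mod&lt;/code&gt; (e.g. &lt;code&gt;replace example.com/pkg/shared =&gt; ../pkg/shared&lt;/code&gt;) achieves the same during local development; deleting it is all it takes once a module moves to its own repository.&lt;/p&gt;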

&lt;p&gt;&lt;strong&gt;Takeaway: A monorepo with shared infra buys you time, consistency, and sanity.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Broken Local Dev = Broken Velocity
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;If it takes three hours, a custom shell script, and a Docker marathon just to run your app locally, you've already lost velocity.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Early projects often suffer from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missing documentation&lt;/li&gt;
&lt;li&gt;Obsolete dependencies&lt;/li&gt;
&lt;li&gt;OS-specific hacks (hello, Linux-only setups)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my experience, projects handed over from past development teams were often built for a single operating system. Some devs preferred building on macOS and never bothered making their shell scripts work on Windows. My past teams included engineers on Windows machines, and it often took rewriting shell scripts, or fully reverse engineering the setup process, to get the local environment running. With time, we standardized environment setup across dev OSes to reduce onboarding friction, a small investment that saved hours per new engineer. It was frustrating, but it taught a lasting lesson about how important it is to get the code running on whatever laptop your new developer may be using.&lt;/p&gt;

&lt;p&gt;At another project, a solo dev had created a fragile microservice setup whose workflow depended on running Docker containers mounted to the local file system. That's fine on Linux, where running processes as containers costs you little.&lt;/p&gt;

&lt;p&gt;But onboarding a new front-end dev with an older Windows laptop turned into a nightmare. They had to spin up ten containers just to view the UI. Everything broke: volumes, networking, container compatibility. And the setup was very poorly documented. This created a major friction point during onboarding.&lt;/p&gt;

&lt;p&gt;We ended up hacking together a Node.js proxy that mimicked the nginx/Docker configuration without containers. It wasn’t elegant, but it let the dev get unblocked and start contributing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxujof87iw9evtdlp8oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxujof87iw9evtdlp8oy.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: If your app only runs on one OS, your team’s productivity is one laptop away from disaster.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Ideally, aim for &lt;code&gt;git clone &amp;lt;repo&amp;gt; &amp;amp;&amp;amp; make up&lt;/code&gt; to get the project running locally. If that's not possible, an up-to-date README with instructions for Windows/macOS/Linux is a must. A few languages and toolchains still don't work well on Windows (like OCaml), but the modern, widely popular stack runs just fine on every mainstream operating system; a local setup limited to a single OS is usually a symptom of under-investment in DX.&lt;/p&gt;
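&lt;p&gt;A root &lt;code&gt;Makefile&lt;/code&gt; encoding the whole bootstrap might look like this (targets and commands are illustrative, not taken from any specific project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight makefile"&gt;&lt;code&gt;# One-command local setup: `make up`
up: deps
	docker compose up --build -d

deps:
	npm ci   # or: go mod download / pip install -r requirements.txt

down:
	docker compose down -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;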

&lt;h3&gt;
  
  
  4. Technology Mismatch
&lt;/h3&gt;

&lt;p&gt;Beyond architecture, your tech stack also shapes how painful microservices become — not every language shines in a microservice architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node.js and Python:&lt;/strong&gt; Great for rapid iteration, but managing build artifacts, dependency versions, and runtime consistency across services gets painful fast.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go:&lt;/strong&gt; Compiles to static binaries, fast build times, and low operational overhead. More natural fit when splitting is truly needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's very important to pick the right technical stack early on. If you're after performance, consider the JVM and its ecosystem, with its ability to build deployable artifacts at scale and run them in microservice-based architectures. If you iterate very fast and prototype quickly without worrying about scaling your deployment infrastructure, you're fine with something like Python.&lt;/p&gt;

&lt;p&gt;It's quite common for teams to realize that there are big issues with their choice of technology that weren't apparent initially, and to pay the price of rebuilding the back-end in a different programming language (as &lt;a href="https://blog.khanacademy.org/go-services-one-goliath-project/?utm_source=blog.quastor.org&amp;amp;utm_medium=referral&amp;amp;utm_campaign=khan-academy-s-migration-from-python-to-go" rel="noopener noreferrer"&gt;Khan Academy&lt;/a&gt; did when a legacy Python 2 codebase forced them to migrate to Go).&lt;/p&gt;

&lt;p&gt;On the contrary, if you really need to, you can bridge multiple programming languages with protocols like &lt;strong&gt;gRPC&lt;/strong&gt; or asynchronous message passing, and that's often the right way to go about it. When you reach the point of enriching your feature set with machine-learning functionality or ETL-based jobs, you would build that ML infrastructure separately in Python, due to its rich ecosystem of domain-specific libraries that other languages naturally lack. But such decisions should be made when there's enough head count to justify the venture; otherwise, a small team will be drawn into the endless complexity of bridging multiple software stacks together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: Match the tech to your constraints, not your ambition.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Hidden Complexity: Communication and Monitoring
&lt;/h3&gt;

&lt;p&gt;Microservices introduce an invisible web of needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service discovery&lt;/li&gt;
&lt;li&gt;API versioning&lt;/li&gt;
&lt;li&gt;Retries, circuit breakers, fallbacks&lt;/li&gt;
&lt;li&gt;Distributed tracing&lt;/li&gt;
&lt;li&gt;Centralized logging and alerting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a monolith, a bug might be a simple stack trace. In a distributed system, it's "why does service A fail when B’s deployment lags C by 30 seconds?"&lt;br&gt;
You would have to invest thoroughly in your observability stack. Doing it "properly" requires instrumenting your applications in specific ways, e.g. integrating OpenTelemetry for tracing, or relying on your cloud provider's tools, like AWS X-Ray, if you go with a complex serverless system. At that point, you have to shift your focus from application code towards building complex monitoring infrastructure just to validate whether your architecture is &lt;strong&gt;actually&lt;/strong&gt; functioning in production.&lt;/p&gt;

&lt;p&gt;Of course, monolith apps need some observability instrumentation too, but instrumenting a single service is far simpler than doing it consistently across many services.&lt;/p&gt;
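&lt;p&gt;Before full tracing is in place, even a tiny helper that emits structured JSON log lines with a correlation ID goes a long way. A minimal sketch in shell (the field names and the &lt;code&gt;REQUEST_ID&lt;/code&gt; convention are assumptions, not a standard):&lt;/p&gt;

```shell
# Emit one JSON log line per event; a shared request id lets you
# grep a single user flow across services.
log() {
  printf '{"ts":"%s","request_id":"%s","level":"%s","msg":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "${REQUEST_ID:-none}" "$1" "$2"
}

REQUEST_ID="req-$$"   # in real services, propagate this id via a header
log info "starting checkout"
log error "payment gateway timed out"
```

&lt;p&gt;With centralized log search, "why did this request fail" then becomes a single query on &lt;code&gt;request_id&lt;/code&gt;.&lt;/p&gt;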

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Understand that &lt;strong&gt;distributed systems &lt;em&gt;aren't free.&lt;/em&gt;&lt;/strong&gt; They're a commitment to a whole new class of engineering challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Microservices &lt;em&gt;Do&lt;/em&gt; Make Sense
&lt;/h2&gt;

&lt;p&gt;Despite these difficulties, there are situations where service-level decoupling is genuinely beneficial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workload Isolation&lt;/strong&gt;: a common example appears in AWS best practices on using S3 event notifications — when an image is uploaded to S3, trigger an image resizing/OCR process, etc. This is useful because obscure data-processing libraries can be decoupled into a self-contained service whose API focuses solely on processing the uploaded data and generating output. Upstream clients that upload data to S3 aren't coupled to this service, and there's less overhead in instrumenting it because of its relative simplicity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Divergent Scalability Needs:&lt;/strong&gt; Imagine you are building an AI product. The &lt;strong&gt;web API&lt;/strong&gt; that triggers ML workloads and shows past results is lightweight, since it mostly interacts with the database. The ML model itself, however, runs on GPUs, is heavy to execute, and requires special GPU-equipped machines with additional configuration. By splitting these parts of the application into separate services running on different machines, you can scale them independently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Different Runtime Requirements:&lt;/strong&gt; Let’s say you've got some legacy code written in C++. You have two choices: somehow port it to your core programming language, or find a way to integrate it with your codebase. Depending on the complexity of that legacy app, you may have to write glue code and implement additional networking/protocols to interact with it, but the bottom line is that you will likely have to run it as a separate service due to runtime incompatibilities. Even if your main app were also written in C++, differing compiler configurations and library dependencies would make it hard to compile everything into a single binary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Large-scale engineering orgs have wrestled with similar challenges. For instance, Uber's engineering team &lt;a href="https://www.uber.com/en-HR/blog/microservice-architecture/" rel="noopener noreferrer"&gt;documented their shift to a domain-oriented microservice architecture&lt;/a&gt; — not out of theoretical purity, but in response to real complexity across teams and scaling boundaries. Their post is a good example of how microservices can work when you have the organizational maturity and operational overhead to support them.&lt;/p&gt;

&lt;p&gt;On one project (a real-estate one, as it happens), we inherited code from a previous team that ran Python-based analytics workloads and loaded data into an MS SQL database. Rebuilding a Django app on top of it would have been a waste: the code had different runtime dependencies and was fairly self-contained, so we kept it separate and only revisited it when something wasn't working as expected. This worked even for our small team, because the analytics service rarely required changes or maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: Use microservices when workloads diverge — not just because they sound clean.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Guidance for Startups
&lt;/h2&gt;

&lt;p&gt;If you're shipping your first product, here's the playbook I'd recommend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start monolithic.&lt;/strong&gt; Pick a common framework and focus on getting the features done. Any mainstream framework is more than good enough to build an API or website and serve your users. Don't follow the hype; stick to the boring way of doing things. You can thank yourself later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single repo.&lt;/strong&gt; Don't bother splitting your code into multiple repositories. I’ve worked with founders who wanted to separate repos to reduce the risk of contractors copying IP — a valid concern. But in practice, it added more friction than security: slower builds, fragmented CI/CD, and poor visibility across teams. The marginal IP protection wasn’t worth the operational drag, especially when proper access controls inside a monorepo were easier to manage. For early-stage teams, clarity and speed tend to matter more than theoretical security gains.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dead-simple local setup.&lt;/strong&gt; Make &lt;code&gt;make up&lt;/code&gt; work. If it takes more than that, be very specific about the steps, record a video/Loom, and add screenshots. If your code is going to be run by an intern or junior dev, they'll likely hit a roadblock, and you’ll spend time explaining how to troubleshoot it. I found that documenting every possible issue for every operating system eliminates time spent clarifying why certain parts of a local setup didn't work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invest early in CI/CD.&lt;/strong&gt; Even if it's a simple HTML page that you could just &lt;code&gt;scp&lt;/code&gt; to a server manually, automate it with source control and CI/CD. When the setup is properly automated, you forget about your continuous-integration infrastructure and focus on features. I've seen many founders working with outsourced teams skimp on CI/CD, and the result is a team demoralized and annoyed by manual deployment processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Split surgically.&lt;/strong&gt; Only split when it clearly solves a painful bottleneck. Otherwise, invest in modularity and tests inside the monolith — it’s faster and easier to maintain.&lt;/li&gt;
&lt;/ul&gt;
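&lt;p&gt;For the "dead-simple local setup" point above, the one-command start-up can be as small as a Makefile like this (a sketch; the targets and the docker-compose setup behind them are assumptions):&lt;/p&gt;

```makefile
# Makefile: the single entry point for local development
up:     ## build and start the whole stack locally
	docker compose up --build -d

logs:   ## tail logs from all services
	docker compose logs -f

down:   ## stop everything and clean up volumes
	docker compose down -v
```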

&lt;p&gt;And above all: &lt;strong&gt;optimize for developer velocity.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Velocity is your startup’s oxygen.&lt;/strong&gt; Premature microservices leak that oxygen slowly — until one day, you can't breathe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: Start simple, stay pragmatic, and split only when you must.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  If you go with a microservice-based approach
&lt;/h2&gt;

&lt;p&gt;I've worked on projects that adopted microservices earlier than they should have, and here are the recommendations I can give from that experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Evaluate the technical stack&lt;/strong&gt; that powers your microservice architecture, and invest in developer-experience tooling. With service-based separation, you now need to automate your microservice stack and its configuration across both local and production environments. On certain projects, I had to build a separate CLI for administrative tasks on the monorepository. One project contained 15-20 microservice deployments, and for the local environment I had to create a CLI tool that generated docker-compose.yml files dynamically to achieve seamless one-command start-up for the regular developer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Focus on reliable communication protocols&lt;/strong&gt; for service-to-service traffic. If it's async messaging, make sure your message schemas are consistent and standardized. If it's REST, invest in OpenAPI documentation. Inter-service clients must implement many things that don't come out of the box, such as retries with exponential backoff and timeouts; a typical bare-bones gRPC client requires you to add these manually to avoid suffering from transient errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ensure that your unit, integration, and end-to-end testing setup&lt;/strong&gt; is stable and scales with the number of services you introduce into your codebase.&lt;/li&gt;
&lt;li&gt;On smaller projects with microservice-based workloads, you will likely default to a shared library of common helpers for instrumenting your observability and communication code consistently. An important consideration here: &lt;strong&gt;keep your shared library as small as possible&lt;/strong&gt;. Any major change forces a rebuild across all dependent services, even unrelated ones.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsr8uxspm7m5ba0nhl6l1.png" alt=" " width="800" height="374"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Look into observability early on.&lt;/strong&gt; Add structured JSON logs and correlation IDs for debugging once your app is deployed. Even basic helpers that output rich logging information (until you've instrumented your app with proper logging/tracing facilities) often save time when figuring out flaky user flows.&lt;/li&gt;
&lt;/ul&gt;
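&lt;p&gt;The retry-with-backoff logic mentioned above can be sketched as a small generic helper (assuming bash; tune the attempt count and base delay for your case):&lt;/p&gt;

```shell
# Run a command until it succeeds, sleeping 1s, 2s, 4s, ... between attempts.
retry() {
  local attempt=1 max=5 base="${RETRY_BASE_DELAY:-1}"
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: giving up after $max attempts: $*" >&2
      return 1
    fi
    sleep "$((base * 2 ** (attempt - 1)))"
    attempt=$((attempt + 1))
  done
}

# Example (hypothetical internal endpoint):
# retry curl -fsS https://service-b.internal/healthz
```

&lt;p&gt;Real clients also need jitter and a cap on the delay, but even this much prevents a single transient error from surfacing as a user-visible failure.&lt;/p&gt;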

&lt;p&gt;To summarize: if you're still going for microservices, understand beforehand the tax you're going to pay in additional development time and maintenance to make the setup workable for every engineer on your team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: If you embrace complexity, invest fully in making it manageable.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Premature microservices are a tax you can’t afford. Stay simple. Stay alive.&lt;/strong&gt; Split only when the pain makes it obvious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Survive first. Scale later. Choose the simplest system that works — and earn every layer of complexity you add.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Related resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://martinfowler.com/bliki/MonolithFirst.html" rel="noopener noreferrer"&gt;Monolith First&lt;/a&gt; — Martin Fowler&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://signalvnoise.com/svn3/the-majestic-monolith" rel="noopener noreferrer"&gt;The Majestic Monolith&lt;/a&gt; — DHH / Basecamp&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://segment.com/blog/goodbye-microservices" rel="noopener noreferrer"&gt;Goodbye Microservices: From 100s of problem children to 1 superstar&lt;/a&gt; — Segment Eng.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://shopify.engineering/deconstructing-monolith-designing-software-maximizes-developer-productivity" rel="noopener noreferrer"&gt;Deconstructing the Monolith&lt;/a&gt; — Shopify Eng.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.uber.com/blog/microservice-architecture/" rel="noopener noreferrer"&gt;Domain‑Oriented Microservice Architecture&lt;/a&gt; — Uber Eng.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://blog.khanacademy.org/go-services-one-goliath-project/" rel="noopener noreferrer"&gt;Go + Services = One Goliath Project&lt;/a&gt; — Khan Academy&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>microservices</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How to Set Up a Free Oracle Cloud VM for Web Development (2025 Guide)</title>
      <dc:creator>Oleg Pustovit</dc:creator>
      <pubDate>Mon, 28 Apr 2025 08:26:32 +0000</pubDate>
      <link>https://dev.to/opustovit/how-to-set-up-a-free-oracle-cloud-vm-for-web-development-2025-guide-8hm</link>
      <guid>https://dev.to/opustovit/how-to-set-up-a-free-oracle-cloud-vm-for-web-development-2025-guide-8hm</guid>
      <description>&lt;p&gt;As a web developer, having access to a persistent, virtual machine in the cloud is often useful for testing and development. While many cloud-based providers offer limited free tiers (e.g. AWS, GCP, or Azure), Oracle Cloud stands out by providing a true "always free" cloud VM.&lt;/p&gt;

&lt;p&gt;In this article, we'll walk through setting up an Oracle Cloud VM under the Always Free tier. It has several advantages over the limited free offerings of services like AWS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It's always free, with no 1-year usage limit (as of April 2025). However, Oracle may change its free-VM policies at any time, so it's a good idea to verify that nothing has changed for free accounts.&lt;/li&gt;
&lt;li&gt;The specs are very generous. You receive a 4 vCPU VM with 24 GB RAM and 200 GB of storage, which makes it a compelling choice for a remote development workstation. The only catch? It runs on ARM. But considering how widely consumer hardware has adopted ARM in recent years, that’s hardly a dealbreaker.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Registering OCI account (and giving them your credit card)
&lt;/h2&gt;

&lt;p&gt;Start by heading to the &lt;a href="https://www.oracle.com/cloud/" rel="noopener noreferrer"&gt;Oracle Cloud website&lt;/a&gt; and clicking “Sign Up.” In the account information section, enter your name, email, and country.&lt;/p&gt;

&lt;p&gt;After email verification, you'll enter your password and set up "Home Region".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft57jfg7a6s8oa4nts7i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft57jfg7a6s8oa4nts7i0.png" alt="setting up region" width="776" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's important to pick a region that supports Ampere ARM instances (some don't). I used &lt;strong&gt;France Central (Paris)&lt;/strong&gt; for proximity (lower latency) and ARM support.&lt;/p&gt;

&lt;p&gt;The next step is verifying your identity by providing a payment method — enter your credit card information. You won't be charged; this is solely for identity verification. Just make sure the resources you create in your Oracle Cloud account always stay within the "Always Free" limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up an instance
&lt;/h2&gt;

&lt;p&gt;Once the account setup is complete, go to &lt;a href="https://cloud.oracle.com/compute/instances" rel="noopener noreferrer"&gt;Instances&lt;/a&gt; in your Oracle Cloud portal and click "Create Instance". Choose a placement that supports Ampere (ARM) processors — otherwise, the Free Tier shapes won’t appear.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmchkeqif7sbj8sv2e25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmchkeqif7sbj8sv2e25.png" alt="placement setup" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Selecting an operating system and instance shape
&lt;/h3&gt;

&lt;p&gt;The next step is selecting the operating system your VM will run on. Ubuntu is a safe bet for general-purpose development, while Oracle Linux works best with Oracle's ecosystem and general server workloads.&lt;/p&gt;

&lt;p&gt;Below, choose your instance shape. This is where you specify your instance type, number of OCPUs, and RAM. The Free tier allows up to 4 OCPUs and 24 GB RAM (as of April 2025), so be sure to pick a shape that fits within those limits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsfcyhdfvwhhrwqw55y4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsfcyhdfvwhhrwqw55y4.png" alt="instance shape set up" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Optionally, you may provide an initialization script. This is similar to "User Data" in AWS EC2 instances (if you're familiar with those). Since this is a single instance created for experimentation, you can simply initialize and configure it manually.&lt;/p&gt;

&lt;p&gt;Press "Next" once everything is configured, and the next step "Security" will be selected. You can freely press next and go to the Networking setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Networking setup and SSH
&lt;/h3&gt;

&lt;p&gt;For simplicity’s sake, use the default virtual network interface card (VNIC). Next, configure SSH access by uploading your public key (.pub). This enables secure remote login to your VM.&lt;/p&gt;
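&lt;p&gt;If you don't have a key pair yet, you can generate one locally first (the &lt;code&gt;oracle_vm&lt;/code&gt; file name is arbitrary; the &lt;code&gt;.pub&lt;/code&gt; file is the one you upload):&lt;/p&gt;

```shell
# Generate an ed25519 key pair (-N "" skips the passphrase; drop it to set one)
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -f ~/.ssh/oracle_vm
# Print the public key to upload to Oracle Cloud
cat ~/.ssh/oracle_vm.pub
```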

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3o65sr32hfhd1znx4sq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3o65sr32hfhd1znx4sq.png" alt="public key configuration" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to configure your storage. Oracle includes 200GB in the free tier. To avoid potential issues later, set a custom boot volume size to 200GB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49t5oqfjg4ektm9b36xa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49t5oqfjg4ektm9b36xa.png" alt="boot volume config" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, review all the settings and create the instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operating the newly created instance
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj9tvr9lrtpiblvlsk41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj9tvr9lrtpiblvlsk41.png" alt="live instance preview" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The instance is created, and you can see it in the list in your Cloud Portal. Once it shows a "Running" status, it's ready to use and you can connect to it. Locate its public IP address and connect via SSH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &amp;lt;user&amp;gt;@&amp;lt;server IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exposing additional ports
&lt;/h3&gt;

&lt;p&gt;By default, ports 80 (HTTP) and 443 (HTTPS) are blocked, which prevents public access to web applications. Two things are necessary to enable connectivity through additional ports (besides SSH):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Updating ingress rules in the security list of the Virtual Cloud Network (VCN).&lt;/li&gt;
&lt;li&gt;Adjusting the instance's &lt;code&gt;iptables&lt;/code&gt; rules, which may be restrictive by default.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Updating ingress rules
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh43ot8p2ojrllihsglgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh43ot8p2ojrllihsglgz.png" alt="subnet configuration" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://cloud.oracle.com/networking/vcns" rel="noopener noreferrer"&gt;virtual cloud networks&lt;/a&gt; in your Oracle Cloud portal and select the default VCN used by the newly created instance. Then navigate to the Subnets tab and select the default subnet. Click on the Security tab and open the default security list created for that subnet. There, open the "Security rules" tab, where you can add new ingress rules. Let's create one allowing incoming HTTPS traffic:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudveoqgfo1vonwmofqny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudveoqgfo1vonwmofqny.png" alt="ingress rules config" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it's saved, you can test your connection by running a simple web app or simply by connecting with utilities like &lt;strong&gt;netcat&lt;/strong&gt; (&lt;code&gt;nc&lt;/code&gt;) or &lt;code&gt;telnet&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nc &lt;span class="nt"&gt;-zv&lt;/span&gt; &amp;lt;IP address&amp;gt;443
&lt;span class="c"&gt;# or&lt;/span&gt;
telnet &amp;lt;IP address&amp;gt;443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it connects, then you're good; otherwise, let's look into &lt;code&gt;iptables&lt;/code&gt; configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring &lt;code&gt;iptables&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;In case the instance still blocks traffic (even after opening access to ports through Oracle VCN security list), it's necessary to look into internal firewall settings. This is where &lt;code&gt;iptables&lt;/code&gt; comes in. &lt;/p&gt;

&lt;p&gt;On Ubuntu, you can optionally manage the firewall through &lt;code&gt;ufw&lt;/code&gt; (Uncomplicated Firewall), a simplified interface over &lt;code&gt;iptables&lt;/code&gt; rules. Note that without prior configuration, the instance may come with a restrictive default rule set. &lt;/p&gt;

&lt;p&gt;First, check if &lt;code&gt;iptables&lt;/code&gt; is actively filtering traffic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-L&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't see &lt;code&gt;ACCEPT&lt;/code&gt; rules for port 443, you need to add them manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Allow incoming traffic on port 443&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--dport443&lt;/span&gt; &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;span class="c"&gt;# Optionally, allow HTTP&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-p&lt;/span&gt; tcp &lt;span class="nt"&gt;--dport80&lt;/span&gt; &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;span class="c"&gt;# Allow established connections (if not already allowed)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-m&lt;/span&gt; state &lt;span class="nt"&gt;--state&lt;/span&gt; ESTABLISHED,RELATED &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;span class="c"&gt;# Allow loopback interface (optional but recommended)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="nt"&gt;-i&lt;/span&gt; lo &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter the following commands to persist the rules across reboots:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;iptables-persistent
&lt;span class="nb"&gt;sudo &lt;/span&gt;netfilter-persistent save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can use &lt;code&gt;ufw&lt;/code&gt; with an equivalent set of terminal commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 443/tcp
&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 80/tcp
&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw &lt;span class="nb"&gt;enable&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What it’s like using a free VM as your dev playground
&lt;/h2&gt;

&lt;p&gt;I've been using this VM for nearly a year for development and running test web applications. I've accessed it from an iPad using Blink (with Neovim) and from a desktop via VSCode's Remote SSH extension. To make the whole experience smoother, I set up a Mosh server, which offers persistent terminal sessions even over unstable connections.&lt;/p&gt;

&lt;p&gt;The biggest upside of this setup is that the same terminal is available from any machine (given the credentials). Oracle's free VM is a solid playground for deployment tests, CLI experimentation, and even lightweight app hosting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier_topic-Always_Free_Resources.htm" rel="noopener noreferrer"&gt;Oracle always free resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/learn/lab_virtual_network/index.html" rel="noopener noreferrer"&gt;Configuring VCN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mosh.org/" rel="noopener noreferrer"&gt;Mosh (mobile shell)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>cloud</category>
      <category>linux</category>
    </item>
    <item>
      <title>Deploying a Hugo Blog to GitHub Pages with Actions</title>
      <dc:creator>Oleg Pustovit</dc:creator>
      <pubDate>Thu, 10 Apr 2025 14:58:04 +0000</pubDate>
      <link>https://dev.to/opustovit/deploying-a-hugo-blog-to-github-pages-with-actions-13pa</link>
      <guid>https://dev.to/opustovit/deploying-a-hugo-blog-to-github-pages-with-actions-13pa</guid>
      <description>&lt;p&gt;After years of building software and writing technical documentation, I decided to create a blog to share my experience and help others on similar journeys. This article walks through how I set up my technical blog, aimed at those new to blogging or Hugo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the platform
&lt;/h2&gt;

&lt;p&gt;For me, it's important to have as minimal a solution as possible. While building my own Markdown-to-HTML publishing script is an idea for the future, my immediate priority was to get the blog online quickly.&lt;/p&gt;

&lt;p&gt;Nowadays, creating a personal website isn't difficult; there's a wide variety of options to help with the endeavor. Here are the alternatives that came to mind: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CMS&lt;/strong&gt; (e.g. WordPress, Content Hub, Joomla, etc.): While platforms like WordPress are powerful, they felt excessive for a static content blog. I wanted something lightweight and flexible without being bound to a dynamic CMS stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Jekyll&lt;/strong&gt;: Jekyll is a solid option, widely used by other developers to host blogs, but due to my lack of experience with Ruby, I chose not to use it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hugo&lt;/strong&gt;. Hugo is written in Go and uses the familiar syntax of Go templates (if you happen to code a lot in Go), while rendering pages in Markdown (similarly to Jekyll). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;11ty&lt;/strong&gt;, &lt;strong&gt;Astro&lt;/strong&gt;, &lt;strong&gt;Hexo&lt;/strong&gt;, and other Node.js-based alternatives: It's a matter of preference, but I personally decided to minimize my use of Node.js tooling. While there are many powerful tools, the Node.js ecosystem is notorious for rapid change, which has often left me unable to run older projects with many outdated dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting up Hugo
&lt;/h2&gt;

&lt;p&gt;I chose Hugo as my blogging platform. Having produced a substantial amount of documentation on my past software-related projects, I feel very confident using Markdown and a terminal-based text editor for my writing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using a GitHub repository
&lt;/h3&gt;

&lt;p&gt;Previously, I had already created a GitHub Pages website with placeholder files and connected a domain to it, so to populate an existing repo with a new Hugo site, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;hugo new site &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--force&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This populates the current repository directory with the files necessary to run a Hugo website. After that, set the theme and other parameters in the &lt;code&gt;hugo.toml&lt;/code&gt; file, and the site is ready. Once everything is configured, start the server with &lt;code&gt;hugo server&lt;/code&gt;. &lt;/p&gt;
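&lt;p&gt;For reference, a minimal &lt;code&gt;hugo.toml&lt;/code&gt; might look like this (the values are placeholders; the theme setting is covered in the "Adding a theme" section):&lt;/p&gt;

```toml
baseURL = "https://test-blog-domain.com/"
languageCode = "en-us"
title = "My Tech Blog"
theme = "cactus"
```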

&lt;h3&gt;
  
  
  Running server in development mode
&lt;/h3&gt;

&lt;p&gt;At this point, the website is available at &lt;code&gt;localhost&lt;/code&gt;. Since I develop on a remote cloud VM, accessing the local Hugo server via &lt;code&gt;localhost&lt;/code&gt; wasn't possible, so I needed to safely expose the local instance to the outside world. This is what a reverse proxy is for. &lt;/p&gt;

&lt;p&gt;While load balancers and reverse proxies like Nginx are common and popular, I chose &lt;strong&gt;Caddy&lt;/strong&gt; to serve my dev website because it sets up SSL certificates (via Let’s Encrypt) with no effort. &lt;strong&gt;Caddy&lt;/strong&gt; is configured through a &lt;code&gt;Caddyfile&lt;/code&gt;, where for the domain of interest you write a &lt;code&gt;reverse_proxy&lt;/code&gt; directive with the necessary port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test-blog-domain.com {
    reverse_proxy localhost:1313
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After starting Caddy with the configuration above, the development website is available at &lt;code&gt;https://test-blog-domain.com&lt;/code&gt; (provided that an &lt;code&gt;A&lt;/code&gt; DNS record for &lt;code&gt;test-blog-domain.com&lt;/code&gt; points to the VM's public IP address).&lt;/p&gt;
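&lt;p&gt;For reference, such an &lt;code&gt;A&lt;/code&gt; record in a DNS zone looks roughly like this (the IP below is a documentation placeholder, not a real address):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test-blog-domain.com.  3600  IN  A  203.0.113.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;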

&lt;h3&gt;Adding a theme&lt;/h3&gt;

&lt;p&gt;Hugo has a number of free themes publicly available on GitHub. To install one, clone the theme's repository and update the &lt;code&gt;theme&lt;/code&gt; parameter in &lt;code&gt;hugo.toml&lt;/code&gt;. I chose a theme called &lt;a href="https://github.com/monkeyWzr/hugo-theme-cactus" rel="noopener noreferrer"&gt;&lt;code&gt;cactus&lt;/code&gt;&lt;/a&gt;. After installing it, I got a build error complaining that the Google Analytics async template could not be found:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Error: error building site: render: failed to render pages: render of &lt;span class="s2"&gt;"/"&lt;/span&gt; failed: &lt;span class="s2"&gt;"/home/user/projects/nexo-tech.github.io/themes/cactus/layouts/_default/baseof.html:3:3"&lt;/span&gt;: execute of template failed: template: index.html:3:3: executing &lt;span class="s2"&gt;"index.html"&lt;/span&gt; at &amp;lt;partial &lt;span class="s2"&gt;"head.html"&lt;/span&gt; .&amp;gt;: error calling partial: execute of template failed: html/template:partials/head.html:47:16: no such template &lt;span class="s2"&gt;"_internal/google_analytics_async.html"&lt;/span&gt;
make: &lt;span class="k"&gt;***&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;Makefile:2: up] Error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A fix for this issue can be found on &lt;a href="https://github.com/monkeyWzr/hugo-theme-cactus/pull/152/commits/eb4a01644555170808da009285cd805719d34f4c" rel="noopener noreferrer"&gt;&lt;code&gt;GitHub&lt;/code&gt;&lt;/a&gt;. The Hugo community is active, and many issues, including this Google Analytics error, already have patches or discussions on GitHub.&lt;/p&gt;
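&lt;p&gt;For completeness, the theme installation described above boils down to cloning the theme into the &lt;code&gt;themes/&lt;/code&gt; directory and pointing &lt;code&gt;hugo.toml&lt;/code&gt; at it (the target directory name is your choice, as long as it matches the &lt;code&gt;theme&lt;/code&gt; value):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# clone the theme into the themes/ directory&lt;/span&gt;
git clone https://github.com/monkeyWzr/hugo-theme-cactus themes/cactus
&lt;span class="c"&gt;# then, in hugo.toml:&lt;/span&gt;
&lt;span class="c"&gt;# theme = "cactus"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;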

&lt;p&gt;After fixing other deprecation warnings, the site started working: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpsmn4tenlzudi7h9nuaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpsmn4tenlzudi7h9nuaq.png" alt="A front page of my blog is shown" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Deploying a website to a CDN: GitHub Pages&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sp0sfrk739kv6esrx19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sp0sfrk739kv6esrx19.png" alt="Deployment diagram" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are numerous ways to deploy a static website, and most of them require hosting or a server, which is usually paid or comes with a restricted free plan. GitHub Pages is a notable exception. It can serve static content from a particular branch of a repository, or you can use pre-built GitHub Actions that create build artifacts and deploy them in a custom way. Since personal GitHub accounts have very limited artifact storage, and managing that storage is tedious, I opted for the simpler solution: static website assets are pushed to a pre-defined git branch (&lt;code&gt;gh-pages&lt;/code&gt;). Luckily, there are &lt;strong&gt;actions&lt;/strong&gt; built for Hugo exactly for this purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/peaceiris/actions-hugo" rel="noopener noreferrer"&gt;&lt;code&gt;actions-hugo&lt;/code&gt;&lt;/a&gt; by &lt;strong&gt;Shohei Ueda&lt;/strong&gt;. A simple way to set up Hugo in a GitHub actions environment&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/peaceiris/actions-gh-pages" rel="noopener noreferrer"&gt;&lt;code&gt;actions-gh-pages&lt;/code&gt;&lt;/a&gt; Also by Shohei Ueda, this action pushes static assets to the specified branch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the GitHub Actions workflow that builds the site and deploys it to &lt;code&gt;gh-pages&lt;/code&gt;. Note that if you need a custom domain, a CNAME file must be copied into the &lt;code&gt;public&lt;/code&gt; directory before the &lt;code&gt;gh-pages&lt;/code&gt; action runs. Furthermore, your repository's workflow permissions must be set to "Read and write" (found under &lt;strong&gt;Settings &amp;gt; Actions &amp;gt; General&lt;/strong&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Deploy Hugo&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;  
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout repo&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Hugo&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peaceiris/actions-hugo@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;hugo-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;latest'&lt;/span&gt;
          &lt;span class="na"&gt;extended&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build site&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hugo --minify&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add CNAME file&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cp CNAME public/CNAME&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to GitHub Pages&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peaceiris/actions-gh-pages@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;github_token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
          &lt;span class="na"&gt;publish_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./public&lt;/span&gt;
          &lt;span class="na"&gt;publish_branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gh-pages&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the deployment passes, the site is served from GitHub's CDN. If the website doesn't work, make sure GitHub Pages is configured to serve from the branch that contains the built assets (&lt;code&gt;gh-pages&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;With Hugo set up and deployed, I can now focus on what matters — sharing technical insights from my experience. I hope this guide helps others looking to build a simple and reliable blog for their work.&lt;/p&gt;

&lt;h2&gt;Related resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://nexo.sh" rel="noopener noreferrer"&gt;The published blog website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nexo-tech/nexo-tech.github.io" rel="noopener noreferrer"&gt;Repository for this website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gohugo.io/getting-started/quick-start/#publish-the-site" rel="noopener noreferrer"&gt;Hugo quick start&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site#dns-records-for-your-custom-domain" rel="noopener noreferrer"&gt;Configuring DNS settings for GitHub Pages&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://themes.gohugo.io/themes/hugo-theme-cactus/" rel="noopener noreferrer"&gt;Cactus theme for Hugo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gohugo.io/templates/embedded/#google-analytics" rel="noopener noreferrer"&gt;Google Analytics setup in Hugo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://caddyserver.com/docs/quick-starts/reverse-proxy" rel="noopener noreferrer"&gt;Caddy reverse proxy quick-start&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>hugo</category>
      <category>devops</category>
      <category>webdev</category>
      <category>staticwebsite</category>
    </item>
  </channel>
</rss>
