<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aleksy Bohdziul</title>
    <description>The latest articles on DEV Community by Aleksy Bohdziul (@aleksy_bohdziul_ac0e13dd8).</description>
    <link>https://dev.to/aleksy_bohdziul_ac0e13dd8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3312722%2F3fe214ba-057c-4fbd-9dad-7d1e9e27ab6a.png</url>
      <title>DEV Community: Aleksy Bohdziul</title>
      <link>https://dev.to/aleksy_bohdziul_ac0e13dd8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aleksy_bohdziul_ac0e13dd8"/>
    <language>en</language>
    <item>
      <title>Partial Catchment Delineation for Inundation Modeling</title>
      <dc:creator>Aleksy Bohdziul</dc:creator>
      <pubDate>Thu, 19 Feb 2026 07:34:53 +0000</pubDate>
      <link>https://dev.to/u11d/partial-catchment-delineation-for-inundation-modeling-275</link>
      <guid>https://dev.to/u11d/partial-catchment-delineation-for-inundation-modeling-275</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;A System-Oriented Approach to Scalable Hydrological Feature Engineering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Traditional watershed delineation wasn't designed for machine learning at scale. The standard approach treats each watershed as a complete, self-contained unit, which makes sense when you're studying individual rivers. But it creates real problems when you need to train models across hundreds of locations.&lt;/p&gt;

&lt;p&gt;We ran into this on a recent flood prediction project. The study area had almost no hydrological data, no stream gauges, no riverbed surveys, nothing you'd typically use for hydraulic modeling. So we trained an LSTM model to predict discharge instead. It worked, but there was a catch: the model could only predict flow at watershed outlets. One outlet per watershed meant we didn't have enough training points.&lt;/p&gt;

&lt;p&gt;The obvious solution was to create more outlets by subdividing watersheds along the river. But traditional catchment boundaries overlap heavily when you do this, which breaks parallel processing and makes selective updates nearly impossible. We needed many independent spatial units for the ML model, but we also needed to preserve the downstream flow aggregation that hydrological modeling depends on.&lt;/p&gt;

&lt;p&gt;Our solution was to delineate watersheds at the reach level instead. Each reach gets its own partial catchment, smaller units that remain hydrologically valid but can be computed and updated independently. It's a compromise between what the data pipeline needs and what the hydrology requires.&lt;/p&gt;

&lt;p&gt;Getting the geometry right was only part of the problem. We tested multiple DEM sources, modified the river network repeatedly, and needed to recompute catchments constantly during development. Treating delineation as a one-time preprocessing step wasn't viable. We needed to version every intermediate result, recompute selectively when inputs changed, and compare outputs across iterations.&lt;/p&gt;

&lt;p&gt;This pushed us toward Dagster's asset model. Instead of treating catchments as temporary pipeline outputs, we manage them as persistent spatial assets with explicit dependencies and lineage tracking.&lt;/p&gt;

&lt;p&gt;The following sections cover the hydrological rationale, the technical implementation, and how asset orchestration made this approach practical for production use.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Catchment Delineation in the Context of Inundation Modeling&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Inundation modeling relies on an accurate representation of how water accumulates and propagates through a river network. Traditionally, this begins with watershed delineation derived from a digital elevation model, followed by hydraulic simulation over the resulting domain. When applied to large regions, however, this workflow introduces practical limitations. Entire catchments must be processed as single units, even when only a small portion of the river network is relevant for a given prediction or model update.&lt;/p&gt;

&lt;p&gt;From a data engineering perspective, this creates an undesirable coupling between upstream and downstream regions. A change in DEM preprocessing, stream burning strategy, or river vector alignment forces recomputation of large spatial extents, even when the change is localized. This coupling becomes a bottleneck when experimenting with multiple configurations or when operating a system that must adapt continuously to new data.&lt;/p&gt;

&lt;p&gt;The core insight behind partial catchment delineation is that hydraulic dependency flows downstream, but computational dependency does not need to. By separating catchments into smaller, reach-aligned units, it becomes possible to preserve hydrological correctness while dramatically improving computational flexibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Core Idea: Reach-Level and Progressive Catchments&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We segmented each river into short reaches (100 m) and delineated a catchment for each reach. From those building blocks we constructed larger, progressively downstream catchments.&lt;/p&gt;

&lt;p&gt;The method introduced here distinguishes between two complementary spatial constructs: reach-level catchments and progressive catchments.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Reach Catchments (non-overlapping)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Reach-level catchments are defined for individual river segments, bounded upstream by the nearest confluence and downstream by the segment's outlet. These units do not overlap and collectively partition the drainage area of the river network. Their non-overlapping nature makes them well suited for parallel processing, independent feature extraction, and localized recomputation.&lt;/p&gt;

&lt;p&gt;Visualize the landscape divided into narrow, adjacent drainage areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each reach catchment drains only to its own 100 m river segment&lt;/li&gt;
&lt;li&gt;None of them include upstream contributions&lt;/li&gt;
&lt;li&gt;Their boundaries tile the basin without overlaps&lt;/li&gt;
&lt;/ul&gt;
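&lt;p&gt;The segmentation step is easy to sketch in plain Python. This is a simplified stand-in for the &lt;code&gt;split_lines&lt;/code&gt; helper that appears in the implementation outline (a real pipeline would use shapely on a metric CRS): walk each polyline and cut it every 100 m.&lt;/p&gt;

```python
import math

def split_line(points, segment_length=100.0):
    """Split a polyline (list of (x, y) vertices in a metric CRS) into
    consecutive segments of roughly `segment_length` metres."""
    segments, current, acc = [], [points[0]], 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        while acc + d >= segment_length:
            # interpolate the cut point on this edge
            t = (segment_length - acc) / d
            cut = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            current.append(cut)
            segments.append(current)
            current, acc = [cut], 0.0
            x0, y0 = cut
            d = math.hypot(x1 - x0, y1 - y0)
        current.append((x1, y1))
        acc += d
    if len(current) > 1:
        segments.append(current)  # keep the trailing partial reach
    return segments

reaches = split_line([(0, 0), (250, 0)], segment_length=100.0)
# three reaches: two full 100 m segments plus a 50 m remainder
```

&lt;p&gt;The trailing partial reach is kept on purpose: its catchment still drains real area.&lt;/p&gt;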

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmhs84we4q135s0n9nr4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmhs84we4q135s0n9nr4.gif" alt="Reach catchments" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Progressive Catchments (overlapping by design)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Progressive catchments, by contrast, represent the cumulative upstream area contributing to a given river reach. Each progressive catchment is constructed by aggregating all upstream reach-level catchments along the river network. This structure mirrors traditional hydrological reasoning and provides a direct bridge to downstream hydraulic modeling.&lt;/p&gt;

&lt;p&gt;Now start combining those catchments as you move downstream:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Progressive Catchment 1 = Reach 1&lt;/li&gt;
&lt;li&gt;Progressive Catchment 2 = Reach 1 + Reach 2&lt;/li&gt;
&lt;li&gt;Progressive Catchment 3 = Reach 1 + Reach 2 + Reach 3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Visually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the first progressive catchment is small and upstream&lt;/li&gt;
&lt;li&gt;each subsequent one contains the previous&lt;/li&gt;
&lt;li&gt;downstream catchments fully envelop upstream ones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why we call them progressive: each one represents the basin area contributing flow up to that point along the river.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjs1waprdtg2luvsg64z.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjs1waprdtg2luvsg64z.gif" alt="Progressive catchments&amp;lt;br&amp;gt;
" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By maintaining both representations explicitly, the system can operate at two levels simultaneously. Reach-level catchments support scalable computation and machine learning workflows, while progressive catchments preserve the physical context required for inundation modeling.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Why not delineate progressive catchments directly?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We could have. It would even have been simpler than composing them from reach catchments. But:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We also needed the reach catchments for inundation simulation later&lt;/li&gt;
&lt;li&gt;Running delineation logic twice felt like a smell&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we delineate once, and compose later.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Tributaries&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Where a tributary joins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;its reach catchments are merged into the progressive catchment only after the confluence&lt;/li&gt;
&lt;li&gt;upstream progressive catchments on the main stem remain unaffected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reach catchments are spatial building blocks&lt;/li&gt;
&lt;li&gt;progressive catchments are cumulative assemblies of those blocks&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;What the Two Catchment Types Are Used For&lt;/strong&gt;
&lt;/h2&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Progressive catchments → model training&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Our LSTM predicts discharge at outlet points. For each progressive catchment, we derive features such as precipitation, temperature, humidity, pressure, the aridity index, and others.&lt;/p&gt;

&lt;p&gt;All inputs are provided as raster datasets. Catchment geometries are used as spatial masks to extract and aggregate pixel values.&lt;/p&gt;
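&lt;p&gt;Here is the masking idea in miniature, assuming the raster is already aligned with the catchment grid (in the real pipeline this is rasterio and GeoPandas territory; for illustration a catchment is just a set of pixel coordinates):&lt;/p&gt;

```python
def zonal_mean(raster, catchment_cells):
    """Aggregate raster pixel values over a catchment mask."""
    values = [raster[r][c] for r, c in catchment_cells]
    return sum(values) / len(values)

# toy precipitation raster (mm), rows x cols
precip = [
    [2.0, 4.0, 6.0],
    [1.0, 3.0, 5.0],
]
catchment = {(0, 0), (0, 1), (1, 1)}  # cells draining to one reach
zonal_mean(precip, catchment)  # → 3.0
```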

&lt;p&gt;This workflow requires repeated spatial joins, raster masking, and temporal aggregation over large geospatial datasets. We implement and orchestrate these pipelines using Dagster, which allows us to manage dependencies, partition computations, and scale processing across large spatial-temporal datasets.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Reach catchments → inundation mapping&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each reach gets its own discharge estimate (derived from differences between progressive catchments), which later feeds the inundation simulation.&lt;/p&gt;
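&lt;p&gt;The differencing is simple in principle. A hedged sketch for a single stem, with illustrative numbers: subtracting consecutive progressive-outlet discharges recovers each reach's local contribution.&lt;/p&gt;

```python
def reach_inflows(progressive_discharge):
    """progressive_discharge[i] is the predicted discharge at outlet i,
    ordered downstream along a single stem; returns per-reach inflow."""
    inflows = [progressive_discharge[0]]
    for up, down in zip(progressive_discharge, progressive_discharge[1:]):
        inflows.append(down - up)
    return inflows

reach_inflows([1.0, 1.5, 2.25])  # → [1.0, 0.5, 0.75]
```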

&lt;p&gt;Once reach-level and progressive catchments are established, they become the foundation for feature extraction. Terrain attributes, land cover statistics, soil properties, and hydrological indices can be computed independently for each reach-level catchment. These features serve as inputs to machine learning models predicting discharge or inundation extent.&lt;/p&gt;

&lt;p&gt;Progressive catchments then provide a natural mechanism for aggregating upstream contributions. Features derived at the reach level can be accumulated downstream in a controlled, traceable manner. This separation simplifies both training and inference: models operate on consistent, non-overlapping units, while hydraulic context is reintroduced through aggregation.&lt;/p&gt;

&lt;p&gt;At this stage, the delineation method transitions from a GIS exercise into a data orchestration problem. Each derived feature depends on specific preprocessing choices, spatial units, and upstream dependencies. Managing these relationships manually quickly becomes infeasible.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;DEM (Digital Elevation Model) Preprocessing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Implementing partial catchment delineation at high spatial resolution exposes a range of practical challenges. Reliable catchment delineation depends far more on DEM preprocessing than on the delineation algorithm itself.&lt;/p&gt;

&lt;p&gt;High-resolution DEMs (1 m × 1 m in our case) amplify artifacts that are negligible at coarser scales, including spurious sinks, artificial barriers, and noise-induced flow paths. Stream burning and sink filling become necessary, but their parameters introduce additional degrees of freedom that affect downstream results.&lt;/p&gt;

&lt;p&gt;Below we summarize the preprocessing steps that proved essential for stable and repeatable results.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Depression filling&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Raw DEMs frequently contain spurious sinks caused by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;measurement noise&lt;/li&gt;
&lt;li&gt;vegetation and built structures&lt;/li&gt;
&lt;li&gt;interpolation artifacts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Left untreated, these sinks interrupt downstream connectivity and lead to fragmented or incomplete catchments. We therefore applied depression filling prior to any flow calculations.&lt;/p&gt;

&lt;p&gt;Our goal was not to aggressively flatten terrain, but to ensure continuous drainage paths with minimal elevation modification. Priority-flood-style algorithms worked well in practice and preserved overall terrain structure.&lt;/p&gt;
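&lt;p&gt;For intuition, here is a minimal pure-Python priority-flood sketch (production runs would use a dedicated library such as richdem or WhiteboxTools): grow inward from the DEM border, always expanding the lowest cell seen so far, and raise any lower neighbour to that spill level.&lt;/p&gt;

```python
import heapq

def priority_flood_fill(dem):
    """Priority-flood depression filling on a 2D grid (4-connected)."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    # seed the queue with every border cell
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r][c], r, c))
                seen[r][c] = True
    while heap:
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if nr in range(rows) and nc in range(cols) and not seen[nr][nc]:
                seen[nr][nc] = True
                # raise spurious sinks only up to their spill elevation
                filled[nr][nc] = max(filled[nr][nc], z)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

dem = [
    [5, 5, 5],
    [5, 1, 5],  # a one-cell pit that would trap flow
    [5, 4, 5],
]
priority_flood_fill(dem)  # the pit is raised to its spill elevation, 4
```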
&lt;h3&gt;
  
  
  &lt;strong&gt;Stream burning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Even after sink removal, we observed inconsistencies between modeled flow paths and known river locations. To address this, we burned the vector river network into the DEM by lowering elevations along river centerlines.&lt;/p&gt;

&lt;p&gt;Aligning raster-based flow accumulation with vector river networks proved particularly sensitive. Small positional discrepancies between datasets can lead to misaligned pour points, fragmented catchments, or unrealistic drainage patterns. These issues are not purely geometric; they directly influence the stability and reproducibility of downstream features.&lt;/p&gt;

&lt;p&gt;This step serves two purposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it enforces hydrologically plausible drainage paths&lt;/li&gt;
&lt;li&gt;it reduces sensitivity to small elevation errors in flat or low-gradient terrain&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stream burning significantly improved watershed stability, especially near confluences and in wide floodplains where DEM gradients are weak.&lt;/p&gt;
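&lt;p&gt;Conceptually, the burn is just a lowering of DEM cells along the rasterized centerline. A deliberately simplified sketch (real tools often taper the burn and smooth the banks rather than subtract a constant):&lt;/p&gt;

```python
def burn_streams(dem, river_cells, burn_depth=2.0):
    """Lower the DEM by a fixed depth along rasterized river cells so
    that flow routing follows the mapped channel."""
    burned = [row[:] for row in dem]
    for r, c in river_cells:
        burned[r][c] -= burn_depth
    return burned

dem = [
    [10.0, 10.0, 10.0],
    [10.0, 10.0, 10.0],
]
river = [(0, 1), (1, 1)]  # rasterized centerline
burn_streams(dem, river)  # channel cells drop to 8.0
```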
&lt;h3&gt;
  
  
  &lt;strong&gt;Flow accumulation and its limitations&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We initially experimented with flow accumulation to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identify channelized flow paths&lt;/li&gt;
&lt;li&gt;snap pour points automatically to areas of high contributing area&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, the high spatial resolution of the DEM (1 m × 1 m) introduced significant noise into flow accumulation outputs. Minor elevation perturbations resulted in fragmented or unrealistic accumulation patterns, making automated snapping unreliable.&lt;/p&gt;

&lt;p&gt;As a result, we limited the use of flow accumulation and instead relied more heavily on burned-in river vectors and explicit reach endpoints for pour point placement.&lt;/p&gt;
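&lt;p&gt;The noise sensitivity is easier to see once you recall what D8 does: each cell routes all of its flow to the single steepest-descent neighbour, so a small elevation perturbation in a 1 m DEM can flip a cell's direction entirely. A toy sketch:&lt;/p&gt;

```python
import math

def d8_direction(dem, r, c):
    """Return the steepest-descent neighbour of cell (r, c), or None
    for pits and flats; slope is drop divided by centre distance."""
    rows, cols = len(dem), len(dem[0])
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if nr in range(rows) and nc in range(cols):
                dist = math.hypot(dr, dc)
                drop = (dem[r][c] - dem[nr][nc]) / dist
                if drop > best_drop:
                    best, best_drop = (nr, nc), drop
    return best

dem = [
    [3.0, 2.9, 3.0],
    [3.0, 2.0, 3.0],
]
d8_direction(dem, 0, 0)  # → (1, 1): the diagonal drop wins
```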

&lt;p&gt;In later experiments we found that D-infinity flow routing produced noticeably cleaner flow accumulation than D8, but the discovery came too late to implement before the end of the first phase of the project.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Spatial alignment issues&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;During development we discovered small but significant horizontal offsets between the DEM and the river vector dataset: our river geometries sat a few meters away from where the DEM said the channels were.&lt;/p&gt;

&lt;p&gt;These discrepancies led to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pour points falling outside effective drainage paths&lt;/li&gt;
&lt;li&gt;unstable catchment boundaries&lt;/li&gt;
&lt;li&gt;inconsistent results across neighboring reaches&lt;/li&gt;
&lt;li&gt;several hours of existential doubt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stream burning mitigated some of these effects, but resolving DEM-vector alignment properly is still on our list of things to fix. For now, burning rivers into the DEM keeps the issue manageable.&lt;/p&gt;

&lt;p&gt;Rather than attempting to eliminate these uncertainties entirely, we treated them as explicit dimensions of experimentation. Different preprocessing strategies were preserved as separate artifacts, allowing their effects to be compared systematically. This approach only becomes feasible when intermediate results are treated as first-class entities rather than overwritten pipeline outputs.&lt;/p&gt;

&lt;p&gt;Overall, careful DEM preprocessing proved essential not only for hydrologic correctness, but also for producing geometries stable enough to support downstream machine-learning workflows.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Implementation Outline&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Below is a cleaned-up, simplified sketch of the workflow. The real code is longer, louder, and contains more comments written at 2 a.m.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Load DEM and preprocess
filled_dem = fill_depressions(dem)
burned_dem = burn_streams(filled_dem, river_lines)
flow_dir   = d8_flow_direction(burned_dem)

# 2. Split rivers into fixed-length reaches
reaches = split_lines(river_lines, segment_length=100)

# 3. Create pour points at reach outlets
pour_points = reaches.geometry.apply(get_downstream_endpoint)

# 4. Delineate reach catchments
reach_catchments = delineate_watersheds(
    flow_dir=flow_dir,
    pour_points=pour_points
)

# 5. Build progressive catchments
progressive = []
current = None
for reach in ordered_downstream(reach_catchments):
    current = reach if current is None else union(current, reach)
    progressive.append(current)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The devil, as always, lives in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tributary joins&lt;/li&gt;
&lt;li&gt;reach ordering&lt;/li&gt;
&lt;li&gt;and spatial indexing performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Joining tributaries means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identifying parent-child relationships between reaches&lt;/li&gt;
&lt;li&gt;merging reach catchments in the correct downstream order&lt;/li&gt;
&lt;li&gt;avoiding double-counting areas&lt;/li&gt;
&lt;/ul&gt;
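&lt;p&gt;A hedged sketch of that bookkeeping, with catchments as cell-ID sets and illustrative reach names: progressive catchments are built recursively from each reach's direct upstream parents, and set union prevents double-counting where a tributary joins.&lt;/p&gt;

```python
def build_progressive(reach_catchments, parents):
    """reach_catchments maps reach -> set of cell IDs; parents maps
    reach -> reaches draining directly into it. Returns the cumulative
    (progressive) catchment for every reach."""
    progressive = {}

    def resolve(reach):
        if reach not in progressive:
            merged = set(reach_catchments[reach])
            for parent in parents.get(reach, []):
                merged |= resolve(parent)  # union avoids double-counting
            progressive[reach] = merged
        return progressive[reach]

    for reach in reach_catchments:
        resolve(reach)
    return progressive

reach_catchments = {"A1": {1}, "A2": {2}, "T1": {3}}  # T1 is a tributary
parents = {"A2": ["A1", "T1"]}  # tributary joins just above A2's outlet
prog = build_progressive(reach_catchments, parents)
# prog["A1"] stays {1}; prog["A2"] becomes {1, 2, 3}
```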

&lt;h2&gt;
  
  
  &lt;strong&gt;Asset-Based Orchestration of Spatial Dependencies&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To make this workflow operational, we modeled reach-level catchments, progressive catchments, and derived features as explicit assets within Dagster. Each asset represents a durable spatial artifact with well-defined dependencies on upstream inputs. Changes in DEM preprocessing, river network alignment, or feature definitions propagate through the asset graph in a controlled way.&lt;/p&gt;

&lt;p&gt;This asset-oriented approach allows recomputation to be both selective and explainable. When a preprocessing parameter changes, only the affected reach-level catchments and their downstream aggregates are recomputed. Historical artifacts remain available for comparison, enabling systematic evaluation of alternative configurations.&lt;/p&gt;
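&lt;p&gt;Stripped of Dagster specifics, the selective-recompute behaviour is just a reachability question on the asset graph: a change invalidates only its downstream closure. The asset names below are illustrative.&lt;/p&gt;

```python
def downstream_closure(deps, changed):
    """deps maps asset -> list of assets it depends on; return every
    asset that must be recomputed when `changed` changes."""
    # invert the graph: asset -> assets that consume it
    consumers = {}
    for asset, upstream in deps.items():
        for u in upstream:
            consumers.setdefault(u, []).append(asset)
    stale, stack = set(), [changed]
    while stack:
        asset = stack.pop()
        if asset not in stale:
            stale.add(asset)
            stack.extend(consumers.get(asset, []))
    return stale

deps = {
    "dem_filled": ["dem_raw"],
    "dem_burned": ["dem_filled", "river_vectors"],
    "reach_catchments": ["dem_burned"],
    "progressive_catchments": ["reach_catchments"],
    "features": ["progressive_catchments", "weather_rasters"],
}
downstream_closure(deps, "river_vectors")
# only the burn step and everything below it is stale; dem_filled is untouched
```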

&lt;p&gt;Dagster's lineage tracking plays a critical role here. Each feature can be traced back through the chain of spatial transformations that produced it, providing transparency during debugging and model validation. Rather than reasoning about pipeline execution order, the system reasons about data state and dependency.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Operational Implications&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Treating partial catchment delineation as an orchestrated asset graph changes the operational profile of inundation modeling workflows. Iteration becomes cheaper because recomputation is localized. Failures become easier to diagnose because dependencies are explicit. Experimentation becomes safer because previous states are preserved rather than overwritten.&lt;/p&gt;

&lt;p&gt;Perhaps most importantly, this approach aligns hydrological reasoning with modern data platform design. Physical dependencies are respected, but they no longer dictate computational coupling. The system can evolve incrementally, accommodating new data sources, preprocessing strategies, and modeling approaches without requiring full recomputation of the spatial domain.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Lessons Learned&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;DEM preprocessing matters more than the delineation algorithm itself&lt;/li&gt;
&lt;li&gt;1 m DEMs are great until you compute derivatives&lt;/li&gt;
&lt;li&gt;River vectors and DEMs rarely agree — believe neither blindly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Segmenting rivers into reach-level catchments gave us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;more training points&lt;/li&gt;
&lt;li&gt;spatially consistent features&lt;/li&gt;
&lt;li&gt;and a clean bridge between ML discharge prediction and inundation modeling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Partial catchment delineation proved valuable not because it produced a single optimal representation of a watershed, but because it enabled a shift in how spatial dependencies are managed at scale. By decomposing watersheds into reach-level units and reconstructing downstream context through progressive aggregation, we gained a representation that supports both hydrological correctness and computational scalability.&lt;/p&gt;

&lt;p&gt;The effectiveness of this approach ultimately depended on its orchestration. Without an asset-oriented framework, the complexity introduced by multiple delineation strategies and iterative experimentation would quickly become unmanageable. By modeling spatial artifacts explicitly and preserving their lineage, we were able to integrate hydrology, machine learning, and geospatial preprocessing into a coherent, production-ready system.&lt;/p&gt;

&lt;p&gt;If nothing else, this workflow taught us humility, patience, and how many ways water can refuse to flow downhill.&lt;/p&gt;

&lt;p&gt;While this article focused on inundation modeling, the underlying pattern extends to any domain where high-resolution geospatial data meets iterative, data-driven workflows. Partial decomposition of space, combined with asset-based orchestration, offers a practical path toward scalable and trustworthy spatial modeling systems.&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How to Debug a Node.js App on AWS ECS Fargate Using Port Forwarding (Step-by-Step Guide)</title>
      <dc:creator>Aleksy Bohdziul</dc:creator>
      <pubDate>Tue, 18 Nov 2025 23:00:00 +0000</pubDate>
      <link>https://dev.to/u11d/how-to-port-forward-to-an-ecs-fargate-task-to-debug-your-nodejs-46pn</link>
      <guid>https://dev.to/u11d/how-to-port-forward-to-an-ecs-fargate-task-to-debug-your-nodejs-46pn</guid>
      <description>&lt;p&gt;At some point in their lives, every software engineer eventually faces the task of debugging an app live on a remote server. If that app happens to be running on ECS Fargate, getting into that container safely is possible, but not immediately obvious.&lt;/p&gt;

&lt;p&gt;Full disclosure: I haven’t done this exact thing before, but I have used the same port-forwarding trick to peek into our RDS instances via an ECS task acting as a jumpbox. So yes, I tested these instructions, and yes, they actually work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;Here’s what we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Node.js app deployed to ECS Fargate&lt;/li&gt;
&lt;li&gt;Inside a private VPC subnet&lt;/li&gt;
&lt;li&gt;Without SSH access (Fargate doesn’t do that)&lt;/li&gt;
&lt;li&gt;VS Code (or some other IDE with Node.js inspector support)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Enable Node.js inspector
&lt;/h2&gt;

&lt;p&gt;First, make sure the Node.js process in your ECS task is running with the inspector enabled. This can be achieved by passing a flag to the &lt;code&gt;node&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;--inspect&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;127.0.0.1:9229 server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or by adding the inspect flag to the &lt;code&gt;NODE_OPTIONS&lt;/code&gt; environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NODE_OPTIONS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"--inspect=127.0.0.1:9229"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we’re using ECS, adding a flag to the &lt;code&gt;node&lt;/code&gt; command would require modifying the &lt;code&gt;Dockerfile&lt;/code&gt; or overriding the command in the task definition, so I’d suggest the environment variable approach.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"containerDefinitions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"essential"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"environment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NODE_OPTIONS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--inspect=127.0.0.1:9229"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; 
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Enable ECS Exec and SSM Access
&lt;/h2&gt;

&lt;p&gt;Before you can connect, your ECS task needs permission and execution enabled:&lt;/p&gt;

&lt;p&gt;Attach this IAM policy to your ECS task role (the role the running task assumes, not the execution role):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable ECS Exec on your service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs update-service &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--service&lt;/span&gt; my-service &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--enable-execute-command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re changing it through the console, it’s hidden under Troubleshooting → Enable Execute Command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Find Your Running Task
&lt;/h2&gt;

&lt;p&gt;List your tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs list-tasks &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll get something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"taskArns"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:ecs:us-east-1:123456789012:task/my-cluster/abc123def456ghi789"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last part of the ARN (&lt;code&gt;abc123def456ghi789&lt;/code&gt;) is your &lt;strong&gt;Task ID&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Set Up Port Forwarding via SSM
&lt;/h2&gt;

&lt;p&gt;Here’s where the real black magic starts. AWS won’t just give you an SSM target ID for the Fargate task; you have to construct it yourself using this template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecs:&amp;lt;cluster_name&amp;gt;_&amp;lt;task_id&amp;gt;_&amp;lt;container_runtime_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the container runtime ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs describe-tasks &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--task&lt;/span&gt; &amp;lt;task_arn&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"tasks[].containers"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll get a massive JSON blob, and somewhere in that mess hides the &lt;code&gt;runtimeId&lt;/code&gt; field you actually care about. There may be several of them, one for each container in the task. And yes, the &lt;code&gt;runtimeId&lt;/code&gt; contains the task ID in it.&lt;/p&gt;
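&lt;p&gt;Putting the pieces together (the IDs below are the example values from this article; substitute your own cluster name, task ID, and &lt;code&gt;runtimeId&lt;/code&gt;):&lt;/p&gt;

```shell
# Example values only; in practice they come from `aws ecs list-tasks`
# and `aws ecs describe-tasks ... --query "tasks[].containers"`.
CLUSTER="my-cluster"
TASK_ID="cb39a47ef2ef45f4b947236bf00aeadd"
RUNTIME_ID="cb39a47ef2ef45f4b947236bf00aeadd-3935363592"

# Assemble the SSM target using the ecs:cluster_taskid_runtimeid template
TARGET="ecs:${CLUSTER}_${TASK_ID}_${RUNTIME_ID}"
echo "$TARGET"
```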

&lt;p&gt;Then start the port forwarding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ssm start-session &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--target&lt;/span&gt; &lt;span class="s2"&gt;"ecs:my-cluster_cb39a47ef2ef45f4b947236bf00aeadd_cb39a47ef2ef45f4b947236bf00aeadd-3935363592"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--document-name&lt;/span&gt; AWS-StartPortForwardingSessionToRemoteHost &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="s1"&gt;'{"host":["127.0.0.1"],"portNumber":["9229"],"localPortNumber":["9229"]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens a local port (&lt;code&gt;9229&lt;/code&gt;) and forwards traffic securely to the Node.js process in your container.&lt;/p&gt;

&lt;p&gt;💡 Pro tip: We’re forwarding to &lt;code&gt;127.0.0.1&lt;/code&gt; here, but you can forward to any IP or hostname accessible from the ECS task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Connect VS Code Debugger
&lt;/h2&gt;

&lt;p&gt;In &lt;code&gt;.vscode/launch.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"request"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"attach"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Attach to Fargate"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9229&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"localRoot"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${workspaceFolder}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"remoteRoot"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/usr/src/app"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s all.&lt;/p&gt;

&lt;p&gt;If you don’t know the &lt;code&gt;remoteRoot&lt;/code&gt;, you can exec into the container and look around to find it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs execute-command &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--task&lt;/span&gt; &amp;lt;task_arn&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--command&lt;/span&gt; &lt;span class="s2"&gt;"/bin/sh"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--container&lt;/span&gt; &amp;lt;container name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Alternative: Chrome debugger (chrome://inspect). Usually works, but tends to have issues with source maps.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  TL;DR Cheat Sheet
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Make sure your Node.js app runs with --inspect flag&lt;/span&gt;
&lt;span class="nv"&gt;NODE_OPTIONS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"--inspect=127.0.0.1:9229"&lt;/span&gt;

&lt;span class="c"&gt;# 2. Give ECS task SSM permissions&lt;/span&gt;
&lt;span class="c"&gt;# 3. Enable ECS Exec&lt;/span&gt;
aws ecs update-service &lt;span class="nt"&gt;--service&lt;/span&gt; my-service &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="nt"&gt;--enable-execute-command&lt;/span&gt;

&lt;span class="c"&gt;# 4. Get you task arn&lt;/span&gt;
aws ecs list-tasks &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster

&lt;span class="c"&gt;# 5. Get your container runtime id&lt;/span&gt;
aws ecs describe-tasks &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--cluster&lt;/span&gt; &amp;lt;cluster&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-2 &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--task&lt;/span&gt; &amp;lt;task_arn&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"tasks[].containers"&lt;/span&gt;

&lt;span class="c"&gt;# 6. Start port forwarding&lt;/span&gt;
aws ssm start-session &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--target&lt;/span&gt; ecs:&amp;lt;cluster&amp;gt;_&amp;lt;task&amp;gt;_&amp;lt;runtime_id&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--document-name&lt;/span&gt; AWS-StartPortForwardingSessionToRemoteHost &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="s1"&gt;'{"host":["127.0.0.1"],"portNumber":["9229"],"localPortNumber":["9229"]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then attach your VS Code debugger to &lt;code&gt;localhost:9229&lt;/code&gt; and voilà.&lt;/p&gt;

</description>
      <category>node</category>
      <category>ecs</category>
      <category>aws</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Securely Connect to Medusa.js Production Database on AWS?</title>
      <dc:creator>Aleksy Bohdziul</dc:creator>
      <pubDate>Fri, 29 Aug 2025 11:00:54 +0000</pubDate>
      <link>https://dev.to/u11d/how-to-securely-connect-to-medusajs-production-database-on-aws-5efh</link>
      <guid>https://dev.to/u11d/how-to-securely-connect-to-medusajs-production-database-on-aws-5efh</guid>
      <description>&lt;p&gt;Let’s imagine something for a second.&lt;/p&gt;

&lt;p&gt;You're minding your own business, managing AWS infrastructure for a client with a pretty standard e-commerce setup: a &lt;a href="https://medusajs.com/" rel="noopener noreferrer"&gt;Medusa.js&lt;/a&gt; backend, a &lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt; storefront, and most importantly for this story, a &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; &lt;a href="https://docs.aws.amazon.com/rds/" rel="noopener noreferrer"&gt;RDS&lt;/a&gt; instance safely stashed away in a private subnet where nothing from the outside world can touch it. Exactly how the AWS gods intended.&lt;/p&gt;

&lt;p&gt;Then, one day, your client says:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Hey, I need to get access to the production database"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, there are plenty of legit reasons to want this kind of access: analytics, dashboards, audits, maybe some light database spelunking. In this case, it’s for &lt;a href="https://www.metabase.com/" rel="noopener noreferrer"&gt;Metabase&lt;/a&gt;, which, as far as you can tell, magically turns SQL into colourful charts.&lt;/p&gt;

&lt;p&gt;So sure, let's help them out. You're a helpful DevOps engineer. You write Terraform. You breathe YAML. You’ve stared into the void of broken networking configs and lived to tell the tale. This? This is doable.&lt;/p&gt;

&lt;p&gt;The only question is: how do we do it securely, without slapping a public IP on the database and calling it a day?&lt;/p&gt;

&lt;p&gt;That’s exactly what this post is about: how to securely connect to a Medusa.js production database on AWS, without compromising your infrastructure or your sleep.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We’ve got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A managed RDS PostgreSQL database (though this applies to pretty much any RDS engine)&lt;/li&gt;
&lt;li&gt;A Metabase instance living somewhere outside the VPC&lt;/li&gt;
&lt;li&gt;A need to connect Metabase to the database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds simple enough, but of course it's not just a one-click "make it public" button.&lt;/p&gt;

&lt;p&gt;But wait, some of you might be asking: what's actually stopping us from "just" connecting to the database? Well, our database sits in a private subnet. Which means... (&lt;em&gt;flips through AWS docs&lt;/em&gt;), it doesn’t have a route to an &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html" rel="noopener noreferrer"&gt;internet gateway&lt;/a&gt;. More importantly, it doesn’t have a public IP address. It’s only reachable via its private IP from within the VPC. Oh, there is also the security group, which allows access only from the Medusa backend.&lt;/p&gt;

&lt;p&gt;Now, in theory, I could completely disregard all the security concerns, toss the database into a public subnet, give it a public IP, and call it a day. But I'd probably also get tossed out of a job, and fairly so.&lt;/p&gt;

&lt;p&gt;So, making the database publicly accessible is off the table. That leaves us with one goal: somehow access that private IP from the outside. Luckily (or unluckily), there are a few ways to make that happen.&lt;/p&gt;

&lt;h1&gt;
  
  
  So, what are our options?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Option 1: Port Forwarding
&lt;/h2&gt;

&lt;p&gt;The first and most common approach is good old port forwarding, which is basically asking a server inside your private network to kindly act as a middleman and pass your packets along to the database, like a helpful bouncer who also does package delivery.&lt;/p&gt;

&lt;p&gt;To make this work, we need what's called a jump box (or bastion host if you're feeling fancy). This is just a plain old EC2 instance living in your VPC with access to the private subnet and your database (don’t forget to update your RDS security group to allow traffic from the jump box).&lt;/p&gt;

&lt;p&gt;Here’s a minimal Terraform snippet to spin one up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"jump_box"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-0abcdef1234567890"&lt;/span&gt; &lt;span class="c1"&gt;# Replace with latest Amazon Linux 2 AMI&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.nano"&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jump_box_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"your-ssh-key"&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jump-box"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"jump_box_sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jump-box-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow SSH"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"your-ip-address/32"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve got our jump box, let’s explore how we can tunnel traffic through it.&lt;br&gt;
For that, we can use the classic way or the AWS way.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Option 1.1: The Classic Way - SSH Port Forwarding&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the method your senior Linux admin probably used in 2009, and honestly, it still works just fine.&lt;/p&gt;

&lt;p&gt;You just need to move your EC2 instance to a public subnet, give it an Elastic IP and an SSH key (if you’re using Terraform, check the &lt;em&gt;key_name&lt;/em&gt; parameter of &lt;em&gt;aws_instance&lt;/em&gt;), and make sure port 22 is open (preferably only to specific IP addresses).&lt;/p&gt;

&lt;p&gt;Assuming you’ve got your jump box set up and your private key in hand, here’s the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; ~/.ssh/your-key.pem &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-N&lt;/span&gt; &lt;span class="nt"&gt;-L&lt;/span&gt; 5430:your-db.xxxxx.rds.amazonaws.com:5432 &lt;span class="se"&gt;\&lt;/span&gt;
  ec2-user@your-jumpbox-public-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets up a tunnel from localhost:5430 → your RDS on 5432. Just point your client at localhost:5430 and you’re good (you can change the local port if you need to).&lt;/p&gt;

&lt;p&gt;Most SQL tools support built-in SSH tunneling if you don’t want to set up the tunnel manually.&lt;/p&gt;
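If you'd rather keep the tunnel declarative, the same forwarding can live in your SSH config; the host alias, hostnames, and key path below are placeholders:

```
# ~/.ssh/config
Host medusa-jump
  HostName your-jumpbox-public-ip
  User ec2-user
  IdentityFile ~/.ssh/your-key.pem
  LocalForward 5430 your-db.xxxxx.rds.amazonaws.com:5432
```

After that, `ssh -N medusa-jump` brings the tunnel up.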

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It just works&lt;/li&gt;
&lt;li&gt;It’s compatible with most tools that need to connect to a SQL database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 instance needs patching and monitoring&lt;/li&gt;
&lt;li&gt;You’re exposing an SSH port (even if restricted)&lt;/li&gt;
&lt;li&gt;You’ll start getting unsolicited connection attempts the moment you open the port&lt;/li&gt;
&lt;li&gt;You have to manage SSH keys (on AWS it’s not that big of a deal)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the option we ultimately went with, and well, it just worked.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Option 1.2: The AWS Way - SSM Port Forwarding&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ah, good old SSM. In theory, this is the cleaner option. No public access, no open ports, just straight up AWS magic. You enable SSM, then use &lt;code&gt;aws ssm start-session&lt;/code&gt; to port-forward to the database.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSM Agent installed on your EC2 instance (Amazon Linux already has it preinstalled)&lt;/li&gt;
&lt;li&gt;AWS CLI with SSM plugin installed&lt;/li&gt;
&lt;li&gt;AmazonSSMManagedInstanceCore policy attached to your jump box&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ssm:StartSession&lt;/code&gt; and &lt;code&gt;ssm:DescribeInstanceInformation&lt;/code&gt; permissions for your account&lt;/li&gt;
&lt;/ul&gt;
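For that last bullet, a minimal user-side IAM policy sketch looks like the following; in real life you'd scope the Resource down to your jump box's instance ARN rather than a wildcard:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:StartSession",
        "ssm:DescribeInstanceInformation"
      ],
      "Resource": "*"
    }
  ]
}
```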

&lt;p&gt;Once you have all of that, you can just run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ssm start-session &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target&lt;/span&gt; &amp;lt;instance-id&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--document-name&lt;/span&gt; AWS-StartPortForwardingSession &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="s1"&gt;'{"host":["your-db.xxxxx.rds.amazonaws.com"], "portNumber":["5432"],"localPortNumber":["5430"]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Like the previous options, this will open a tunnel allowing you to connect to the database on &lt;code&gt;localhost:5430&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No public IPs or open ports&lt;/li&gt;
&lt;li&gt;No SSH keys to manage&lt;/li&gt;
&lt;li&gt;IAM-based access&lt;/li&gt;
&lt;li&gt;You can audit access in AWS CloudTrail&lt;/li&gt;
&lt;li&gt;Feels like you’re doing something correct&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires AWS CLI&lt;/li&gt;
&lt;li&gt;It probably doesn’t actually solve your problem, since most third-party tools can’t initiate an SSM session themselves.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the thing: while port forwarding with SSM is nice for dev and DevOps access, if you need something like Metabase to connect to your database, then SSM won’t help you.&lt;/p&gt;

&lt;p&gt;So yes, we tried this first, and only afterwards found out that the access was needed for Metabase.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Option 2: VPN&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In case you somehow don’t know, a VPN (Virtual Private Network) is basically a magic tunnel that lets machines outside your VPC pretend they’re inside it. Once connected, your laptop can access your internal resources, as if they were part of your precious private subnet all along.&lt;/p&gt;

&lt;p&gt;This one’s probably overkill for most use cases like ours, but hey, it exists. You can spin up a VPN (e.g. AWS Client VPN or a WireGuard setup) and let your client connect to your internal network that way. Great if you already have a VPN setup, but if you don’t, then do you really want to?&lt;/p&gt;

&lt;p&gt;There are a few flavors here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Client VPN&lt;/strong&gt;: The “I want AWS to hold my hand” option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Roll-your-own VPN&lt;/strong&gt; (OpenVPN, WireGuard): AKA “I like pain.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third-party VPNs&lt;/strong&gt;: Where you pay to inflict the pain on someone else.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’ll use &lt;strong&gt;AWS Client VPN&lt;/strong&gt; because we’re not trying to impress anyone, we’re just trying to get this over with before lunch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ec2_client_vpn_endpoint"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Client VPN for private RDS access"&lt;/span&gt;
  &lt;span class="nx"&gt;client_cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.10.0/22"&lt;/span&gt;
  &lt;span class="nx"&gt;server_certificate_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:acm:your-cert-arn"&lt;/span&gt;
  &lt;span class="nx"&gt;authentication_options&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"certificate-authentication"&lt;/span&gt;
    &lt;span class="nx"&gt;root_certificate_chain_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:acm:your-root-ca-arn"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;connection_log_options&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;split_tunnel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;dns_servers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"8.8.8.8"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# shrug&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ec2_client_vpn_network_association"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;client_vpn_endpoint_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ec2_client_vpn_endpoint&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ec2_client_vpn_authorization_rule"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;client_vpn_endpoint_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ec2_client_vpn_endpoint&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;target_network_cidr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
  &lt;span class="nx"&gt;authorize_all_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Oh right, you’ll need to create certificates. With ACM. Or OpenSSL. Or just write them by hand. Whichever works.&lt;/p&gt;
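If you go the OpenSSL route, a throwaway root CA plus a server certificate signed by it can be sketched like this (all names are placeholders; you'd then import the cert and chain into ACM, and the official Client VPN docs lean on easy-rsa for the mutual-auth flow):

```shell
# Throwaway root CA (private key + self-signed certificate)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=vpn-root-ca" -keyout ca.key -out ca.crt
# Server key + certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=vpn-server" -keyout server.key -out server.csr
# Sign the CSR with the CA to produce the server certificate
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out server.crt
```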

&lt;h3&gt;
  
  
  What this actually does:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Spins up a VPN endpoint (as if we needed more resources to take care of).&lt;/li&gt;
&lt;li&gt;Associates it with your VPC so users can crawl around your private subnets.&lt;/li&gt;
&lt;li&gt;Lets anyone who can connect access your RDS instance as if they were in the same network.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise-y!&lt;/li&gt;
&lt;li&gt;Secure and scalable&lt;/li&gt;
&lt;li&gt;Useful for broader access needs&lt;/li&gt;
&lt;li&gt;Works great if you already have a VPN setup (we don’t)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have to use the VPN&lt;/li&gt;
&lt;li&gt;Doesn’t work with Metabase&lt;/li&gt;
&lt;li&gt;Certs, IAM, CIDR blocks, DNS resolution issues&lt;/li&gt;
&lt;li&gt;Can very easily get very expensive&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Bonus Consideration: Read Replicas&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If your client is doing heavy analytical workloads, consider offloading queries to an RDS read replica. That way, your production DB doesn’t get overwhelmed by analytical queries, and you get some nice isolation between transactional and analytical use.&lt;/p&gt;

&lt;p&gt;Keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read replicas can lag behind the primary DB&lt;/li&gt;
&lt;li&gt;You still need to expose the replica through one of the methods above&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But it’s a nice option if performance and reliability matter.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Wrapping Up&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;So, your client wants access to their Medusa.js database. It’s a valid ask, but you still want to do it responsibly.&lt;/p&gt;

&lt;p&gt;Here’s the quick TLDR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use SSH port forwarding&lt;/strong&gt; if you need quick, compatible access from outside (e.g. Metabase).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use SSM&lt;/strong&gt; for internal-only dev/admin access, just don’t expect it to work with any tool like Metabase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VPN&lt;/strong&gt; if you’re going full enterprise or need multi-service access.&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;Read replicas&lt;/strong&gt; if query load is a concern.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And above all: don’t just throw the DB in a public subnet. That way lies sadness, audit findings, and probably a very uncomfortable meeting.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Speed Up Your Next.js App: Optimizing S3 Images with Cloudflare Images</title>
      <dc:creator>Aleksy Bohdziul</dc:creator>
      <pubDate>Wed, 09 Jul 2025 07:00:00 +0000</pubDate>
      <link>https://dev.to/u11d/speed-up-your-nextjs-app-optimizing-s3-images-with-cloudflare-images-1h67</link>
      <guid>https://dev.to/u11d/speed-up-your-nextjs-app-optimizing-s3-images-with-cloudflare-images-1h67</guid>
      <description>&lt;p&gt;Let’s talk about something near and dear to every developer’s heart: &lt;strong&gt;making stuff faster without tearing your hair out&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So you’ve got your images chilling in an S3 bucket — classic move. But if you’re serving them straight from there, you’re probably pushing chunky, uncompressed images across the internet like it’s 2008. On the flip side, maybe you’re leaning on Next.js’s built-in image optimization… which is cool until your server starts wheezing under the load like it just ran a marathon.&lt;/p&gt;

&lt;p&gt;But don’t worry — there’s a better way. In this post, we’ll walk through a dead-simple setup using &lt;strong&gt;Cloudflare Images&lt;/strong&gt; to optimize and cache your images right at the edge, leaving your compute power untouched and your pages blazing fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What You’ll Need&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before we dive in, make sure you’ve got the basics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS account&lt;/strong&gt;&lt;/a&gt; (for your S3 bucket full of glorious JPEGs). If you're using something else for file storage, that’s totally fine too — we don’t judge.&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://www.cloudflare.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Cloudflare account&lt;/strong&gt;&lt;/a&gt; — free tier works great.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;domain&lt;/strong&gt; hooked up to Cloudflare.&lt;/li&gt;
&lt;li&gt;And since this post is all about &lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;, we’ll assume you’ve already got that part set up and humming along. If not… this might be a weird place to start.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Setup Guide: The Part You’re Actually Here For&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Time to get our hands slightly dirty (but like… not &lt;em&gt;real&lt;/em&gt; dirty — this is all pretty painless). Let’s hook everything up so Cloudflare can work its optimization magic on your S3 images.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Set Up Your Bucket&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you already have an S3 bucket full of images, awesome — you’re halfway there. If not, go ahead and create one. Give it a nice name, something you won’t be embarrassed about later — you’ll be seeing it a lot.&lt;/p&gt;

&lt;p&gt;Now the important part: Cloudflare Images needs to be able to &lt;em&gt;access&lt;/em&gt; your images. If you’re already serving images from S3, you’ve probably handled this with either public access or presigned URLs. If not, here’s the quick-and-dirty setup:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Make the bucket public (yes, really)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to your bucket permissions&lt;/li&gt;
&lt;li&gt;Make sure “Block public access” is turned &lt;strong&gt;off&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpungt6snfnk2f1qcyuk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpungt6snfnk2f1qcyuk0.png" alt="optimizing_s3_images_with_cloudflare_images_001.png" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Then slap this bucket policy on there to allow public reads (it’s just below “Block public access”):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"PublicReadGetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::&amp;lt;your-bucket-name&amp;gt;/*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t forget to swap in your actual bucket name, unless you want Cloudflare to look for &lt;code&gt;&amp;lt;your-bucket-name&amp;gt;&lt;/code&gt; literally and come back very confused.&lt;/p&gt;

&lt;p&gt;That’s it — your images are now ready to be fetched and optimized like champs.&lt;/p&gt;
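&lt;p&gt;If you’d rather not hand-edit JSON, you can also generate that policy from a script. Here’s a minimal sketch (the &lt;code&gt;publicReadPolicy&lt;/code&gt; helper and the bucket name are made up for illustration, not an AWS API):&lt;/p&gt;

```typescript
// Sketch: build the public-read bucket policy from a bucket name,
// so the <your-bucket-name> placeholder can't be forgotten.
// "my-image-bucket" is an example name, not a real bucket.
function publicReadPolicy(bucketName: string): string {
  return JSON.stringify(
    {
      Version: "2012-10-17",
      Statement: [
        {
          Sid: "PublicReadGetObject",
          Effect: "Allow",
          Principal: "*",
          Action: "s3:GetObject",
          Resource: `arn:aws:s3:::${bucketName}/*`,
        },
      ],
    },
    null,
    2
  );
}

console.log(publicReadPolicy("my-image-bucket"));
```

&lt;p&gt;Paste the output straight into the bucket policy editor and you’re done.&lt;/p&gt;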

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Next Stop: Cloudflare Land (a.k.a. Where the Magic Happens)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Head to your &lt;strong&gt;Cloudflare dashboard&lt;/strong&gt;, choose your domain, and go to the &lt;strong&gt;Images → Transformations&lt;/strong&gt; tab. Click the little “Enable” toggle next to your domain.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkho4mdd31icnkugn2qf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkho4mdd31icnkugn2qf.png" alt="optimizing_s3_images_with_cloudflare_images_002.png" width="800" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Still on the same page, click your domain to open the settings. Under Sources, select Specified Origins and add your S3 bucket’s public domain (like &lt;a href="https://your-bucket.s3.eu-west-1.amazonaws.com" rel="noopener noreferrer"&gt;https://your-bucket.s3.eu-west-1.amazonaws.com&lt;/a&gt;). You can select “Any origin” here… but unless you enjoy living dangerously, stick with the specified option.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hesfbo3i7dr3vy252ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hesfbo3i7dr3vy252ez.png" alt="optimizing_s3_images_with_cloudflare_images_003.png" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Cloudflare Image Format — Fancy URLs That Do All the Work&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Cloudflare’s image transformation magic works through simple (and surprisingly readable) URLs. The format looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://&amp;lt;your-domain&amp;gt;/cdn-cgi/image/width&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;width&amp;gt;,quality&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;quality&amp;gt;,format&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;format&amp;gt;/&amp;lt;S3-image-url&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break that down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;width&lt;/strong&gt; – how wide you want the image in pixels (e.g., 800)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;quality&lt;/strong&gt; – how much quality to keep (1 to 100 — 75 is a solid sweet spot)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;format&lt;/strong&gt; – choose from auto, avif, webp, or jpeg. Just go with auto unless you’re feeling experimental&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3-image-url&lt;/strong&gt; – the full URL to the image in your bucket, like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://bucket-name.s3.eu-west-1.amazonaws.com/awesome-cat.jpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a complete example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://your-domain.com/cdn-cgi/image/width&lt;span class="o"&gt;=&lt;/span&gt;800,quality&lt;span class="o"&gt;=&lt;/span&gt;75,format&lt;span class="o"&gt;=&lt;/span&gt;auto/https://bucket-name.s3.eu-west-1.amazonaws.com/awesome-cat.jpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cloudflare does all the heavy lifting — resizing, compressing, and even converting to modern formats. Your users get fast, lightweight images, and your server gets to nap.&lt;/p&gt;

&lt;p&gt;Want to get fancy? There are more transformation options &lt;a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, but width, quality, and format cover 90% of what you’ll need.&lt;/p&gt;
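&lt;p&gt;If you end up building these URLs in more than one place, a tiny helper keeps the pieces straight. A sketch (the &lt;code&gt;buildCdnUrl&lt;/code&gt; name and its defaults are my own, not a Cloudflare API):&lt;/p&gt;

```typescript
// Sketch: assemble a Cloudflare transformation URL from a few options.
// The defaults (800 / 75 / auto) mirror the examples in this post.
type TransformOptions = {
  width?: number;
  quality?: number;
  format?: "auto" | "avif" | "webp" | "jpeg";
};

function buildCdnUrl(domain: string, s3Url: string, opts: TransformOptions = {}): string {
  const { width = 800, quality = 75, format = "auto" } = opts;
  return `https://${domain}/cdn-cgi/image/width=${width},quality=${quality},format=${format}/${s3Url}`;
}

console.log(
  buildCdnUrl("your-domain.com", "https://bucket-name.s3.eu-west-1.amazonaws.com/awesome-cat.jpg", {
    width: 400,
  })
);
// → https://your-domain.com/cdn-cgi/image/width=400,quality=75,format=auto/https://bucket-name.s3.eu-west-1.amazonaws.com/awesome-cat.jpg
```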

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Plugging It Into Next.js&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Alright, time to wire this into your Next.js app. I’m going to assume you’re using the &lt;a href="https://nextjs.org/docs/pages/api-reference/components/image" rel="noopener noreferrer"&gt;&lt;code&gt;&amp;lt;Image&amp;gt;&lt;/code&gt; component&lt;/a&gt; from &lt;code&gt;next/image&lt;/code&gt; — because if you’re not, now’s a great time to start.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Option A: The Quick and Dirty Way&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The quick-and-dirty method: just feed the transformed Cloudflare URL directly into the &lt;code&gt;&amp;lt;Image&amp;gt;&lt;/code&gt; component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Image&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next/image&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Image&lt;/span&gt;
  &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://your-domain.com/cdn-cgi/image/width=800,format=auto/https://bucket-name.s3.eu-west-1.amazonaws.com/cat-wearing-a-crown.jpg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;800&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A regal cat wearing a golden crown&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Done. It works. Your cat is now optimized.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Option B: The Cleaner, More Reusable Way (Custom Loader)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you want to keep your code tidy and avoid sprinkling Cloudflare URLs everywhere, you can roll a custom image loader.&lt;/p&gt;

&lt;p&gt;First, update your &lt;code&gt;next.config.js&lt;/code&gt; to point at a custom loader:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;loaderFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./cloudflare-loader.ts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if you’re using remote images, you’ll probably need to add a remote pattern too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;remotePatterns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bucket-name.s3.eu-west-1.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, create a new file called &lt;code&gt;cloudflare-loader.ts&lt;/code&gt; in your project root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ImageLoaderProps&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next/image&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;BUCKET_DOMAIN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bucket-name.s3.eu-west-1.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CLOUDFLARE_DOMAIN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;your domain&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;cloudflareImageLoader&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;quality&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;75&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ImageLoaderProps&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parsedSrc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parsedSrc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;BUCKET_DOMAIN&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;`width=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`quality=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;quality&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;format=auto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;CLOUDFLARE_DOMAIN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/cdn-cgi/image/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Cloudflare image loader error:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;// Fallback to Next.js built-in image optimization&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`/_next/image?url=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;encodeURIComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;&amp;amp;w=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;amp;q=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;quality&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applies Cloudflare optimization &lt;em&gt;only&lt;/em&gt; to images from your S3 bucket&lt;/li&gt;
&lt;li&gt;Falls back to Next.js optimization for everything else&lt;/li&gt;
&lt;li&gt;Keeps your JSX clean and your dev brain happy &lt;/li&gt;
&lt;/ul&gt;
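
&lt;p&gt;To sanity-check what the loader actually emits, here’s a standalone version with the &lt;code&gt;ImageLoaderProps&lt;/code&gt; type inlined so it runs outside Next.js (same placeholder domains as above, with a scheme added to the Cloudflare domain):&lt;/p&gt;

```typescript
// Standalone sketch of the loader, runnable without Next.js.
// The type is inlined; both domains are placeholders from this post.
type ImageLoaderProps = { src: string; width: number; quality?: number };

const BUCKET_DOMAIN = "bucket-name.s3.eu-west-1.amazonaws.com";
const CLOUDFLARE_DOMAIN = "https://your-domain.com";

function cloudflareImageLoader({ src, width, quality = 75 }: ImageLoaderProps): string {
  try {
    // Absolute URLs only: local paths like "/logo.png" skip straight to the fallback
    if (!src.startsWith("/")) {
      const parsedSrc = new URL(src);
      if (parsedSrc.hostname === BUCKET_DOMAIN) {
        const params = [`width=${width}`, `quality=${quality}`, "format=auto"];
        return `${CLOUDFLARE_DOMAIN}/cdn-cgi/image/${params.join(",")}/${src}`;
      }
    }
  } catch (error) {
    console.error("Cloudflare image loader error:", error);
  }
  // Fallback to Next.js built-in image optimization
  return `/_next/image?url=${encodeURIComponent(src)}&w=${width}&q=${quality}`;
}

// S3 image → routed through Cloudflare
console.log(cloudflareImageLoader({ src: `https://${BUCKET_DOMAIN}/cat.jpg`, width: 800 }));
// Local image → falls back to Next.js
console.log(cloudflareImageLoader({ src: "/logo.png", width: 200 }));
```

&lt;p&gt;Run it with &lt;code&gt;ts-node&lt;/code&gt; (or similar) and you can eyeball both branches before wiring it into the app.&lt;/p&gt;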

&lt;h3&gt;
  
  
  Result? A Happier, Faster App
&lt;/h3&gt;

&lt;p&gt;Now you’ve got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On-the-fly image optimization&lt;/li&gt;
&lt;li&gt;CDN delivery from Cloudflare’s edge nodes&lt;/li&gt;
&lt;li&gt;No need to pre-process or move your images&lt;/li&gt;
&lt;li&gt;Better scores in Lighthouse, Core Web Vitals, and all that jazz&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Wrapping Up
&lt;/h3&gt;

&lt;p&gt;Optimizing images is one of the lowest-effort, highest-impact improvements you can make to a modern web app. And thanks to Cloudflare Images + your trusty old S3 bucket, you don’t need to rearchitect your whole stack to do it.&lt;/p&gt;

&lt;p&gt;So go ahead, optimize those cat pictures. Your users — and their data plans — will thank you. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>nextjs</category>
    </item>
  </channel>
</rss>
