<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Guo Xiang (Harvey) Ng</title>
    <description>The latest articles on DEV Community by Guo Xiang (Harvey) Ng (@guoxiangng).</description>
    <link>https://dev.to/guoxiangng</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1377205%2F5527d02d-a92c-4051-9db0-1c2fbc8254a9.jpg</url>
      <title>DEV Community: Guo Xiang (Harvey) Ng</title>
      <link>https://dev.to/guoxiangng</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/guoxiangng"/>
    <language>en</language>
    <item>
      <title>Vibe Coding a Real-Time ETH-BSC Bridge Monitor utilizing My Own AWS NodeRunner Nodes</title>
      <dc:creator>Guo Xiang (Harvey) Ng</dc:creator>
      <pubDate>Mon, 13 Oct 2025 08:05:28 +0000</pubDate>
      <link>https://dev.to/aws-builders/vibe-coding-a-real-time-eth-bsc-bridge-monitor-utilizing-my-own-aws-noderunner-nodes-3fai</link>
      <guid>https://dev.to/aws-builders/vibe-coding-a-real-time-eth-bsc-bridge-monitor-utilizing-my-own-aws-noderunner-nodes-3fai</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This is a continuation of the &lt;a href="https://medium.com/aws-in-plain-english/setting-up-blockchain-nodes-with-aws-node-runners-for-fun-ethereum-base-bsc-004c6e5ea855" rel="noopener noreferrer"&gt;article&lt;/a&gt; I wrote previously which detailed my experience deploying standalone blockchain nodes with the &lt;a href="https://aws-samples.github.io/aws-blockchain-node-runners/" rel="noopener noreferrer"&gt;AWS Node Runner Project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After setting up my Ethereum and Binance Smart Chain (BSC) nodes, I wanted to actually use them for something. Not just have them sitting there syncing blocks. I’m not ashamed to admit that Claude gave me the simple idea of a bridge monitor which came from the question: &lt;strong&gt;How long does it actually take to bridge assets between Ethereum and BSC?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No one seems to have real data on “bridge congestion” and “delayed transactions”. DefiLlama shows bridge volumes, individual bridge sites show “operational status,” but not transaction-level data. How long does Binance Bridge take vs Multichain vs cBridge? Since I already had the infrastructure running, I figured: why not build a bridge monitor?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Code: &lt;a href="https://github.com/guoxiangng/eth-bsc-bridge-monitoring" rel="noopener noreferrer"&gt;https://github.com/guoxiangng/eth-bsc-bridge-monitoring&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DISCLAIMER — crypto novice here and this personal project was done for learning and curiosity purposes. Forgive me if there are wrong concepts, approaches or statements mentioned.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  The Idea
&lt;/h4&gt;

&lt;p&gt;Bridge transactions work by calling specific smart contracts. When someone bridges USDT from Ethereum to BSC, their transaction goes to a bridge contract address on Ethereum. The bridge then releases funds on BSC.&lt;/p&gt;

&lt;p&gt;So to monitor the bridge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Watch for transactions going to known bridge contract addresses&lt;/li&gt;
&lt;li&gt;Track when they complete&lt;/li&gt;
&lt;li&gt;Calculate performance metrics&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Build
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Detecting Bridge Transactions
&lt;/h4&gt;

&lt;p&gt;I started with the most popular bridges for ETH-BSC transfers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bridges = {
    'eth': {
        '0x6b7a87899490EcE95443e979cA9485CBE7E71522': 'Multichain Router',
        '0x3ee18B2214AFF97000D974cf647E7C347E8fa585': 'Binance Bridge',
        '0x5427FEFA711Eff984124bFBB1AB6fbf5E3DA1820': 'cBridge',
    },
    'bsc': {
        '0x6b7a87899490EcE95443e979cA9485CBE7E71522': 'Multichain Router',
        '0x841ce48F9446C8E281D3F1444cB859b4A6D0738C': 'cBridge BSC',
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The monitoring logic was to scan recent blocks, check if any transactions are going to these addresses, and save them to a database.&lt;/p&gt;
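
&lt;p&gt;As a sketch of that logic (not the repo’s exact code), the detection step boils down to a dictionary lookup on each transaction’s “to” address; the function names and the local RPC URL here are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def match_bridge_txs(transactions, bridges):
    """Return (tx_hash, bridge_name) pairs for txs sent to known bridges."""
    hits = []
    for tx in transactions:
        to_addr = tx["to"]  # None for contract-creation transactions
        if to_addr in bridges:
            hits.append((tx["hash"].hex(), bridges[to_addr]))
    return hits

def scan_latest_block(bridges, rpc_url="http://localhost:8545"):
    """Pull the newest block from my own node and match its transactions."""
    from web3 import Web3  # third-party: pip install web3
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    block = w3.eth.get_block("latest", full_transactions=True)
    return match_bridge_txs(block.transactions, bridges)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;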

&lt;h4&gt;
  
  
  Problem — The BSC PoA Error
&lt;/h4&gt;

&lt;p&gt;First major roadblock came immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;⚠️ Error scanning block 56150636 on bsc: The field extraData is 280 bytes, but should be 32. It is quite likely that you are connected to a POA chain.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Turns out BSC uses a &lt;strong&gt;Proof of Authority&lt;/strong&gt;-style consensus (Parlia), which packs validator signatures into each block’s extraData field, unlike Ethereum’s blocks. This breaks Web3.py’s default assumptions about block structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Fix:&lt;/strong&gt; Injecting Web3.py’s geth PoA middleware, after which BSC scanning worked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from web3.middleware import geth_poa_middleware
w3.middleware_onion.inject(geth_poa_middleware, layer=0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Problem — Everything was Pending
&lt;/h4&gt;

&lt;p&gt;Got a basic terminal-style dashboard working pretty quickly with FastAPI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9nbx1l8278krjcp52bn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9nbx1l8278krjcp52bn.png" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forw9mq5hm7blap2hqqet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forw9mq5hm7blap2hqqet.png" width="800" height="1101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It showed transactions being detected! 15 pending, 1 completed. Felt good. Engrossed in dashboard updates, I only later noticed that while the detection count climbed quickly (soon 1,497 transactions), there were &lt;strong&gt;zero completions&lt;/strong&gt;. 0.0m average time. 0% success rate.&lt;/p&gt;

&lt;p&gt;The issue was that &lt;strong&gt;the status update logic wasn’t right.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It was checking only 10 transactions at a time, running only every 30 seconds, and using simple time-based heuristics instead of actually verifying receipts on-chain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Fixes:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check ALL pending transactions (removed LIMIT)
cursor.execute('''
    SELECT tx_hash, created_at, direction, block_number
    FROM transactions 
    WHERE status = 'pending'
''')

# Update MORE FREQUENTLY
if cycle % 3 == 0: # Every 15 seconds, not 30
    self.show_comprehensive_stats()

# Actually verify transaction on-chain
receipt = w3.eth.get_transaction_receipt(tx_hash)

if receipt and receipt['status'] == 1:
    cursor.execute('''
        UPDATE transactions
        SET status = 'completed',
            completed_at = datetime('now'),
            completion_time = ?,
            gas_used = ?
        WHERE tx_hash = ?
    ''', (elapsed, receipt['gasUsed'], tx_hash))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the fixes, things started looking real:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftto36ktotpvqmrztqqou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftto36ktotpvqmrztqqou.png" width="800" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;7,956 transactions tracked&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;7,101 completed&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4.1m average completion time&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success rates by bridge&lt;/strong&gt; : Wormhole Token at 92%, cBridge at 76%, etc.&lt;/li&gt;
&lt;/ul&gt;
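
&lt;p&gt;Numbers like these can be produced with a single aggregate query over the SQLite table; this is a hedged sketch, and the column names (bridge_name, completion_time in seconds) are assumptions rather than the repo’s exact schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sqlite3

def bridge_stats(conn):
    """Per-bridge totals, success rate, and average completion time in minutes."""
    rows = conn.execute("""
        SELECT bridge_name,
               COUNT(*) AS total,
               SUM(CASE WHEN status = 'completed' THEN 1 ELSE 0 END) AS done,
               AVG(CASE WHEN status = 'completed' THEN completion_time END) / 60.0
        FROM transactions
        GROUP BY bridge_name
    """).fetchall()
    # avg_minutes is None for bridges with no completed transactions yet
    return {
        name: {"total": total, "success_rate": done / total, "avg_minutes": avg}
        for name, total, done, avg in rows
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;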

&lt;h4&gt;
  
  
  Expanding From 5 Bridges to 20+
&lt;/h4&gt;

&lt;p&gt;The initial version only monitored 5 bridge contracts. But there are actually dozens of bridges operating between ETH and BSC. I expanded the contract list to include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        self.bridges = {
            'eth': {
                # Multichain (AnySwap) - Multiple contracts
                '0x6b7a87899490EcE95443e979cA9485CBE7E71522': 'Multichain Router',
                '0x533e3c0e6b48010873B947bddC4721b1bDFF9648': 'Multichain USDT',
                '0x765277EebeCA2e31912C9946eAe1021199B39C61': 'Multichain ETH',
                '0xC564EE9f21Ed8A2d8E7e76c085740d5e4c5FaFbE': 'Multichain USDC',
                # Binance Bridge
                '0x3ee18B2214AFF97000D974cf647E7C347E8fa585': 'Binance Bridge',
                '0xfA0F307783AC21C39E939ACFF795e27b650F6e68': 'Binance Token Hub',
                # cBridge (Celer) 
                # ... +15 more from Wormhole Portal Bridge, Stargate (LayerZero), Synapse Bridge, Hop Protocol, Across Protocol, deBridge, Connext
            },

            'bsc': {
                # Multichain on BSC
                '0x6b7a87899490EcE95443e979cA9485CBE7E71522': 'Multichain Router',
                '0x533e3c0e6b48010873B947bddC4721b1bDFF9648': 'Multichain USDT',
                '0xC564EE9f21Ed8A2d8E7e76c085740d5e4c5FaFbE': 'Multichain USDC',
                # Binance Bridge BSC side
                '0x0000000000000000000000000000000000001004': 'BSC Token Hub',
                # note: '0x533e3c0e6b48010873B947bddC4721b1bDFF9648' was also listed here as 'BSC Bridge', but it duplicates the Multichain USDT key above and a repeated dict key silently overwrites the earlier entry
                # cBridge BSC + many more from Wormhole BSC, Stargate BSC, Synapse BSC, deBridge BSC, Orbit Bridge
            }
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This immediately improved transaction detection manyfold.&lt;/p&gt;
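
&lt;p&gt;One subtlety worth flagging with a contract list this large (a general Web3 gotcha, not something specific to this repo): Ethereum addresses are case-insensitive hex, but EIP-55 checksums mix the casing, so naive string comparison can silently miss matches. Normalizing both sides avoids that:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def build_lookup(bridges):
    """Key the bridge list by lowercased address so EIP-55 casing is ignored."""
    return {addr.lower(): name for addr, name in bridges.items()}

def lookup_bridge(lookup, to_addr):
    if to_addr is None:  # contract-creation transactions have no recipient
        return None
    return lookup.get(to_addr.lower())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;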

&lt;h4&gt;
  
  
  Adding Some Performance Analytics
&lt;/h4&gt;

&lt;p&gt;With real data flowing in, I could start answering the question of which bridge is the fastest!&lt;/p&gt;

&lt;p&gt;Looking at the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesc962t778bk9xcqw4lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesc962t778bk9xcqw4lw.png" width="800" height="1367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Synapse Bridge&lt;/strong&gt; : 3.4m avg, 100% success&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stargate&lt;/strong&gt; : 3.7m avg, 13% success&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cBridge Main&lt;/strong&gt; : 3.3m avg, 74% success&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Potential ETH Bridge&lt;/strong&gt; : 4.1m avg, 90% success&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cleaning up the Database
&lt;/h4&gt;

&lt;p&gt;As features were added, new database columns were needed, which would cause errors when old code tried to save transactions with new fields that didn’t exist yet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❌ Save error: table transactions has no column named token_symbol
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I added a clean_db_setup.py script that wipes the database and recreates it with a comprehensive schema.&lt;/p&gt;
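
&lt;p&gt;An alternative I’d consider next time, instead of wiping: a tiny additive migration helper so old rows survive schema changes. A minimal sketch (not what the repo does):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sqlite3

def ensure_column(conn, table, column, decl):
    """Add a column only if it's missing, so existing rows survive upgrades."""
    existing = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")
        conn.commit()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;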

&lt;h4&gt;
  
  
  Adding Stuck Transaction Alerts
&lt;/h4&gt;

&lt;p&gt;Detecting potentially stuck transactions seemed useful:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hurg9zxuurfgk9tlyf6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hurg9zxuurfgk9tlyf6.png" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The red alert box at the top shows transactions that have been pending for &amp;gt;30 minutes. In real use, you’d want notifications when your specific transaction gets stuck.&lt;/p&gt;
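
&lt;p&gt;The detection itself can be a single query. This sketch assumes created_at is stored as SQLite’s UTC datetime('now') text; BETWEEN selects rows created at or before the cutoff, i.e. pending for 30 or more minutes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datetime import datetime, timedelta, timezone

ALERT_MINUTES = 30

def stuck_transactions(conn):
    """Pending txs that have been waiting for ALERT_MINUTES or longer."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=ALERT_MINUTES)
    return conn.execute(
        "SELECT tx_hash, created_at FROM transactions "
        "WHERE status = 'pending' "
        "AND created_at BETWEEN '1970-01-01' AND ?",
        (cutoff.strftime("%Y-%m-%d %H:%M:%S"),),
    ).fetchall()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;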

&lt;h3&gt;
  
  
  The Findings (About Bridge Performance)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7txxdtxj6z4dp22sxa2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7txxdtxj6z4dp22sxa2.png" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on this experiment, which ran for only a few hours, these are the conclusions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fastest&lt;/strong&gt; : cBridge at ~3.3 minutes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Most Reliable&lt;/strong&gt; : Synapse at 100% success rate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slowest&lt;/strong&gt; : Some “Potential” bridges at 30+ minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looking at the direction analysis:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ETH → BSC&lt;/strong&gt; : 465 transactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BSC → ETH&lt;/strong&gt; : 1,819 transactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Overall success rate&lt;/strong&gt; : 97%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Surprisingly, almost 4x more people bridge from BSC to Ethereum than the other way around. I expected the opposite - people fleeing expensive Ethereum gas fees for cheaper BSC. &lt;/p&gt;

&lt;p&gt;But my experiment showed the reverse. This is probably because of the &lt;strong&gt;Trust factor&lt;/strong&gt; - despite costs, Ethereum is seen as more secure for long-term holdings. As such, people could be farming yields or performing trades on BSC, then bridging profits back to the "safer" Ethereum. This is also probably because Ethereum still has the &lt;strong&gt;most mature ecosystem&lt;/strong&gt; - the major DeFi protocols, NFT marketplaces, and institutional infrastructure are all primarily on mainnet, so people bridge back to access them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary of my Experimental Setup
&lt;/h3&gt;

&lt;p&gt;You can find the Personal project code here: &lt;a href="https://github.com/guoxiangng/eth-bsc-bridge-monitoring" rel="noopener noreferrer"&gt;https://github.com/guoxiangng/eth-bsc-bridge-monitoring&lt;/a&gt;&lt;br&gt;&lt;br&gt;
There’s more information in the readme. The simple setup does the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor Process:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scans both ETH and BSC chains every 5 seconds
&lt;/li&gt;
&lt;li&gt;Monitors 20+ bridge contracts
&lt;/li&gt;
&lt;li&gt;Checks last 10 blocks for new transactions
&lt;/li&gt;
&lt;li&gt;Updates transaction statuses every 15 seconds
&lt;/li&gt;
&lt;li&gt;Saves everything to SQLite&lt;/li&gt;
&lt;/ul&gt;
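
&lt;p&gt;Put together, the monitor process is just a polling loop. The intervals below mirror the ones above; the scan and update callables are stand-ins for the real functions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time

SCAN_INTERVAL_SECONDS = 5
STATUS_EVERY_CYCLES = 3  # 3 cycles x 5 seconds = status updates every 15 seconds

def is_status_cycle(cycle):
    return cycle % STATUS_EVERY_CYCLES == 0

def run_monitor(scan_chains, update_statuses):
    cycle = 0
    while True:
        scan_chains()              # check the last 10 blocks on ETH and BSC
        if is_status_cycle(cycle):
            update_statuses()      # verify receipts for every pending tx
        cycle += 1
        time.sleep(SCAN_INTERVAL_SECONDS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;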

&lt;p&gt;&lt;strong&gt;Dashboard:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FastAPI serving on port 8000
&lt;/li&gt;
&lt;li&gt;Auto-refreshes every 30 seconds
&lt;/li&gt;
&lt;li&gt;Shows performance comparison across bridges
&lt;/li&gt;
&lt;li&gt;Pagination for viewing all transactions
&lt;/li&gt;
&lt;li&gt;Filtering by bridge, status, direction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Server Resources:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Running comfortably on the same EC2 instance as the Ethereum AWS Node Runner node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concluding Thoughts
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility of using my own Nodes&lt;/strong&gt;
Using my own nodes gave me the flexibility to tweak the blockchain scanning frequency however I liked, with no API delays at all. Managed providers come with an API-call/credit-based pricing model, potentially with rate limits or API quotas as well. (Although it probably still costs more to run your own nodes in production; we ought to do our own cost analysis.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transaction Volume&lt;/strong&gt;
Actually seeing 1,000+ transactions across the monitored bridges in a short period of time was really surprising. The ETH-BSC corridor is way more active than I thought (although it’s not like I have much of any frame of reference).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success Rates Vary A LOT&lt;/strong&gt;
Some bridges show 100% success, others 70–80%. I’m not sure if this is real or an artifact of my completion-detection logic, but I’m not going to investigate further :P.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;“Potential” Bridges&lt;/strong&gt;
A significant chunk of transactions I’m detecting are to DEX routers (“&lt;em&gt;Potential&lt;/em&gt;” bridges), not direct bridge contracts. These might be bridge-related (some bridges route through DEXes), or might be noise. Need better filtering.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>blockchain</category>
      <category>noderunner</category>
      <category>ethereum</category>
      <category>aws</category>
    </item>
    <item>
      <title>Setting up Blockchain Nodes with AWS Node Runners (Ethereum, Base, BSC)</title>
      <dc:creator>Guo Xiang (Harvey) Ng</dc:creator>
      <pubDate>Thu, 07 Aug 2025 04:44:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/setting-up-blockchain-nodes-with-aws-node-runners-ethereum-base-bsc-2jl8</link>
      <guid>https://dev.to/aws-builders/setting-up-blockchain-nodes-with-aws-node-runners-ethereum-base-bsc-2jl8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Taken from the intro page, AWS Node Runner is an open source project maintained by AWS DLT and blockchain solution architects. You can read more about it &lt;a href="https://aws-samples.github.io/aws-blockchain-node-runners/docs/intro" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The project contains deployment and infrastructure blueprints that allow for easy deployment of self-managed blockchain nodes and node clusters on AWS.&lt;/p&gt;

&lt;p&gt;In comparison, there is an AWS service called &lt;a href="https://aws.amazon.com/managed-blockchain/" rel="noopener noreferrer"&gt;Amazon Managed Blockchain&lt;/a&gt; (AMB), a managed service that supports a limited set of blockchain networks. There are also third-party RPC API/node providers like Alchemy and Infura.&lt;/p&gt;

&lt;p&gt;As Ethereum’s 10-year anniversary was nearing, I decided to embark on a small experiment to try out some node deployments with the AWS Node Runner project. This article documents my learnings, and I hope they will be interesting to you :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Ethereum Single RPC node setup
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws-samples.github.io/aws-blockchain-node-runners/docs/Blueprints/Ethereum" rel="noopener noreferrer"&gt;Ethereum | AWS Blockchain Node Runners&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CDK Deploy
&lt;/h3&gt;

&lt;p&gt;Following the instructions in the link above, the first choice to be made was whether to do the single-node setup or the HA setup; and of course, in the interest of cost for the POC, a single node will do for me. The project is based on TypeScript AWS CDK and has two deployment layers, common components and node resources (consistent across the supported networks), i.e.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx cdk deploy eth-common
npx cdk deploy eth-single-node --json --outputs-file single-node-deploy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuo9o3abwe2h1owaq1ey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuo9o3abwe2h1owaq1ey.png" alt=" " width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The .env file within lib/eth is the only configuration needed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmj1qpmmgh0h5fpzsjdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmj1qpmmgh0h5fpzsjdi.png" alt="Default Config" width="800" height="502"&gt;&lt;/a&gt;&lt;br&gt;
^Default Config&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzhxq2aby27gkqlw30td.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzhxq2aby27gkqlw30td.png" alt=" " width="800" height="328"&gt;&lt;/a&gt;&lt;br&gt;
^What i used, reducing the instance size slightly&lt;/p&gt;
&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;Even after waiting for some time, RPC responses were returning 0 (hex 0x0) for everything! What’s going on?!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv23r1mqpdxxev2zn1jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv23r1mqpdxxev2zn1jg.png" alt=" " width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without diving into the codebase, here’s what I found on the deployed server:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkadolo5uugwvfjbhn9oi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkadolo5uugwvfjbhn9oi.png" alt=" " width="800" height="98"&gt;&lt;/a&gt;&lt;br&gt;
^2 Docker containers, one for the Consensus client and one for the Execution client.&lt;/p&gt;

&lt;p&gt;Read more here on how these 2 clients form an Eth Node: &lt;a href="https://ethereum.org/en/developers/docs/nodes-and-clients/" rel="noopener noreferrer"&gt;https://ethereum.org/en/developers/docs/nodes-and-clients/&lt;/a&gt; &lt;/p&gt;
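
&lt;p&gt;A quick way to tell a broken node from one that is merely still syncing is to ask the execution client directly. This is a generic Web3.py check (not part of the blueprint); geth’s eth_syncing returns False once fully synced, or a progress object while catching up:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def describe_sync(syncing, block_number):
    """Human-readable summary of an eth_syncing response."""
    if syncing is False:  # fully synced
        return f"synced at block {block_number}"
    return f"syncing: {syncing['currentBlock']} of {syncing['highestBlock']}"

def sync_status(rpc_url):
    from web3 import Web3  # third-party: pip install web3
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    return describe_sync(w3.eth.syncing, w3.eth.block_number)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;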

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cjayb01torm7ppou55k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cjayb01torm7ppou55k.png" alt=" " width="800" height="408"&gt;&lt;/a&gt;&lt;br&gt;
^Execution Container Logs&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s0t4szj9m5mu4kzde13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s0t4szj9m5mu4kzde13.png" alt=" " width="800" height="300"&gt;&lt;/a&gt;&lt;br&gt;
^Consensus Container Logs&lt;/p&gt;

&lt;p&gt;It turns out that Lighthouse was hitting a critical error loading checkpoint state from https://beaconstate.info/, which was declared by default in the Node Runner .env file. After some exploration, I found there was nothing wrong with the &lt;a href="https://eth-clients.github.io/checkpoint-sync-endpoints/" rel="noopener noreferrer"&gt;Beacon Chain checkpoint sync endpoint&lt;/a&gt; itself.&lt;/p&gt;

&lt;p&gt;Instead, in the file /home/bcuser/docker-compose.yml, I found that the image versions used for both the Geth and Lighthouse containers were not the latest!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczdt9ky8l00vff3h1hz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczdt9ky8l00vff3h1hz1.png" alt=" " width="612" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe2v6sohbrpgm86aerpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe2v6sohbrpgm86aerpx.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After updating the versions, removing old containers and creating the new containers, it finally worked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/bcuser &amp;amp;&amp;amp; sudo docker compose up -d

# the docker template files for the different client combinations are found in /opt/node
# docker-compose-geth-lighthouse.yml
# docker-compose-reth-lighthouse.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswaumrbolckcluau8jvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswaumrbolckcluau8jvm.png" alt=" " width="800" height="294"&gt;&lt;/a&gt;&lt;br&gt;
^Consensus Container Logs&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8avpxedxfn97p8639u97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8avpxedxfn97p8639u97.png" alt=" " width="800" height="306"&gt;&lt;/a&gt;&lt;br&gt;
^Execution Container Logs&lt;/p&gt;
&lt;h3&gt;
  
  
  Monitoring Dashboard
&lt;/h3&gt;

&lt;p&gt;The Node Runner blueprint comes with a monitoring dashboard deployed by the CDK:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgakf29uhehnvrm4fhi95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgakf29uhehnvrm4fhi95.png" alt=" " width="800" height="650"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This 3-hour chart shows how the sync wasn’t moving for a while before it finally started. We can see the infrastructure metrics and how many blocks behind the node is.&lt;/p&gt;

&lt;p&gt;After a few more hours, the sync is mostly done now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgjkq1r5un2lthl35f49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgjkq1r5un2lthl35f49.png" alt=" " width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidwanfc6wzzt7gsjqlhd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidwanfc6wzzt7gsjqlhd.png" alt=" " width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Base Single RPC node setup — Snapshot Download Nightmare
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws-samples.github.io/aws-blockchain-node-runners/docs/Blueprints/Base" rel="noopener noreferrer"&gt;Base | AWS Blockchain Node Runners&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Base is an Ethereum Layer 2, incubated by Coinbase and built on the open-source Optimism OP Stack. Read more here: &lt;a href="https://www.coinbase.com/en-gb/developer-platform/discover/protocol-guides/guide-to-base" rel="noopener noreferrer"&gt;https://www.coinbase.com/en-gb/developer-platform/discover/protocol-guides/guide-to-base&lt;/a&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  CDK Deploy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlhw6glonv73lellz15w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlhw6glonv73lellz15w.png" alt=" " width="800" height="236"&gt;&lt;/a&gt;&lt;br&gt;
^From the Node Runner Base blueprint page&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6s48yptb83ckhgc0dxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6s48yptb83ckhgc0dxz.png" alt=" " width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;^As with Ethereum and all the other Node Runner blueprints, we just need to configure the .env file and run the two layers of cdk deploy. I dropped the instance type from 2xlarge to xlarge and included the private IP of my Ethereum node from the previous section.&lt;/p&gt;
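&lt;p&gt;For readers without the screenshots, the change amounts to a handful of lines in the blueprint’s .env before deploying. A rough sketch (variable names other than BASE_SNAPSHOT_URL are illustrative; check the blueprint’s sample .env for the exact keys):&lt;/p&gt;

```shell
# Illustrative .env tweaks for the Base blueprint (key names may differ per version).
BASE_INSTANCE_TYPE="m7g.xlarge"                     # downsized from the default 2xlarge
BASE_SNAPSHOT_URL="https://example/base-snapshot"   # placeholder; use the latest URL from the Base docs
BASE_L1_EXECUTION_ENDPOINT="http://10.0.0.10:8545"  # hypothetical key: private IP of my Ethereum node

# Then the usual two layers of cdk deploy (stack names assumed to mirror the BSC blueprint):
# npx cdk deploy base-common
# npx cdk deploy base-single-node --json --outputs-file single-node-deploy.json
```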
&lt;h3&gt;
  
  
  Troubleshooting/What goes on in the node?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxplz5g5p7te27qyg10kn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxplz5g5p7te27qyg10kn.png" alt=" " width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rg4xq3lgkxjqci726ch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rg4xq3lgkxjqci726ch.png" alt=" " width="800" height="31"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the IO and the network-in metrics! The node is downloading a &lt;a href="https://docs.base.org/base-chain/node-operators/snapshots/" rel="noopener noreferrer"&gt;Base snapshot&lt;/a&gt;. On my first try, the instance was stopped by a cost-saver automation I had forgotten to turn off, which killed the download script. After locating and rerunning the script, the download restarted from scratch, wasting a few hours.&lt;/p&gt;

&lt;p&gt;I then redeployed the CDK stack, this time declaring a &lt;a href="https://docs.base.org/base-chain/node-operators/run-a-base-node#restoring-from-snapshot" rel="noopener noreferrer"&gt;BASE_SNAPSHOT_URL&lt;/a&gt; pointing to the latest version (the same URL in the end) and tried to let it run its course… When I checked in the next day, about 17 hours later, I noticed the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39r02wfflx72mhqtl9dr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39r02wfflx72mhqtl9dr.png" alt=" " width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download progress stood at roughly 700GB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When I finally checked the total download size, it translated to a MASSIVE 4.6TB snapshot&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;wget eventually failed due to insufficient disk space; the .env default EBS volume size of 1000GB was nowhere near enough&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
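&lt;p&gt;In hindsight, a cheap pre-flight check before kicking off wget would have caught this long before hour 17. A minimal POSIX sh sketch (the mount point and required size are placeholders):&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Abort a multi-terabyte download early if the target volume can't hold it.
# Usage: check_space MOUNT_POINT REQUIRED_GIB
check_space() {
  # 4th column of `df -P` output is available space in 1KiB blocks
  avail_kib=$(df -Pk "$1" | awk 'NR==2 {print $4}')
  required_kib=$(( $2 * 1024 * 1024 ))
  if [ "$avail_kib" -lt "$required_kib" ]; then
    echo "insufficient space on $1: need ${2}GiB, have $(( avail_kib / 1024 / 1024 ))GiB"
    return 1
  fi
  echo "ok: $1 has room for ${2}GiB"
}

# e.g. `check_space /data 4700` before a ~4.6TB snapshot download
check_space / 0   # prints "ok: / has room for 0GiB"
```

&lt;p&gt;Failing fast here costs a second instead of 17 hours of EBS and instance time.&lt;/p&gt;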

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkol6pcchdafa0r7qgf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkol6pcchdafa0r7qgf5.png" alt=" " width="478" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point I decided I’d had enough “fun” with Base; an EBS volume of that size racks up quite a hefty sum for exploratory learning.&lt;/p&gt;
&lt;h2&gt;
  
  
  BNB Smart Chain (BSC) Single RPC node setup
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws-samples.github.io/aws-blockchain-node-runners/docs/Blueprints/Bsc" rel="noopener noreferrer"&gt;Bsc | AWS Blockchain Node Runners&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  CDK Deploy
&lt;/h3&gt;

&lt;p&gt;I used a much smaller instance type (4xlarge -&amp;gt; xlarge), less storage (4TB -&amp;gt; 1TB) and the &lt;a href="https://github.com/48Club/bsc-snapshots" rel="noopener noreferrer"&gt;Fast Snapshot&lt;/a&gt; instead of the default:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76t0210ef8s4mw38eenc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76t0210ef8s4mw38eenc.png" alt=" " width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;^Default .env&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn76b95tbasy62k6ism4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn76b95tbasy62k6ism4z.png" alt=" " width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;^Modified .env&lt;/p&gt;
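&lt;p&gt;In text form, the downsizing amounts to something like this (variable names are illustrative; check the blueprint’s sample .env for the exact keys):&lt;/p&gt;

```shell
# Illustrative modified .env for the BSC blueprint (key names may differ per version).
BSC_INSTANCE_TYPE="m7g.xlarge"   # down from the default 4xlarge
BSC_DATA_VOL_SIZE="1000"         # GB; down from the default 4TB
# Placeholder: point this at the actual fast-snapshot download link from the 48Club repo.
BSC_SNAPSHOT_URL="https://example/bsc-fast-snapshot"
```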

&lt;p&gt;The fast snapshot is significantly smaller than the full one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqo0fubz413dm7lnjrit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqo0fubz413dm7lnjrit.png" alt=" " width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deploy CDK stacks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx cdk deploy bsc-common
npx cdk deploy bsc-single-node --json --outputs-file single-node-deploy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For BSC, I faced no trouble at all with the setup despite downsizing from the default configuration. As you can see from the monitoring dashboard below, sync completed in under 4 hours (inclusive of the fast snapshot download).&lt;/p&gt;
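&lt;p&gt;A quick way to confirm this from the instance itself is the standard eth_syncing JSON-RPC call, which works against any EVM client and returns false once the node has caught up. A small sketch (the endpoint is a placeholder for your node’s RPC port):&lt;/p&gt;

```shell
# eth_syncing returns `false` once an EVM node has finished syncing;
# otherwise it returns a progress object.
# On the node itself (endpoint is a placeholder):
# curl -s -X POST -H 'Content-Type: application/json' \
#      --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
#      http://localhost:8545

# Interpret the JSON-RPC response:
sync_status() {
  case "$1" in
    *'"result":false'*) echo "synced" ;;
    *)                  echo "still syncing" ;;
  esac
}

# A fully synced node answers with "result":false
sync_status '{"jsonrpc":"2.0","id":1,"result":false}'   # prints "synced"
```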

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5z6aclawa7h63vubzjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5z6aclawa7h63vubzjx.png" alt=" " width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;This quick foray into blockchain nodes was quite an interesting experiment for me. I learned how each blockchain has unique requirements — from Ethereum’s dual-client architecture and sync processes to Base’s massive 4.6TB snapshot and storage requirements. The software stacks are rapidly evolving, requiring constant vigilance for updates. While managed services are great for building applications, running your own infrastructure has no substitute when the goal is to understand what goes on under the hood (although I only just scratched the surface here) :)&lt;/p&gt;

&lt;p&gt;*On top of Eth and Base, I also read up cursorily on a number of the other Node Runner blueprints. Unfortunately I could not play around with Solana, a non-EVM chain whose Agave clients generate significant outbound traffic that would have blown up my budget (AWS charges for outbound internet data transfer, not inbound).&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>noderunner</category>
      <category>aws</category>
      <category>ethereum</category>
    </item>
    <item>
      <title>Breaking the Waterfall: Using Azure DevOps Boards for Agile AWS Infrastructure Delivery</title>
      <dc:creator>Guo Xiang (Harvey) Ng</dc:creator>
      <pubDate>Mon, 19 May 2025 16:12:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/breaking-the-waterfall-using-azure-devops-boards-for-agile-aws-infrastructure-delivery-274j</link>
      <guid>https://dev.to/aws-builders/breaking-the-waterfall-using-azure-devops-boards-for-agile-aws-infrastructure-delivery-274j</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After experiencing quite a bit of AWS cloud infrastructure delivery work, I’ve noticed and socialized a persistent challenge: while software development teams have widely embraced agile methodologies, day 1 infrastructure delivery teams often remain trapped in waterfall planning and project status reporting. On the other hand, cloud operations teams may operate in a reactive mode, responding to tickets and service requests with limited capacity for proactive planning or improvements.&lt;/p&gt;

&lt;p&gt;The traditional approach to new infrastructure projects typically follows a sequential model: design the architecture, build the foundation, implement the components, and then operate the result. In today’s public cloud-native world, this approach has serious limitations. “Cloud infrastructure” needs to adapt continuously as new workloads are deployed, requirements evolve, and operational insights demand changes.&lt;/p&gt;

&lt;p&gt;For tech consultancies/system integrators and small lean cloud operation teams, many IT scopes could fall under the purview of a single squad (i.e. security, networking, observability, DevSecOps, database administration, infrastructure work like systems administration, configuration management and container platform operations) instead of having established siloed teams.&lt;/p&gt;

&lt;p&gt;This article aims to share a simplified and generalized agile methodology utilizing Azure DevOps Boards — specifically with the Basic process template — to manage AWS infrastructure delivery in an agile manner. Whether you’re an infrastructure project manager used to waterfall who is looking to adopt agile practices or a cloud engineer wanting to implement self-organization, hopefully this read could help you with a starting point to manage the team and deliver results more efficiently and responsively.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Public Cloud Infrastructure Projects Need Agile Approaches&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Infrastructure delivery has fundamentally changed with cloud adoption. Consider these shifts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous Evolution&lt;/strong&gt;: Infrastructure is no longer “built once and forgotten.” AWS environments continuously evolve as new services become available and as business/non-functional requirements change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;: With tools like Terraform, CloudFormation, and CDK, infrastructure changes can be versioned, tested, and deployed using the same agile principles as application code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mixed Work Types&lt;/strong&gt;: As mentioned earlier, Cloud delivery/operations teams might need to simultaneously handle planned infrastructure development, operational support, security improvements, platform feature requests across many IT domains.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-Team Dependencies&lt;/strong&gt;: Cloud infrastructure/operations teams have complex dependencies with multiple application teams, security teams, and third parties all with different timelines.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These factors make the traditional waterfall approach increasingly ineffective. When infrastructure projects follow rigid phases, they struggle to adapt to changing requirements, can’t easily incorporate operational learnings, and create bottlenecks for application teams.&lt;/p&gt;

&lt;p&gt;The most relevant point for me is point 3 on managing mixed work types. For a day 1 tech consultancy team, traditional project management approaches often fail to account for this mix, focusing only on planned development work while treating the rest as distractions or as incomplete planned development work.&lt;/p&gt;

&lt;p&gt;In this approach, we explicitly acknowledge all these work types and use Azure DevOps boards to visualize them and make informed decisions while providing an actual view and history of the work on the ground for management if required.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Azure DevOps Boards&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While Azure DevOps has many features, including integration with version control systems, pipelines and artifact repositories, we want to focus on its project management and agile tooling: the Kanban Boards feature. Let’s set up a beginner-friendly board, and I’ll ‘talk’ through my thought process of how to utilize it.&lt;/p&gt;

&lt;p&gt;Create a new project and select the ‘Basic’ Work Item Process Template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqc603momuy1by74d2ma5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqc603momuy1by74d2ma5.png" alt="Image description" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are currently 4 &lt;a href="https://learn.microsoft.com/en-us/azure/devops/boards/work-items/guidance/choose-process?view=azure-devops&amp;amp;tabs=agile-process" rel="noopener noreferrer"&gt;default processes&lt;/a&gt;, and Basic, currently in selective preview, is the most lightweight:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvk3ewedyh8hhuaz21r4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvk3ewedyh8hhuaz21r4n.png" alt="Image description" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The simplicity of the Basic process template (with just Epics, Issues, and Tasks) makes it accessible for infrastructure-focused teams who do not wish to follow scrum or over-complicate the agile process, but need better tracking than spreadsheets or emails provide.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Epic-Level Organization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can use functional/technical domain areas as epics, such as “network security”, “infra compliance/hardening” or “architecture”, but this creates never-ending buckets of work that prevent tracking at the epic level (go ahead if the team is small and tracking at the issue level is good enough for getting started).&lt;/p&gt;

&lt;p&gt;Else, we want to organize our epics as time-bound, completable milestones. This approach maintains the essential agile principle that epics should eventually reach a “done” state.&lt;/p&gt;

&lt;p&gt;Consider the following sample epic categories for either capturing AWS infrastructure milestones or value stream/capability additions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;“MVP Landing Zone Implementation” / “Establish Secure Multi-Account Foundation”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Establish Infrastructure Provisioning pipelines and tooling”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Production Network Deployment” / Epics that encompass the deployment of all resources for specific network zones (accounts/VPCs)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Security Controls Baseline”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Setup Container Orchestration Platform” / “Setup Kubernetes / Nomad cluster baseline”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“System Monitoring Baseline with Cloudwatch/Dynatrace”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Creation of baseline EC2 golden images” / Other way to establish baseline Server capabilities (i.e. Remote access, vulnerability scan, patching)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consider a dedicated “Ongoing Operational Support” epic to capture smaller operational tasks, bug fixes, and workload support activities that don’t naturally fit under specific capability epics (just so those issues can have a parent)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Issue Classification and Management&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Under each epic, issues represent the actual units of work. In the Basic process template, the “Issue” work item type is versatile enough to handle various kinds of infrastructure work.&lt;br&gt;&lt;br&gt;
Issues can be created in a few different ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Team Self-Creation&lt;/strong&gt;: Good engineers identify and create issues proactively&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tech Lead Assignment:&lt;/strong&gt; Tech lead/PM can choose to assign the entire epic to good engineers with ownership or assign at the issue level as well (please don’t micromanage by assigning tasks :X )&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Requests&lt;/strong&gt;: Application teams can submit requests through integrated channels (*see below)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring Alerts&lt;/strong&gt;: Automated creation from monitoring systems for operational issues&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The approach you choose should align with your team’s maturity level, culture and ability. Teams with experienced engineers who understand the broader architectural context can benefit from more self-management.&lt;/p&gt;

&lt;p&gt;*Azure DevOps doesn’t natively function as an ITSM tool but it can be integrated with platforms like ServiceNow or Jira to automatically create work items from service requests. Other Microsoft native options include Teams, Forms and Outlook Email.&lt;/p&gt;
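&lt;p&gt;For the automated routes (3 and 4), work items can be created with a single call to the Azure DevOps REST API. A minimal sketch (organization, project and field values are placeholders; authentication uses a Personal Access Token):&lt;/p&gt;

```shell
# Create an Issue via the work-items REST endpoint.
# The request body is a JSON Patch document; System.Tags takes a
# semicolon-separated list. ORG, PROJECT and AZDO_PAT are placeholders.
ORG="myorg"; PROJECT="CloudInfra"
payload='[
  {"op": "add", "path": "/fields/System.Title", "value": "Harden S3 bucket policies"},
  {"op": "add", "path": "/fields/System.Tags",  "value": "security; compliance"}
]'
# curl -u ":$AZDO_PAT" -X POST \
#      -H "Content-Type: application/json-patch+json" \
#      --data "$payload" \
#      "https://dev.azure.com/$ORG/$PROJECT/_apis/wit/workitems/\$Issue?api-version=7.1"
echo "$payload"
```

&lt;p&gt;The same payload shape can be sent from a monitoring webhook or a Forms/Power Automate flow, which is one way the integrations above can plug in.&lt;/p&gt;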

&lt;h3&gt;
  
  
  &lt;strong&gt;Organizing Work Through Tagging&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It is worth exploring a simple dual-layer tagging system to enable filtering and organization by 1. Technical Domain and 2. Work Type.&lt;br&gt;&lt;br&gt;
Consider the non-exhaustive example list below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Domain Tags:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;networking&lt;/code&gt;: VPC, routing, firewalls, Certificates, IoT&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;security&lt;/code&gt;: IAM, security groups, compliance controls&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;compute&lt;/code&gt;: EC2, container platforms, serverless&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;storage&lt;/code&gt;: S3, EBS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;monitoring&lt;/code&gt;: CloudWatch, metrics, alerting&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sysad&lt;/code&gt;: Patch Management, Windows Active Directory, Backup Management&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;middleware&lt;/code&gt;: Configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;logging&lt;/code&gt;: Cloudwatch, log transformation, syslog, splunk&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Work Type Tags:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;new-build&lt;/code&gt;: Net new infrastructure builds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;enhancement&lt;/code&gt;: Improvements to existing builds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;support-task&lt;/code&gt;: Tasks supporting application teams&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;bug-fix&lt;/code&gt;: Issues that need remediation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;compliance&lt;/code&gt;: Regulatory or security compliance work&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cost-optimization&lt;/code&gt;: Cost reduction efforts&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Kanban Board Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s configure the Azure DevOps board with columns that represent our workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To Do&lt;/strong&gt;: Identified work yet to be assigned/picked up&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Doing:&lt;/strong&gt; Currently being worked on&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;(Optional) Blocked/Pending:&lt;/strong&gt; Cannot be completed due to factors beyond the team’s control&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;(Optional) Review/Validation&lt;/strong&gt;: Initial implementation completed but require review/testing/validation to be sure it meets the DoD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;(Optional) To be propagated/replicated:&lt;/strong&gt; Generic Solution/mechanisms implemented and tested to have worked but need to be replicated for multiple workloads/accounts/automations/etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Done&lt;/strong&gt;: Completed work&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For infrastructure teams, the Review/Validation column is particularly important, as it represents the time between technical completion and confirmation that the infrastructure is working as intended in the actual environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Sample Board Illustration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s a sample board for illustration purposes for the completely uninitiated. It doesn’t include the optional workflow states mentioned earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o21arwb530dpkhu4u11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o21arwb530dpkhu4u11.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To modify and include more workflow columns, navigate to&lt;br&gt;&lt;br&gt;
Settings &amp;gt; Columns &amp;gt; Add Column&lt;/p&gt;

&lt;p&gt;It can then look like this if you choose to add additional columns:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F488bkfdy81ko6lf1txsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F488bkfdy81ko6lf1txsb.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;*If you would like a script to generate the 5 epics, 10 issues and 10 tasks above in one go, here is a link to the PowerShell file:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/guoxiangng/azure-devops-board-templates" rel="noopener noreferrer"&gt;https://github.com/guoxiangng/azure-devops-board-templates&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Using Swimlanes for Work Types&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Azure DevOps boards support swimlanes, which we use to separate different types of work visually. This gives the team immediate visibility into how much effort is going into each category:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;New Capability&lt;/strong&gt;: Brand new infrastructure components&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhancement&lt;/strong&gt;: Improvements to existing infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Operational&lt;/strong&gt;: Work related to running the platform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Urgent/Unplanned&lt;/strong&gt;: Critical fixes or urgent requests&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this visualization, it becomes immediately apparent if one type of work is consuming too much capacity or if critical work is not progressing quickly enough.&lt;/p&gt;

&lt;p&gt;This approach to swimlanes complements our epic structure by allowing us to see work types across all epics. While an epic might focus on a specific capability (like “Implement Secure Container Platform”), the swimlanes show us how that epic’s work is distributed across new features, enhancements, and operational tasks.&lt;/p&gt;

&lt;p&gt;Here’s how you can add swimlanes on an Azure DevOps board:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1711nd8gev33t7m3q3bp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1711nd8gev33t7m3q3bp.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Common Challenges and Key Lessons Learned&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;These are some challenges from my experience and research (online and asking around):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Estimation Difficulty&lt;/strong&gt;: Infrastructure tasks varied significantly in complexity, making categorization of issues/tasks for work estimation challenging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Operational Interruptions&lt;/strong&gt;: Unpredictable operational issues and management escalations can still disrupt planned work or disrespect the agile process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-Team Coordination&lt;/strong&gt;: Coordinating dependencies with other teams required additional communication channels (and infrastructure blockers can take quite long to resolve if other teams work by waterfall and had a planning gap)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key lessons learned:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start Simple&lt;/strong&gt;: Begin with basic board usage and gradually introduce more agile practices as the team adapts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Make All Work Visible&lt;/strong&gt;: Ensure operational and support work appears on the board alongside planned development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Adapt Planning&lt;/strong&gt;: Adjust capacity allocations based on emerging needs rather than sticking rigidly to initial plans. (i.e. adapt whether to have sprints and standups based on team maturity, adapt sprint duration based on agreed service SLAs, adapt sprint planning approaches to protect the team from management/app team expectations)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experiment and Optimize the Process&lt;/strong&gt;: With the metrics at hand and query-able, try to define ways to measure success and optimize for work process improvements.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
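&lt;p&gt;On lesson 4, those metrics are query-able through WIQL, so measurements can be scripted instead of assembled by hand. A minimal sketch reusing the tag names from earlier (organization, project and the PAT are placeholders):&lt;/p&gt;

```shell
# Count completed unplanned work by posting a WIQL query to the REST API.
# ORG, PROJECT and AZDO_PAT are placeholders.
wiql="SELECT [System.Id] FROM WorkItems WHERE [System.Tags] CONTAINS 'bug-fix' AND [System.State] = 'Done'"
payload=$(printf '{"query": "%s"}' "$wiql")
# curl -u ":$AZDO_PAT" -X POST -H "Content-Type: application/json" \
#      --data "$payload" \
#      "https://dev.azure.com/$ORG/$PROJECT/_apis/wit/wiql?api-version=7.1"
echo "$payload"
```

&lt;p&gt;Swapping the tag or state in the WHERE clause gives per-swimlane or per-domain counts without touching the board UI.&lt;/p&gt;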

&lt;h2&gt;
  
  
  &lt;strong&gt;Concluding Remarks&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With all this said, remember that the best methodology is the one that fits your team’s culture, capabilities, and constraints. Use this approach as a starting point, but always adapt it to your specific context and improve on processes over time.&lt;/p&gt;

&lt;p&gt;Lastly, once again this article is for the uninitiated… and may not be relevant for seasoned operations teams with their own ITSM tooling nor for well-oiled co-located teams with clear role delineations.&lt;/p&gt;

&lt;p&gt;If you made it to the end, I hope you got something out of the read. There are many more features that Azure DevOps Boards and similar project management software provide. Have fun exploring!&lt;/p&gt;

</description>
      <category>projectmanagement</category>
      <category>agile</category>
      <category>azure</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
