<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ejeh Daniel</title>
    <description>The latest articles on DEV Community by Ejeh Daniel (@danishaft).</description>
    <link>https://dev.to/danishaft</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F519512%2Fdb21edf3-aadb-4f66-8863-e5753542b1a2.jpeg</url>
      <title>DEV Community: Ejeh Daniel</title>
      <link>https://dev.to/danishaft</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/danishaft"/>
    <language>en</language>
    <item>
      <title>How I Built a Tech Event Discovery Platform with Real-Time Scraping</title>
      <dc:creator>Ejeh Daniel</dc:creator>
      <pubDate>Sun, 23 Nov 2025 13:00:33 +0000</pubDate>
      <link>https://dev.to/danishaft/how-i-built-a-tech-event-discovery-platform-with-real-time-scraping-3o4f</link>
      <guid>https://dev.to/danishaft/how-i-built-a-tech-event-discovery-platform-with-real-time-scraping-3o4f</guid>
      <description>&lt;p&gt;I'm a software developer, and I've been attending tech events for over three years now. I've used platforms like Luma and Eventbrite to find events, but there's always been one problem that frustrated me. The noise.&lt;/p&gt;

&lt;p&gt;Most event listing sites list cool tech events, but they also mix in so many non-tech events that it becomes overwhelming. When I'm looking for a React workshop or an AI conference, I don't want to scroll through cooking classes and yoga sessions. I remember searching for "JavaScript meetups" and getting results for wine tasting events and fitness bootcamps mixed in. The problem was clear. I wanted a clean, focused experience that only showed tech events.&lt;/p&gt;

&lt;p&gt;At first, I just thought about it, but I didn't know how to approach building it. Then recently, I had to automate a dataset ops workflow at work. I needed to pull product details, categorize products, process and clean data, and save results. That's when I realized scraping could be useful here too. I've always loved building software solutions I wish I had. It's personal. I looked into it, decided it could be done, and I got started.&lt;/p&gt;

&lt;p&gt;The goal was straightforward. I wanted a platform that caters exclusively to tech events with a clean interface and smooth experience. Success would be searching for tech events and getting relevant results without the noise, delivered quickly. I built it in a week, focusing on real-time scraping as users search rather than pre-scraping everything. I was curious about making scraping fast and reliable on demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Architecture
&lt;/h2&gt;

&lt;p&gt;The core idea is simple. Scrape tech events from platforms like Luma and Eventbrite, save them to the database, and then list them. Once events are in the database, I can filter, search, and display them without hitting the source platforms every time.&lt;/p&gt;

&lt;p&gt;From there, I built the architecture around database-first search. When someone searches for events, it checks the database first. If results exist, they're served instantly. Only when nothing's in the database does it trigger a background scraping job. Most searches are fast this way, no waiting around for scraping when the data already exists.&lt;/p&gt;

&lt;p&gt;Here's how the database-first lookup works in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Check database first
try {
  const dbResults = await searchDatabase(searchQuery, filters, DEFAULT_DB_SEARCH_LIMIT)

  if (dbResults.events.length &amp;gt; 0) {
    return NextResponse.json({
      success: true,
      source: 'database',
      events: dbResults.events,
      total: dbResults.total,
    })
  }
} catch (dbError) {
  console.error('Database search failed:', dbError)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For async processing, I went with BullMQ and the job queue pattern. When a search has no database results, the API creates a job and returns immediately with a job reference. A separate worker process handles the scraping in the background while the frontend maintains a connection to track job completion. This decouples the search request from the scraping operation. The user gets an immediate response, and scraping happens independently without blocking the request cycle. It's tempting to scrape synchronously, but that defeats the purpose of having a database layer in the first place. With the queue, you get responsiveness without giving anything up.&lt;/p&gt;

&lt;p&gt;When there are no database results, the API queues a scraping job and returns immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create unique job ID
const jobId = `search-${Date.now()}-${crypto.randomBytes(4).toString('hex')}`

await scrapingQueue.add('scrape-events', {
  jobId,
  query: searchQuery,
  platforms: searchPlatforms,
  city: searchCity,
})

return NextResponse.json({
  success: true,
  jobId,
  status: 'running',
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To keep the database fresh, I set up daily scraping runs at 6 AM UTC using Vercel cron jobs. These runs hit all the event platforms systematically, ensuring the database stays current with new events. The cron jobs use Next.js after() to process scraping operations asynchronously, so the endpoint responds immediately while the work happens in the background. This keeps the database continuously updated without manual intervention or blocking request handlers.&lt;/p&gt;
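&lt;p&gt;As a minimal sketch, the schedule lives in the Vercel cron configuration. The path below is a hypothetical example, not the project's actual route; "0 6 * * *" means 6 AM UTC daily:&lt;/p&gt;

```json
{
  "crons": [
    {
      "path": "/api/cron/scrape",
      "schedule": "0 6 * * *"
    }
  ]
}
```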

&lt;p&gt;For scraping, I started with Apify because it's battle-tested and handles most edge cases out of the box. It works well, but it introduces recurring costs and adds a dependency I can't control directly. If the platform changes and Apify's selectors break, I'm waiting on their updates.&lt;/p&gt;

&lt;p&gt;That's when I added Puppeteer as a fallback. It's lightweight, gives me full control over selectors and timing, and with the stealth plugin, it handles anti-bot detection just fine. So now I run with both: Apify handles the heavy lifting most of the time, but if selectors fail or I need to adapt quickly, Puppeteer takes over. That dual approach gives me reliability plus flexibility. The trade-off is managing two tools instead of one, but for something as brittle as web scraping, having a fallback mechanism actually reduces risk.&lt;/p&gt;

&lt;p&gt;Here's how I configure Puppeteer with stealth mode and handle browser initialization:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import puppeteer from "puppeteer-extra";
import StealthPlugin from "puppeteer-extra-plugin-stealth";
import { z } from "zod";
import { prisma } from "./prisma";

puppeteer.use(StealthPlugin());

// user agents to rotate
const USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
];

// Get random user agent
function getRandomUserAgent(): string {
    return USER_AGENTS[Math.floor(Math.random() * USER_AGENTS.length)];
}

// Create browser with stealth configuration
async function createBrowser() {
    // Try multiple possible paths for Chromium
    let executablePath = process.env.PUPPETEER_EXECUTABLE_PATH;

    if (!executablePath) {
        const fs = require('fs');
        const possiblePaths = ['/usr/bin/chromium', '/usr/bin/chromium-browser', '/usr/bin/google-chrome'];
        for (const path of possiblePaths) {
            try {
                if (fs.existsSync(path)) {
                    executablePath = path;
                    break;
                }
            } catch {
                continue;
            }
        }
    }

    return await puppeteer.launch({
        headless: true,
        executablePath, // Use system Chromium in Docker
        args: [
            "--no-sandbox",
            "--disable-setuid-sandbox",
            "--disable-blink-features=AutomationControlled",
            "--disable-features=IsolateOrigins,site-per-process",
            "--disable-web-security",
            "--disable-dev-shm-usage",
            "--disable-gpu",
            "--disable-software-rasterizer",
            "--disable-extensions",
            "--no-first-run",
            "--disable-default-apps",
            "--disable-background-networking",
            "--single-process",
            "--disable-zygote",
            "--disable-crash-reporter",
            "--disable-breakpad",
            "--disable-background-timer-throttling",
            "--disable-backgrounding-occluded-windows",
            "--disable-renderer-backgrounding",
        ],
        ignoreDefaultArgs: ["--disable-extensions"],
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The batch scraping function navigates to the page, waits for content to load, and scrolls to trigger lazy loading. This approach handles dynamic content that loads as you scroll. It's important for platforms like Eventbrite where most events are loaded on demand rather than served upfront. The scraper tries multiple selectors to find event cards, accounts for network errors with exponential backoff, and detects when it's being blocked before wasting resources.&lt;/p&gt;

&lt;p&gt;Here's how I handle navigation with retry logic for network errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let retries = 3;
let lastError: Error | null = null;

while (retries &amp;gt; 0) {
  try {
    await page.goto(searchUrl, {
      waitUntil: "domcontentloaded",
      timeout: 60000,
    });
    break;
  } catch (error: any) {
    lastError = error;
    retries--;
    if (
      error.message?.includes("ERR_NETWORK_CHANGED") ||
      error.message?.includes("net::ERR") ||
      error.message?.includes("Navigation timeout")
    ) {
      if (retries &amp;gt; 0) {
        await delay(2000 * (4 - retries));
        continue;
      }
    }
    throw error;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the page loads, I scroll to trigger lazy loading and extract events with multiple selector fallbacks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Scroll to the bottom of the page to trigger lazy loading
await page.evaluate(async () =&amp;gt; {
  await new Promise&amp;lt;void&amp;gt;((resolve) =&amp;gt; {
    let totalHeight = 0;
    const distance = 100;
    const timer = setInterval(() =&amp;gt; {
      const scrollHeight = document.body.scrollHeight;
      window.scrollBy(0, distance);
      totalHeight += distance;

      if (totalHeight &amp;gt;= scrollHeight) {
        clearInterval(timer);
        resolve();
      }
    }, 100);
  });
});

// The selector fallbacks and block detection also need the DOM,
// so they run in the browser context through page.evaluate too
await page.evaluate(() =&amp;gt; {
  const selectors = [
    'article[class*="event-card"]',
    'div[class*="event-card"]',
    '[data-testid="event-card"]',
    "article.eds-event-card-content",
  ];

  for (const selector of selectors) {
    const elements = document.querySelectorAll(selector);
    if (elements.length &amp;gt; 0) {
      // Extract events...
      break;
    }
  }

  // Detect if we're being blocked
  const bodyText = document.body?.textContent || "";
  if (bodyText.includes("blocked") || bodyText.includes("captcha")) {
    console.error("[PUPPETEER] Possible blocking detected");
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Redis setup ended up being one of those decisions where two tools actually work better than one. I'm using ioredis for the BullMQ connection because job queues need persistent, reliable connections. For caching though, I switched to Upstash Redis. It's HTTP-based and built for serverless, so it plays nice with Vercel. Two clients, two purposes, and together they give me reliable job processing plus fast caching that scales.&lt;/p&gt;
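&lt;p&gt;As a rough sketch of that wiring (the module shape and names here are illustrative, and it assumes the usual REDIS_URL and Upstash REST environment variables are set):&lt;/p&gt;

```javascript
// Illustrative wiring, not the project's actual module
import IORedis from "ioredis";
import { Redis as UpstashRedis } from "@upstash/redis";
import { Queue } from "bullmq";

// Persistent TCP connection for the job queue.
// BullMQ requires maxRetriesPerRequest to be null for its blocking commands.
const queueConnection = new IORedis(process.env.REDIS_URL, {
  maxRetriesPerRequest: null,
});

export const scrapingQueue = new Queue("scrape-events", {
  connection: queueConnection,
});

// Stateless HTTP client for caching; suits serverless request handlers
// (reads UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN)
export const cache = UpstashRedis.fromEnv();
```

&lt;p&gt;BullMQ needs that long-lived TCP connection for its blocking commands, which is exactly what an HTTP-based client can't provide; that split is the reason for running two clients.&lt;/p&gt;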

&lt;h2&gt;
  
  
  What It Actually Does
&lt;/h2&gt;

&lt;p&gt;Here's how it works in practice. When you search for "React workshops in Seattle," the system checks the database first. If matching events are already saved, you get them instantly. No waiting, no scraping. Results show up immediately with all the details. Title, date, venue, price, and a link to register.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx4t918p3gznlnjejk3y.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx4t918p3gznlnjejk3y.jpeg" alt="A screenshot showing events being listed from the database" width="750" height="1026"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image: Event listing page with search results from database&lt;/p&gt;

&lt;p&gt;But what if you're searching for something not yet in the database? That's when the job queue kicks in. The API creates a scraping job and returns immediately with a job ID. A worker process starts scraping Luma and Eventbrite in the background while the frontend tracks the job status. Once the worker finds events, they get saved to the database and the frontend automatically updates with the results. From your perspective, you search, see a loading state, and then results appear. The page stays responsive throughout.&lt;/p&gt;
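&lt;p&gt;A simplified sketch of the client side of that flow. The status shape and names here are hypothetical, and the real frontend tracks jobs over a live connection rather than a bare loop:&lt;/p&gt;

```javascript
// Poll a job until it finishes; fetchStatus is injected so it can wrap
// whatever status endpoint the backend exposes (hypothetical shape)
async function pollJobStatus(jobId, fetchStatus, { intervalMs = 1000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt !== maxAttempts; attempt++) {
    const status = await fetchStatus(jobId);
    if (status.state === "completed") return status.events;
    if (status.state === "failed") {
      throw new Error(status.error || "Scraping job failed");
    }
    // Wait before asking again
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Job polling timed out");
}
```

&lt;p&gt;In the app, fetchStatus would wrap a fetch call to the job status endpoint, and the resolved events would be fed into component state.&lt;/p&gt;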

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux50g7fgp7ue7a5vlbog.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux50g7fgp7ue7a5vlbog.gif" alt="A gif showing live scraping in progress " width="400" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GIF: Live scraping in progress with results appearing as they're found&lt;/p&gt;

&lt;p&gt;For daily updates, Vercel cron jobs run at 6 AM UTC and systematically scrape all the event platforms. Instead of a single broad search, I run multiple targeted queries per platform. "ai," "data science," "python," "reactjs," "javascript," "machine learning." This multi-query approach gives much better coverage than casting a wide net. For each city and platform combination, I deduplicate by URL to avoid saving duplicate events. When you wake up, fresh events are already there without any manual intervention.&lt;/p&gt;
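&lt;p&gt;The dedupe step can be sketched as a small pure function. The url field name is an assumption about the scraped event shape, and the normalization rules are illustrative:&lt;/p&gt;

```javascript
// Keep the first event seen for each normalized URL
function dedupeByUrl(events) {
  const seen = new Map();
  for (const event of events) {
    // Drop query strings and trailing slashes so near-identical URLs
    // collapse to the same key
    const key = event.url.split("?")[0].replace(/\/+$/, "");
    if (!seen.has(key)) seen.set(key, event);
  }
  return [...seen.values()];
}
```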

&lt;h2&gt;
  
  
  What I Learned and What Went Wrong
&lt;/h2&gt;

&lt;p&gt;Anti-bot detection was the first real hurdle. Eventbrite and Luma both have systems that detect automated browsing, and my initial Puppeteer setup got blocked almost immediately. I thought the stealth plugin would be enough, but it wasn't. I had to rotate user agents, override the webdriver property, set realistic viewports, and add random delays between actions. Even then, I still hit rate limits occasionally. The bigger lesson is that anti-bot systems are constantly evolving. What works today might not work next month when they update their detection. This is why having Apify as a fallback matters. If my Puppeteer setup breaks, I can switch strategies without rewriting the whole system.&lt;/p&gt;
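&lt;p&gt;The random delays are the simplest piece of that. A tiny helper along these lines (the bounds are arbitrary illustrative defaults):&lt;/p&gt;

```javascript
// Human-like random pause between page interactions
function randomDelay(minMs = 500, maxMs = 2000) {
  const ms = minMs + Math.floor(Math.random() * (maxMs - minMs));
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

&lt;p&gt;Awaiting randomDelay() between navigation, scrolling, and clicks makes the timing less uniform than a fixed sleep, which is one of the signals detection systems look at.&lt;/p&gt;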

&lt;p&gt;Docker and Chromium compatibility turned into its own problem. When I first tried running Puppeteer in a Docker container, Chromium would crash with cryptic errors about crashpad handlers and zygote processes. I spent hours debugging before realizing I needed specific flags like --single-process and --disable-zygote for Docker environments. The executable path detection was also tricky. Different systems have Chromium in different locations, so I built fallback logic to find it automatically. This taught me that serverless deployment has its own constraints. You can't just run browser automation anywhere. You need to know your environment and adapt to it.&lt;/p&gt;

&lt;p&gt;Data quality was messier than I expected. Event titles are inconsistent, dates come in different formats, and some events have missing fields. I use Zod schemas for validation, but incomplete data still slips through. Deduplication helps, but I've seen duplicate events when the URLs are slightly different or when the same event appears on multiple platforms with different identifiers. This is the reality of aggregating data from multiple sources. There's no perfect deduplication strategy. For a personal project, it's acceptable. For production, I'd need more sophisticated data cleaning, probably a dedicated validation pipeline. The irony is that the scraping is the easy part. Making the data consistent is where the real work is.&lt;/p&gt;
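&lt;p&gt;As a tiny illustration of that cleaning work (the project validates with Zod schemas; this standalone sketch only shows date coercion):&lt;/p&gt;

```javascript
// Coerce a scraped date string into a canonical ISO timestamp,
// rejecting anything unparseable instead of storing an Invalid Date
function normalizeEventDate(raw) {
  if (!raw) return null;
  const parsed = new Date(raw);
  if (Number.isNaN(parsed.getTime())) return null;
  return parsed.toISOString();
}
```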

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Tech event discovery should be simpler than it is. These platforms have APIs, but they're not free to work with. Scraping fills the gap, but it's fragile. For now, Tech Event Vista solves my problem. I can find tech events without the noise.&lt;/p&gt;

&lt;p&gt;Building this project revealed something important. The real challenge isn't technology. We have powerful tools like Next.js, Puppeteer, BullMQ, and Redis that make something like this possible in a week. The hard part is everything else. Anti-bot systems that constantly evolve, data quality across multiple sources, and the constant maintenance that scraping demands.&lt;/p&gt;

&lt;p&gt;If you find it useful, great. The code lives on GitHub. Fork it, run it, break it, fix it. 🚀&lt;/p&gt;

</description>
      <category>automation</category>
      <category>showdev</category>
      <category>webdev</category>
    </item>
    <item>
      <title>You Don't Always Need useCallback and useMemo</title>
      <dc:creator>Ejeh Daniel</dc:creator>
      <pubDate>Tue, 04 Nov 2025 01:35:46 +0000</pubDate>
      <link>https://dev.to/danishaft/you-dont-always-need-usecallback-and-usememo-47op</link>
      <guid>https://dev.to/danishaft/you-dont-always-need-usecallback-and-usememo-47op</guid>
      <description>&lt;p&gt;&lt;a href="https://react.dev/reference/react" rel="noopener noreferrer"&gt;React Hooks&lt;/a&gt; have revolutionized how we manage logic and side effects in functional components, but two hooks—&lt;a href="https://react.dev/reference/react/useCallback" rel="noopener noreferrer"&gt;useCallback&lt;/a&gt; and &lt;a href="https://react.dev/reference/react/useMemo" rel="noopener noreferrer"&gt;useMemo&lt;/a&gt;—are frequently misunderstood and overused in the process of obsessively trying to memoise.&lt;/p&gt;

&lt;p&gt;This article explains when memoisation actually helps, when it's counterproductive, and which modern alternatives provide better solutions.&lt;/p&gt;

&lt;p&gt;Why do we memoise in the first place? In React, we typically create a memoised value with &lt;strong&gt;useMemo&lt;/strong&gt; or a memoised function with &lt;strong&gt;useCallback&lt;/strong&gt; in order to skip re-rendering a sub-tree. Re-rendering a large sub-tree can be slow, so we want to avoid doing it unnecessarily. To do that, we usually reach for &lt;a href="https://react.dev/reference/react/memo" rel="noopener noreferrer"&gt;&lt;strong&gt;React.memo&lt;/strong&gt;&lt;/a&gt;, which skips re-rendering a component when its props are unchanged.&lt;/p&gt;

&lt;p&gt;Here's the catch: when we pass a function or a non-primitive value as a prop to a memoised component, we need to make sure it has a stable reference. Before React skips re-rendering the sub-tree, it first compares the memoised component's props to check whether they have changed. If our props have unstable references, the optimisation is lost, because the memoised component re-renders anyway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Nah() {
  return (
    &amp;lt;MemoizedComponent
      value={{ hello: 'world' }} // New object reference each render
      onChange={(result) =&amp;gt; console.log('result')} // New function reference each render
    /&amp;gt;
  )
}

function Okay() {
  const value = useMemo(() =&amp;gt; ({ hello: 'world' }), [])
  const onChange = useCallback((result) =&amp;gt; console.log(result), [])

  return &amp;lt;MemoizedComponent value={value} onChange={onChange} /&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is just one of the reasons we memoise; another is to prevent effects from firing too often. When you pass a prop as a dependency to an effect, React performs the same kind of comparison: it checks whether the dependency has changed to decide if the effect should re-run. In both cases, React needs stable references, and caching provides them, which is why useCallback and useMemo are the usual tools for the job.&lt;/p&gt;
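&lt;p&gt;Under the hood, React compares each dependency and each prop of a memoised component with Object.is, which only sees references for objects and functions. A quick illustration:&lt;/p&gt;

```javascript
const a = { hello: "world" };
const b = { hello: "world" };

// Same reference: treated as unchanged
Object.is(a, a); // true

// Equal contents but different references: treated as changed
Object.is(a, b); // false

// Every inline arrow function is a brand-new reference
Object.is(() => {}, () => {}); // false
```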

&lt;h2&gt;
  
  
  When Memoisation Fails
&lt;/h2&gt;

&lt;p&gt;When memoisation isn't solving one of the two problems mentioned above, it becomes useless noise. So there are cases where striving for stability in references is important, and there are others where it's pointless. Let's see some cases where the use of &lt;strong&gt;useCallback&lt;/strong&gt; and &lt;strong&gt;useMemo&lt;/strong&gt; becomes redundant:&lt;/p&gt;

&lt;h3&gt;
  
  
  Case 1: Memoising Props for Unmemoised Components (Zero Performance Gain)
&lt;/h3&gt;

&lt;p&gt;A common mistake is using &lt;strong&gt;useCallback&lt;/strong&gt; or &lt;strong&gt;useMemo&lt;/strong&gt; on a prop being passed to an unmemoised functional component or a React built-in component like a button.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Redundant() {
  const value = useMemo(() =&amp;gt; ({ hello: 'world' }), [])
  const onChange = useCallback((result) =&amp;gt; console.log(result), [])

  return &amp;lt;Component value={value} onChange={onChange} /&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's important to note that neither the custom component nor the built-in button cares whether its props have stable references. If a component isn't wrapped in React.memo, your referential stability buys nothing: you gain no performance improvement while introducing unnecessary boilerplate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Case 2: The Broken Dependency Chain
&lt;/h3&gt;

&lt;p&gt;It's rarely a good idea to add non-primitive props like objects or functions to your dependency arrays. Why? Because your component has no control over whether the parent keeps those references stable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function NotOkay({ onChange }) {
  const handleChange = useCallback((e: React.ChangeEvent) =&amp;gt; {
    trackAnalytics('changeEvent', e)
    onChange?.(e)
  }, [onChange])

  return &amp;lt;SomeMemoizedComponent onChange={handleChange} /&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This useCallback isn't useful in this context, or at best, its value depends on how consumers use the component. In all likelihood, some call site just passes an inline function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;NotOkay onChange={() =&amp;gt; props.doSomething()} /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parent's unstable prop reference forces the child's useCallback to invalidate its cache on every render. The only way the developer writing that call site could know that an unwrapped inline prop breaks some internal memoisation is to drill into the component and see how the props are used.&lt;/p&gt;

&lt;p&gt;That's a horrible developer experience. 😰 The only other popular option is to memoise everything and always reach for useCallback and useMemo, but that isn't a great practice either; it just creates overhead under the hood.&lt;/p&gt;

&lt;p&gt;Now, let's examine a real-world example that demonstrates why excessive memoisation becomes problematic. Even in well-architected codebases, I've seen cascading memoisation failures like this.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Real Life Example
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/WordPress/gutenberg" rel="noopener noreferrer"&gt;WordPress Gutenberg&lt;/a&gt; project is the block editor powering millions of WordPress sites. It went through a major cleanup of unnecessary memoisation. They removed useCallback from multiple components because the function wasn't passed to any hook or memoised component that might require a stable reference.&lt;/p&gt;

&lt;p&gt;Here's a pattern similar to what they found - a block settings component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function BlockSettings({ onUpdate, settings }) {
  // Unnecessary useCallback - passed to regular component
  const handleChange = useCallback((key, value) =&amp;gt; {
    onUpdate({ ...settings, [key]: value });
  }, [settings, onUpdate]);

  return (
    &amp;lt;SettingsPanel&amp;gt;
      &amp;lt;TextControl
        onChange={(val) =&amp;gt; handleChange('title', val)}
      /&amp;gt;
      &amp;lt;ToggleControl
        onChange={(val) =&amp;gt; handleChange('visible', val)}
      /&amp;gt;
    &amp;lt;/SettingsPanel&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Spot the problem? 🔍 The &lt;strong&gt;TextControl&lt;/strong&gt; and &lt;strong&gt;ToggleControl&lt;/strong&gt; aren't memoised components. They're going to re-render whenever &lt;strong&gt;BlockSettings&lt;/strong&gt; re-renders anyway. The useCallback achieves absolutely nothing here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function BlockEditor({ block }) {
  // Another useCallback
  const handleUpdate = useCallback((newSettings) =&amp;gt; {
    updateBlock(block.id, newSettings);
  }, [block.id]);

  // This creates a new object every render!
  const settings = {
    title: block.title,
    visible: block.visible,
    layout: block.layout
  };

  return &amp;lt;BlockSettings settings={settings} onUpdate={handleUpdate} /&amp;gt;;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's trace the cascade:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;settings object&lt;/strong&gt; - Created fresh every render (new object reference)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;handleChange in BlockSettings&lt;/strong&gt; - Depends on settings, so recreated every render despite useCallback&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;handleUpdate in BlockEditor&lt;/strong&gt; - Only stable if block.id doesn't change, but...&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;settings breaks the chain&lt;/strong&gt; - The moment settings is a new object, everything downstream fails&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even though we have two useCallbacks, both are completely useless at this point because settings isn't memoised.&lt;/p&gt;

&lt;p&gt;The WordPress Gutenberg team's solution? They removed the unnecessary useCallbacks entirely. The code became simpler and more maintainable, with zero performance impact because the memoisation was never working anyway.&lt;/p&gt;

&lt;p&gt;This same pattern appears everywhere in React codebases - well-intentioned useCallbacks that achieve nothing because somewhere in the dependency chain, a new object or array is created. Let's take a look at better ways to handle this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Escaping the Dependency Trap
&lt;/h2&gt;

&lt;p&gt;When the immediate goal is to stabilise an effect dependency or avoid breaking memoisation, there are now far better options than always reaching for manual useCallback and useMemo. These approaches let you access the latest state and props inside an effect without forcing the effect to re-run.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Ref Pattern 🎯
&lt;/h3&gt;

&lt;p&gt;This pattern is pretty straightforward; it aims to solve our problem of using unstable references and causing the effect to re-run unnecessarily. What we do here is store the value we want to access in a ref, and update it on every render, that’s it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function useDragHandlers(draggableId: string, callbacks: DragCallbacks) {
  // Store callbacks in a ref
  const callbacksRef = useRef(callbacks);

  // Update ref every render (cheap operation)
  useEffect(() =&amp;gt; {
    callbacksRef.current = callbacks;
  });

  // Handlers never change, but always use the latest callbacks
  const onDragStart = useCallback((event) =&amp;gt; {
    callbacksRef.current.onStart?.(draggableId, event);
  }, [draggableId]); // Only draggableId in dependencies!

  const onDragEnd = useCallback((event) =&amp;gt; {
    callbacksRef.current.onEnd?.(draggableId, event);
  }, [draggableId]);

  useEffect(() =&amp;gt; {
    const element = document.getElementById(draggableId);
    element?.addEventListener('mousedown', onDragStart);
    element?.addEventListener('mouseup', onDragEnd);

    return () =&amp;gt; {
      element?.removeEventListener('mousedown', onDragStart);
      element?.removeEventListener('mouseup', onDragEnd);
    };
  }, [draggableId, onDragStart, onDragEnd]); // These never change now

  return { onDragStart, onDragEnd };
}

// Now consumers don't need ANY memoisation
function DraggableCard({ id, onDragStart, onDragEnd }) {
  // Just pass callbacks directly - no useMemo needed!
  useDragHandlers(id, {
    onStart: onDragStart,
    onEnd: onDragEnd
  });

  // The hook attaches its own listeners, so the element just needs the matching id
  return &amp;lt;div id={id}&amp;gt;Drag me&amp;lt;/div&amp;gt;;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ref always points to the latest callbacks, the handlers never change, the dependencies stay stable, and we've eliminated the entire fragile memoisation chain.&lt;/p&gt;

&lt;p&gt;Many popular component libraries use this pattern to avoid forcing consumers to memoise their callbacks. For example, Headless UI (by Tailwind Labs) and Radix UI both store callback refs internally to ensure components work correctly regardless of whether users memoise their props. Imagine if these libraries required consumers to memoise their options manually; it would be a terrible developer experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  useEffectEvent🆕
&lt;/h3&gt;

&lt;p&gt;React 19.2 recently introduced &lt;a href="https://react.dev/reference/react/experimental_useEffectEvent" rel="noopener noreferrer"&gt;useEffectEvent&lt;/a&gt;, a hook that helps you separate non-reactive logic from effects, avoiding &lt;strong&gt;stale closures&lt;/strong&gt; and unnecessary effect re-runs. In short, it's used when you need imperative access to the latest value of something during a reactive effect without explicitly forcing the effect to re-run. This is now the recommended solution for the pattern described above.&lt;/p&gt;

&lt;p&gt;Here's how you can refactor with useEffectEvent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function useDragHandlers(draggableId: string, callbacks: DragCallbacks) {
  // useEffectEvent handles callbacks that "aren't reactive"
  const handleDragStart = useEffectEvent((event) =&amp;gt; {
    callbacks.onStart?.(draggableId, event);
  });

  const handleDragEnd = useEffectEvent((event) =&amp;gt; {
    callbacks.onEnd?.(draggableId, event);
  });

  useEffect(() =&amp;gt; {
    const element = document.getElementById(draggableId);
    element?.addEventListener('mousedown', handleDragStart);
    element?.addEventListener('mouseup', handleDragEnd);

    return () =&amp;gt; {
      element?.removeEventListener('mousedown', handleDragStart);
      element?.removeEventListener('mouseup', handleDragEnd);
    };
  }, [draggableId]); // Only draggableId needed!

  return { onDragStart: handleDragStart, onDragEnd: handleDragEnd };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes handleDragStart and handleDragEnd non-reactive; they always "see" the latest callbacks.onStart and callbacks.onEnd, and they're referentially stable between renders. The best of all worlds, without having to write a single useless useCallback or useMemo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For new code on React 19.2+:&lt;/strong&gt; Use useEffectEvent instead of the Latest Ref pattern.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For existing codebases or older React versions:&lt;/strong&gt; The Latest Ref pattern remains a solid solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Should You Use useCallback and useMemo?🤔
&lt;/h3&gt;

&lt;p&gt;Now that you've seen the problems and solutions, here's a simple guide for evaluating the use of useCallback and useMemo in your code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Is the function passed to a component wrapped in React.memo()?&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  If NO — ❌ Don't use useCallback&lt;/li&gt;
&lt;li&gt;  If YES — Continue to #2&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Have you measured that the component is slow (&amp;gt;16ms render)?&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  If NO — ❌ Don't use useCallback&lt;/li&gt;
&lt;li&gt;  If YES — Continue to #3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Are all dependencies stable (not props or changing frequently)?&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  If NO — ⚠️ Use useEffectEvent or Latest Ref pattern instead&lt;/li&gt;
&lt;li&gt;  If YES — ✅ useCallback is appropriate&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Is this for a useEffect dependency?&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  — ⚠️ Use useEffectEvent (React 19.2+) or Latest Ref pattern instead&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The key principle:&lt;/strong&gt; Don't memoise unless you can answer "yes" to: &lt;em&gt;"Will this actually prevent something expensive from happening?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you're unsure, don't memoise. It's easier to add optimisation later than to debug broken memoisation chains. Remember, the best code is simple code. Three unnecessary useCallback or useMemo calls are harder to maintain than zero.&lt;/p&gt;
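&lt;p&gt;To see why this whole checklist hinges on reference identity, here's a plain-JavaScript sketch (illustrative only, not React's actual source) of the shallow prop comparison React.memo performs to decide whether it can skip a re-render:&lt;/p&gt;

```javascript
// Sketch of the shallow prop comparison React.memo uses to skip re-renders.
// (Illustrative only -- not React's actual implementation.)
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  // Each prop is compared by reference (Object.is), not by value.
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}

const stableHandler = () => {};

// Same function reference on both renders: memo can bail out.
const bailsOut = shallowEqual(
  { id: 1, onClick: stableHandler },
  { id: 1, onClick: stableHandler }
);

// A freshly created inline function is a new reference on every render,
// so wrapping the child in React.memo buys nothing.
const reRenders = shallowEqual(
  { id: 1, onClick: () => {} },
  { id: 1, onClick: () => {} }
);

console.log(bailsOut, reRenders); // true false
```

&lt;p&gt;That second comparison failing is exactly the case where people reach for useCallback, and exactly the case where the checklist above asks you to first confirm the memoisation will actually pay for itself.&lt;/p&gt;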

</description>
      <category>javascript</category>
      <category>performance</category>
      <category>react</category>
    </item>
    <item>
      <title>Understanding How Computers Actually Work</title>
      <dc:creator>Ejeh Daniel</dc:creator>
      <pubDate>Sat, 01 Nov 2025 12:55:30 +0000</pubDate>
      <link>https://dev.to/danishaft/understanding-how-computers-actually-work-4e0n</link>
      <guid>https://dev.to/danishaft/understanding-how-computers-actually-work-4e0n</guid>
      <description>&lt;p&gt;I use computers every day. You probably do too. But if someone asked me to explain how they really work, the high-level understanding I have would not suffice. I've been writing code for years now. But the gap between typing a command and pixels lighting up? Total mystery.🤔&lt;/p&gt;

&lt;p&gt;So I went down the rabbit hole. I spent more time researching, watching videos, and piecing together explanations from a dozen sources. It turned out to be surprisingly fun and more rewarding than I expected. Once you have the right frame, it's not hard to understand.&lt;/p&gt;

&lt;p&gt;At the core, every computer is simply an electronic machine that takes in input data (keyboard presses, mouse clicks, voice commands, etc.), stores it (short-term or long-term in registers, hard drives, RAM, etc.), processes it (using the CPU), and outputs results in some form (displaying text and images on the monitor, sound via speakers, network packets, etc.).&lt;/p&gt;

&lt;p&gt;Here's the thing: computers are built on &lt;strong&gt;layers of abstraction.&lt;/strong&gt; Each layer, from hardware, firmware, operating system, programming language, and application, hides the complexity beneath it. When you write code or use a website, you don't think about transistors or voltage levels or how the CPU decodes instructions. You just call functions. The layers beneath handle the rest.&lt;/p&gt;

&lt;p&gt;In this article, I'll focus on Storage and Processing, the parts that happen inside the computer and remain mysterious to most of us. The computer turns your input actions into signals and instructions, and the result is something you can interact with. The real question is where those signals and instructions live once they're inside, and how the computer transforms them. We'll unravel all of that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb0ifdyxtp7s7laopmaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb0ifdyxtp7s7laopmaz.png" alt="Concentric circles diagram showing computer abstraction layers from innermost to outermost: Hardware, Firmware, Operating System, Programming Languages, and Applications" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Image:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;A concentric ring or pyramid diagram showing the layer of abstraction progression from&lt;/em&gt; &lt;strong&gt;&lt;em&gt;Hardware&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;→&lt;/em&gt; &lt;strong&gt;&lt;em&gt;OS&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;→&lt;/em&gt; &lt;strong&gt;&lt;em&gt;Applications&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Language: Why Binary and How Bits Encode Everything
&lt;/h2&gt;

&lt;p&gt;Computers don't see the world the way we do. At the most basic level, they only speak binary: a language of 1s and 0s.💻 Each 1 or 0 is called a &lt;strong&gt;bit&lt;/strong&gt; (short for binary digit), and it's the fundamental unit of information. This means that every input or output, including every photo, song, video, email, and program you've ever interacted with, gets translated into or from binary data.&lt;/p&gt;

&lt;p&gt;Modern computers are digital. They don't just relay information. They give electrical signals meaning. A signal under a certain voltage threshold is interpreted as "off" (0), and over a certain threshold is "on" (1). The continuous voltage range is reduced to just two discrete states. This is where the term "digital" originates: from the digits 0 and 1.&lt;/p&gt;

&lt;p&gt;You might wonder why computers use only two values (0 and 1) instead of ten, like our decimal number system (0 to 9). The answer is that two states are much easier to tell apart reliably. When you're working with electrical signals, noise and variations are inevitable. With just two distinct states, one representing 0 and the other representing 1, there's no ambiguity, even when conditions aren't perfect. More states would mean more confusion, more errors, more complexity.&lt;/p&gt;

&lt;p&gt;Binary works just like decimal, except with powers of two. For example, 01001000 in binary equals 72 in decimal. This value represents the letter 'H' in ASCII, a standard whose first 128 characters are directly adopted by Unicode.&lt;/p&gt;

&lt;p&gt;With 8 bits (a byte), you can represent 256 possible values (2^8 = 256, ranging from 0 to 255). Combine enough bits, and you can represent anything: numbers, letters, images, sound, video. What makes that possible are standards. The computer doesn't "know" what 'H' is. It just follows agreed-upon rules like Unicode for text, RGB for images, or WAV for sound.&lt;/p&gt;
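&lt;p&gt;You can verify these conversions in a few lines of JavaScript (a quick illustration, nothing more):&lt;/p&gt;

```javascript
// Binary 01001000 interpreted as a base-2 number is 72 in decimal,
// and code point 72 is the letter 'H' under ASCII/Unicode.
const bits = '01001000';
const value = parseInt(bits, 2);           // base-2 -> 72
const letter = String.fromCharCode(value); // 72 -> 'H'

// 8 bits (one byte) can hold 2^8 = 256 distinct values (0 to 255).
const byteStates = 2 ** 8;

console.log(value, letter, byteStates); // 72 'H' 256
```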

&lt;p&gt;Now, since everything boils down to numbers, how do we store something like a song or a video? Through encoding standards like MP4 for video or JPEG for images, these get converted to numbers that the computer can store and manipulate.&lt;/p&gt;

&lt;p&gt;Everything inside the computer, everything Storage holds, and everything Processing touches, is built on this foundation of 1s and 0s. But those bits are not abstract ideas. They're physical voltages controlled by billions of switches working together. Let's see how.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hardware: From Silicon to Logic
&lt;/h2&gt;

&lt;p&gt;Now let's descend to the hardware layer and talk about how the computer physically creates those 1s and 0s using electricity, and how it uses them to make decisions and perform operations. Remember those binary digits we just discussed? Here's how they physically exist inside the machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Motherboard
&lt;/h3&gt;

&lt;p&gt;Inside your computer case, you'll find many things: a power supply, cooling fans, storage drives, and various cables connecting everything together. But &lt;strong&gt;most of the action takes place on the motherboard&lt;/strong&gt;, the main circuit board of the computer, so that's where we'll focus.&lt;/p&gt;

&lt;p&gt;The motherboard is a printed circuit board (PCB), a flat board with copper wires etched into it, connecting various components. The board can have multiple layers for efficient signal routing.&lt;/p&gt;

&lt;p&gt;The key innovation for modern computers is the integrated circuit (IC), commonly called a chip or microchip. An IC is essentially a complete electronic circuit etched onto a single piece of silicon. This allows billions of tiny components called transistors to be packed onto a chip smaller than your fingernail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key components on the motherboard:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;The CPU (Central Processing Unit)&lt;/strong&gt; - A single chip that plugs into a socket on the motherboard. This is where all the processing happens. Inside the CPU are several specialized components working together: a &lt;strong&gt;control unit&lt;/strong&gt; that orchestrates everything, an &lt;strong&gt;ALU&lt;/strong&gt; that performs calculations, tiny fast memory locations called &lt;strong&gt;registers&lt;/strong&gt;, and small pools of ultra-fast memory called &lt;strong&gt;cache&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;RAM (Random Access Memory)&lt;/strong&gt; - Memory sticks that slot into the motherboard. This is the fast, temporary storage for data that the computer is actively working on.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Chipset&lt;/strong&gt; - A set of chips that manage data flow between the CPU, memory, and peripheral devices. Think of it as the motherboard's traffic controller, coordinating communication between all the major components.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;BIOS/UEFI chip&lt;/strong&gt; - Stores firmware (the low-level software layer between hardware and the operating system) that runs when you first turn on the computer. This firmware performs hardware checks and loads the operating system from storage.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Ports&lt;/strong&gt; - USB ports, network ports, etc., that connect external devices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7karoayigya9b91ukxhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7karoayigya9b91ukxhi.png" alt="Labeled diagram of a computer motherboard showing key components including CPU socket, RAM slots, chipset, PCIe slots, SATA ports, and I/O ports" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Image&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;:Motherboard layout showing the main circuit board with key components like the CPU socket, RAM slots, chipset, and ports.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We'll focus on what happens inside the CPU chip, because that's where processing happens.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Inside the CPU: Transistors and Gates
&lt;/h3&gt;

&lt;p&gt;Under the hood, bits are voltages. In modern circuits, a "high" voltage encodes 1, a "low" voltage encodes 0. Numbers are literally voltage levels flowing through wires. The component responsible for controlling these voltages is called a transistor, an electrically controlled switch that's the fundamental building block of all digital electronics.&lt;/p&gt;

&lt;p&gt;The type used in modern CPUs is called a MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor). Apply voltage to the control? Current flows. Switch ON. Remove voltage? Current stops. Switch OFF.&lt;/p&gt;

&lt;p&gt;A modern processor can pack tens of billions of transistors into a chip the size of your fingernail. Apple's M3 Max chip, for example, contains over 90 billion transistors 🤯, each one flipping on and off billions of times per second.&lt;/p&gt;

&lt;p&gt;Wiring these transistors in specific patterns creates &lt;strong&gt;logic gates&lt;/strong&gt;. Gates are tiny circuits that implement rules. They take voltage inputs (1s and 0s) and produce voltage outputs based on logical operations, like AND gates that only output 1 when both inputs are 1, or NOT gates that flip their input.&lt;/p&gt;

&lt;p&gt;Combine enough of these gates, and you can build something far greater: the &lt;strong&gt;ALU (Arithmetic Logic Unit)&lt;/strong&gt;—the part of your CPU that adds, subtracts, compares numbers, and performs logical operations. All computer math happens here. Want to add two numbers? The ALU chains together XOR gates and AND gates in a specific pattern called an adder circuit. But there's a subtle problem lurking here.&lt;/p&gt;
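&lt;p&gt;To make the adder-circuit idea concrete before we move on, here's a toy JavaScript sketch: it models one-bit gates with boolean logic, wires them into a full adder, and chains eight of them into a ripple-carry adder, the same structure the ALU builds out of physical gates:&lt;/p&gt;

```javascript
// One-bit logic gates modeled on bits (0 or 1)
const AND = (a, b) => a & b;
const OR  = (a, b) => a | b;
const XOR = (a, b) => a ^ b;

// A full adder: sum is a XOR b XOR carryIn;
// carryOut is 1 whenever at least two inputs are 1.
function fullAdder(a, b, carryIn) {
  const sum = XOR(XOR(a, b), carryIn);
  const carryOut = OR(AND(a, b), AND(carryIn, XOR(a, b)));
  return { sum, carryOut };
}

// Ripple-carry adder: chain full adders, least significant bit first
function add8(x, y) {
  let carry = 0;
  let result = 0;
  for (let i = 0; i < 8; i++) {
    const { sum, carryOut } = fullAdder((x >> i) & 1, (y >> i) & 1, carry);
    result |= sum << i;
    carry = carryOut;
  }
  return result; // wraps past 255, just like a real 8-bit register
}

console.log(add8(72, 30)); // 102
```

&lt;p&gt;Nothing in there "knows" arithmetic; addition simply falls out of gates wired in the right pattern.&lt;/p&gt;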

&lt;h3&gt;
  
  
  The Clock: Keeping Everything in Sync
&lt;/h3&gt;

&lt;p&gt;Here's something interesting about voltage. It takes time to ripple through gates. Send a signal through one gate, and it might take a nanosecond to stabilize. Chain thousands of gates together and you've got signals arriving at different times, some still changing while others have already settled. Without coordination, it's chaos. Results would be corrupted mid-flight. Outputs would be garbage.&lt;/p&gt;

&lt;p&gt;That's where the &lt;strong&gt;clock&lt;/strong&gt; comes in. Think of the clock like an orchestra conductor 🎵 keeping all the musicians in time. A CPU's clock ticks billions of times per second (measured in GHz; a 3.5 GHz CPU ticks 3.5 billion times per second). The clock generates consistent electrical pulses sent down the wires. It does this the same way a digital watch keeps time: using a quartz crystal that vibrates at a precise frequency when electricity is applied.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Between ticks:&lt;/strong&gt; Voltages propagate through gates. Computation is happening, signals are flowing, and math is being done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On each tick:&lt;/strong&gt; Results get locked into place (stored in small memory circuits) so the next stage can start fresh with stable values.&lt;/p&gt;

&lt;p&gt;The clock remains the fundamental heartbeat that synchronizes everything and keeps electrons marching in formation.&lt;/p&gt;

&lt;p&gt;So now we have transistors that can switch on and off to represent 1s and 0s. We have gates that use those switches to make decisions. We have an ALU that chains gates together to do math. And we have a clock that keeps everything synchronized. That's processing. But processing is useless without somewhere to put the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Memory: Where Data Lives
&lt;/h2&gt;

&lt;p&gt;Moving up a layer, we need to understand where data lives. Processing is only half the story. The CPU needs somewhere to store the data it's working on, and not all storage is created equal. Modern computers use a memory hierarchy, balancing trade-offs between speed, size, and cost. Think of memory like a pyramid:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu20cpss6g2uqdh43cefr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu20cpss6g2uqdh43cefr.png" alt="Pyramid diagram illustrating computer memory hierarchy from top to bottom: Registers (fastest, smallest), Cache L1/L2/L3, RAM (middle ground), and Storage/Hard Drives (slowest, largest)" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Image&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;: A pyramid diagram illustrating the&lt;/em&gt; &lt;strong&gt;&lt;em&gt;memory hierarchy&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;, showing the trade-off in speed, size, and cost. It progresses from the fastest layer&lt;/em&gt; &lt;strong&gt;&lt;em&gt;Registers&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;at the top, through&lt;/em&gt; &lt;strong&gt;&lt;em&gt;Cache&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;L1/L2/L3 and&lt;/em&gt; &lt;strong&gt;&lt;em&gt;RAM&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;, to&lt;/em&gt; &lt;strong&gt;&lt;em&gt;Storage&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;the largest layer at the bottom.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At the top (fastest, smallest, most expensive):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Registers&lt;/strong&gt; - Tiny storage locations inside the CPU itself. A CPU might have only a few dozen registers, each holding a single number (typically 64 bits). These are where the CPU keeps the data it's using &lt;em&gt;right now&lt;/em&gt;. There are general-purpose registers for holding any data, and special-purpose registers like the &lt;strong&gt;program counter&lt;/strong&gt; which tracks which instruction to execute next and the &lt;strong&gt;instruction register&lt;/strong&gt; which holds the current instruction being processed.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cache&lt;/strong&gt; - Small, fast memory also very close to the CPU. Split into levels: L1 (smallest, fastest, typically per-core), L2 (bigger, slower, also per-core in modern CPUs), L3 (bigger still, slower still, usually shared across all cores). Modern CPUs might have 256 KB of L1 per core, 1-8 MB of L2 per core, and 8-64 MB of shared L3.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;In the middle:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;RAM (Random Access Memory)&lt;/strong&gt; - The main working memory. Your computer might have 8, 16, or 32 GB of RAM. Much larger than cache, but slower to access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;At the bottom (slowest, largest, cheapest):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Storage&lt;/strong&gt; - SSDs and hard drives. Terabytes of space, but thousands of times slower than RAM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This hierarchy exists because of physics and economics. You can't make infinite amounts of super-fast memory, it's expensive and generates too much heat. So computers keep a little bit of ultra-fast memory close to the CPU and larger pools of slower memory further away. But how do these different types of memory actually work, and how does data move between them?&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting CPU and RAM: Buses
&lt;/h3&gt;

&lt;p&gt;Before I show you how memory works, remember that data doesn't magically move; it travels along wires grouped into &lt;strong&gt;buses&lt;/strong&gt; (they "transport" information, like a bus transports passengers).&lt;/p&gt;

&lt;p&gt;There are two critical buses connecting the CPU and RAM:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Address bus&lt;/strong&gt;: The CPU sends memory addresses along this bus to tell RAM which location it wants to access. Think of it like telling a librarian which book you want by giving them the call number.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data bus&lt;/strong&gt;: The actual data travels back and forth along this bus. This is typically 64 bits wide (64 wires running in parallel), allowing 64 bits to transfer simultaneously.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the CPU needs data from RAM, it sends the address on the address bus, and RAM responds by sending the data back on the data bus. This communication takes time, hundreds of clock cycles. This is why the memory hierarchy exists: to keep frequently used data closer.&lt;/p&gt;

&lt;h3&gt;
  
  
  RAM: The Active Workspace
&lt;/h3&gt;

&lt;p&gt;The most common type of RAM is &lt;strong&gt;DRAM&lt;/strong&gt; (Dynamic RAM). Each bit is stored as a tiny electric charge in a capacitor. If the capacitor is charged, that's a 1. If it's empty, that's a 0.&lt;/p&gt;

&lt;p&gt;Reading the charge tells you the bit's value, but reading is destructive, it discharges the capacitor, which must be recharged. Additionally, capacitors naturally leak charge over time, so RAM constantly refreshes itself thousands of times per second. This is why it's called &lt;em&gt;Dynamic&lt;/em&gt; RAM.&lt;/p&gt;

&lt;p&gt;Here's the catch: DRAM is volatile memory because it requires constant power to maintain data. When you turn off the power, the capacitors lose their charge, and everything in RAM disappears. That's why you have to save your work to disk, which uses non-volatile memory that retains data without power: SSDs use flash memory, while traditional hard drives use magnetic storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache and registers use a different technology: SRAM&lt;/strong&gt; (Static RAM). Instead of capacitors, SRAM uses circuits made from transistors wired in a configuration that holds its state as long as power flows, no constant refreshing is needed. The same transistors that make logic gates can also be configured to store values. This makes SRAM much faster than DRAM, but also much more expensive and physically larger for the same amount of storage. That's why you only get megabytes of cache versus gigabytes of RAM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Persistent Storage: Data That Lasts
&lt;/h3&gt;

&lt;p&gt;RAM is fast but forgets everything when you power off. For data that needs to survive, computers use persistent storage like hard drives and SSDs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hard Disk Drives (HDDs)&lt;/strong&gt; store data using magnetism. Inside are spinning metal platters with a tiny head floating above them like a record player. The head magnetizes microscopic spots on the disk, one direction is 1, the opposite is 0. Reading just detects those magnetic fields.&lt;/p&gt;

&lt;p&gt;The problem? Moving parts. The head has to physically swing to the right location, and the platter has to spin to the right spot. That takes time. Millions of clock cycles. But HDDs are cheap and can store terabytes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSDs&lt;/strong&gt; use flash memory, a special type of transistor with an extra "floating gate" that can trap electrons. Force electrons onto that gate, and they stay there even without power. That's your 1. No electrons? That's 0. Reading just checks if charge is trapped.&lt;/p&gt;

&lt;p&gt;No moving parts means SSDs are significantly faster than HDDs. They're also more expensive per gigabyte, but for speed, nothing beats them.&lt;/p&gt;

&lt;p&gt;Modern computers often use both: SSD for the OS and programs, HDD for bulk storage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Location Matters
&lt;/h3&gt;

&lt;p&gt;Here's a quick example: Let's say the CPU needs to add two numbers.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;If the numbers are in registers:&lt;/strong&gt; 1-2 clock cycles. Nearly instant.⚡&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;If they're in L1 cache:&lt;/strong&gt; 4 cycles.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;If they're in L3 cache:&lt;/strong&gt; 40 cycles.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;If they're in RAM:&lt;/strong&gt; 200 cycles.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;If they're on disk:&lt;/strong&gt; Millions of cycles.🐌&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is why good software always tries to keep frequently-used data "close" to the CPU. This principle is called &lt;strong&gt;locality of reference&lt;/strong&gt;. If the CPU just used a piece of data, it'll probably need it again soon, so keep it in cache. Modern CPUs are smart about this. They automatically predict what data you'll need next and pre-load it into cache. When they guess right, your program is faster. When they guess wrong, you wait.&lt;/p&gt;
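&lt;p&gt;Here's a toy model of why locality pays off (purely illustrative; real caches work on fixed-size lines and are built in hardware): a tiny least-recently-used cache that keeps a handful of addresses "close" and counts hits versus trips to "RAM":&lt;/p&gt;

```javascript
// Toy cache: recently used addresses are "close" (cheap hits),
// everything else takes the slow path to "RAM" (a miss).
function makeCache(capacity) {
  const lines = new Map(); // address -> value, kept in recency order
  let hits = 0, misses = 0;
  return {
    read(address, ram) {
      if (lines.has(address)) {
        hits++;
        const value = lines.get(address);
        lines.delete(address); // re-insert to mark as most recently used
        lines.set(address, value);
        return value;
      }
      misses++;
      const value = ram[address]; // slow path: fetch from RAM
      lines.set(address, value);
      if (lines.size > capacity) {
        // Evict the least recently used line (first key in the Map)
        lines.delete(lines.keys().next().value);
      }
      return value;
    },
    stats: () => ({ hits, misses }),
  };
}

const ram = Array.from({ length: 1000 }, (_, i) => i * 2);
const cache = makeCache(4);

// Good locality: reusing the same few addresses turns almost
// every access into a cheap cache hit after the first pass.
for (let pass = 0; pass < 10; pass++) {
  for (const addr of [0, 1, 2, 3]) cache.read(addr, ram);
}
console.log(cache.stats()); // { hits: 36, misses: 4 }
```

&lt;p&gt;Forty reads, only four of them slow. That ratio is the entire reason the memory hierarchy works.&lt;/p&gt;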

&lt;p&gt;Now here's where it all comes together. We've got transistors and gates that can compute. We've got memory that can store data at different speeds. But how does the CPU orchestrate all of these to actually run a program?&lt;/p&gt;

&lt;h2&gt;
  
  
  Processing: How the CPU Executes Instructions
&lt;/h2&gt;

&lt;p&gt;We've seen the building blocks, transistors, gates, and memory. Now, let's explore how the CPU actually &lt;em&gt;runs&lt;/em&gt; a program.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Instruction Cycle
&lt;/h3&gt;

&lt;p&gt;Every program is a list of instructions stored in RAM. Each instruction is just a number; some bits describe the operation (add, subtract, load, store), and other bits point to where the data lives in memory. The CPU runs these instructions one at a time in a loop called the &lt;strong&gt;instruction cycle&lt;/strong&gt;, also known as the fetch-decode-execute cycle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69xua8jdjlzvneg6sfyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69xua8jdjlzvneg6sfyg.png" alt="Flow diagram showing the CPU instruction cycle: arrows connecting RAM to Instruction Register to Control Unit, which coordinates with ALU and Registers, with results flowing back to RAM" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Image:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;A flow visualization of the&lt;/em&gt; &lt;strong&gt;&lt;em&gt;fetch-decode-execute cycle&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;, showing how instructions flow from&lt;/em&gt; &lt;strong&gt;&lt;em&gt;RAM&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;to the&lt;/em&gt; &lt;strong&gt;&lt;em&gt;instruction register&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;, through the&lt;/em&gt; &lt;strong&gt;&lt;em&gt;control unit&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;and&lt;/em&gt; &lt;strong&gt;&lt;em&gt;ALU/registers&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;, and back to memory.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It's worth noting that different CPUs use different instruction sets (called ISAs)—x86-64, ARM, RISC-V, etc. Each has its own binary encoding, which is why software compiled for one processor won't run on another without translation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Putting It All Together
&lt;/h3&gt;

&lt;p&gt;Here's a simple example. Imagine a program that adds two numbers and stores the result. In assembly language (a human-readable version of machine code), it’ll look like this:👇&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LOAD R1, [1000] ; Load the value at memory address 1000 into register R1LOAD R2, [1004] ; Load the value at memory address 1004 into register 
R2ADD R3, R1, R2 ; Add R1 and R2, store result in R3STORE 
R3, [1008] ; Store R3's value back to memory address 1008 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What the CPU does:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Fetch&lt;/strong&gt; the first instruction from RAM&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Decode&lt;/strong&gt; it: "LOAD means get data from memory"&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Execute&lt;/strong&gt;: Send address 1000 to RAM, get back the value, and put it in register R1&lt;/li&gt;
&lt;li&gt; Move to the next instruction, repeat for R2&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Fetch&lt;/strong&gt; the ADD instruction&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Decode&lt;/strong&gt;: "ADD means use the ALU"&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Execute&lt;/strong&gt;: The ALU takes values from R1 and R2, adds them using XOR and AND gate circuits, and puts the result in R3&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Fetch&lt;/strong&gt; the STORE instruction&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Execute&lt;/strong&gt;: Send R3's value to RAM at address 1008&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step is just voltages changing, gates opening and closing, electrons flowing through silicon. But layer enough of these steps together, and you get everything a computer can do.&lt;/p&gt;
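&lt;p&gt;The whole loop can be sketched as a toy interpreter in JavaScript (a made-up instruction format, not a real ISA) that walks the example program above:&lt;/p&gt;

```javascript
// Toy fetch-decode-execute loop. Memory is a plain object mapping
// addresses to values; instructions are small records (illustrative only).
function run(program, memory) {
  const registers = {};
  let pc = 0; // program counter: which instruction to execute next
  while (pc < program.length) {
    const instr = program[pc]; // FETCH the next instruction
    switch (instr.op) {        // DECODE what it means
      case 'LOAD':  // EXECUTE: copy a value from memory into a register
        registers[instr.reg] = memory[instr.addr];
        break;
      case 'ADD':   // EXECUTE: the "ALU" adds two registers
        registers[instr.dest] = registers[instr.a] + registers[instr.b];
        break;
      case 'STORE': // EXECUTE: copy a register back out to memory
        memory[instr.addr] = registers[instr.reg];
        break;
    }
    pc++; // advance to the next instruction
  }
  return memory;
}

const memory = { 1000: 5, 1004: 7, 1008: 0 };
run([
  { op: 'LOAD',  reg: 'R1', addr: 1000 },
  { op: 'LOAD',  reg: 'R2', addr: 1004 },
  { op: 'ADD',   dest: 'R3', a: 'R1', b: 'R2' },
  { op: 'STORE', reg: 'R3', addr: 1008 },
], memory);

console.log(memory[1008]); // 12
```

&lt;p&gt;A real CPU does this in hardware with voltages and gates rather than a switch statement, but the fetch-decode-execute shape is the same.&lt;/p&gt;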

&lt;p&gt;Modern CPUs do clever tricks to go faster: they fetch the next instruction while decoding the current one (called &lt;strong&gt;pipelining&lt;/strong&gt;), they execute multiple instructions at once on different parts of the chip (called &lt;strong&gt;superscalar execution&lt;/strong&gt;), and they run multiple programs simultaneously on different &lt;strong&gt;cores&lt;/strong&gt; (multiple CPUs on one chip). But the fundamental cycle of fetch, decode, execute is always there, happening millions of times in the blink of an eye.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Conductor: How the OS Brings It All Together
&lt;/h2&gt;

&lt;p&gt;We've covered most of storage and processing, the core of how computers work. But there's one more piece needed to make this all useful: something has to coordinate everything we've discussed.&lt;/p&gt;

&lt;p&gt;That's where the &lt;strong&gt;operating system&lt;/strong&gt; comes in. Think of it as the conductor of the orchestra we've been building.&lt;/p&gt;

&lt;p&gt;You're probably running dozens of programs right now. Your browser, music player, and text editor are all loaded in RAM, all waiting for CPU time. The OS manages this by giving each program a tiny slice of CPU time, maybe 10 milliseconds, then switching to the next program. It happens so fast you don't notice. This is &lt;strong&gt;multitasking&lt;/strong&gt; in action.🔄&lt;/p&gt;

&lt;p&gt;The OS also protects programs from each other. Each program thinks it has full access to RAM, but the OS carves up memory into protected sections. If one program crashes, it can't corrupt another's data. A piece of hardware called the MMU (Memory Management Unit) works with the OS to implement virtual memory, where each program gets its own isolated address space. The MMU translates these virtual addresses to actual physical RAM addresses. When RAM fills up, the OS can even move unused data to disk (called paging or swapping), making programs think there's more RAM than physically exists, though accessing this disk-backed memory is much slower than real RAM as we now know.&lt;/p&gt;
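&lt;p&gt;Here's the translation step in miniature (toy page size and table, not how any real MMU is configured): a page table maps a program's virtual page number to a physical frame, and the offset within the page carries over unchanged:&lt;/p&gt;

```javascript
// Toy MMU translation with 4 KB pages.
const PAGE_SIZE = 4096;

// Per-process page table: virtual page number -> physical frame number
const pageTable = new Map([
  [0, 7], // virtual page 0 lives in physical frame 7
  [1, 2], // virtual page 1 lives in physical frame 2
]);

function translate(virtualAddress) {
  const pageNumber = Math.floor(virtualAddress / PAGE_SIZE);
  const offset = virtualAddress % PAGE_SIZE;
  if (!pageTable.has(pageNumber)) {
    // In a real system this traps to the OS (a page fault), which may
    // need to bring the page in from disk before retrying the access.
    throw new Error('page fault');
  }
  return pageTable.get(pageNumber) * PAGE_SIZE + offset;
}

console.log(translate(100));      // 28772 (frame 7 * 4096 + 100)
console.log(translate(4096 + 8)); // 8200  (frame 2 * 4096 + 8)
```

&lt;p&gt;Two programs can both use "address 100" without colliding, because each one's page table points that address at a different physical frame.&lt;/p&gt;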

&lt;p&gt;When your program needs to talk to hardware (display something on screen, save a file, or send data over the network), it doesn't do it directly. It asks the OS through a &lt;strong&gt;system call&lt;/strong&gt;, and the OS handles the details using &lt;strong&gt;device drivers&lt;/strong&gt; that know how to communicate with specific hardware.&lt;/p&gt;

&lt;p&gt;This layering means you can write code that says "save this file" without knowing anything about how your SSD controller works. The OS translates your high-level request into the low-level commands that storage hardware understands.&lt;/p&gt;
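&lt;p&gt;For example, in Node.js (the file name and contents here are mine), saving a file is one high-level call; the runtime turns it into open/write/close system calls, and the OS's drivers handle the storage hardware:&lt;/p&gt;

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// "Save this file" from the program's point of view: one call.
const file = path.join(os.tmpdir(), "hello.txt");
fs.writeFileSync(file, "hello, hardware!"); // becomes open/write/close syscalls

// Reading it back goes through the same layers in reverse.
const back = fs.readFileSync(file, "utf8");
// back is "hello, hardware!"; we never touched the SSD controller directly
```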

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So, how do computers actually work?&lt;/p&gt;

&lt;p&gt;At the bottom, billions of transistors switch on and off, representing 1s and 0s as voltage levels. Move one layer up, and logic gates combine those 1s and 0s into decisions, building up to ALUs that perform math using adder circuits and flags. Another layer up, the memory hierarchy balances speed and size—registers, cache, RAM, and storage each serve data at different speeds, connected by buses. Another layer up, the CPU executes its cycle: fetch, decode, execute, all synchronized by a clock ticking millions of times per millisecond. One more layer, and the OS orchestrates it all, managing programs, protecting memory, and coordinating hardware. At the top sits your code, blissfully unaware of the intricate dance playing out below.&lt;/p&gt;

&lt;p&gt;Remember when I said the gap between typing a command and pixels lighting up was a total mystery? Now you know: your keypress becomes a voltage signal, travels through buses, gets stored as charges in RAM, triggers CPU instructions fetched from memory, executed by billions of transistors arranged in logic gates, all synchronized by a clock ticking billions of times per second, with the result written back to memory and sent to your graphics card to light up specific pixels. What once seemed like magic is now just... well, really clever engineering.✨&lt;/p&gt;

&lt;p&gt;Does understanding all this make you a better programmer? Maybe not directly. But there's something deeply satisfying about knowing how all these components fit together to make a computer work. And that, I think, is pretty remarkable.&lt;/p&gt;

&lt;p&gt;Here’s a list of amazing resources I found while researching this post 🙂📚:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/playlist?list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo" rel="noopener noreferrer"&gt;&lt;strong&gt;Crash Course Computer Science: Episodes 5, 7 and 17&lt;/strong&gt;&lt;/a&gt; - Good resource on CPU operations, memory hierarchy, and instruction cycles&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/i-math/intro-to-truth-tables-boolean-algebra-73b331dd9b94" rel="noopener noreferrer"&gt;&lt;strong&gt;Intro to Truth Tables &amp;amp; Boolean Algebra | by Brett Berry | Math Hacks | Medium&lt;/strong&gt;&lt;/a&gt; - An introduction to Logic and Truth Tables:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=cNN_tTXABUA" rel="noopener noreferrer"&gt;&lt;strong&gt;See How a CPU Works - In One Lesson&lt;/strong&gt;&lt;/a&gt; - A Visual demonstration of fetch-decode-execute cycle&lt;/p&gt;

&lt;p&gt;&lt;a href="https://computer.howstuffworks.com/motherboard.htm" rel="noopener noreferrer"&gt;&lt;strong&gt;How Motherboards Work - HowStuffWorks&lt;/strong&gt;&lt;/a&gt; - Motherboard components and PCB structure&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.howtogeek.com/394678/why-you-cant-use-cpu-clock-speed-to-compare-computer-performance/" rel="noopener noreferrer"&gt;&lt;strong&gt;Why You Can't Use CPU Clock Speed to Compare Computer Performance - How-To Geek&lt;/strong&gt;&lt;/a&gt; - An intro to Clock speed and CPU performance factors&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ed.ted.com/lessons/how-do-hard-drives-work-kanawat-senanan" rel="noopener noreferrer"&gt;&lt;strong&gt;How do hard drives work? - TED-Ed&lt;/strong&gt;&lt;/a&gt; - A deep dive into HDD magnetic storage mechanism&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Computer_memory" rel="noopener noreferrer"&gt;&lt;strong&gt;Computer memory - Wikipedia&lt;/strong&gt;&lt;/a&gt; - Comprehensive coverage of RAM, cache, registers, SRAM vs DRAM&lt;/p&gt;

&lt;p&gt;&lt;a href="https://electronics.stackexchange.com/questions/tagged/flash-memory" rel="noopener noreferrer"&gt;&lt;strong&gt;Stack Exchange - Flash Memory&lt;/strong&gt;&lt;/a&gt; - SSD flash memory technology and floating gates&lt;/p&gt;

</description>
      <category>programming</category>
      <category>computerscience</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>How I Built an n8n Community Node for Neon Database</title>
      <dc:creator>Ejeh Daniel</dc:creator>
      <pubDate>Sun, 07 Sep 2025 22:13:47 +0000</pubDate>
      <link>https://dev.to/danishaft/how-i-built-an-n8n-community-node-for-neon-database-3650</link>
      <guid>https://dev.to/danishaft/how-i-built-an-n8n-community-node-for-neon-database-3650</guid>
      <description>&lt;p&gt;&lt;a href="https://n8n.io/" rel="noopener noreferrer"&gt;n8n&lt;/a&gt; is an open-source automation platform that connects APIs, databases, and tools through visual workflows. You build workflows using nodes—triggers, actions, and transforms—and run them self-hosted or on n8n Cloud. &lt;/p&gt;

&lt;p&gt;Recently, I had to automate a product ops workflow: pull a product catalog, categorise items, surface close alternatives, and sync the results downstream. I used n8n as my automation engine and Neon as the database to handle deduping, versioning, and categorizing so the workflow could make smarter decisions over time.&lt;/p&gt;

&lt;p&gt;n8n provides official nodes for popular services like Slack, Google Sheets, and GitHub, but the real strength shows up in the community. Developers step in to cover the integrations the core team might never build. For example, &lt;a href="https://www.npmjs.com/package/n8n-nodes-apify" rel="noopener noreferrer"&gt;Minhlucvan’s Apify node&lt;/a&gt; brings web scraping workflows into n8n, &lt;a href="https://www.npmjs.com/package/n8n-nodes-kommo" rel="noopener noreferrer"&gt;Yatolstoy’s Kommo node&lt;/a&gt; connects to a well-known CRM, and &lt;a href="https://www.npmjs.com/package/n8n-nodes-applyboard" rel="noopener noreferrer"&gt;Mohsen Hadianfard’s ApplyBoard node&lt;/a&gt; helps education agents sync student applications. These are polished, ready-to-use integrations built by third-party developers and published on the npm registry.&lt;/p&gt;

&lt;p&gt;Here’s where things got interesting. At first, I used n8n’s official Postgres node to connect to my Neon database, but it quickly felt limiting. Neon is a modern, serverless Postgres built for speed, scale, and branching, yet the generic Postgres node couldn’t access features like branch switching. If I wanted to get the most out of Neon inside my workflows, I needed something else. That’s when I decided to build a custom Neon node for n8n.&lt;/p&gt;

&lt;p&gt;The goal was to make something that works, and also build it in a way that followed n8n’s conventions, handled database operations securely (avoiding SQL injection), and matched the quality bar of community nodes. The node needed to feel native to both n8n and Neon.&lt;/p&gt;

&lt;p&gt;The scope became clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Execute custom queries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Standard CRUD (INSERT, SELECT, UPDATE, DELETE)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support Neon’s branch switching via the n8n UI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow n8n resources on building custom nodes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this scope in mind, I also had to decide which type of node architecture to use. n8n supports two approaches: &lt;strong&gt;programmatic nodes&lt;/strong&gt; (code-driven, and required for trigger nodes) and &lt;strong&gt;declarative nodes&lt;/strong&gt; (JSON-based). Given my goals, declarative was the obvious choice. It would keep the node maintainable and aligned with community standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up My Custom Neon Node
&lt;/h2&gt;

&lt;p&gt;Before setting up my environment, the first thing I had to do was get familiar with how n8n nodes are actually built. &lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Before diving in&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 If you’re thinking “maybe I’ll try building a node myself”, here’s what I had to get in place to get my environment running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Node.js + npm&lt;/strong&gt; → the bread and butter of building custom nodes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TypeScript&lt;/strong&gt; → because n8n speaks TS, not plain JS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker + Docker Compose&lt;/strong&gt; → easiest way to spin up n8n locally without headaches&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Some JS/TS + Docker knowledge&lt;/strong&gt; → nothing advanced, but enough to not get stuck on the basics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once I had this stack ready, it was smooth sailing to clone &lt;a href="https://github.com/n8n-io/n8n-nodes-starter" rel="noopener noreferrer"&gt;n8n’s starter template&lt;/a&gt; and strip it down for my own Neon node. 🚀&lt;/p&gt;

&lt;p&gt;Nothing fancy yet, but at this point, only three essentials are really needed to get a custom node running:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;node JSON file&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;node TypeScript file&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;credentials file&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s the bare minimum foundation.&lt;/p&gt;
&lt;h3&gt;
  
  
  Structuring the Node
&lt;/h3&gt;

&lt;p&gt;n8n’s official &lt;strong&gt;resource on building custom nodes&lt;/strong&gt; already lays down the patterns people expect from database integrations, so sticking close to it gave my Neon node the same “native” feel. I also made one early call: go modular. It’s tempting to cram everything into a single file, but Neon workflows can get hairy fast, and n8n’s mental model is all about consistency. Every database node should feel familiar, so I leaned into that.&lt;/p&gt;

&lt;p&gt;In the n8n ecosystem, the recommended file structure is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;actions/&lt;/strong&gt; → sub-directories for each resource. Each contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a resource description file (&lt;code&gt;&amp;lt;resourceName&amp;gt;.resource.ts&lt;/code&gt; or &lt;code&gt;index.ts&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;one file per operation (&lt;code&gt;&amp;lt;operationName&amp;gt;.operation.ts&lt;/code&gt;), each exporting both the operation’s description and its execute function&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;methods/&lt;/strong&gt; → optional, for dynamic parameter functions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;transport/&lt;/strong&gt; → handles the actual communication layer (API calls, DB connections, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that mental model in mind, here’s how my Neon node structure shaped up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nodes/Neon/
├── actions/operations/
│   ├── executeQuery.operation.ts
│   ├── insert.operation.ts
│   ├── select.operation.ts
│   ├── update.operation.ts
│   └── delete.operation.ts
├── helpers/
│   ├── utils.ts
│   └── interface.ts
└── methods/
    ├── credentialTest.ts
    ├── listSearch.ts
    └── resourceMapping.ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This separation of concerns made the whole thing more maintainable, and way easier for me to reason about.&lt;/p&gt;

&lt;p&gt;So basically, an n8n node is a single class in the main TypeScript file. That class acts as the entry point, the hub where credentials, operations, and helper methods all plug in. Mine looked something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export class Neon implements INodeType {
    description: INodeTypeDescription = {
        displayName: 'Neon',
        name: 'neon',
        icon: 'file:neon.svg',
        group: ['input'],
        version: 1,
        credentials: [
            {
                name: 'neonApi',
                required: true,
                testedBy: 'neonApiCredentialTest',
            },
        ],
             properties: [
              //fields the users interact with for each operation
             ]
    };

    methods = {
        credentialTest: { neonApiCredentialTest },
    };
      async execute(this: IExecuteFunctions):                                   Promise&amp;lt;INodeExecutionData[][]&amp;gt; {
}
// execution logic for each operation goes here
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Having that mental model upfront made it much easier to see how each piece fit together as I went deeper.&lt;/p&gt;

&lt;p&gt;The next big decision was choosing a database engine. My first instinct was to use the raw &lt;code&gt;pg&lt;/code&gt; library. It works, but it leaves you with a lot of heavy lifting: managing connection pools, handling transactions, formatting queries. Basically, a lot of boilerplate.&lt;/p&gt;

&lt;p&gt;Instead, I went with &lt;strong&gt;pg-promise&lt;/strong&gt;. It’s lightweight, wraps &lt;code&gt;pg&lt;/code&gt; under the hood, and takes care of the boring stuff for you. That made it a no-brainer for my custom node: less boilerplate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Credentials and Authentication
&lt;/h2&gt;

&lt;p&gt;One of the first hurdles was handling credentials properly. In n8n, credentials abstract away sensitive details like connection URLs, usernames, and passwords. For my Neon node, that meant figuring out how to handle Neon’s connection string.&lt;/p&gt;

&lt;p&gt;At first, I thought: &lt;em&gt;why not just drop a single field for the whole connection string?&lt;/em&gt; Easy enough. But in practice, it got messy fast. One typo, a missing parameter, or a poorly formatted copy-paste, and suddenly nothing works. Not the best option after all.&lt;/p&gt;

&lt;p&gt;So I decided to break things into explicit fields: host, port, database name, username, password, and an SSL toggle. That way, everything is clear, validated, and less error-prone.&lt;/p&gt;

&lt;p&gt;Those fields live inside a &lt;code&gt;NeonApi&lt;/code&gt; credential class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export class NeonApi implements ICredentialType {
    name = 'neonApi';
    displayName = 'Neon Database API';

    properties: INodeProperties[] = [
        { displayName: 'Host', name: 'host', type: 'string', default: '' },
        { displayName: 'Database', name: 'database', type: 'string', default: '' },
        { displayName: 'Username', name: 'user', type: 'string', default: '' },
        { displayName: 'Password', name: 'password', type: 'string', typeOptions: { password: true }, default: '' },
        { displayName: 'Port', name: 'port', type: 'number', default: 5432 },
        { displayName: 'SSL', name: 'ssl', type: 'options', options: [{ name: 'Require', value: 'require' }, { name: 'Allow', value: 'allow' }], default: 'require' },
    ];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how the explicit fields make everything cleaner. This saved me from a lot of silly mistakes.&lt;/p&gt;
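&lt;p&gt;From those fields, the node can assemble the config object the database driver expects. A rough sketch (the field names mirror the credential class above; the exact shape your driver wants may differ, and the host value here is made up):&lt;/p&gt;

```typescript
// Turn the explicit credential fields into a driver-style config object.
interface NeonCredentials {
  host: string;
  database: string;
  user: string;
  password: string;
  port: number;
  ssl: "require" | "allow";
}

function toConnectionConfig(c: NeonCredentials) {
  return {
    host: c.host,
    port: c.port,
    database: c.database,
    user: c.user,
    password: c.password,
    // Neon requires TLS in practice; 'allow' is mostly for local testing
    ssl: c.ssl === "require" ? { rejectUnauthorized: true } : undefined,
  };
}

const config = toConnectionConfig({
  host: "ep-example-123.us-east-2.aws.neon.tech", // hypothetical endpoint
  database: "neondb",
  user: "alex",
  password: "secret",
  port: 5432,
  ssl: "require",
});
```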

&lt;h2&gt;
  
  
  Building the Core Operations
&lt;/h2&gt;

&lt;p&gt;Once credentials were sorted out, the next challenge was to make the core operations work: &lt;strong&gt;INSERT, SELECT, UPDATE, DELETE, and EXECUTE&lt;/strong&gt;. These are the bread and butter of any database node, and my Neon node needed to handle all of them to feel complete.&lt;/p&gt;

&lt;p&gt;At the core, an operation in n8n is a collection of configuration objects. It describes which fields show up in the interface and how users fill them out. Each operation had very different interface needs, so each gets its own description, which keeps the interface consistent and predictable.&lt;/p&gt;

&lt;h3&gt;
  
  
  INSERT Operation: Auto-Map vs Manual Mapping
&lt;/h3&gt;

&lt;p&gt;The first operation I tackled was &lt;strong&gt;INSERT&lt;/strong&gt;. The goal was simple: let users drop new rows into their database tables without needing to write custom queries.&lt;/p&gt;

&lt;p&gt;But sometimes the input data already matched the column names perfectly, and other times it didn’t. So I provided two data-mapping modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto-Map Input Data&lt;/strong&gt; → n8n tries to map item keys directly to table columns. Perfect if the JSON keys from the previous node match the column names in the Neon database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual Map (Define Values Below)&lt;/strong&gt; → if the keys don’t match, the user picks each column and sets its value themselves. More tedious, but it gives full control.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an example of the INSERT operation config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const properties: INodeProperties[] = [
 // Data to send for insert operations
 {
   displayName: 'Map Column Mode',
   name: 'mappingMode',
   type: 'options',
   options: [
     {
       name: 'Auto-Map Input Data to Columns',
       value: 'autoMapInputData',
       description: 'Use when node input properties names exactly match the neon column names',
     },
     {
       name: 'Map Each Column Manually',
       value: 'defineBelow',
       description: 'Set the value for each destination column manually',
     },
   ],
   default: 'autoMapInputData',
   description:
     'Whether to map node input properties and the table data automatically or manually',
   displayOptions: {
     show: {
       resource: ['row'],
       operation: ['insert'],
     },
   },
 },
 {
   displayName: 'Values to Send',
   name: 'valuesToSend',
   placeholder: 'Add Value',
   type: 'fixedCollection',
   typeOptions: {
     multipleValueButtonText: 'Add Value',
     multipleValues: true,
   },
   displayOptions: {
     show: {
       mappingMode: ['defineBelow'],
     },
   },
   default: {},
   options: [
     {
       displayName: 'Values',
       name: 'values',
       values: [
         {
           // eslint-disable-next-line n8n-nodes-base/node-param-display-name-wrong-for-dynamic-options
           displayName: 'Column',
           name: 'column',
           type: 'options',
           // eslint-disable-next-line n8n-nodes-base/node-param-description-wrong-for-dynamic-options
           description:
             'Choose from the list, or specify an ID using an &amp;lt;a href="https://docs.n8n.io/code/expressions/" target="_blank"&amp;gt;expression&amp;lt;/a&amp;gt;',
           typeOptions: {
             loadOptionsMethod: 'getTableColumns',
             loadOptionsDependsOn: ['schema', 'table'],
           },
           default: '',
         },
         {
           displayName: 'Value',
           name: 'value',
           type: 'string',
           default: '',
         },
       ],
     },
   ],
 },
 optionsCollection
];

const displayOptions = {
   show: {
     resource: ['row'],
     operation: ['insert'],
   },
   hide: {
     table: [''],
   },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the user selects manual mode, they see a &lt;strong&gt;“Values to Send”&lt;/strong&gt; field where they can pick columns and provide values one by one. Behind the scenes, I had to normalise data from both modes into the same object structure before building the SQL query; otherwise I’d be dealing with inconsistent data shapes downstream.&lt;/p&gt;

&lt;p&gt;The catch? Not every input object matched the actual table schema. That led to confusing errors where inserts would silently fail. So before building the query, I pulled the schema directly from the database and cross-checked it with the input object. If any mismatch is found, I stop execution early.&lt;/p&gt;
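&lt;p&gt;Here’s a simplified sketch of that normalise-then-validate step (the type and function names are mine, not the node’s actual code):&lt;/p&gt;

```typescript
// Both mapping modes collapse into one column -> value object,
// which is then checked against the real table schema.
type ColumnValue = { column: string; value: unknown };
type Row = { [column: string]: unknown };

function normalise(
  mode: "autoMapInputData" | "defineBelow",
  item: Row,                    // JSON item from the previous node
  manualValues: ColumnValue[],  // entries from "Values to Send"
): Row {
  if (mode === "autoMapInputData") return { ...item };
  return Object.fromEntries(manualValues.map((v) => [v.column, v.value]));
}

function checkAgainstSchema(row: Row, tableColumns: string[]): void {
  // tableColumns would be pulled from the database's information_schema
  const bad = Object.keys(row).filter((c) => !tableColumns.includes(c));
  if (bad.length > 0) {
    // Fail fast instead of letting the INSERT die confusingly later
    throw new Error("Unknown column(s): " + bad.join(", "));
  }
}

const row = normalise("defineBelow", {}, [{ column: "name", value: "Ada" }]);
checkAgainstSchema(row, ["id", "name", "email"]); // passes silently
```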

&lt;h3&gt;
  
  
  Execute Operation: The Pain Point
&lt;/h3&gt;

&lt;p&gt;The INSERT operation was straightforward; EXECUTE was the tricky one. On the surface, the interface is simple: a SQL editor for the raw query, plus a field for query parameters. A user could type something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM users WHERE id = {{ $json.userId }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks fine, but here’s the catch: &lt;strong&gt;n8n expressions (&lt;code&gt;{{ $json.userId }}&lt;/code&gt;) aren’t plain strings&lt;/strong&gt;. They’re dynamic placeholders. I passed them straight into my query, and Neon kept throwing errors.&lt;/p&gt;

&lt;p&gt;The fix was small but critical. I looped through the query, found all expressions, and resolved them before execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (const resolvable of getExpressions(query)) {
  query = query.replace(
    resolvable,
    this.evaluateExpression(resolvable, index) as string
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It worked! As expected, I just had to play nicely with the queries.&lt;/p&gt;
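&lt;p&gt;For reference, a &lt;code&gt;getExpressions&lt;/code&gt; helper can be as simple as a regex scan, assuming expressions always look like &lt;code&gt;{{ ... }}&lt;/code&gt; (a sketch of mine; the node’s real helper may be stricter):&lt;/p&gt;

```typescript
// Pull every {{ ... }} placeholder out of a raw SQL string so each one
// can be resolved with evaluateExpression before the query runs.
function getExpressions(query: string): string[] {
  return query.match(/\{\{[\s\S]*?\}\}/g) ?? [];
}

getExpressions("SELECT * FROM users WHERE id = {{ $json.userId }}");
// -> ["{{ $json.userId }}"]
```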

&lt;h3&gt;
  
  
  SELECT Operation: Condition and Sorting
&lt;/h3&gt;

&lt;p&gt;The SELECT operation wasn’t as painful, but it needed more thinking around the interface. The goal was to let users fetch rows with &lt;strong&gt;filters, conditions, and sorted results&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;So the interface need was clear:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Select Rows&lt;/strong&gt; → A fixed collection where users pick a column, an operator (&lt;code&gt;=&lt;/code&gt;, &lt;code&gt;&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;&lt;/code&gt;, etc.), and a value. Example: &lt;em&gt;Column = age, Operator = &amp;gt;, Value = 18.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Combine Conditions&lt;/strong&gt; → A simple dropdown: &lt;code&gt;AND&lt;/code&gt; or &lt;code&gt;OR&lt;/code&gt;. This lets users build compound filters.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sort&lt;/strong&gt; → A collection field where users choose a column and direction (&lt;code&gt;ASC&lt;/code&gt; or &lt;code&gt;DESC&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Behind the scenes, I build a WHERE clause from these input values and append it to the final query before execution. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User config →&lt;code&gt;{ column: 'age', operator: '&amp;gt;', value: 18 }&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Values → &lt;code&gt;['age', 18]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the built WHERE clause (pg-promise placeholders are 1-based):&lt;br&gt;
&lt;code&gt;WHERE $1:name &amp;gt; $2&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This pattern made SELECT operations safe. Users could stack conditions, combine them, and sort results without writing queries themselves.&lt;/p&gt;
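&lt;p&gt;Sketched out, the condition builder looks something like this (my simplified version, mimicking pg-promise’s 1-based &lt;code&gt;$N:name&lt;/code&gt; identifier and &lt;code&gt;$N&lt;/code&gt; value placeholders; the node’s actual code differs):&lt;/p&gt;

```typescript
// Each condition row contributes two parameters: the column name
// (injected safely via :name) and the value. Indices are 1-based.
type Condition = { column: string; operator: string; value: unknown };

function buildWhere(conditions: Condition[], combine: "AND" | "OR") {
  const values: unknown[] = [];
  const clauses = conditions.map((c) => {
    values.push(c.column, c.value);
    const n = values.length; // placeholder index of the value we just pushed
    return "$" + (n - 1) + ":name " + c.operator + " $" + n;
  });
  return { clause: "WHERE " + clauses.join(" " + combine + " "), values };
}

const built = buildWhere([{ column: "age", operator: ">", value: 18 }], "AND");
// built.clause: "WHERE $1:name > $2", built.values: ["age", 18]
```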

&lt;h3&gt;
  
  
  UPDATE and DELETE Operations
&lt;/h3&gt;

&lt;p&gt;Once INSERT and SELECT were working, &lt;strong&gt;UPDATE&lt;/strong&gt; and &lt;strong&gt;DELETE&lt;/strong&gt; felt much easier. The patterns were already in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Users will specify conditions the same way as SELECT.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For UPDATE, they will choose which columns to modify and what the new values should be.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For DELETE, they will choose to drop the whole table, truncate all the table data, or just delete rows that match the filter conditions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By this point, the heavy lifting was done. Most of the work was making sure inputs were validated against the Neon schema and queries were parameterised for safety.&lt;/p&gt;
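&lt;p&gt;As a sketch, the DELETE operation’s three modes boil down to a small dispatch (simplified from the real node; the table identifier is assumed to be bound as a &lt;code&gt;$1:name&lt;/code&gt; parameter, and the WHERE clause comes pre-built from the same condition builder SELECT uses):&lt;/p&gt;

```typescript
// Map the user's chosen delete mode to a query template. Identifiers go
// through :name parameters, never string concatenation of user input.
type DeleteMode = "drop" | "truncate" | "delete";

function buildDeleteQuery(mode: DeleteMode, whereClause: string): string {
  switch (mode) {
    case "drop":
      return "DROP TABLE $1:name";      // remove the table entirely
    case "truncate":
      return "TRUNCATE TABLE $1:name";  // keep the table, wipe all rows
    case "delete":
      return ("DELETE FROM $1:name " + whereClause).trim();
  }
}

buildDeleteQuery("delete", "WHERE $2:name = $3");
// -> "DELETE FROM $1:name WHERE $2:name = $3"
```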

&lt;h2&gt;
  
  
  Challenges and Moving Forward
&lt;/h2&gt;

&lt;p&gt;Looking back, one of the earliest decisions that saved me countless headaches was sticking with &lt;strong&gt;parameterised queries&lt;/strong&gt;. Raw SQL feels more straightforward, especially when you’re building complex conditions, but parameterisation gave me two wins at once: protection against injection and a consistent way to construct queries programmatically. &lt;code&gt;pg-promise&lt;/code&gt; made this possible with syntax like &lt;code&gt;$1:name&lt;/code&gt; and &lt;code&gt;$2:name&lt;/code&gt;; with that in place, everything became smoother.&lt;/p&gt;

&lt;p&gt;Another key choice for me was leaning on &lt;strong&gt;n8n’s conventions&lt;/strong&gt;. Their documentation and official node patterns became my guardrails. Any time I drifted too far from those patterns, things broke. In n8n, conventions aren’t optional.&lt;/p&gt;

&lt;p&gt;The irony? I spent more time researching, planning, and translating Neon requirements into n8n’s mental model than actually writing code. The hardest part of building a custom node isn’t SQL or TypeScript, it’s learning how to translate the platform’s needs into something that fits seamlessly into n8n’s conventions.&lt;/p&gt;

&lt;p&gt;It’s wild that almost any custom node you can imagine is buildable in just a few weeks. Mine came together in such a short time that it blew my mind.&lt;/p&gt;

&lt;p&gt;The code lives on &lt;a href="https://github.com/danishaft/Neon-db-node" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Fork it, run it, break it, fix it. 🚀&lt;/p&gt;

</description>
      <category>automation</category>
      <category>postgres</category>
      <category>community</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
