n8n is an open-source automation platform that connects APIs, databases, and tools through visual workflows. You build workflows using nodes—triggers, actions, and transforms—and run them self-hosted or on n8n Cloud.
Recently, I had to automate a product ops workflow: pull a product catalog, categorise items, surface close alternatives, and sync the results downstream. I used n8n as my automation engine and Neon as the database to handle deduping, versioning, and categorizing so the workflow could make smarter decisions over time.
n8n provides official nodes for popular services like Slack, Google Sheets, and GitHub, but the real strength shows up in the community. Developers step in to cover the integrations the core team might never build. For example, Minhlucvan’s Apify node brings web scraping workflows into n8n, Yatolstoy’s Kommo node connects to a well-known CRM, and Mohsen Hadianfard’s ApplyBoard node helps education agents sync student applications. These are polished, ready-to-use integrations built by third-party developers and published on the npm registry.
Here’s where things got interesting. At first, I used n8n’s official Postgres node to connect to my Neon database, but it quickly felt limiting. Neon is a modern, serverless Postgres built for speed, scale, and branching, yet the generic Postgres node couldn’t access features like branch switching. If I wanted to get the most out of Neon inside my workflows, I needed something else. That’s when I decided to build a custom Neon node for n8n.
The goal was to make something that works, and also build it in a way that followed n8n’s conventions, handled database operations securely (avoiding SQL injection), and matched the quality bar of community nodes. The node needed to feel native to both n8n and Neon.
The scope became clear:
Execute custom queries
Standard CRUD (INSERT, SELECT, UPDATE, DELETE)
Support Neon’s branch switching via the n8n UI
Follow n8n resources on building custom nodes
With this scope in mind, I also had to decide which type of node architecture to use. n8n supports two approaches: programmatic nodes (full TypeScript classes, required for triggers and more complex logic) and declarative nodes (a simpler, JSON-based syntax). Given my goals, declarative was the obvious choice. It would keep the node maintainable and aligned with community standards.
Setting up My Custom Neon Node
Before setting up my environment, the first thing I had to do was get familiar with how n8n nodes are actually built.
👉 Before diving in
If you’re thinking “maybe I’ll try building a node myself,” here’s what I had to get in place to get my environment running:
Node.js + npm → the bread and butter of building custom nodes
TypeScript → because n8n speaks TS, not plain JS
Docker + Docker Compose → easiest way to spin up n8n locally without headaches
Some JS/TS + Docker knowledge → nothing advanced, but enough to not get stuck on the basics.
Once I had this stack ready, it was smooth sailing to clone n8n’s starter template and strip it down for my own Neon node. 🚀
Nothing fancy yet, but at this point, only three essentials are really needed to get a custom node running:
The node JSON file
The node TypeScript file
The credentials file
That’s the bare minimum foundation.
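For context, the starter template wires those files together through the "n8n" block in package.json, which tells n8n where the compiled node and credentials live. The exact paths below are illustrative, based on my layout:

```json
{
  "n8n": {
    "n8nNodesApiVersion": 1,
    "credentials": ["dist/credentials/NeonApi.credentials.js"],
    "nodes": ["dist/nodes/Neon/Neon.node.js"]
  }
}
```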
Structuring the Node
n8n’s official resource on building custom nodes already lays down the patterns people expect from database integrations, so sticking close to that gave my Neon node the same “native” feel. I also made one early call: go modular. It’s tempting to cram everything into a single file, but Neon workflows can get hairy fast, and n8n’s mental model is all about consistency. Every database node should feel familiar, so I leaned into that.
In the n8n ecosystem, the recommended file structure is this:
actions/ → sub-directories for each resource. Each contains:
- a resource description file (<resourceName>.resource.ts or index.ts)
- one file per operation (<operationName>.operation.ts), each exporting both the operation’s description and its execute function
methods/ → optional, for dynamic parameter functions
transport/ → handles the actual communication layer (API calls, DB connections, etc.)
With that mental model in mind, here’s how my Neon node structure shaped up:
nodes/Neon/
├── actions/operations/
│ ├── executeQuery.operation.ts
│ ├── insert.operation.ts
│ ├── select.operation.ts
│ ├── update.operation.ts
│ └── delete.operation.ts
├── helpers/
│ ├── utils.ts
│ └── interface.ts
└── methods/
├── credentialTest.ts
├── listSearch.ts
└── resourceMapping.ts
This separation of concerns made the whole thing more maintainable, and way easier for me to reason about.
So basically, an n8n node is a single class in the main TypeScript file. That class acts as the entry point: the hub where credentials, operations, and helper methods all plug in. Mine looked something like this:
export class Neon implements INodeType {
  description: INodeTypeDescription = {
    displayName: 'Neon',
    name: 'neon',
    icon: 'file:neon.svg',
    group: ['input'],
    version: 1,
    credentials: [
      {
        name: 'neonApi',
        required: true,
        testedBy: 'neonApiCredentialTest',
      },
    ],
    properties: [
      // fields the users interact with for each operation
    ],
  };

  methods = {
    credentialTest: { neonApiCredentialTest },
  };

  async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    // execution logic for each operation goes here
  }
}
Having that mental model upfront made it much easier to see how each piece fits together as I went deeper.
The next big decision was choosing a database driver. My first instinct was to use the raw pg library. It works, but it leaves a lot of heavy lifting to you: managing connection pools, handling transactions, and formatting queries. In short, more boilerplate code for me.
Instead, I went with pg-promise. It’s lightweight, wraps pg under the hood, and takes care of the boring stuff for you. Less boilerplate made it a no-brainer for my custom node.
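As a sketch of what the transport layer’s connection setup might look like, the credential fields map onto a pg/pg-promise connection config. The function name and credential shape here are my own illustration, not the node’s actual code:

```typescript
// Hypothetical transport-layer helper: turn n8n credential fields into a
// connection config that pg-promise (which wraps pg) accepts.
interface NeonCredentials {
  host: string;
  database: string;
  user: string;
  password: string;
  port: number;
  ssl: 'require' | 'allow';
}

function buildConnectionConfig(cred: NeonCredentials) {
  return {
    host: cred.host,
    port: cred.port,
    database: cred.database,
    user: cred.user,
    password: cred.password,
    // pg accepts an ssl object or false; 'require' maps to TLS here
    ssl: cred.ssl === 'require' ? { rejectUnauthorized: false } : false,
  };
}

// With pg-promise, the config would then be used roughly like:
//   const pgp = pgPromise();
//   const db = pgp(buildConnectionConfig(credentials));
//   const rows = await db.any('SELECT 1');
```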
Managing Credentials and Authentication
One of the first hurdles was handling credentials properly. In n8n, credentials abstract away sensitive details like connection URLs, usernames, and passwords. For my Neon node, that meant figuring out how to handle Neon’s connection string.
At first, I thought: why not just drop a single field for the whole connection string? Easy enough. But in practice, it got messy fast. One typo, a missing parameter, or a poorly formatted copy-paste, and suddenly nothing works. Not the best option after all.
So I decided to break things into explicit fields: host, port, database name, username, password, and an SSL toggle. That way, everything is clear, validated, and less error-prone.
Those fields live inside a NeonApi credential class:
export class NeonApi implements ICredentialType {
  name = 'neonApi';
  displayName = 'Neon Database API';
  properties: INodeProperties[] = [
    { displayName: 'Host', name: 'host', type: 'string', default: '' },
    { displayName: 'Database', name: 'database', type: 'string', default: '' },
    { displayName: 'Username', name: 'user', type: 'string', default: '' },
    { displayName: 'Password', name: 'password', type: 'string', typeOptions: { password: true }, default: '' },
    { displayName: 'Port', name: 'port', type: 'number', default: 5432 },
    { displayName: 'SSL', name: 'ssl', type: 'options', options: [{ name: 'Require', value: 'require' }, { name: 'Allow', value: 'allow' }], default: 'require' },
  ];
}
Having explicit fields keeps everything clean and easy to validate. This saved me from a lot of silly mistakes.
Building the Core Operations
Once credentials were sorted out, the next challenge was to make the core operations work: INSERT, SELECT, UPDATE, DELETE, and EXECUTE. These are the bread and butter of any database node, and my Neon node needed to handle all of them to feel complete.
At its core, an operation in n8n is a collection of configuration objects: it describes which fields show up in the interface and how users fill them out. In practice, each operation had very different interface needs, so each one gets its own description to keep the interface consistent and predictable.
INSERT Operation: Auto-Map vs Manual Mapping
The first operation I tackled was INSERT. The goal was simple: let users drop new rows into their database tables without needing to write custom queries.
But sometimes the input data already matched column names perfectly, other times it didn’t. So I provided two data map modes:
Auto-Map Input Data → n8n maps item keys directly to table columns. Perfect when the JSON keys from the previous node match the column names in the Neon database.
Manual Map (Define Values Below) → if the keys don’t match, the user picks each column and sets its value themselves. More tedious, but it gives full control.
Here is an example of the INSERT operation config:
const properties: INodeProperties[] = [
  // Data to send for insert operations
  {
    displayName: 'Map Column Mode',
    name: 'mappingMode',
    type: 'options',
    options: [
      {
        name: 'Auto-Map Input Data to Columns',
        value: 'autoMapInputData',
        description: 'Use when node input properties names exactly match the neon column names',
      },
      {
        name: 'Map Each Column Manually',
        value: 'defineBelow',
        description: 'Set the value for each destination column manually',
      },
    ],
    default: 'autoMapInputData',
    description: 'Whether to map node input properties and the table data automatically or manually',
    displayOptions: {
      show: {
        resource: ['row'],
        operation: ['insert'],
      },
    },
  },
  {
    displayName: 'Values to Send',
    name: 'valuesToSend',
    placeholder: 'Add Value',
    type: 'fixedCollection',
    typeOptions: {
      multipleValueButtonText: 'Add Value',
      multipleValues: true,
    },
    displayOptions: {
      show: {
        mappingMode: ['defineBelow'],
      },
    },
    default: {},
    options: [
      {
        displayName: 'Values',
        name: 'values',
        values: [
          {
            // eslint-disable-next-line n8n-nodes-base/node-param-display-name-wrong-for-dynamic-options
            displayName: 'Column',
            name: 'column',
            type: 'options',
            // eslint-disable-next-line n8n-nodes-base/node-param-description-wrong-for-dynamic-options
            description: 'Choose from the list, or specify an ID using an <a href="https://docs.n8n.io/code/expressions/" target="_blank">expression</a>',
            typeOptions: {
              loadOptionsMethod: 'getTableColumns',
              loadOptionsDependsOn: ['schema', 'table'],
            },
            default: '',
          },
          {
            displayName: 'Value',
            name: 'value',
            type: 'string',
            default: '',
          },
        ],
      },
    ],
  },
  optionsCollection,
];

const displayOptions = {
  show: {
    resource: ['row'],
    operation: ['insert'],
  },
  hide: {
    table: [''],
  },
};
If the user selects manual mode, they see a “Values to Send” field where they can pick columns and provide values one by one. Behind the scenes, I had to normalise data from both modes into the same object structure before building the SQL query; otherwise I’d be juggling two inconsistent data shapes downstream.
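That normalisation step can be funnelled into one small function. This is a minimal sketch under my own naming (normaliseRow and the ValuesToSend shape are illustrative, not the node’s actual code):

```typescript
// Normalise both mapping modes into one row object keyed by column name.
type ValuesToSend = { values: Array<{ column: string; value: unknown }> };

function normaliseRow(
  mappingMode: 'autoMapInputData' | 'defineBelow',
  itemJson: Record<string, unknown>,
  valuesToSend: ValuesToSend,
): Record<string, unknown> {
  if (mappingMode === 'autoMapInputData') {
    // keys of the incoming item are assumed to match column names
    return itemJson;
  }
  // manual mode: fold the fixedCollection entries into a plain object
  return valuesToSend.values.reduce<Record<string, unknown>>(
    (row, { column, value }) => {
      row[column] = value;
      return row;
    },
    {},
  );
}
```

Either path yields the same `{ column: value }` shape, so the query builder only has to handle one case.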
The catch? Not every input object matched the actual table schema, which led to confusing errors where inserts would silently fail. So before building the query, I pulled the schema directly from the database and cross-checked it against the input object. If any mismatch was found, I stopped execution early.
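That cross-check can be as small as a guard function. A hypothetical sketch, assuming the column list has already been fetched (e.g. from information_schema.columns):

```typescript
// Fail fast when the input row references columns the table doesn't have.
function checkColumnsExist(
  row: Record<string, unknown>,
  schemaColumns: string[],
): void {
  const unknown = Object.keys(row).filter(
    (key) => !schemaColumns.includes(key),
  );
  if (unknown.length > 0) {
    // stopping here turns a silent failure into an actionable error
    throw new Error(`Columns not found in table: ${unknown.join(', ')}`);
  }
}
```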
Execute Operation: The Pain Point
The INSERT operation was straightforward; EXECUTE was the tricky one. On the surface, the interface need is simple: a SQL editor for the raw query, plus a field for query parameters. A user could type something like:
SELECT * FROM users WHERE id = {{ $json.userId }}
Looks fine, but here’s the catch: n8n expressions ({{ $json.userId }}) aren’t plain strings. They’re dynamic placeholders. I passed them straight into my query, and Neon kept throwing errors.
The fix was small but critical. I looped through the query, found all expressions, and resolved them before execution:
for (const resolvable of getExpressions(query)) {
  query = query.replace(
    resolvable,
    this.evaluateExpression(resolvable, index) as string,
  );
}
It worked! As expected, I just had to play nicely with the queries.
SELECT Operation: Conditions and Sorting
The SELECT operation wasn’t harder to build, but it needed more thinking up front. The goal was to let users fetch rows with filters, compound conditions, and sorted results.
So the interface need was clear:
Select Rows → a fixed collection where users pick a column, an operator (=, >, <, etc.), and a value. Example: Column = age, Operator = >, Value = 18.
Combine Conditions → a simple dropdown: AND or OR. This lets users build compound filters.
Sort → a collection field where users choose a column and direction (ASC or DESC).
Behind the scenes, I build a WHERE clause from these input values before appending it to the final query. For example:
User config → { column: 'age', operator: '>', value: 18 }
Values → ['age', 18]
Here’s the built WHERE clause (pg-promise placeholders are 1-indexed, and :name marks an escaped identifier):
WHERE $1:name > $2
A second condition combined with AND extends it to WHERE $1:name > $2 AND $3:name = $4.
This pattern made SELECT operations safe. Users could stack conditions, combine them, and sort results without writing queries themselves.
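A clause builder in that style might look like the following sketch. The function name and shapes are my own illustration; it follows pg-promise’s placeholder convention, where $N:name is escaped as an identifier and plain $N as a value:

```typescript
// Build a parameterised WHERE clause from user-configured conditions.
type Condition = {
  column: string;
  operator: '=' | '>' | '<' | '>=' | '<=';
  value: unknown;
};

function buildWhereClause(
  conditions: Condition[],
  combine: 'AND' | 'OR',
): { clause: string; values: unknown[] } {
  const values: unknown[] = [];
  const parts = conditions.map((c) => {
    // each condition consumes two placeholders: identifier, then value
    values.push(c.column, c.value);
    return `$${values.length - 1}:name ${c.operator} $${values.length}`;
  });
  return {
    clause: parts.length ? `WHERE ${parts.join(` ${combine} `)}` : '',
    values,
  };
}
```

The values array is then handed to pg-promise alongside the clause, so user input never gets spliced into the SQL string directly.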
UPDATE and DELETE Operations
Once INSERT and SELECT were working, UPDATE and DELETE felt much easier. The patterns were already in place:
Users specify conditions the same way as in SELECT.
For UPDATE, they choose which columns to modify and what the new values should be.
For DELETE, they choose whether to drop the whole table, truncate all the table data, or delete only the rows matching the filter conditions.
By this point, the heavy lifting was done. Most of the work was making sure inputs were validated against the Neon schema and queries were parameterised for safety.
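For instance, an UPDATE built in the same parameterised style might look like this sketch (my own naming and shapes, not the node’s actual code):

```typescript
// Build a parameterised UPDATE using pg-promise-style placeholders:
// $N:name is escaped as an identifier, plain $N injects a value safely.
function buildUpdateQuery(
  table: string,
  row: Record<string, unknown>,
  keyColumn: string,
): { query: string; values: unknown[] } {
  const values: unknown[] = [table]; // $1 → table identifier
  const sets = Object.keys(row)
    .filter((column) => column !== keyColumn)
    .map((column) => {
      values.push(column, row[column]);
      return `$${values.length - 1}:name = $${values.length}`;
    });
  values.push(keyColumn, row[keyColumn]); // match rows on the key column
  const query =
    `UPDATE $1:name SET ${sets.join(', ')} ` +
    `WHERE $${values.length - 1}:name = $${values.length}`;
  return { query, values };
}
```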
Challenges and Moving Forward
Looking back, one of the earliest decisions that saved me countless headaches was sticking with parameterised queries. Raw SQL feels more straightforward, especially when you’re building complex conditions, but parameterisation gave me two wins at once: protection against injection and a consistent way to construct queries programmatically. pg-promise made this possible with syntax like $1:name and $2:name; once that clicked, everything became smoother.
Another key choice for me was leaning on n8n’s conventions. Their documentation and official node patterns became my guardrails. Any time I drifted too far from those patterns, things broke. In n8n, conventions aren’t optional.
The irony? I spent more time researching, planning, and translating Neon requirements into n8n’s mental model than actually writing code. The hardest part of building a custom node isn’t SQL or TypeScript, it’s learning how to translate the platform’s needs into something that fits seamlessly into n8n’s conventions.
It’s wild that almost any custom node you can imagine is buildable in just a few weeks. Mine came together in such a short time that it blew my mind.
The code lives on GitHub. Fork it, run it, break it, fix it. 🚀