Federico Moretti

How to Store Persistent Data Across Workflow Executions in n8n

Soon, storing persistent data across workflow executions will be possible in n8n: they released a beta feature called Data tables that stores data in an SQL-like structure which can be shared within projects and between different workflows. It's something n8n users like me have waited a long time for.


It’s finally here. Or, better, it will be here in a while: I’m talking about data persistence in n8n. You should already know that workflows run as executions, so they only keep data during their lifecycle, but you can store inputs and outputs in external databases to preserve data for later.

My company, for example, chose Redis and Aurora for this, depending on the developers’ needs. I feel pretty comfortable with them: I can also use different triggers to get, update, and delete data whenever I have to. But this approach requires existing, external databases.

Data tables, on the other hand, are built-in, so you don’t need other resources to achieve data persistence in n8n. You can get the new feature by installing n8n@next globally, since the latest stable version doesn’t include it. FYI, I’m using v1.113.2 at the time of writing, and it works like a charm.
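If you install n8n through npm, grabbing the prerelease globally looks roughly like this (assuming Node.js and npm are already installed):

```shell
# Install the prerelease of n8n globally via the "next" dist-tag
npm install -g n8n@next

# Check which version was installed
n8n --version
```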

Data tables - 1

You will find Data tables next to the Executions tab, and that makes sense, because you can share them like you do with Credentials. Clicking Create Data table lets you create a new one: you only have to choose a name for it. It’s a really simple operation, and you don’t have to be a DBA to proceed.

Data tables - 2

Once a new data table has been created, you will see it on your dashboard immediately. It comes with three default columns: id, createdAt, and updatedAt. You can add more columns, but you can’t remove the default ones. So I suggest you start by adding a new column, since the default ones are filled automatically.

Data tables - 3

Now, you can create a new workflow or edit an existing one, adding a node to control your data table. Seven new nodes appear on the list, allowing you to work on your tables: you can get, insert, delete, update, and upsert rows, as well as execute nodes depending on whether a table exists or not.

This, combined with sub-workflows or other nodes in the same workflow, enables a lot of new possibilities. In my case, for example, I needed to store a prompt made of about fifteen parts, each taken from an individual Google Docs document. If a document gets updated, a sub-workflow is triggered.

But, if no changes were made, the final prompt is taken from the data table, without executing a heavy sub-workflow that slows down the whole process. It relies on Aurora in production, because data tables are still experimental, but I’ve already planned to switch when they’re ready.
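The caching pattern described above can be sketched like this, using an in-memory Map to stand in for the data table. The function names, the row shape, and the staleness check are illustrative assumptions, not n8n’s actual API:

```typescript
// Sketch of the cache-or-rebuild pattern: names and row shape are
// illustrative, not n8n's API. A Map stands in for the data table.

type CacheRow = { value: string; updatedAt: number };

const dataTable = new Map<string, CacheRow>();

// Stand-in for the "heavy sub-workflow" that rebuilds the prompt
// from a Google Docs document.
function buildPrompt(docId: string): string {
  return `prompt built from ${docId}`;
}

// Return the cached prompt unless the source document changed after
// the cached row was written; in that case rebuild and upsert.
function getPrompt(docId: string, docModifiedAt: number): string {
  const row = dataTable.get(docId);
  if (row && row.updatedAt >= docModifiedAt) {
    return row.value; // cache hit: skip the heavy rebuild
  }
  const value = buildPrompt(docId);
  dataTable.set(docId, { value, updatedAt: Date.now() }); // upsert
  return value;
}
```

In the real workflow, the Map lookup would be a “get row” node, the upsert a data table upsert node, and the rebuild a triggered sub-workflow.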


The point of using a built-in feature, instead of an external database, is that you don’t need to create new credentials or worry about slow connections. Controlling the flow with triggers and data-handling nodes does the rest. I can’t wait for it to be generally available.
