<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eric Goldman</title>
    <description>The latest articles on DEV Community by Eric Goldman (@thisisgoldman).</description>
    <link>https://dev.to/thisisgoldman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F312092%2F9b49c375-133f-42f6-824e-f3979ff39db8.jpg</url>
      <title>DEV Community: Eric Goldman</title>
      <link>https://dev.to/thisisgoldman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thisisgoldman"/>
    <language>en</language>
    <item>
      <title>How PostgreSQL logical decoding and plugins work</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Thu, 22 May 2025 20:24:00 +0000</pubDate>
      <link>https://dev.to/sequin/how-postgresql-logical-decoding-and-plugins-work-471m</link>
      <guid>https://dev.to/sequin/how-postgresql-logical-decoding-and-plugins-work-471m</guid>
      <description>&lt;p&gt;When you're building a change data capture (CDC) pipeline with PostgreSQL, one of the first decisions you'll make is which &lt;a href="https://www.postgresql.org/docs/current/logicaldecoding.html" rel="noopener noreferrer"&gt;output plugin&lt;/a&gt; to use. These plugins determine how your database changes get formatted and delivered—whether you're replicating to another Postgres instance, streaming to Kafka, or delivering changes to a webhook.&lt;/p&gt;

&lt;p&gt;But what exactly are these plugins, and how do they differ in practice? Let's dive in with a concrete example and explore your options.&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple example: see an update flow end-to-end
&lt;/h2&gt;

&lt;p&gt;To understand how different output plugins work, let's start with a simple scenario. We'll set up logical replication, make a change, and see how different plugins format that change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up logical replication
&lt;/h3&gt;

&lt;p&gt;First, you need to enable logical replication in PostgreSQL. You can read the Sequin docs for instructions specific to Postgres providers like AWS, GCP, and Azure. But at a high level, you can run &lt;code&gt;show wal_level;&lt;/code&gt; and check that it returns &lt;code&gt;logical&lt;/code&gt;. If it doesn’t, you can follow &lt;a href="https://sequinstream.com/docs/connect-postgres#enable-logical-replication" rel="noopener noreferrer"&gt;these steps&lt;/a&gt; to enable it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a table and replication slot
&lt;/h3&gt;

&lt;p&gt;Create a table (if you don’t have one already), set its replica identity, and create a publication and replication slot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- 1. Create a test table&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- 2. Set the replica identity to full for your table so all changes flow through:&lt;/span&gt;
&lt;span class="k"&gt;alter&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;properties&lt;/span&gt; &lt;span class="n"&gt;replica&lt;/span&gt; &lt;span class="k"&gt;identity&lt;/span&gt; &lt;span class="k"&gt;full&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- 2. Insert initial data&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'John Doe'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'john@example.com'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- 3. Crate a publication&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="n"&gt;publication&lt;/span&gt; &lt;span class="n"&gt;cool_pub_name&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;all&lt;/span&gt; &lt;span class="n"&gt;tables&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;publish_via_partition_root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- 4. Create a logical replication slot with your chosen plugin&lt;/span&gt;
&lt;span class="c1"&gt;-- Start with test_decoding to get familiar:&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;pg_create_logical_replication_slot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'test_slot'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'test_decoding'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now make the change we want to track:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- The change we're tracking&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'John Smith'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Reading changes from the replication slot
&lt;/h3&gt;

&lt;p&gt;To see the formatted output, you can read from your replication slot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- For test_decoding (simplest to read): &lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;lsn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;xid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_logical_slot_peek_changes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'test_slot'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple update will look very different depending on which output plugin you're using.&lt;/p&gt;

&lt;h2&gt;
  
  
  How changes flow from table to output plugin
&lt;/h2&gt;

&lt;p&gt;When you executed that &lt;code&gt;UPDATE&lt;/code&gt; statement, here's the journey your change took through PostgreSQL to arrive in the replication slot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F847wgamt4kqp8v0rtypo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F847wgamt4kqp8v0rtypo.png" alt="How Changes Flow from Table to Output Plugin - visual selection (1).png" width="800" height="836"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Write-Ahead Log (WAL)
&lt;/h3&gt;

&lt;p&gt;PostgreSQL's WAL does not store pre-decoded, human-readable messages. Instead, it logs low-level binary records describing each change at the storage level. When our user name update happens, the WAL record looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# What's actually in the WAL (simplified)
WAL Record: LSN 0/1A2B3C4
- Relation OID: 16384 (internal table identifier)
- Transaction ID: 12345
- Operation: UPDATE
- Block/offset: physical storage location
- Old tuple: [binary data for old row]
- New tuple: [binary data for new row]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this stage the WAL only knows about internal identifiers and bytes – it does not contain table names, column names, or data in text form. Any readable output you see (e.g. JSON or text changes) is generated later by the decoding plugin – not stored in the WAL itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Logical decoding process
&lt;/h3&gt;

&lt;p&gt;A logical replication slot streams changes by decoding WAL records on demand as a client consumes the slot. Here's what happens when you read from your slot:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL starts from your slot's position (LSN)&lt;/strong&gt; and reads WAL records sequentially&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Looks up table metadata&lt;/strong&gt;: The decoding process uses the WAL record's metadata (like the relation OID) to look up the corresponding table and column definitions in the system catalogs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transforms internal data&lt;/strong&gt;: The binary tuple data gets converted into logical representation with actual table names, column names, and typed values&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assembles complete transactions&lt;/strong&gt;: Internally, the logical decoder assembles transactions in commit order; only after a transaction is fully decoded (and committed) will its changes be passed to the plugin&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Importantly, this decoding happens at read time, not when the WAL is originally written.&lt;/p&gt;
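&lt;p&gt;As a mental model, the steps above can be sketched in a few lines of Python. This is an illustrative toy, not PostgreSQL's actual implementation; the catalog dictionary, relation OID, and tuple values are invented for the example:&lt;/p&gt;

```python
# Toy model of the decoding steps above; the catalog dict, relation OID,
# and tuple values are invented for illustration.
catalog = {16384: {"table": "public.users", "columns": ["id", "name", "email"]}}

def decode(wal_record):
    # Step 2: resolve the relation OID against the catalog snapshot.
    meta = catalog[wal_record["relation_oid"]]
    # Step 3: pair raw tuple values with column names to get named fields.
    new = dict(zip(meta["columns"], wal_record["new_tuple"]))
    return {"table": meta["table"], "op": wal_record["op"], "new": new}

change = decode({
    "relation_oid": 16384,
    "op": "UPDATE",
    "new_tuple": [1, "John Smith", "john@example.com"],
})
print(change)
```

&lt;p&gt;The key point survives the simplification: the WAL record alone is meaningless without the catalog lookup, which is why decoding must happen inside a server that still has that catalog state.&lt;/p&gt;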

&lt;h3&gt;
  
  
  3. Plugin-specific formatting
&lt;/h3&gt;

&lt;p&gt;The output plugin takes the decoded change data and formats it according to its own rules. For our user update, every plugin receives the same structured information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Table name: &lt;code&gt;public.users&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Operation: &lt;code&gt;UPDATE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;New values: &lt;code&gt;{id: 1, name: "John Smith", email: "john@example.com"}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Old values: &lt;code&gt;{name: "John Doe"}&lt;/code&gt; (what changed)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every logical decoding plugin receives the same core information about the change; what differs is how they output it. The &lt;code&gt;test_decoding&lt;/code&gt; plugin formats this as human-readable text, &lt;code&gt;wal2json&lt;/code&gt; converts it to JSON, and &lt;code&gt;pgoutput&lt;/code&gt; encodes it in PostgreSQL's binary logical replication protocol.&lt;/p&gt;

&lt;p&gt;This architecture allows PostgreSQL to support many output formats without changing the underlying WAL format or storing duplicate information. The core database only needs to log changes once in the WAL, and then any number of output plugins can decode those logs and present the data in JSON, SQL, binary, etc., as needed.&lt;/p&gt;
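&lt;p&gt;To make "same core information, different output" concrete, here is an illustrative Python sketch. The formatter functions are invented stand-ins (real plugins are C code running inside the server), but they mimic the shape of the outputs shown below:&lt;/p&gt;

```python
import json

# One decoded change, as every output plugin would receive it.
change = {
    "table": "public.users",
    "op": "UPDATE",
    "new": {"id": 1, "name": "John Smith", "email": "john@example.com"},
    "old": {"name": "John Doe"},
}

def as_test_decoding(c):
    # Roughly mimics test_decoding's human-readable line format.
    cols = " ".join(f"{k}:{v!r}" for k, v in c["new"].items())
    return f"table {c['table']}: {c['op']}: {cols}"

def as_wal2json(c):
    # Roughly mimics wal2json's column-name/column-value layout.
    return json.dumps({
        "kind": c["op"].lower(),
        "table": c["table"].split(".")[1],
        "columnnames": list(c["new"]),
        "columnvalues": list(c["new"].values()),
    })

print(as_test_decoding(change))
print(as_wal2json(change))
```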

&lt;h2&gt;
  
  
  Built-in output plugins
&lt;/h2&gt;

&lt;p&gt;PostgreSQL ships with two logical decoding plugins out of the box. These don't require any additional installations—they're ready to use on any Postgres 10+ server.&lt;/p&gt;

&lt;h3&gt;
  
  
  pgoutput
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgoutput&lt;/code&gt; is PostgreSQL's default plugin for logical replication. If you're using the built-in publish/subscribe system, you're already using this plugin behind the scenes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What our user name update looks like:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Binary protocol message (conceptual representation)
BEGIN LSN: 0/1A2B3C4
TABLE: public.users
UPDATE: id[integer]=1 name[text]='John Smith' (old: 'John Doe') email[text]='john@example.com'
COMMIT LSN: 0/1A2B3C4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The actual output is a binary protocol, so you can't just read it directly. Tools like Sequin or custom replication clients parse this format.&lt;/p&gt;
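&lt;p&gt;As a rough sketch of what "binary protocol" means here, the following Python packs and then unpacks a &lt;code&gt;pgoutput&lt;/code&gt;-style Begin message, following the framing documented in PostgreSQL's logical streaming replication protocol (byte &lt;code&gt;B&lt;/code&gt;, then the commit LSN, commit timestamp, and transaction ID, all in network byte order). The values come from our example; a real client would read these bytes off a replication connection:&lt;/p&gt;

```python
import struct

# pgoutput Begin message framing, per the logical replication protocol docs:
# byte 'B', Int64 commit LSN, Int64 commit timestamp, Int32 xid (big-endian).
BEGIN_FMT = "!cqqI"

# Pack a message with the values from our example transaction.
raw = struct.pack(BEGIN_FMT, b"B", 0x01A2B3C4, 0, 12345)

tag, lsn, commit_ts, xid = struct.unpack(BEGIN_FMT, raw)
hi, lo = divmod(lsn, 2**32)          # LSNs print as two hex halves
print(tag, f"{hi:X}/{lo:X}", xid)    # b'B' 0/1A2B3C4 12345
```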

&lt;p&gt;&lt;strong&gt;Trade-offs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Binary format is efficient and compact&lt;/li&gt;
&lt;li&gt;✅ Handles complex PostgreSQL data types without losing information&lt;/li&gt;
&lt;li&gt;✅ High performance with incremental streaming&lt;/li&gt;
&lt;li&gt;✅ Production-ready and universally supported on managed services&lt;/li&gt;
&lt;li&gt;✅ Used by PostgreSQL's native logical replication&lt;/li&gt;
&lt;li&gt;❌ Requires special tools to read (can't just use SQL functions)&lt;/li&gt;
&lt;li&gt;❌ Binary protocol is not human-readable for debugging&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  test_decoding
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;test_decoding&lt;/code&gt; is PostgreSQL's example plugin. It's mainly useful for understanding how logical decoding works or for quick debugging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What our user name update looks like:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;BEGIN&lt;/span&gt; &lt;span class="mi"&gt;12345&lt;/span&gt;
&lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="s1"&gt;'John Smith'&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="s1"&gt;'john@example.com'&lt;/span&gt;
&lt;span class="k"&gt;COMMIT&lt;/span&gt; &lt;span class="mi"&gt;12345&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Trade-offs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Text format is human-readable and easy to understand&lt;/li&gt;
&lt;li&gt;✅ Great for learning how logical decoding works&lt;/li&gt;
&lt;li&gt;✅ Included with PostgreSQL by default&lt;/li&gt;
&lt;li&gt;✅ Useful for quick debugging of replication issues&lt;/li&gt;
&lt;li&gt;❌ Output format is not designed for production parsing&lt;/li&gt;
&lt;li&gt;❌ No advanced features like filtering or sophisticated type handling&lt;/li&gt;
&lt;li&gt;❌ Limited functionality compared to other plugins&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Popular third-party plugins
&lt;/h2&gt;

&lt;p&gt;The PostgreSQL community has created several plugins for specific integration needs. Here's a high-level overview of the most common ones:&lt;/p&gt;

&lt;h3&gt;
  
  
  wal2json
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;wal2json&lt;/code&gt; outputs changes in JSON format, making it easy to work with in any programming language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What our user name update looks like:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"change"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"schema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"public"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"table"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"users"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"columnnames"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"columnvalues"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John Smith"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"john@example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-01-15T10:30:00"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"oldkeys"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"keynames"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"keyvalues"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"oldvalues"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John Doe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"john@example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-01-15T10:30:00"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
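&lt;p&gt;Because the payload is plain JSON, consuming it takes only a few lines in any language. A minimal Python sketch, using an abbreviated version of the payload above:&lt;/p&gt;

```python
import json

# Abbreviated wal2json payload from the example above.
payload = json.loads("""
{"change": [{"kind": "update", "schema": "public", "table": "users",
             "columnnames": ["id", "name", "email"],
             "columnvalues": [1, "John Smith", "john@example.com"]}]}
""")

for change in payload["change"]:
    # Pair column names with values to recover a row dict.
    row = dict(zip(change["columnnames"], change["columnvalues"]))
    print(change["kind"], change["schema"] + "." + change["table"], row)
```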



&lt;p&gt;&lt;strong&gt;Trade-offs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Easy to parse in any language&lt;/li&gt;
&lt;li&gt;✅ Human-readable format&lt;/li&gt;
&lt;li&gt;✅ Supported on most managed services (RDS, Cloud SQL)&lt;/li&gt;
&lt;li&gt;❌ Higher overhead than binary formats&lt;/li&gt;
&lt;li&gt;❌ Can struggle with very large transactions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  decoderbufs
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;decoderbufs&lt;/code&gt; uses Protocol Buffers for efficient binary serialization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What our user name update looks like:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight protobuf"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;Binary&lt;/span&gt; &lt;span class="n"&gt;protobuf&lt;/span&gt; &lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conceptual&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;RowMessage&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;transaction_id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;12345&lt;/span&gt;
  &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"public.users"&lt;/span&gt;
  &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;UPDATE&lt;/span&gt;
  &lt;span class="n"&gt;new_tuple&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;INTEGER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"John Smith"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"john@example.com"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="n"&gt;old_tuple&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"John Doe"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Trade-offs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Very efficient binary format&lt;/li&gt;
&lt;li&gt;✅ Schema-defined structure&lt;/li&gt;
&lt;li&gt;✅ Great for high-throughput scenarios&lt;/li&gt;
&lt;li&gt;❌ Requires building and installing&lt;/li&gt;
&lt;li&gt;❌ Not available on most managed services&lt;/li&gt;
&lt;li&gt;❌ More complex to work with&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Specialized plugins
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;wal2mongo&lt;/strong&gt;: Specifically designed for replicating to MongoDB, outputs MongoDB-compatible JSON operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;decoder_raw&lt;/strong&gt;: Outputs changes as raw SQL statements that you could execute on another database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the right plugin
&lt;/h2&gt;

&lt;p&gt;Your choice often comes down to a few practical factors:&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment constraints
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Managed services&lt;/strong&gt; (AWS RDS, Google Cloud SQL, Azure): You're typically limited to &lt;code&gt;pgoutput&lt;/code&gt;, &lt;code&gt;test_decoding&lt;/code&gt;, and &lt;code&gt;wal2json&lt;/code&gt;. Choose &lt;code&gt;pgoutput&lt;/code&gt; for Postgres-to-Postgres replication or &lt;code&gt;wal2json&lt;/code&gt; for external integrations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-hosted&lt;/strong&gt;: You have full flexibility. Consider &lt;code&gt;decoderbufs&lt;/code&gt; for high-performance scenarios or stick with &lt;code&gt;pgoutput&lt;/code&gt; for simplicity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance requirements
&lt;/h3&gt;

&lt;p&gt;For high-volume scenarios:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;pgoutput&lt;/strong&gt; - Best overall choice for most use cases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;decoderbufs&lt;/strong&gt; - If you need non-Postgres output and can manage the complexity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;wal2json&lt;/strong&gt; - Convenient but can be a bottleneck at scale&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Output plugin vs. final format
&lt;/h2&gt;

&lt;p&gt;One common confusion: the output plugin format isn't necessarily the format your application sees. &lt;/p&gt;

&lt;p&gt;For example, with Debezium:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debezium can use &lt;code&gt;pgoutput&lt;/code&gt; to receive binary data from Postgres&lt;/li&gt;
&lt;li&gt;Then convert it to JSON for delivery to sinks&lt;/li&gt;
&lt;li&gt;Your application sees the JSON format, not the &lt;code&gt;pgoutput&lt;/code&gt; format&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>dataengineering</category>
      <category>learning</category>
    </item>
    <item>
      <title>Standing up Debezium Server for Postgres CDC</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Wed, 14 May 2025 18:53:54 +0000</pubDate>
      <link>https://dev.to/sequin/standing-up-debezium-server-for-postgres-cdc-b75</link>
      <guid>https://dev.to/sequin/standing-up-debezium-server-for-postgres-cdc-b75</guid>
      <description>&lt;p&gt;In this tutorial, you'll set up a complete CDC pipeline using &lt;strong&gt;Debezium Server&lt;/strong&gt; (version 3.1) to capture changes from a PostgreSQL database and stream them directly to a webhook endpoint. Unlike the Kafka Connect runtime, Debezium Server provides a lightweight standalone application that can send change events directly to various destinations without requiring a Kafka cluster. Keep in mind that your trading out the overhead of Kafka for less scale, performance, and delivery guarantees.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you'll learn:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to configure PostgreSQL for logical replication&lt;/li&gt;
&lt;li&gt;How to set up Debezium Server to capture database changes&lt;/li&gt;
&lt;li&gt;How to stream these changes directly to a webhook endpoint&lt;/li&gt;
&lt;li&gt;How to observe and work with CDC events from your database&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;Here's what you're building:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A PostgreSQL database with logical replication enabled&lt;/li&gt;
&lt;li&gt;Debezium Server running as a standalone application&lt;/li&gt;
&lt;li&gt;A webhook endpoint (using webhook.site) that receives the change events&lt;/li&gt;
&lt;li&gt;A simple "customers" table that you'll monitor for changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you're done, any change to the customers table will be captured by Debezium Server and sent as a JSON event to your webhook endpoint in real-time. This setup provides a foundation for building event-driven architectures without the complexity of managing a Kafka cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Configure PostgreSQL for logical replication
&lt;/h2&gt;

&lt;p&gt;Ensure logical replication is enabled on your Postgres database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;psql &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"SHOW wal_level;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see &lt;code&gt;logical&lt;/code&gt; in the output. If not, run the following commands to enable logical replication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Connect to PostgreSQL as the postgres user to modify system settings&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"ALTER SYSTEM SET wal_level = logical;"&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"ALTER SYSTEM SET max_replication_slots = 10;"&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"ALTER SYSTEM SET max_wal_senders = 10;"&lt;/span&gt;

&lt;span class="c"&gt;# Restart PostgreSQL to apply changes&lt;/span&gt;
&lt;span class="c"&gt;# For Linux (systemd):&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart postgresql

&lt;span class="c"&gt;# For macOS (Homebrew):&lt;/span&gt;
brew services restart postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the Write-Ahead Log (WAL) level to "logical", enabling detailed change tracking&lt;/li&gt;
&lt;li&gt;Configure replication slots to allow Debezium to track its position&lt;/li&gt;
&lt;li&gt;Increase the number of WAL sender processes that can run simultaneously&lt;/li&gt;
&lt;/ul&gt;
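
&lt;p&gt;If you manage &lt;code&gt;postgresql.conf&lt;/code&gt; directly (for example, via configuration management), the equivalent settings are:&lt;/p&gt;

```
# postgresql.conf: equivalent to the ALTER SYSTEM commands above
wal_level = logical
max_replication_slots = 10
max_wal_senders = 10
```

&lt;p&gt;A restart is still required either way, since &lt;code&gt;wal_level&lt;/code&gt; cannot be changed at runtime.&lt;/p&gt;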

&lt;h2&gt;
  
  
  Step 2: Create a database user and sample data
&lt;/h2&gt;

&lt;p&gt;Debezium requires a PostgreSQL user with replication privileges and a table to monitor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a dedicated user for Debezium&lt;/span&gt;
psql &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"CREATE ROLE dbz WITH LOGIN PASSWORD 'dbz' REPLICATION;"&lt;/span&gt;

&lt;span class="c"&gt;# Create a sample database&lt;/span&gt;
createdb &lt;span class="nt"&gt;-U&lt;/span&gt; postgres inventory

&lt;span class="c"&gt;# Create a sample table&lt;/span&gt;
psql &lt;span class="nt"&gt;-d&lt;/span&gt; inventory &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;SQL&lt;/span&gt;&lt;span class="sh"&gt;'
CREATE TABLE customers (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT UNIQUE,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- Set REPLICA IDENTITY to FULL to capture old values on updates and deletes
ALTER TABLE customers REPLICA IDENTITY FULL;

-- Grant necessary permissions to the dbz user
GRANT ALL PRIVILEGES ON DATABASE inventory TO dbz;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO dbz;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO dbz;
&lt;/span&gt;&lt;span class="no"&gt;SQL
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;REPLICA IDENTITY FULL&lt;/code&gt; setting ensures that both old and new values are captured for updates and deletes, which is crucial for comprehensive change tracking.&lt;/p&gt;
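
&lt;p&gt;To see why this matters, compare the shape of an update event's &lt;code&gt;before&lt;/code&gt; field under each setting. The events below are hand-written illustrations (not captured Debezium output), but they show the difference a consumer experiences:&lt;/p&gt;

```python
# Hand-written illustrations of how REPLICA IDENTITY affects update events.
# Without FULL, the old row image is unavailable (shown here as None);
# with FULL, "before" carries the complete previous row.

update_without_full = {
    "op": "u",
    "before": None,
    "after": {"id": 1, "name": "Alice Johnson", "email": "alice.new@example.com"},
}

update_with_full = {
    "op": "u",
    "before": {"id": 1, "name": "Alice Johnson", "email": "alice@example.com"},
    "after": {"id": 1, "name": "Alice Johnson", "email": "alice.new@example.com"},
}

def changed_columns(event):
    """Diff old and new row images; None when the old image is missing."""
    before, after = event["before"], event["after"]
    if before is None or after is None:
        return None
    return {col for col in after if before.get(col) != after.get(col)}
```

&lt;p&gt;With FULL identity, &lt;code&gt;changed_columns(update_with_full)&lt;/code&gt; returns &lt;code&gt;{'email'}&lt;/code&gt;; without the old row image, that diff is impossible.&lt;/p&gt;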

&lt;h2&gt;
  
  
  Step 3: Download and set up Debezium Server
&lt;/h2&gt;

&lt;p&gt;Debezium Server is a standalone application that connects to PostgreSQL and forwards change events to various sinks. Let's download and extract it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download Debezium Server&lt;/span&gt;
curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; debezium-server.zip https://repo1.maven.org/maven2/io/debezium/debezium-server-dist/3.1.1.Final/debezium-server-dist-3.1.1.Final.zip

&lt;span class="c"&gt;# Extract the archive&lt;/span&gt;
unzip debezium-server.zip

&lt;span class="c"&gt;# Navigate to the Debezium Server directory&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;debezium-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll now have a directory structure containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;run.sh&lt;/code&gt; - The script to start Debezium Server&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lib/&lt;/code&gt; - JAR files for Debezium and its dependencies&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;config/&lt;/code&gt; - Configuration directory&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 4: Create a webhook endpoint
&lt;/h2&gt;

&lt;p&gt;To test Debezium Server without standing up your own endpoint, use webhook.site to receive deliveries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;a href="https://webhook.site" rel="noopener noreferrer"&gt;https://webhook.site&lt;/a&gt; in your browser&lt;/li&gt;
&lt;li&gt;A unique URL will be automatically generated for you (it looks like &lt;code&gt;https://webhook.site/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Copy this URL - you’ll use it in the Debezium configuration&lt;/li&gt;
&lt;li&gt;Keep this browser tab open to see events as they arrive&lt;/li&gt;
&lt;/ol&gt;
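
&lt;p&gt;If you'd rather not rely on an external service, a minimal local stand-in for webhook.site takes a few lines of Python (standard library only; the port is an arbitrary choice):&lt;/p&gt;

```python
# Minimal local webhook receiver: prints each POSTed change event.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            print(json.dumps(json.loads(body), indent=2))
        except json.JSONDecodeError:
            print("non-JSON payload:", body[:200])
        self.send_response(200)  # a 2xx response tells the sink delivery succeeded
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console output to event payloads only

def serve(port=8000):
    # Point debezium.sink.http.url at http://localhost:8000/ and call serve()
    HTTPServer(("", port), WebhookHandler).serve_forever()
```

&lt;p&gt;Run &lt;code&gt;serve()&lt;/code&gt; in one terminal and use its URL as the sink target instead of the webhook.site URL.&lt;/p&gt;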

&lt;h2&gt;
  
  
  Step 5: Configure Debezium Server
&lt;/h2&gt;

&lt;p&gt;Create or modify the &lt;code&gt;application.properties&lt;/code&gt; file in the &lt;code&gt;config/&lt;/code&gt; directory to tell Debezium where to connect and where to send events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create or modify the configuration file
# PostgreSQL source connector configuration
&lt;/span&gt;&lt;span class="py"&gt;debezium.source.connector.class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;io.debezium.connector.postgresql.PostgresConnector&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.offset.storage.file.filename&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;data/offsets.dat&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.offset.flush.interval.ms&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.database.hostname&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;localhost&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.database.port&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;5432&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.database.user&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;dbz&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.database.password&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;dbz&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.database.dbname&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;inventory&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.database.server.name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;inventory-server&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.schema.include.list&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;public&lt;/span&gt;
&lt;span class="py"&gt;debezium.source.table.include.list&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;public.customers&lt;/span&gt;

&lt;span class="c"&gt;# Set a unique replication slot name to avoid conflicts
&lt;/span&gt;&lt;span class="py"&gt;debezium.source.slot.name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;debezium_tutorial&lt;/span&gt;

&lt;span class="c"&gt;# Topic prefix configuration
&lt;/span&gt;&lt;span class="py"&gt;debezium.source.topic.prefix&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;inventory-server&lt;/span&gt;

&lt;span class="c"&gt;# For capturing all changes including updates and deletes in full
&lt;/span&gt;&lt;span class="py"&gt;debezium.source.tombstones.on.delete&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;false&lt;/span&gt;

&lt;span class="c"&gt;# Initial snapshot configuration
&lt;/span&gt;&lt;span class="py"&gt;debezium.source.snapshot.mode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;initial&lt;/span&gt;

&lt;span class="c"&gt;# HTTP sink (webhook) configuration
&lt;/span&gt;&lt;span class="py"&gt;debezium.sink.type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;
&lt;span class="py"&gt;debezium.sink.http.url&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;YOUR_WEBHOOK.SITE_URL_HERE&lt;/span&gt;
&lt;span class="py"&gt;debezium.sink.http.timeout.ms&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;10000&lt;/span&gt;

&lt;span class="c"&gt;# JSON formatter
&lt;/span&gt;&lt;span class="py"&gt;debezium.format.value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;json&lt;/span&gt;
&lt;span class="py"&gt;debezium.format.key&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;YOUR_WEBHOOK.SITE_URL_HERE&lt;/code&gt; with the URL you copied from webhook.site.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Start Debezium Server
&lt;/h2&gt;

&lt;p&gt;Now that everything is configured, start Debezium Server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Make the run script executable&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x run.sh

&lt;span class="c"&gt;# Start Debezium Server&lt;/span&gt;
./run.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Prepare for a wall of Java logs. You should see output indicating that Debezium Server is starting, with messages about Quarkus, the connector, and eventually reaching a "started" state.&lt;/p&gt;

&lt;p&gt;Common startup messages include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quarkus initialization&lt;/li&gt;
&lt;li&gt;PostgreSQL connector configuration&lt;/li&gt;
&lt;li&gt;Connection to the database&lt;/li&gt;
&lt;li&gt;Snapshot process (if this is the first run)&lt;/li&gt;
&lt;li&gt;HTTP sink initialization&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 7: Test the setup with database changes
&lt;/h2&gt;

&lt;p&gt;Open a new terminal (keep Debezium Server running in the first one) and execute some changes to the customers table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Insert some test data&lt;/span&gt;
psql &lt;span class="nt"&gt;-d&lt;/span&gt; inventory &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;SQL&lt;/span&gt;&lt;span class="sh"&gt;'
INSERT INTO customers (name, email) VALUES 
  ('Alice Johnson', 'alice@example.com'),
  ('Bob Smith', 'bob@example.com');
&lt;/span&gt;&lt;span class="no"&gt;SQL

&lt;/span&gt;&lt;span class="c"&gt;# Update a record&lt;/span&gt;
psql &lt;span class="nt"&gt;-d&lt;/span&gt; inventory &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"UPDATE customers SET email = 'alice.new@example.com' WHERE name = 'Alice Johnson';"&lt;/span&gt;

&lt;span class="c"&gt;# Delete a record&lt;/span&gt;
psql &lt;span class="nt"&gt;-d&lt;/span&gt; inventory &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"DELETE FROM customers WHERE name = 'Bob Smith';"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 8: Observe the results
&lt;/h2&gt;

&lt;p&gt;Switch back to your webhook.site browser tab. You should see several POST requests that correspond to the database operations you just performed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Two insert events (one for each new customer)&lt;/li&gt;
&lt;li&gt;An update event (when you changed Alice's email)&lt;/li&gt;
&lt;li&gt;A delete event (when you removed Bob's record)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each event contains a JSON payload with details about the operation and the data. For example, an insert event might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"schema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;schema&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;information&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;*/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"payload"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"after"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice Johnson"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alice@example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2023-05-06T10:15:30.123456Z"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;/*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;metadata&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;about&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;event&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;source&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;*/&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"db"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"inventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"table"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"public.customers"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"operation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ts_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1683367230123&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"op"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ts_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1683367230456&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;op&lt;/code&gt; field indicates the operation type:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;c&lt;/code&gt; for create (insert)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;u&lt;/code&gt; for update&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;d&lt;/code&gt; for delete&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;r&lt;/code&gt; for read (during initial snapshot)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For updates, both &lt;code&gt;before&lt;/code&gt; and &lt;code&gt;after&lt;/code&gt; fields are populated, showing the previous and new values.&lt;/p&gt;
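
&lt;p&gt;When you consume these events in code, a small dispatcher on the &lt;code&gt;op&lt;/code&gt; field is usually the first building block. A sketch (field names follow the example payload above):&lt;/p&gt;

```python
# Sketch: route a Debezium event envelope by operation type.
OP_NAMES = {"c": "create", "u": "update", "d": "delete", "r": "snapshot read"}

def describe(event):
    """Return (operation name, affected row) for one change event."""
    payload = event["payload"]
    op = payload["op"]
    # Deletes carry the removed row in "before"; other ops populate "after".
    row = payload["before"] if op == "d" else payload["after"]
    return OP_NAMES.get(op, "unknown"), row
```

&lt;p&gt;For instance, &lt;code&gt;describe({"payload": {"op": "d", "before": {"id": 2}, "after": None}})&lt;/code&gt; returns &lt;code&gt;("delete", {"id": 2})&lt;/code&gt;.&lt;/p&gt;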

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;Let's understand what's happening under the hood:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PostgreSQL logical replication&lt;/strong&gt;: The WAL settings enable PostgreSQL to maintain a log of changes that can be read by external processes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Debezium Server&lt;/strong&gt;: Acts as a standalone change data capture service that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connects to PostgreSQL using the configured credentials&lt;/li&gt;
&lt;li&gt;Reads the WAL stream to detect changes&lt;/li&gt;
&lt;li&gt;Converts database changes to structured JSON events&lt;/li&gt;
&lt;li&gt;Forwards these events to the configured sink (webhook in our case)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Webhook endpoint&lt;/strong&gt;: Receives HTTP POST requests containing the change events as JSON payloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Next steps and variations
&lt;/h2&gt;

&lt;p&gt;Now that you have a working Debezium setup, here are some ways to expand and customize it:&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitor multiple tables
&lt;/h3&gt;

&lt;p&gt;To track changes from additional tables, adjust the &lt;code&gt;debezium.source.table.include.list&lt;/code&gt; property in &lt;code&gt;application.properties&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;debezium.source.table.include.list&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;public.customers,public.orders,public.products&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Transform events before sending
&lt;/h3&gt;

&lt;p&gt;Debezium supports Single Message Transforms (SMTs) to modify events before they're sent. For example, to rename a field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add this to application.properties
&lt;/span&gt;&lt;span class="py"&gt;debezium.transforms&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;rename&lt;/span&gt;
&lt;span class="py"&gt;debezium.transforms.rename.type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;org.apache.kafka.connect.transforms.ReplaceField$Value&lt;/span&gt;
&lt;span class="py"&gt;debezium.transforms.rename.renames&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;email:contact_email&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
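
&lt;p&gt;Conceptually, this transform rewrites field names in the event value before it is serialized. As a plain-Python illustration of the rename operation itself (not the Kafka Connect implementation):&lt;/p&gt;

```python
# Plain-Python illustration of a field rename; not the SMT implementation.
def rename_fields(record, renames):
    """Return a copy of record with keys renamed per the {old: new} mapping."""
    return {renames.get(key, key): value for key, value in record.items()}
```

&lt;p&gt;So &lt;code&gt;rename_fields({"id": 1, "email": "alice@example.com"}, {"email": "contact_email"})&lt;/code&gt; yields &lt;code&gt;{"id": 1, "contact_email": "alice@example.com"}&lt;/code&gt;.&lt;/p&gt;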



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You've successfully set up Debezium Server to capture and stream PostgreSQL changes to a webhook endpoint. This foundation can be extended to build robust event-driven architectures, real-time data pipelines, and more.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Want to skip all this complexity? Check out &lt;a href="https://sequinstream.com" rel="noopener noreferrer"&gt;Sequin&lt;/a&gt; for hassle-free change data capture and event streaming without the maintenance burden.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>eventdriven</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Streaming Postgres Changes with Debezium and Kafka Connect: A Hands-On Tutorial</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Tue, 06 May 2025 01:32:01 +0000</pubDate>
      <link>https://dev.to/sequin/streaming-postgres-changes-with-debezium-and-kafka-connect-a-hands-on-tutorial-2jgl</link>
      <guid>https://dev.to/sequin/streaming-postgres-changes-with-debezium-and-kafka-connect-a-hands-on-tutorial-2jgl</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Looking for a simpler solution?&lt;/strong&gt; If you're reading this tutorial, you want to reliably stream changes out of your Postgres database. You’ve turned to Debezium, only to discover that the official Debezium documentation leaves much to be desired. I hit the same snag, and decided to save future developers similar toil by writing this guide. If you are looking for an easier, faster alternative to Debezium, we built &lt;a href="https://sequinstream.com" rel="noopener noreferrer"&gt;Sequin&lt;/a&gt; to provide the same guarantees as Debezium while eliminating all this complexity.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you'll set up a complete CDC pipeline using &lt;strong&gt;&lt;a href="https://debezium.io/" rel="noopener noreferrer"&gt;Debezium&lt;/a&gt;&lt;/strong&gt; (version 3.1) with Kafka Connect to capture changes from a &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL database&lt;/a&gt;. By the end, you'll have a working system that streams every insert, update, and delete operation from your database into Apache Kafka topics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you'll learn:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to set up a local development environment for Debezium with Docker Compose&lt;/li&gt;
&lt;li&gt;How to configure PostgreSQL for logical replication&lt;/li&gt;
&lt;li&gt;How to set up a Debezium connector to capture changes&lt;/li&gt;
&lt;li&gt;How to observe and work with CDC events from your database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving in, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Docker Engine&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href="https://docs.docker.com/compose/install/linux/" rel="noopener noreferrer"&gt;Docker Compose v2&lt;/a&gt;&lt;/strong&gt; installed on your system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;curl&lt;/strong&gt; command-line tool for making HTTP requests&lt;/li&gt;
&lt;li&gt;Basic familiarity with PostgreSQL and command-line operations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll be doing everything in Docker and on the command line; Debezium doesn't come with a UI or additional tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;Here's what you're setting up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A PostgreSQL database with logical replication enabled&lt;/li&gt;
&lt;li&gt;Apache Kafka (which requires ZooKeeper in this version) for message streaming&lt;/li&gt;
&lt;li&gt;Kafka Connect with the Debezium PostgreSQL connector, which captures changes from the Postgres database and streams them to Kafka topics&lt;/li&gt;
&lt;li&gt;A simple "customers" table that you'll monitor for changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you're done, any change to the customers table will be captured by Debezium and sent as an event to a Kafka topic, which you can then consume from any application.&lt;/p&gt;
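
&lt;p&gt;Debezium names topics by joining the topic prefix, schema, and table with dots, and event values arrive as UTF-8 JSON bytes. A sketch of the decoding side of a consumer (the loop itself is left as a comment since it needs the running broker, and the &lt;code&gt;kafka-python&lt;/code&gt; client and &lt;code&gt;tutorial&lt;/code&gt; prefix are assumed choices):&lt;/p&gt;

```python
# Sketch: decode Debezium change events from Kafka message values.
import json

def topic_for(prefix, schema, table):
    # Debezium topic names join prefix, schema, and table with dots.
    return f"{prefix}.{schema}.{table}"

def decode_value(raw):
    """Message values are UTF-8 JSON bytes; tombstones arrive as None."""
    if raw is None:
        return None
    return json.loads(raw.decode("utf-8"))

# With the kafka-python package (an assumed choice), the consumer loop
# would look roughly like this:
#   from kafka import KafkaConsumer
#   consumer = KafkaConsumer(topic_for("tutorial", "public", "customers"),
#                            bootstrap_servers="localhost:29092")
#   for message in consumer:
#       print(decode_value(message.value))
```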

&lt;h2&gt;
  
  
  Step 1: Setting Up the Infrastructure with Docker Compose
&lt;/h2&gt;

&lt;p&gt;First, create your environment using Docker Compose. This will spin up all the necessary services in isolated containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a docker-compose.yml file
&lt;/h3&gt;

&lt;p&gt;Create a new directory and then create a new file called docker-compose.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;debezium-example
&lt;span class="nb"&gt;cd &lt;/span&gt;debezium-example
&lt;span class="nb"&gt;touch &lt;/span&gt;docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy and paste the following into the &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/debezium/zookeeper:3.1&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2181:2181"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/debezium/kafka:3.1&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;29092:29092"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ZOOKEEPER_CONNECT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zookeeper:2181&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_LISTENERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:29092&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ADVERTISED_LISTENERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INTERNAL://kafka:9092,EXTERNAL://localhost:29092&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_LISTENER_SECURITY_PROTOCOL_MAP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_INTER_BROKER_LISTENER_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INTERNAL&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;

  &lt;span class="na"&gt;connect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/debezium/connect:3.1&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8083:8083"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;BOOTSTRAP_SERVERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka:9092&lt;/span&gt;
      &lt;span class="na"&gt;GROUP_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;CONFIG_STORAGE_TOPIC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect_configs&lt;/span&gt;
      &lt;span class="na"&gt;OFFSET_STORAGE_TOPIC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect_offsets&lt;/span&gt;
      &lt;span class="na"&gt;STATUS_STORAGE_TOPIC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect_statuses&lt;/span&gt;
      &lt;span class="na"&gt;KEY_CONVERTER_SCHEMAS_ENABLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
      &lt;span class="na"&gt;VALUE_CONVERTER_SCHEMAS_ENABLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;

  &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debezium/postgres:15&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5432:5432"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres -c wal_level=logical -c max_wal_senders=10 -c max_replication_slots=10&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dbz&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dbz&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding the docker-compose.yml
&lt;/h3&gt;

&lt;p&gt;Briefly, this file defines the following services, all of which are needed to run Debezium:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ZooKeeper&lt;/strong&gt;: Provides distributed configuration and synchronization for Kafka&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kafka&lt;/strong&gt;: The message broker that will store your change data events&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kafka Connect&lt;/strong&gt;: The framework that runs your Debezium connector&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt;: Your database with logical replication enabled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice the PostgreSQL configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're using the &lt;code&gt;debezium/postgres:15&lt;/code&gt; image, which comes with the necessary logical decoding plugins&lt;/li&gt;
&lt;li&gt;You set &lt;code&gt;wal_level=logical&lt;/code&gt; to enable logical replication&lt;/li&gt;
&lt;li&gt;You configure &lt;code&gt;max_wal_senders&lt;/code&gt; and &lt;code&gt;max_replication_slots&lt;/code&gt; to allow multiple replication connections&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Start the containers
&lt;/h3&gt;

&lt;p&gt;Back in your terminal, ensure you are in the &lt;code&gt;debezium-example&lt;/code&gt; directory containing your &lt;code&gt;docker-compose.yml&lt;/code&gt; file, and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts all the services in detached mode. To verify that all containers are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see all four containers (zookeeper, kafka, connect, and postgres) with a status of "Up".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS          PORTS                                        NAMES
c218e8a9fc67   quay.io/debezium/connect:3.1     &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   59 minutes ago   Up 59 minutes   8778/tcp, 0.0.0.0:8083-&amp;gt;8083/tcp, 9092/tcp   debezium-connect-1
5ffe2fc31745   quay.io/debezium/kafka:3.1       &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   59 minutes ago   Up 59 minutes   9092/tcp, 0.0.0.0:29092-&amp;gt;29092/tcp           debezium-kafka-1
1b1de4194458   debezium/postgres:15             &lt;span class="s2"&gt;"docker-entrypoint.s…"&lt;/span&gt;   59 minutes ago   Up 59 minutes   0.0.0.0:5432-&amp;gt;5432/tcp                       debezium-postgres-1
d5d87a35b013   quay.io/debezium/zookeeper:3.1   &lt;span class="s2"&gt;"/docker-entrypoint.…"&lt;/span&gt;   59 minutes ago   Up 59 minutes   2888/tcp, 0.0.0.0:2181-&amp;gt;2181/tcp, 3888/tcp   debezium-zookeeper-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Preparing PostgreSQL for CDC
&lt;/h2&gt;

&lt;p&gt;Now that the required infrastructure is running, you need to configure PostgreSQL for change data capture. You’ll connect to the Postgres instance you just created, add a new replication user for Debezium, set up a demo table, and configure the table for full row captures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a replication user
&lt;/h3&gt;
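&lt;p&gt;The original commands for this step aren't shown here. If your Docker Compose file doesn't already create the &lt;code&gt;dbz&lt;/code&gt; user via the &lt;code&gt;POSTGRES_USER&lt;/code&gt; and &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt; environment variables, a command along these lines creates one (a sketch; the role name and password are assumptions chosen to match the connector configuration used later):&lt;/p&gt;

```shell
# Sketch: create a login role with replication privileges for Debezium.
# Assumes the image's default superuser is "postgres"; adjust for your setup.
docker compose exec postgres \
  psql -U postgres \
  -c "CREATE ROLE dbz WITH LOGIN REPLICATION PASSWORD 'dbz';"
```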

&lt;h3&gt;
  
  
  Create the demo table
&lt;/h3&gt;

&lt;p&gt;Create a simple &lt;code&gt;customers&lt;/code&gt; table that we'll monitor for changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;postgres &lt;span class="se"&gt;\&lt;/span&gt;
  psql &lt;span class="nt"&gt;-U&lt;/span&gt; dbz &lt;span class="nt"&gt;-d&lt;/span&gt; example &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"CREATE TABLE customers (id SERIAL PRIMARY KEY, name VARCHAR(255), email VARCHAR(255));"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This table has three columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;id&lt;/code&gt;: An auto-incrementing primary key&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt;: A customer's name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;email&lt;/code&gt;: A customer's email address&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If you'd like to explore the database more directly in a SQL client like TablePlus, you can connect to it using any PostgreSQL client with these connection details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Host&lt;/strong&gt;: localhost&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 5432&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database&lt;/strong&gt;: example&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User&lt;/strong&gt;: dbz&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password&lt;/strong&gt;: dbz&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSL Mode&lt;/strong&gt;: disable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or using a connection string:&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgresql://dbz:dbz@localhost:5432/example?sslmode=disable
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This allows you to explore the database schema, run queries, and make changes that will be captured by Debezium.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Configure full row images
&lt;/h3&gt;

&lt;p&gt;By default, PostgreSQL's logical replication only includes the primary key and changed columns in update events. To get the full "before" and "after" state of rows, you need to set the &lt;code&gt;REPLICA IDENTITY&lt;/code&gt; to &lt;code&gt;FULL&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;postgres &lt;span class="se"&gt;\&lt;/span&gt;
  psql &lt;span class="nt"&gt;-U&lt;/span&gt; dbz &lt;span class="nt"&gt;-d&lt;/span&gt; example &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"ALTER TABLE customers REPLICA IDENTITY FULL;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setting ensures that when a row is updated or deleted, the entire row's data (before the change) is included in the WAL (Write-Ahead Log) entry, giving you complete information about the change in the resulting Kafka message produced by Debezium.&lt;/p&gt;
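&lt;p&gt;To see why this matters, compare the shape of an update event with and without &lt;code&gt;REPLICA IDENTITY FULL&lt;/code&gt;. A minimal sketch (the event dictionaries below are hand-written illustrations, not captured output; with the default replica identity, Debezium leaves &lt;code&gt;before&lt;/code&gt; null on updates):&lt;/p&gt;

```python
# Hand-written illustrations of the "before" field in a Debezium update event.
# With REPLICA IDENTITY DEFAULT, the old row is not logged, so "before" is null.
update_default = {
    "before": None,
    "after": {"id": 1, "name": "Alice Updated", "email": "alice@example.com"},
    "op": "u",
}

# With REPLICA IDENTITY FULL, the WAL records the entire old row.
update_full = {
    "before": {"id": 1, "name": "Alice", "email": "alice@example.com"},
    "after": {"id": 1, "name": "Alice Updated", "email": "alice@example.com"},
    "op": "u",
}

# Only the FULL variant lets a consumer compute which columns changed.
changed = {
    k for k in update_full["after"]
    if update_full["before"][k] != update_full["after"][k]
}
print(changed)  # {'name'}
```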

&lt;h2&gt;
  
  
  Step 3: Setting Up the Debezium Connector
&lt;/h2&gt;

&lt;p&gt;Now, configure and register the Debezium PostgreSQL connector with Kafka Connect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the connector configuration
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;register-postgres.json&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example-connector"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"connector.class"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"io.debezium.connector.postgresql.PostgresConnector"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database.hostname"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"postgres"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database.port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5432"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database.user"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dbz"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database.password"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dbz"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database.dbname"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"topic.prefix"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"slot.name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example_slot"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"publication.autocreate.mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"filtered"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"table.include.list"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"public.customers"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Names the connector &lt;code&gt;example-connector&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Specifies the PostgreSQL connection details&lt;/li&gt;
&lt;li&gt;Sets a topic prefix of &lt;code&gt;example&lt;/code&gt; (resulting in a topic named &lt;code&gt;example.public.customers&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Creates a replication slot named &lt;code&gt;example_slot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Configures the publication to only include the tables you specify&lt;/li&gt;
&lt;li&gt;Limits change capture to only the &lt;code&gt;public.customers&lt;/code&gt; table&lt;/li&gt;
&lt;/ul&gt;
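&lt;p&gt;The topic name in the list above follows the Debezium PostgreSQL connector's &lt;code&gt;{topic.prefix}.{schema}.{table}&lt;/code&gt; naming convention. A quick sketch of how the configured names line up:&lt;/p&gt;

```python
# Debezium's PostgreSQL connector names topics as <topic.prefix>.<schema>.<table>.
def debezium_topic(prefix: str, schema: str, table: str) -> str:
    return f"{prefix}.{schema}.{table}"

# With the configuration above (prefix "example", table "public.customers"):
print(debezium_topic("example", "public", "customers"))  # example.public.customers
```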

&lt;h3&gt;
  
  
  Register the connector
&lt;/h3&gt;

&lt;p&gt;Register the connector with Kafka Connect's REST API running on port &lt;code&gt;8083&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--data&lt;/span&gt; @register-postgres.json &lt;span class="se"&gt;\&lt;/span&gt;
     http://localhost:8083/connectors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successful, you should receive a JSON object that looks similar to the configuration in your &lt;code&gt;register-postgres.json&lt;/code&gt; file. This means that Kafka Connect has started the Debezium connector, which is now listening for changes to your &lt;code&gt;customers&lt;/code&gt; table.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify the connector is running
&lt;/h3&gt;

&lt;p&gt;You can check the status of your connector:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:8083/connectors/example-connector/status | jq

&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"example-connector"&lt;/span&gt;,
  &lt;span class="s2"&gt;"connector"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"state"&lt;/span&gt;: &lt;span class="s2"&gt;"RUNNING"&lt;/span&gt;,
    &lt;span class="s2"&gt;"worker_id"&lt;/span&gt;: &lt;span class="s2"&gt;"172.20.0.5:8083"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"tasks"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"id"&lt;/span&gt;: 0,
      &lt;span class="s2"&gt;"state"&lt;/span&gt;: &lt;span class="s2"&gt;"RUNNING"&lt;/span&gt;,
      &lt;span class="s2"&gt;"worker_id"&lt;/span&gt;: &lt;span class="s2"&gt;"172.20.0.5:8083"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;,
  &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"source"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see that the connector is in the &lt;code&gt;RUNNING&lt;/code&gt; state. If you encounter any issues, you can check the Kafka Connect logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt; connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Generating and Observing Change Events
&lt;/h2&gt;

&lt;p&gt;Now for the exciting part! Make some changes to your database and watch the CDC events flow into Kafka.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start a Kafka consumer
&lt;/h3&gt;

&lt;p&gt;First, open a terminal window and start a console consumer to watch the Kafka topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;kafka /kafka/bin/kafka-console-consumer.sh &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; kafka:9092 &lt;span class="nt"&gt;--topic&lt;/span&gt; example.public.customers &lt;span class="nt"&gt;--from-beginning&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command connects to Kafka and subscribes to the &lt;code&gt;example.public.customers&lt;/code&gt; topic, displaying all messages as they arrive.&lt;/p&gt;

&lt;h3&gt;
  
  
  Insert a new customer
&lt;/h3&gt;

&lt;p&gt;In a new terminal window, insert a record into our customers table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;postgres &lt;span class="se"&gt;\&lt;/span&gt;
  psql &lt;span class="nt"&gt;-U&lt;/span&gt; dbz &lt;span class="nt"&gt;-d&lt;/span&gt; example &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"INSERT INTO customers(name,email) VALUES ('Alice','alice@example.com');"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your consumer terminal, you should see a JSON message appear that looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"after"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alice@example.com"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3.1.0.Final"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"connector"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"postgresql"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ts_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1620000000000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"snapshot"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"false"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"db"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sequence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;12345678&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;12345678&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"schema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"public"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"table"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"customers"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"txId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"lsn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12345678&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"op"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ts_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1620000000001&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the Debezium change event format, also known as the "envelope." It contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;before&lt;/code&gt;: The previous state of the row (null for inserts)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;after&lt;/code&gt;: The new state of the row&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;source&lt;/code&gt;: Metadata about the event source&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;op&lt;/code&gt;: The operation type ("c" for create/insert)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ts_ms&lt;/code&gt;: Timestamp of when the event was processed&lt;/li&gt;
&lt;/ul&gt;
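&lt;p&gt;A consumer can dispatch on these fields to keep a local copy of the table in sync. The sketch below (a hypothetical in-memory replica, not part of the tutorial) applies the envelope shapes shown in this section:&lt;/p&gt;

```python
# Hypothetical sketch: apply Debezium envelopes to an in-memory replica keyed
# by primary key. "c" = create, "r" = snapshot read, "u" = update, "d" = delete.
def apply_event(replica: dict, event: dict) -> None:
    op = event["op"]
    if op in ("c", "r", "u"):
        row = event["after"]
        replica[row["id"]] = row
    elif op == "d":
        # For deletes, "after" is null; the key comes from "before".
        replica.pop(event["before"]["id"], None)

replica = {}
apply_event(replica, {"before": None,
                      "after": {"id": 1, "name": "Alice", "email": "alice@example.com"},
                      "op": "c"})
apply_event(replica, {"before": {"id": 1, "name": "Alice", "email": "alice@example.com"},
                      "after": {"id": 1, "name": "Alice Updated", "email": "alice@example.com"},
                      "op": "u"})
apply_event(replica, {"before": {"id": 1, "name": "Alice Updated", "email": "alice@example.com"},
                      "after": None,
                      "op": "d"})
print(replica)  # {}
```

Applying the insert, update, and delete from this walkthrough in order leaves the replica empty, mirroring the source table.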

&lt;h3&gt;
  
  
  Update the customer
&lt;/h3&gt;

&lt;p&gt;Now let's update the customer record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;postgres &lt;span class="se"&gt;\&lt;/span&gt;
  psql &lt;span class="nt"&gt;-U&lt;/span&gt; dbz &lt;span class="nt"&gt;-d&lt;/span&gt; example &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"UPDATE customers SET name='Alice Updated' WHERE id=1;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your consumer terminal, you should see another JSON message, this time with both &lt;code&gt;before&lt;/code&gt; and &lt;code&gt;after&lt;/code&gt; fields populated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alice@example.com"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"after"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice Updated"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alice@example.com"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;*/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"op"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"u"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ts_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1620000000002&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that &lt;code&gt;op&lt;/code&gt; now has a value of "u" for update, and both &lt;code&gt;before&lt;/code&gt; and &lt;code&gt;after&lt;/code&gt; states are included.&lt;/p&gt;

&lt;h3&gt;
  
  
  Delete the customer
&lt;/h3&gt;

&lt;p&gt;Finally, let's delete the customer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;postgres &lt;span class="se"&gt;\&lt;/span&gt;
  psql &lt;span class="nt"&gt;-U&lt;/span&gt; dbz &lt;span class="nt"&gt;-d&lt;/span&gt; example &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"DELETE FROM customers WHERE id=1;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see a final message with &lt;code&gt;op&lt;/code&gt; set to "d" (delete):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice Updated"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alice@example.com"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"after"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;*/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"op"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ts_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1620000000003&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that for deletes, &lt;code&gt;after&lt;/code&gt; is null since the row no longer exists after the operation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Cleaning Up
&lt;/h2&gt;

&lt;p&gt;When you're finished experimenting, clean up the environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This stops all containers and removes volumes created by Docker Compose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Architecture
&lt;/h2&gt;

&lt;p&gt;Now that you've seen Debezium in action, it’s helpful to discuss what is happening behind the scenes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL Write-Ahead Log (WAL)&lt;/strong&gt;: When data changes in PostgreSQL, those changes are written to the WAL for durability. With logical replication enabled, these changes can be decoded into logical change events (we go into &lt;a href="https://blog.sequinstream.com/how-debezium-captures-changes-from-postgres/" rel="noopener noreferrer"&gt;more detail in this post&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debezium PostgreSQL Connector&lt;/strong&gt;: Debezium creates a replication slot in PostgreSQL and subscribes to changes. It reads from the WAL, transforms the binary changes into structured events, and sends them to Kafka.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kafka Connect&lt;/strong&gt;: This framework manages the connector lifecycle and ensures reliable delivery of events to Kafka, handling failures and offsets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kafka Topics&lt;/strong&gt;: Each table's changes are published to a dedicated topic, allowing consumers to subscribe only to the tables they care about.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You've successfully set up a change data capture pipeline using Debezium and Kafka Connect. You've seen how to capture inserts, updates, and deletes from PostgreSQL and stream them to Kafka in real-time. &lt;/p&gt;

&lt;p&gt;In this Kafka Connect runtime, Debezium relies on Kafka (and the broader Kafka ecosystem) for capabilities like single message transforms, message retention, and efficient scaling. Now that the system is running, it’s worth getting familiar with these tools.&lt;/p&gt;

&lt;p&gt;From here, you'll want to tailor this template to your Postgres database. Ensure you enable logical replication in your Postgres instance, create a user for Debezium, and then buckle up for Java logs.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>kafka</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>A developer's reference to Postgres change data capture (CDC)</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Thu, 17 Apr 2025 02:32:40 +0000</pubDate>
      <link>https://dev.to/sequin/a-developers-reference-to-postgres-change-data-capture-cdc-2bpk</link>
      <guid>https://dev.to/sequin/a-developers-reference-to-postgres-change-data-capture-cdc-2bpk</guid>
      <description>&lt;p&gt;As a developer researching change data capture (CDC) on Postgres, what do you need to know?&lt;/p&gt;

&lt;p&gt;We’ve worked with hundreds of developers as they implement CDC specifically on Postgres. We’ve &lt;a href="https://blog.sequinstream.com/" rel="noopener noreferrer"&gt;written extensively&lt;/a&gt; on the topic. This post aims to synthesize our learnings to help you build a better understanding of CDC, weigh different implementation options, and find the best CDC implementation for your use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Change Data Capture (CDC)?
&lt;/h2&gt;

&lt;p&gt;As the name suggests, change data capture is a process that detects every incremental &lt;em&gt;change&lt;/em&gt; to a database and then delivers (i.e. &lt;em&gt;captures&lt;/em&gt;) those changes to a downstream system. When done correctly, the downstream system can process these changes to perfectly replicate the original data set.&lt;/p&gt;

&lt;p&gt;In practice, this means that every insert, update, and delete in the Postgres database needs to be captured and delivered in order, without a single change going missing. Guaranteeing both completeness and ordered delivery is what makes building a reliable CDC system challenging.&lt;/p&gt;

&lt;p&gt;Luckily, PostgreSQL 10+ comes with &lt;a href="https://blog.sequinstream.com/how-debezium-captures-changes-from-postgres/#how-does-postgres-emit-logs-to-debezium" rel="noopener noreferrer"&gt;logical replication built in&lt;/a&gt;. The database records every change in the Write-Ahead Log (WAL). You can tap into the WAL and decode the messages to capture every insert, update, and delete. Moreover, &lt;a href="https://www.postgresql.org/docs/current/logical-replication.html" rel="noopener noreferrer"&gt;Postgres writes messages to the WAL&lt;/a&gt; during normal operation, so reading changes from the WAL adds minimal additional overhead to the database.&lt;/p&gt;

&lt;p&gt;While the WAL is the most reliable way to interface with Postgres to power CDC, it isn’t the only option nor is it necessarily the easiest to work with.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to reach for CDC
&lt;/h2&gt;

&lt;p&gt;Almost every data-intensive application includes a CDC component. (In fact, we’ve found that almost every company with 10 or more engineers has either built or bought a CDC solution.)&lt;/p&gt;

&lt;p&gt;As systems grow, teams begin to move work away from a single Postgres instance towards specialized tools. In this effort, common use cases begin to emerge that require CDC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://sequinstream.com/docs/how-to/replicate-tables" rel="noopener noreferrer"&gt;&lt;strong&gt;Database replication&lt;/strong&gt;&lt;/a&gt;: Reliably mirror data across databases with greater flexibility than traditional Postgres read-replicas. CDC enables the primary database to operate independently of replicas while providing powerful tools for transforming data during transmission.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sequinstream.com/docs/guides/tinybird" rel="noopener noreferrer"&gt;&lt;strong&gt;Real-time analytics&lt;/strong&gt;&lt;/a&gt;: Stream changes directly to OLAP databases and data warehouses, enabling immediate insights and analytics without impacting production database performance.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sequinstream.com/docs/how-to/maintain-caches" rel="noopener noreferrer"&gt;&lt;strong&gt;Cache management&lt;/strong&gt;&lt;/a&gt;: Ensure application caches and search indexes remain perfectly synchronized with source data, eliminating staleness issues while reducing unnecessary refresh operations.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sequinstream.com/docs/how-to/trigger-automated-workflows" rel="noopener noreferrer"&gt;&lt;strong&gt;Microservice consistency&lt;/strong&gt;&lt;/a&gt;: Preserve data integrity across distributed systems by propagating changes to specialized services while maintaining strong transactional guarantees.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sequinstream.com/docs/how-to/trigger-automated-workflows" rel="noopener noreferrer"&gt;&lt;strong&gt;Event-driven automation&lt;/strong&gt;&lt;/a&gt;: Trigger workflows and jobs with immediate, transaction-guaranteed execution, enabling responsive systems that act on data changes as they occur.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sequinstream.com/docs/how-to/create-audit-logs" rel="noopener noreferrer"&gt;&lt;strong&gt;Audit logging&lt;/strong&gt;&lt;/a&gt;: Capture every database change for compliance requirements or to power user-facing features like detailed history views and precision rollbacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Beyond use cases, CDC also helps reduce technical debt while setting a sturdy foundation for scale. &lt;/p&gt;

&lt;p&gt;For instance, imagine that one system is constantly polling a table for changes while another uses triggers to log changes from the same table. The load on the database is unnecessarily high, and the process of testing and building new features becomes unnecessarily complex. CDC provides a consistent pattern to detect changes on the same table independently - without any additional burden on the database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Postgres CDC performance considerations
&lt;/h2&gt;

&lt;p&gt;A good change data capture system will provide a set of guarantees around reliability, performance, and data integrity. The right CDC system will completely abstract a set of edge cases and bugs so your team can build with confidence.&lt;/p&gt;

&lt;p&gt;Here are some important characteristics to consider as you build or buy a CDC solution:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delivery guarantees:&lt;/strong&gt; Can you expect every change in the database to be delivered? In many CDC implementations, it’s possible for a message to be dropped (at-most-once delivery) or delivered more than once (at-least-once delivery). Ideally, the CDC system will provide an &lt;a href="https://blog.sequinstream.com/at-most-once-at-least-once-and-exactly-once-delivery/" rel="noopener noreferrer"&gt;“exactly-once processing guarantee”&lt;/a&gt; through sophisticated deduplication and a method for acknowledging when a change has been delivered and processed. For Postgres CDC use cases where consistency is critical (i.e. replication, cache invalidation, etc.), an at-least-once delivery guarantee is the bare minimum, and exactly-once processing is ideal.&lt;/p&gt;
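&lt;p&gt;One common way to layer exactly-once processing on top of at-least-once delivery is an idempotency table keyed by each change’s LSN. A simplified sketch (the table and the event id literal are illustrative; a real consumer would also check whether the insert actually happened before applying the change):&lt;/p&gt;

```sql
-- Idempotency ledger: one row per processed change event.
CREATE TABLE processed_events (
  event_id     text PRIMARY KEY,
  processed_at timestamptz NOT NULL DEFAULT now()
);

-- Apply a change and record its id in the same transaction.
-- On redelivery the INSERT hits the PRIMARY KEY and does nothing,
-- turning at-least-once delivery into exactly-once processing.
BEGIN;
INSERT INTO processed_events (event_id) VALUES ('0/16B3748')
ON CONFLICT (event_id) DO NOTHING;
-- ...apply the change here only if the row above was actually inserted...
COMMIT;
```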

&lt;p&gt;&lt;strong&gt;Throughput:&lt;/strong&gt; What volume of change events can the CDC system handle within a given timeframe? This is best measured as bandwidth (i.e. MB/sec), but operations per second can be a simple proxy. Importantly, your CDC solution should be able to keep pace with the peak IOPS of your Postgres database, and ideally with &lt;a href="https://sequinstream.com/docs/performance" rel="noopener noreferrer"&gt;room to spare&lt;/a&gt;. A CDC solution that is too slow can fall behind during peak load and never recover.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency:&lt;/strong&gt; The time delay between a change being made in your database and that change appearing in your target system. For some use cases (like analytics or reporting) a latency of hours or even days is sufficient. For other customer-facing use cases (i.e. updating a search index) millisecond latency ensures that results are accurate. It’s best to measure the p95 or p99 latency of your CDC service to understand the tolerances you can tune your system to. It’s also important to evaluate latency in the context of throughput: a CDC system that can’t keep up with your write volume falls behind, and latency balloons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retries and error handling:&lt;/strong&gt; How does your system handle transient errors? One poison pill message can immediately jam and crash your CDC pipeline - which in some circumstances (i.e. replication) may even be the desired behavior. Other implementations will route the errant message to a dead letter queue (DLQ), log an alert, and attempt redelivery with exponential backoff.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ordering:&lt;/strong&gt; Maintaining the correct order of change events is critical to avoid data corruption or inconsistencies. This is particularly true when there are interdependent operations that can affect the state of a record. Enforcing strict ordering often comes at the expense of throughput and latency, since it makes parallel processing harder. Great CDC options let you configure which kinds of changes must be processed in order, so you can strike the right balance between speed and structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schema evolution:&lt;/strong&gt; The ability to handle modifications to the PostgreSQL database schema, such as adding, altering, or removing columns. These kinds of changes can break downstream services and either need to be handled gracefully (i.e. propagate the change downstream) or should proactively halt the system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snapshots:&lt;/strong&gt; How does the CDC service capture the initial state of the Postgres table? Snapshotting (a.k.a. backfilling) isn’t just about initializing the CDC process - it’s often the first thing you’ll need to do after an incident. The ability to either completely snapshot a table or target just a subset of the table (using SQL) is very helpful. A great CDC system needs to be able to &lt;a href="https://blog.sequinstream.com/using-watermarks-to-coordinate-change-data-capture-in-postgres/" rel="noopener noreferrer"&gt;snapshot a massive table while simultaneously capturing new changes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database load:&lt;/strong&gt; How much CPU and storage capacity does the CDC solution consume on the database? Systems that poll or use triggers to detect changes can slow the database down significantly. Others that read through the WAL add virtually no overhead in normal operation - but if they stop consuming from the replication slot, retained WAL can rapidly consume available storage.&lt;/p&gt;
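&lt;p&gt;With WAL-based CDC you can watch this storage risk directly on the source database. For example, this query shows how much WAL each replication slot is forcing Postgres to retain:&lt;/p&gt;

```sql
-- How far each slot's consumer lags behind the current WAL position;
-- an inactive slot with growing retained_wal will eventually fill the disk.
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;
```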

&lt;p&gt;&lt;strong&gt;Monitoring and observability:&lt;/strong&gt; Change data capture often becomes a critical component of your application - so you’ll want to be able to measure and observe each of the characteristics mentioned above. Great CDC solutions can &lt;a href="https://sequinstream.com/docs/reference/metrics" rel="noopener noreferrer"&gt;plug into your existing monitoring and observability&lt;/a&gt; tooling. Clear, traceable errors - rather than cryptic, convoluted logs - can make all the difference.&lt;/p&gt;

&lt;p&gt;Before building or buying a CDC solution, consider what performance characteristics are required for your use case. Often, picking a great CDC tool from the start can quickly set you up with a simpler architecture, higher performance, and an easier build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build your own Postgres CDC
&lt;/h2&gt;

&lt;p&gt;For developers looking to implement Change Data Capture with PostgreSQL directly, several methods are available, each with its own set of advantages and disadvantages. These approaches offer varying levels of control and complexity, as detailed in our guide &lt;a href="https://blog.sequinstream.com/all-the-ways-to-capture-changes-in-postgres/" rel="noopener noreferrer"&gt;All the ways to capture changes in Postgres&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One common method involves &lt;a href="https://blog.sequinstream.com/all-the-ways-to-capture-changes-in-postgres/#capture-changes-in-an-audit-table" rel="noopener noreferrer"&gt;&lt;strong&gt;using triggers with an audit table&lt;/strong&gt;&lt;/a&gt;. This approach entails creating database triggers that activate upon data modification events (INSERT, UPDATE, DELETE) on the tables of interest. When a change occurs, the trigger fires and typically records the details of the change in a separate audit log table. Triggers are reliable as they operate within the SQL system and offer real-time capture with a high degree of customization for various event types. However, they can introduce performance overhead on the primary database and require careful management of the audit log table.  &lt;/p&gt;
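&lt;p&gt;As a sketch, a trigger-based audit pipeline for a hypothetical &lt;code&gt;orders&lt;/code&gt; table might look like the following (table and function names are illustrative):&lt;/p&gt;

```sql
-- A generic audit log table.
CREATE TABLE audit_log (
  id         bigserial PRIMARY KEY,
  table_name text NOT NULL,
  operation  text NOT NULL,
  row_data   jsonb,
  changed_at timestamptz NOT NULL DEFAULT now()
);

-- Record every change; OLD is the row image for deletes, NEW otherwise.
CREATE OR REPLACE FUNCTION audit_changes() RETURNS trigger AS $$
BEGIN
  INSERT INTO audit_log (table_name, operation, row_data)
  VALUES (
    TG_TABLE_NAME,
    TG_OP,
    CASE WHEN TG_OP = 'DELETE' THEN to_jsonb(OLD) ELSE to_jsonb(NEW) END
  );
  RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_audit
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION audit_changes();
```

&lt;p&gt;Because the trigger runs inside the same transaction as the write, the audit row commits or rolls back with the change - the source of the approach’s reliability, and of its overhead.&lt;/p&gt;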

&lt;p&gt;Another straightforward method is &lt;a href="https://blog.sequinstream.com/all-the-ways-to-capture-changes-in-postgres/#poll-the-table" rel="noopener noreferrer"&gt;&lt;strong&gt;polling for changes&lt;/strong&gt;&lt;/a&gt;. This involves adding a dedicated timestamp column to the tables and periodically querying for records that have been modified since the last check, based on the timestamp. This approach is simple to implement for basic scenarios and doesn't require complex database configurations. However, it necessitates the presence of a timestamp column, can be resource-intensive with frequent polling, and cannot reliably capture DELETE events unless soft deletions are employed.&lt;/p&gt;
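&lt;p&gt;A minimal polling sketch against a hypothetical &lt;code&gt;orders&lt;/code&gt; table (note that &lt;code&gt;DEFAULT now()&lt;/code&gt; only fires on insert - every update path must also set the column, typically via a trigger or application code):&lt;/p&gt;

```sql
-- Add the timestamp column that every write must keep current.
ALTER TABLE orders ADD COLUMN updated_at timestamptz NOT NULL DEFAULT now();

-- The poller stores a high-water mark and periodically asks for newer rows
-- (the literal below stands in for the stored watermark).
SELECT * FROM orders
WHERE updated_at > '2025-01-01 00:00:00+00'
ORDER BY updated_at;
```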

&lt;p&gt;Depending on your use case, you might be able to get away with &lt;a href="https://blog.sequinstream.com/all-the-ways-to-capture-changes-in-postgres/#listennotify" rel="noopener noreferrer"&gt;&lt;strong&gt;Listen/Notify&lt;/strong&gt;&lt;/a&gt;. This built-in PostgreSQL feature implements a publish-subscribe pattern where database sessions can listen on channels and receive real-time notifications of data changes. By setting up triggers that fire on INSERT, UPDATE, or DELETE operations, you can broadcast JSON payloads containing change details to listening applications. While simple to implement and offering immediate notification capabilities, Listen/Notify has key limitations: it provides only "at-most-once" delivery semantics (requiring active connections when notifications occur), limits payloads to 8000 bytes, and offers no persistence for missed messages. These constraints make it suitable primarily for lightweight change detection scenarios or as a complement to more robust CDC methods, rather than for mission-critical systems requiring guaranteed delivery.&lt;/p&gt;
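&lt;p&gt;A hedged sketch of the Listen/Notify pattern, again against a hypothetical &lt;code&gt;orders&lt;/code&gt; table (kept to inserts and updates so &lt;code&gt;NEW&lt;/code&gt; is always defined):&lt;/p&gt;

```sql
-- Broadcast a small JSON payload on each change.
CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
BEGIN
  -- Payloads are capped at 8000 bytes, so send identifiers, not full rows.
  PERFORM pg_notify(
    'table_changes',
    json_build_object('table', TG_TABLE_NAME, 'op', TG_OP, 'id', NEW.id)::text
  );
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION notify_change();

-- In a consuming session:
LISTEN table_changes;
```

&lt;p&gt;If the listening connection drops, notifications sent in the interim are simply lost - the at-most-once limitation described above.&lt;/p&gt;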

&lt;p&gt;Finally, &lt;a href="https://blog.sequinstream.com/all-the-ways-to-capture-changes-in-postgres/#replication-wal" rel="noopener noreferrer"&gt;&lt;strong&gt;logical replication (via the WAL)&lt;/strong&gt;&lt;/a&gt; stands out as a robust and efficient method. It's a built-in feature of PostgreSQL that allows for the selective replication of data changes at the table level. Leveraging log-based CDC, it provides real-time, event-driven capture of all change types with minimal impact on the database and broad support across different PostgreSQL environments. Logical replication has become the favored approach for PostgreSQL CDC due to its optimal balance of efficiency, reliability, and comprehensive change capture capabilities. &lt;/p&gt;
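&lt;p&gt;Getting started with logical replication takes only a couple of statements (the publication, slot, and table names here are illustrative):&lt;/p&gt;

```sql
-- Declare which tables to publish changes for.
CREATE PUBLICATION cdc_pub FOR TABLE orders;

-- Create a replication slot using pgoutput, the plugin that
-- native logical replication (and tools like Debezium) speak.
SELECT pg_create_logical_replication_slot('cdc_slot', 'pgoutput');
```

&lt;p&gt;A consumer then attaches to the slot and receives every committed change to the published tables, in commit order.&lt;/p&gt;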

&lt;p&gt;While each of these approaches captures changes in the database, that is just the first step in building a CDC system. You then need to deliver those changes to other systems with sufficient performance guarantees (see above). This often adds significant complexity and work, which is why most teams choose an off the shelf approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Off the shelf Postgres CDC
&lt;/h2&gt;

&lt;p&gt;To implement PostgreSQL CDC without the complexities of building a solution from scratch, a variety of robust tools and platforms are available, each offering unique features and capabilities. A detailed overview of these options can be found at &lt;a href="https://blog.sequinstream.com/choosing_the_right_real_time_postgres_cdc_platform/" rel="noopener noreferrer"&gt;https://blog.sequinstream.com/choosing_the_right_real_time_postgres_cdc_platform/&lt;/a&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  Open source
&lt;/h3&gt;

&lt;p&gt;Among the open-source tools, &lt;strong&gt;Sequin&lt;/strong&gt; and &lt;strong&gt;Debezium&lt;/strong&gt; are the go-to options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sequinstream.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Sequin&lt;/strong&gt;&lt;/a&gt; is a fast, simple, modern open source CDC platform tuned specifically to Postgres. It is known as the fastest, highest throughput change data capture solution for Postgres. It supports many different destinations with exactly-once processing guarantees. It’s easy to self-host via Docker and provides prometheus endpoints for monitoring and observation. It comes with an easy to use web console, CLI, and API as well as helpful developer tools. If you don’t want to self-host, you can also use Sequin Cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.sequinstream.com/choosing-the-right-real-time-postgres-cdc-platform/#debezium" rel="noopener noreferrer"&gt;&lt;strong&gt;Debezium&lt;/strong&gt;&lt;/a&gt; is a widely adopted distributed platform specifically designed for CDC. Built on top of Apache Kafka and Kafka Connect, Debezium captures changes in real-time using logical replication and streams them to Kafka topics, ensuring exactly-once processing. However, it requires the setup and management of Kafka, which can introduce operational complexity. It’s notoriously difficult to configure, maintain, and debug - and its future is unclear since it’s been moved from Red Hat support to the &lt;a href="https://www.commonhaus.org/" rel="noopener noreferrer"&gt;Commonhause Foundation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hosted
&lt;/h3&gt;

&lt;p&gt;For teams seeking the benefits of CDC without the operational overhead, several hosted solutions provide managed CDC capabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.sequinstream.com/register" rel="noopener noreferrer"&gt;&lt;strong&gt;Sequin Cloud&lt;/strong&gt;&lt;/a&gt; is a fully hosted, highly available deployment of Sequin. It’s immediately configured to scale and runs in regions all around the world. It’s priced to be the most economical CDC solution for Postgres and comes with additional features around tracing, alerting, team management, and disaster recovery. With Sequin Cloud, you pay for just the data you process through Sequin - completely abstracting the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.sequinstream.com/choosing-the-right-real-time-postgres-cdc-platform/#decodable" rel="noopener noreferrer"&gt;&lt;strong&gt;Decodable&lt;/strong&gt;&lt;/a&gt; offers a hosted Debezium plus Apache Flink stack, making it ideal for organizations already invested in these technologies but wanting them managed. It captures every database change with strong deliverability guarantees and includes monitoring and schema management tools. While easier than self-hosting, using Decodable still requires learning how to configure pipelines and apply SQL transforms in their dashboard, with Flink pipelines presenting a moderate learning curve. Cost-wise, Decodable is moderately expensive, with one pipeline potentially comprising several "tasks" that can each cost hundreds of dollars monthly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.sequinstream.com/choosing-the-right-real-time-postgres-cdc-platform/#confluent-debezium" rel="noopener noreferrer"&gt;&lt;strong&gt;Confluent&lt;/strong&gt;&lt;/a&gt; provides two options: Confluent Debezium and Direct JDBC Postgres connector. The Debezium version is a fully hosted implementation with enterprise security, compliance, monitoring, and scaling tools. While you won't need to manage the deployment directly, you'll still handle the complex configuration of Debezium and Kafka topics. The Direct JDBC connector uses polling rather than logical replication, meaning it only captures inserts and updates (not deletes) with some delay. While easier to set up, it's less powerful. Both options come with enterprise pricing, trading engineering time for an expensive infrastructure product.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Estuary&lt;/strong&gt; is a fully managed CDC solution that simplifies real-time data pipelines. It provides a wide selection of pre-built connectors that allow you to quickly connect your sources and destinations. Notably, Estuary loads data into an intermediary store so that subsequent backfills are fast, with minimal overhead on the source database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamkap&lt;/strong&gt; is a serverless streaming platform designed for real-time CDC. It leverages Apache Kafka and Debezium with connectors to enable fast and efficient data ingestion. Streamkap specializes in delivering CDC data to various destinations, including ClickHouse, Snowflake, and others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.sequinstream.com/choosing-the-right-real-time-postgres-cdc-platform/#striim" rel="noopener noreferrer"&gt;&lt;strong&gt;Striim&lt;/strong&gt;&lt;/a&gt; positions itself as an enterprise solution with high reliability but at premium cost. It provides CDC with transformations, schema management, delivery guarantees, and security features designed for Fortune 1,000 companies. Using Striim requires learning their proprietary TQL (Transform Query Language) for data transformations and StreamApps framework for pipeline configuration. While well-documented, these tools have a steep learning curve and are unique to Striim's ecosystem. Budget-wise, Striim requires an all-in enterprise contract.&lt;/p&gt;

&lt;h3&gt;
  
  
  ETL Providers
&lt;/h3&gt;

&lt;p&gt;Traditional &lt;a href="https://blog.sequinstream.com/choosing-the-right-real-time-postgres-cdc-platform/#etl-providers-fivetran-airbyte-etc" rel="noopener noreferrer"&gt;&lt;strong&gt;ETL providers&lt;/strong&gt;&lt;/a&gt; like Fivetran and Airbyte offer CDC capabilities, but it's important to note that they typically provide batch CDC rather than real-time solutions. Changes may take minutes to appear in your stream or queue, and they may not maintain atomic change records.&lt;/p&gt;

&lt;p&gt;These solutions are primarily intended for non-operational, analytics use cases rather than real-time operational requirements. They offer scheduled, batch delivery of database changes to various destinations, including streams like Kafka. While they're relatively easy to set up, they offer limited configuration options when used for CDC.&lt;/p&gt;

&lt;p&gt;Cost can be a significant factor with these providers, as pricing is typically based on row count, which can quickly become expensive for high-volume databases. If your use case can tolerate batch processing and you're primarily focused on analytics, these tools provide a simple solution with minimal technical expertise required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud providers
&lt;/h3&gt;

&lt;p&gt;The major &lt;a href="https://blog.sequinstream.com/choosing-the-right-real-time-postgres-cdc-platform/#cloud-provider-tools" rel="noopener noreferrer"&gt;&lt;strong&gt;cloud infrastructure providers&lt;/strong&gt;&lt;/a&gt; offer CDC products that work within their ecosystems. AWS DMS (Database Migration Service), GCP Datastream, and Azure Data Factory can all be configured to stream changes from Postgres to other infrastructure within their respective platforms.&lt;/p&gt;

&lt;p&gt;These solutions can be effective if your organization is already committed to a specific cloud provider and comfortable with their tooling. They support logical replication with real-time capture of inserts, updates, and deletes, though delivery guarantees can vary based on configuration and provider.&lt;/p&gt;

&lt;p&gt;Setting up CDC through your cloud provider typically requires navigating their web console, permissions systems, tooling, and logging to establish pipelines. You'll need familiarity with the provider's settings and configurations. From a cost perspective, these solutions typically charge for compute hours and data transfer, which can be difficult to predict and may accumulate quickly. Additionally, these setups can become brittle over time as settings and dependencies are distributed across different services.&lt;/p&gt;

&lt;p&gt;While convenient for teams already invested in a particular cloud ecosystem, these solutions may not offer the same level of specialized functionality as dedicated CDC platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Change Data Capture has become an essential component of modern data architectures, enabling real-time integration across increasingly specialized and distributed systems. It’s evolved from a bespoke approach to data integration into a more specialized piece of infrastructure with defined performance characteristics and features. This post helps you map your use case and requirements to CDC implementation options and tools. Reach out if you have more questions about Postgres CDC!&lt;/p&gt;

</description>
      <category>database</category>
      <category>architecture</category>
      <category>postgres</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>How Debezium captures changes from Postgres</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Thu, 17 Apr 2025 02:25:03 +0000</pubDate>
      <link>https://dev.to/sequin/how-debezium-captures-changes-from-postgres-35bn</link>
      <guid>https://dev.to/sequin/how-debezium-captures-changes-from-postgres-35bn</guid>
      <description>&lt;p&gt;Debezium is a popular solution for streaming changes from Postgres. It was amongst the first generation of tools purpose built to help developers implement a change data capture (CDC) pattern. While Debezium introduces complex dependencies like Kafka to operate, in total it abstracts away many of the challenges around detecting and delivering every change in your database.&lt;/p&gt;

&lt;p&gt;This post aims to help developers understand how Debezium works with Postgres so you can better architect your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Debezium
&lt;/h2&gt;

&lt;p&gt;Debezium captures row-level changes in databases and streams them to Kafka topics. It's implemented as a set of Kafka Connect source connectors, with each connector ingesting changes from a different database system using that database's native logging capabilities.&lt;/p&gt;

&lt;p&gt;Unlike polling or dual-write approaches, Debezium's log-based CDC implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Guarantees that all data changes are captured without gaps, including every insert, update, and delete
&lt;/li&gt;
&lt;li&gt;Produces change events with minimal delay
&lt;/li&gt;
&lt;li&gt;Requires no modifications to your data models (no "Last Updated" columns)
&lt;/li&gt;
&lt;li&gt;Preserves original record states and transaction metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Debezium then transforms each row-level operation into change event records, which are then streamed to distinct Kafka topics. Applications can consume these event streams to react to data changes in near real-time, enabling a wide range of use cases from cache invalidation to analytics pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does Postgres Emit Logs to Debezium
&lt;/h2&gt;

&lt;p&gt;When integrating with Postgres, Debezium interfaces with &lt;a href="https://www.postgresql.org/docs/current/wal-intro.html" rel="noopener noreferrer"&gt;Postgres’ write-ahead log&lt;/a&gt; (WAL). The WAL is essentially a sequential log of all database modifications. Importantly, any change in the Postgres database is &lt;em&gt;first&lt;/em&gt; added to the log and flushed to persistent storage (hence write-ahead) before the change is actually committed. &lt;/p&gt;

&lt;p&gt;Consider what happens when you make an &lt;code&gt;INSERT&lt;/code&gt; in Postgres:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Open transaction:&lt;/strong&gt; Postgres validates the insert statement and permissions, and creates an execution plan. Ultimately, a transaction is opened for the insert.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create WAL record:&lt;/strong&gt; Before modifying any data in memory, Postgres creates a WAL record that contains the details of the operation. The WAL record includes a Log Sequence Number (LSN) that uniquely identifies the position / order of the operation.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;In-memory buffer:&lt;/strong&gt; The WAL record is written to an in-memory WAL buffer just prior to the actual row being inserted into the in-memory buffer for the table.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flush to disk:&lt;/strong&gt; When the in-memory transaction commits, the WAL buffer is flushed to disk and the transaction is marked as committed. At this point, your client receives a confirmation that the insert completed. This is also when a client connected to the WAL (i.e. Debezium) receives the WAL entry for the insert.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Checkpointing&lt;/strong&gt;: At some later point the change made in memory is also written to the data files.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Debezium connects to Postgres via a replication slot to read the records in the WAL (at step four above). The WAL records in the slot are decoded from binary using the &lt;a href="https://www.postgresql.org/docs/current/logical-replication-architecture.html" rel="noopener noreferrer"&gt;&lt;code&gt;pgoutput&lt;/code&gt;&lt;/a&gt; plugin. This approach to detecting changes comes with important benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exact order&lt;/strong&gt;: By using the LSN as a cursor, Debezium is able to process changes in order.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Only completed transactions:&lt;/strong&gt; Changes are only flushed to the WAL when the transaction commits, so Debezium never captures partial or rolled-back transactions.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimal database overhead:&lt;/strong&gt; Changes are committed to the WAL as a normal part of the transaction. As long as Debezium pulls and flushes messages from the replication slot (foreshadowing!), no new overhead is added to the database. &lt;/li&gt;
&lt;/ul&gt;
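&lt;p&gt;You can read from a slot the way a &lt;code&gt;pgoutput&lt;/code&gt; client like Debezium does. A sketch (the slot and publication names are illustrative and would need to exist already):&lt;/p&gt;

```sql
-- pgoutput emits a binary protocol, so the binary variant is required,
-- along with a protocol version and the publication(s) to filter by.
SELECT lsn, xid, data
FROM pg_logical_slot_peek_binary_changes(
  'debezium_slot', NULL, NULL,
  'proto_version', '1',
  'publication_names', 'cdc_pub'
);
```

&lt;p&gt;The &lt;code&gt;peek&lt;/code&gt; variant leaves the records in the slot; a real consumer acknowledges progress so Postgres can reclaim the WAL behind it.&lt;/p&gt;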

&lt;h2&gt;
  
  
  The Debezium Capture Process
&lt;/h2&gt;

&lt;p&gt;When you first connect Debezium to a Postgres table, it follows a well-defined process to capture and stream changes to guarantee consistency:&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial snapshot (a.k.a backfill)
&lt;/h3&gt;

&lt;p&gt;When starting a new stream on a Postgres table, you can configure Debezium to perform a snapshot of the table. This will add the current state of every row in the table to your Kafka topic as &lt;code&gt;read&lt;/code&gt; events. &lt;/p&gt;

&lt;p&gt;To complete the snapshot before streaming new changes (all while ensuring no change is missed in the interim), Debezium does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Starts a SELECT transaction
&lt;/li&gt;
&lt;li&gt;Notes and stores the current LSN in the WAL
&lt;/li&gt;
&lt;li&gt;Scans the configured tables and generates &lt;code&gt;read&lt;/code&gt; events for each row
&lt;/li&gt;
&lt;li&gt;Commits the SELECT transaction and records the snapshot completion&lt;/li&gt;
&lt;/ul&gt;
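&lt;p&gt;In SQL terms, the choreography is roughly the following - a simplified sketch over a hypothetical &lt;code&gt;orders&lt;/code&gt; table (Debezium coordinates this with the snapshot exported when the replication slot is created):&lt;/p&gt;

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;  -- consistent view of the table
SELECT pg_current_wal_lsn();            -- remember where streaming must resume
SELECT * FROM orders;                   -- one read event per existing row
COMMIT;
```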

&lt;h3&gt;
  
  
  Continuous streaming
&lt;/h3&gt;

&lt;p&gt;After completing the snapshot, Debezium transitions to streaming changes from the exact WAL position (using the stored LSN) where the snapshot occurred.&lt;/p&gt;

&lt;p&gt;As described above, Debezium leverages built-in logical replication to capture these changes from the database with strict ordering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event transformation
&lt;/h3&gt;

&lt;p&gt;Debezium transforms PostgreSQL's logical decoding events into Debezium's standardized change event format, which includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A consistent structure for inserts, updates, and deletes
&lt;/li&gt;
&lt;li&gt;The table's primary key as the event key
&lt;/li&gt;
&lt;li&gt;Both before and after states of the modified row
&lt;/li&gt;
&lt;li&gt;Metadata about the source transaction and timing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also use &lt;a href="https://docs.confluent.io/kafka-connectors/transforms/current/overview.html" rel="noopener noreferrer"&gt;Single Message Transforms&lt;/a&gt; (SMTs - which are a feature of Kafka Connect) to modify message payloads as they flow through the pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka topic routing
&lt;/h3&gt;

&lt;p&gt;Each table's changes are directed to a dedicated Kafka topic, typically named in the pattern &lt;code&gt;&amp;lt;server-name&amp;gt;&lt;/code&gt;.&lt;code&gt;&amp;lt;schema-name&amp;gt;&lt;/code&gt;.&lt;code&gt;&amp;lt;table-name&amp;gt;&lt;/code&gt;. Once delivered to Kafka, the change is in a durable stream that can be processed by consumers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Offset tracking
&lt;/h3&gt;

&lt;p&gt;Once a change has been successfully delivered to Kafka, Debezium will periodically update Postgres with the &lt;code&gt;confirmed_flush_lsn&lt;/code&gt; associated with the change. Postgres can then reclaim the WAL the replication slot no longer needs.&lt;/p&gt;

&lt;p&gt;Importantly, if Debezium fails to deliver messages to Kafka, it won’t tell Postgres it can flush those messages. WAL then accumulates behind the replication slot, which can fill the disk and take down the database.&lt;/p&gt;
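&lt;p&gt;You can monitor this risk from the database side. For example:&lt;/p&gt;

```sql
-- WAL retained because the consumer has not yet confirmed it;
-- a steadily growing number means the pipeline is stuck or falling behind.
SELECT slot_name,
       confirmed_flush_lsn,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
       ) AS unconfirmed_wal
FROM pg_replication_slots
WHERE slot_type = 'logical';
```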

&lt;h2&gt;
  
  
  Debezium Postgres Benefits
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Complete data capture
&lt;/h3&gt;

&lt;p&gt;Critically, Debezium’s approach to CDC guarantees that every committed insert, update, and delete will be captured from your database. By using WAL records with LSNs as a cursor, it can provide strict ordering guarantees and is able to enrich messages with transaction metadata.&lt;/p&gt;

&lt;h3&gt;
  
  
  Non-intrusive
&lt;/h3&gt;

&lt;p&gt;Debezium requires no schema modifications (i.e. no &lt;code&gt;last_updated&lt;/code&gt; column) and leverages the WAL (a normal part of every database transaction) to minimize database overhead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consistency
&lt;/h3&gt;

&lt;p&gt;With both snapshots and streaming, Debezium is able to deliver every row in a table to a Kafka topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debezium Postgres Limitations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Complex dependencies
&lt;/h3&gt;

&lt;p&gt;Debezium introduces complex dependencies like Kafka to achieve even simple CDC use cases. This is a significant operational burden.&lt;/p&gt;

&lt;h3&gt;
  
  
  Replication slot risk
&lt;/h3&gt;

&lt;p&gt;One bad message can lock up Debezium, cause the replication slot to back up, and potentially consume significant resources on the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Throughput constraints
&lt;/h3&gt;

&lt;p&gt;Tuning Debezium to match the throughput of your Postgres database is not easy. You’ll need to be comfortable creating multiple replication slots, tuning Debezium beyond its default config, and getting very familiar with Kafka.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Debezium has set the standard for Postgres change data capture. By leveraging log-based replication and the tooling of Kafka Connect, it’s a complete CDC platform.&lt;/p&gt;

&lt;p&gt;However, because it isn’t tuned to Postgres and requires Kafka as a dependency, Debezium isn’t easy to use or maintain. A single message can jam your pipeline, so you’ll be spending significant time tuning and monitoring the system (in the JVM!) to keep things running smoothly.&lt;/p&gt;




&lt;p&gt;At &lt;a href="https://sequinstream.com" rel="noopener noreferrer"&gt;Sequin&lt;/a&gt;, we’re improving on Debezium’s approach - tuning it specifically for Postgres and removing the Kafka dependency. The result is a 6.8x improvement in speed with higher availability and resiliency - all with nicer tooling.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>opensource</category>
      <category>database</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Choosing the right, real-time, Postgres CDC platform</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Thu, 17 Apr 2025 02:19:30 +0000</pubDate>
      <link>https://dev.to/thisisgoldman/choosing-the-right-real-time-postgres-cdc-platform-4m5g</link>
      <guid>https://dev.to/thisisgoldman/choosing-the-right-real-time-postgres-cdc-platform-4m5g</guid>
      <description>&lt;p&gt;Change Data Capture (CDC) has become a critical component of modern data architectures. Teams use CDC to build event-driven workflows that react to database changes. Or to maintain state across services and data stores.&lt;/p&gt;

&lt;p&gt;As a critical component of the stack, teams need a CDC solution that's fast and reliable.&lt;/p&gt;

&lt;p&gt;In this guide, we'll compare the leading real-time CDC solutions across three key dimensions that matter most: technical capabilities, required expertise, and budget considerations. As we were building Sequin to address many of the challenges teams face with CDC, we couldn’t find a resource that laid out all the options. Since we did the work to try each tool available today, we thought we’d share our findings.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is real-time CDC?
&lt;/h2&gt;

&lt;p&gt;In real-time CDC, the moment a row changes in your database (&lt;code&gt;insert&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, &lt;code&gt;delete&lt;/code&gt;) you want to deliver that change to another system right away. Usually, you want real-time CDC for operational use cases, not analytics. So a cron job won't do.&lt;/p&gt;

&lt;p&gt;You may set up a real-time CDC pipeline for a number of reasons. Those reasons fall into two broad categories:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-driven workflows:&lt;/strong&gt; When you &lt;code&gt;insert&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, or &lt;code&gt;delete&lt;/code&gt; a row in your database you want to send an event to one or more services or processors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replication:&lt;/strong&gt; The state of your database needs to be synchronized with other services - be it a database, a cache, search index, third-party API, or a materialized view. Unlike analytics (per above) you need the two systems to be in-sync fast so, for example, a search doesn’t return an unavailable product, etc.&lt;/p&gt;

&lt;p&gt;For both kinds of use cases, guarantees are critical. In particular, you want the guarantee of &lt;a href="https://blog.sequinstream.com/at-most-once-at-least-once-and-exactly-once-delivery/" rel="noopener noreferrer"&gt;exactly-once processing&lt;/a&gt;. These systems are way easier to build and maintain if you &lt;em&gt;know&lt;/em&gt; that a downstream service will receive a change when it happens. And that if the downstream service fails to process the change, it will be retried until it succeeds.&lt;/p&gt;
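&lt;p&gt;Every log-based tool below builds on the same Postgres primitives. A quick sketch of what’s happening underneath (the slot name and the &lt;code&gt;test_decoding&lt;/code&gt; plugin are for illustration; production tools use &lt;code&gt;pgoutput&lt;/code&gt; or &lt;code&gt;wal2json&lt;/code&gt;):&lt;/p&gt;

```sql
-- Create a logical replication slot that will buffer changes for a consumer
SELECT pg_create_logical_replication_slot('cdc_demo', 'test_decoding');

-- ...after some inserts/updates/deletes, each change appears with its LSN:
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('cdc_demo', NULL, NULL);
```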

&lt;h2&gt;
  
  
  The framework
&lt;/h2&gt;

&lt;p&gt;I’ll highlight three characteristics of each platform to help you make the right choice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; I’ll boil this down to three primary considerations that should help you quickly determine if the tool is even close to your needs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What kinds of changes can the system detect in Postgres?
&lt;/li&gt;
&lt;li&gt;What kind of deliverability guarantees does the system provide?
&lt;/li&gt;
&lt;li&gt;What destinations does the system support?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; How easy is the system to set up and maintain? Do you need specific skills to use the tool? How much time will you need to dedicate to it? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget constraints:&lt;/strong&gt; In addition to time, how much will the system cost to operate?&lt;/p&gt;

&lt;p&gt;With that, let’s dig in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Free, open source
&lt;/h2&gt;

&lt;p&gt;There are a handful of open source projects that you can run yourself. You get the benefits of using a widely adopted tool with the ability to tailor it to your needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Debezium
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://debezium.io/" rel="noopener noreferrer"&gt;https://debezium.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Debezium is a widely deployed CDC platform, but it’s notoriously hard to use and requires Kafka expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Debezium captures all inserts, updates, and deletes. Because it uses Kafka, it comes with exactly-once processing guarantees. Debezium's only destination is Kafka, but you can use Kafka Connect to route from Kafka to other tools and services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;/ul&gt;
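&lt;p&gt;To give a feel for the setup, a Debezium Postgres source is registered with Kafka Connect via a JSON config along these lines (hostnames and names are placeholders; &lt;code&gt;topic.prefix&lt;/code&gt; applies to Debezium 2.x):&lt;/p&gt;

```json
{
  "name": "app-postgres-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "db.example.com",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "********",
    "database.dbname": "app",
    "topic.prefix": "app"
  }
}
```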

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; High. Self-hosting Debezium requires deploying and managing the JVM, ZooKeeper, and Kafka. Each requires configuration and ongoing maintenance.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Budget:&lt;/strong&gt; High. Debezium is free and open source, but the real cost is in complexity. With all the dependencies and configuration, the total cost of ownership is significant.&lt;/p&gt;

&lt;p&gt;Note: Debezium launched &lt;a href="https://debezium.io/documentation/reference/stable/operations/debezium-server.html" rel="noopener noreferrer"&gt;Debezium Server&lt;/a&gt; several years ago - which removed the Kafka dependency, and supports more streams/queues. However, it seems pretty basic and has received limited investment - and is hardly being used from what I can tell (only 75 stars on GitHub). It’s worth mentioning, but I don’t see it as a viable offering right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sequin :)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://sequinstream.com/" rel="noopener noreferrer"&gt;https://sequinstream.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Sequin is a new Postgres CDC solution that leverages your existing database infrastructure instead of requiring Kafka. It delivers changes directly to popular streams and queues through native connectors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Sequin captures every database change using Postgres's native replication capabilities, without requiring additional infrastructure like Kafka. The platform provides exactly-once processing guarantees by leveraging Postgres itself for reliable message delivery. Sequin includes native connectors that deliver changes directly to destinations like Kafka, SQS, NATS, RabbitMQ, and Sequin streams, with built-in transformation capabilities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing guarantees
&lt;/li&gt;
&lt;li&gt;Native connectors for streams and queues like Kafka, NATS, RabbitMQ, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Low. Sequin runs as a single Docker image. Sequin comes with a web console, CLI, and config files to simplify setup and management. Sequin offers a fully hosted option as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Low. Sequin is a free, open source platform. It’s more cost effective compared to alternatives because it uses your existing Postgres infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosted
&lt;/h2&gt;

&lt;p&gt;While there are many hosted, closed-source CDC providers, few offer real-time data capture with stream or queue destinations. Here’s the set to consider:&lt;/p&gt;

&lt;h3&gt;
  
  
  Decodable
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.decodable.co/" rel="noopener noreferrer"&gt;https://www.decodable.co/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Decodable provides a hosted Debezium plus Flink stack - ideal if you're already invested in these technologies but want them managed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Decodable provides a hosted version of Debezium paired with Apache Flink. It’ll capture every change in your database with strong deliverability guarantees, monitoring, and schema management tools.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing guarantees
&lt;/li&gt;
&lt;li&gt;Kafka and Pulsar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium to low. The hard part is learning how to configure pipelines and apply SQL transforms in the Decodable dashboard. Flink pipelines definitely come with a learning curve - and the slow feedback loop as you wait for changes to apply doesn’t help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Medium. Decodable is fairly expensive. One pipeline can have several different “tasks” that can each cost several hundred dollars per month.&lt;/p&gt;

&lt;h3&gt;
  
  
  Confluent Debezium
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.confluent.io/kafka-connectors/debezium-postgres-source/current/overview.html" rel="noopener noreferrer"&gt;https://docs.confluent.io/kafka-connectors/debezium-postgres-source/current/overview.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; A fully hosted version of Debezium with enterprise features. A reasonable choice if you're already using Confluent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; A fully hosted version of Debezium that comes with enterprise security, compliance, monitoring, and scaling tools built in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium. While you won’t need to directly manage the deployment, JVM, and dependencies - you’ll still be responsible for the complex configuration of Debezium and Kafka topics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; High, enterprise pricing. You’ll trade engineering time for an expensive enterprise infrastructure product that still requires Kafka expertise.&lt;/p&gt;

&lt;h3&gt;
  
  
  Confluent Direct JDBC Postgres
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.confluent.io/cloud/current/connectors/cc-postgresql-source.html" rel="noopener noreferrer"&gt;https://docs.confluent.io/cloud/current/connectors/cc-postgresql-source.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; If you need Kafka and are paying for Confluent, and you just need to replicate Postgres rows to a topic - this is for you!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Unlike every other solution thus far, this connector uses polling to capture changes. This means you won’t capture deletes (which may be a dealbreaker). It’s easier to set up but less powerful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Polling: &lt;code&gt;inserts&lt;/code&gt; and &lt;code&gt;updates&lt;/code&gt; only with delay
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;/ul&gt;
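&lt;p&gt;The delete blindness follows directly from how timestamp polling works - a sketch of the query pattern such connectors run (table and column names are illustrative):&lt;/p&gt;

```sql
-- Poll for rows changed since the last run. A DELETE removes the row
-- entirely, so it never matches this predicate and is silently missed.
SELECT *
FROM products
WHERE last_updated > '2025-01-01 00:00:00+00'  -- last poll's high-water mark
ORDER BY last_updated;
```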

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium / low. The Postgres replication is simplified (but less powerful), and you’re still managing Kafka’s topic configurations, consumer groups, schema evolution, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Very High. As with other Confluent offerings, you’ll pay a high price for their enterprise tooling. You’ll pay individually for the Postgres connector in addition to your required Kafka instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Striim
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.striim.com/" rel="noopener noreferrer"&gt;https://www.striim.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Striim is known as a high-cost enterprise product - but it delivers reliable CDC. It’s intended for very large companies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Striim provides CDC with transformations, schema management, delivery guarantees, and security. It's designed for large Fortune 1000 companies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Multiple destinations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium. You'll need to learn Striim's proprietary TQL (Transform Query Language) for data transformations and their StreamApps framework for pipeline configuration. While well-documented, these tools have a steep learning curve and are unique to Striim's ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Very high. Striim requires an all-in contract designed for large enterprises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud provider tools
&lt;/h2&gt;

&lt;p&gt;The major infrastructure providers offer CDC products that work within their ecosystem. Tools like &lt;a href="https://aws.amazon.com/dms/" rel="noopener noreferrer"&gt;AWS DMS&lt;/a&gt;, &lt;a href="https://cloud.google.com/datastream/docs/overview" rel="noopener noreferrer"&gt;GCP Datastream&lt;/a&gt;, and &lt;a href="https://azure.microsoft.com/en-us/products/data-factory" rel="noopener noreferrer"&gt;Azure Data Factory&lt;/a&gt; can be configured to stream changes from Postgres to other infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Setting up CDC through your cloud provider can certainly work if you are all-in on one provider and comfortable with their tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; AWS, GCP, and Azure offer CDC capabilities integrated into their infrastructure. For instance with AWS DMS, you can configure your AWS RDS with CDC to send events to AWS SQS. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WAL: Inserts, Updates, Deletes in Real-time
&lt;/li&gt;
&lt;li&gt;Variable by configuration and provider
&lt;/li&gt;
&lt;li&gt;Destinations within the provider&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium. Setting up this kind of CDC is all about navigating through your provider’s web console, permissions, tooling, and logging to set up pipelines. You’ll need to be familiar with all the potential settings and ready to guess and check with Claude Sonnet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Medium. Often you’ll be paying for extra compute hours and data transfer which can add up quickly and are hard to predict. Additionally, these setups can be brittle and hard to maintain as settings and dependencies are strewn about.&lt;/p&gt;

&lt;h2&gt;
  
  
  ETL providers (Fivetran, Airbyte, Etc)
&lt;/h2&gt;

&lt;p&gt;If you need real-time CDC, then these platforms are immediately disqualified because they only offer batch ETL. It may take a minute or two for a change in your database to appear in your stream or queue. And even then, depending on your setup, it may not have atomic changes.&lt;/p&gt;

&lt;p&gt;That said, they are worth mentioning because they do provide CDC to a handful of streams and queues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; if you just need analytics, these are a good option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; ETL tooling provides scheduled, batch delivery of changes in your database to streams like Kafka. It’s primarily intended for non-operational, analytics use cases.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Batch CDC on a schedule
&lt;/li&gt;
&lt;li&gt;Variable - often at-most-once guarantees.
&lt;/li&gt;
&lt;li&gt;Multiple destinations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Low. ETL tools are easy to set up - but not very configurable when used for CDC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; High. You’ll pay for every row which can get expensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build your own
&lt;/h2&gt;

&lt;p&gt;Of course, you can definitely build your own real-time CDC pipeline. We’ve written &lt;a href="https://blog.sequinstream.com/build-your-own-sqs-or-kafka-with-postgres/" rel="noopener noreferrer"&gt;several&lt;/a&gt; &lt;a href="https://blog.sequinstream.com/request-reply-in-postgres/" rel="noopener noreferrer"&gt;pieces&lt;/a&gt; on this &lt;a href="https://blog.sequinstream.com/at-most-once-at-least-once-and-exactly-once-delivery/" rel="noopener noreferrer"&gt;topic&lt;/a&gt; to help you get started.&lt;/p&gt;

&lt;p&gt;Creating a custom CDC solution offers the ultimate in flexibility and optimization for specific use cases. You can build exactly what you need, with precisely the delivery guarantees and transformation capabilities you require. And Postgres comes with some helpful capabilities to make this relatively easy.&lt;/p&gt;

&lt;p&gt;However, this path demands the highest level of technical expertise. You’ll be in the weeds of the WAL, Postgres, and some sort of approach to buffer and deliver changes. It’s honestly very fun work - but hard to get right. Especially if you require backfills, strict ordering, exactly-once processing, monitoring, and redundancy.&lt;/p&gt;
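&lt;p&gt;If you go down this road, Postgres’s peek/get pair is a useful starting point for at-least-once delivery - a minimal sketch (the slot name is illustrative, and it assumes a slot created with &lt;code&gt;test_decoding&lt;/code&gt;):&lt;/p&gt;

```sql
-- Peek at pending changes without consuming them - safe to retry on failure
SELECT lsn, xid, data
FROM pg_logical_slot_peek_changes('cdc_demo', NULL, NULL);

-- Only once the changes are durably delivered downstream, consume them,
-- which advances the slot and lets Postgres reclaim the WAL
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('cdc_demo', NULL, NULL);
```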

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While CDC is not a new idea, it’s becoming more common as apps become more data intensive. While the options are a little sparse (especially compared to other tooling) - the space is maturing. I’d recommend taking action from here by picking one solution in each category - try an open source solution (like Sequin!), start a free trial of a hosted option, and see if your cloud provider might do the trick. Then you’ll have working knowledge of your option space to make a good decision.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Fri, 06 Dec 2024 01:12:49 +0000</pubDate>
      <link>https://dev.to/thisisgoldman/-11i3</link>
      <guid>https://dev.to/thisisgoldman/-11i3</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/sequin" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3854%2F73276b3a-b14b-4f1e-b399-a219987d600e.png" alt="Sequin" width="600" height="600"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F312092%2F9b49c375-133f-42f6-824e-f3979ff39db8.jpg" alt="" width="400" height="400"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/sequin/choosing-the-right-real-time-postgres-cdc-platform-3knm" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Choosing the right, real-time, Postgres CDC platform&lt;/h2&gt;
      &lt;h3&gt;Eric Goldman for Sequin ・ Dec 6 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#postgres&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#database&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#eventdriven&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#dataengineering&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Choosing the right, real-time, Postgres CDC platform</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Fri, 06 Dec 2024 01:12:25 +0000</pubDate>
      <link>https://dev.to/sequin/choosing-the-right-real-time-postgres-cdc-platform-3knm</link>
      <guid>https://dev.to/sequin/choosing-the-right-real-time-postgres-cdc-platform-3knm</guid>
      <description>&lt;p&gt;Change Data Capture (CDC) has become a critical component of modern data architectures. Teams use CDC to build event-driven workflows that react to database changes. Or to maintain state across services and data stores.&lt;/p&gt;

&lt;p&gt;As a critical component of the stack, teams need a CDC solution that's fast and reliable.&lt;/p&gt;

&lt;p&gt;In this guide, we'll compare the leading real-time CDC solutions across three key dimensions that matter most: technical capabilities, required expertise, and budget considerations. As we were building Sequin to address many of the challenges teams face with CDC, we couldn’t find a resource that laid out all the options. Since we did the work to try each tool available today, we thought we’d share our findings.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is real-time CDC?
&lt;/h2&gt;

&lt;p&gt;In real-time CDC, the moment a row changes in your database (&lt;code&gt;insert&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, &lt;code&gt;delete&lt;/code&gt;) you want to deliver that change to another system right away. Usually, you want real-time CDC for operational use cases, not analytics. So a cron job won't do.&lt;/p&gt;

&lt;p&gt;You may set up a real-time CDC pipeline for a number of reasons. Those reasons fall into two broad categories:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-driven workflows:&lt;/strong&gt; When you &lt;code&gt;insert&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, or &lt;code&gt;delete&lt;/code&gt; a row in your database you want to send an event to one or more services or processors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replication:&lt;/strong&gt; The state of your database needs to be synchronized with other services - be it a database, a cache, search index, third-party API, or a materialized view. Unlike analytics (per above) you need the two systems to be in-sync fast so, for example, a search doesn’t return an unavailable product, etc.&lt;/p&gt;

&lt;p&gt;For both kinds of use cases, guarantees are critical. In particular, you want the guarantee of &lt;a href="https://blog.sequinstream.com/at-most-once-at-least-once-and-exactly-once-delivery/" rel="noopener noreferrer"&gt;exactly-once processing&lt;/a&gt;. These systems are way easier to build and maintain if you &lt;em&gt;know&lt;/em&gt; that a downstream service will receive a change when it happens. And that if the downstream service fails to process the change, it will be retried until it succeeds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The framework
&lt;/h2&gt;

&lt;p&gt;I’ll highlight three characteristics of each platform to help you make the right choice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; I’ll boil this down to three primary considerations that should help you quickly determine if the tool is even close to your needs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What kinds of changes can the system detect in Postgres?
&lt;/li&gt;
&lt;li&gt;What kind of deliverability guarantees does the system provide?
&lt;/li&gt;
&lt;li&gt;What destinations does the system support?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; How easy is the system to set up and maintain? Do you need specific skills to use the tool? How much time will you need to dedicate to it? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget constraints:&lt;/strong&gt; In addition to time, how much will the system cost to operate?&lt;/p&gt;

&lt;p&gt;With that, let’s dig in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Free, open source
&lt;/h2&gt;

&lt;p&gt;There are a handful of open source projects that you can run yourself. You get the benefits of using a widely adopted tool with the ability to tailor it to your needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://debezium.io/" rel="noopener noreferrer"&gt;Debezium&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Debezium is a widely deployed CDC platform, but it’s notoriously hard to use and requires Kafka expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Debezium captures all inserts, updates, and deletes. Because it uses Kafka, it comes with exactly-once processing guarantees. Debezium's only destination is Kafka, but you can use Kafka Connect to route from Kafka to other tools and services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; High. Self-hosting Debezium requires deploying and managing the JVM, ZooKeeper, and Kafka. Each requires configuration and ongoing maintenance.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Budget:&lt;/strong&gt; High. Debezium is free and open source, but the real cost is in complexity. With all the dependencies and configuration, the total cost of ownership is significant.&lt;/p&gt;

&lt;p&gt;Note: Debezium launched &lt;a href="https://debezium.io/documentation/reference/stable/operations/debezium-server.html" rel="noopener noreferrer"&gt;Debezium Server&lt;/a&gt; several years ago - which removed the Kafka dependency, and supports more streams/queues. However, it seems pretty basic and has received limited investment - and is hardly being used from what I can tell (only 75 stars on GitHub). It’s worth mentioning, but I don’t see it as a viable offering right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://sequinstream.com/" rel="noopener noreferrer"&gt;Sequin&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Sequin is a new Postgres CDC solution that leverages your existing database infrastructure instead of requiring Kafka. It delivers changes directly to popular streams and queues through native connectors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Sequin captures every database change using Postgres's native replication capabilities, without requiring additional infrastructure like Kafka. The platform provides exactly-once processing guarantees by leveraging Postgres itself for reliable message delivery. Sequin includes native connectors that deliver changes directly to destinations like Kafka, SQS, NATS, RabbitMQ, and Sequin streams, with built-in transformation capabilities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing guarantees
&lt;/li&gt;
&lt;li&gt;Native connectors for streams and queues like Kafka, NATS, RabbitMQ, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Low. Sequin runs as a single Docker image. Sequin comes with a web console, CLI, and config files to simplify setup and management. Sequin offers a fully hosted option as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Low. Sequin is a free, open source platform. It’s more cost effective compared to alternatives because it uses your existing Postgres infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosted
&lt;/h2&gt;

&lt;p&gt;While there are many hosted, closed-source CDC providers, few offer real-time data capture with stream or queue destinations. Here’s the set to consider:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.decodable.co/" rel="noopener noreferrer"&gt;Decodable&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Decodable provides a hosted Debezium plus Flink stack - ideal if you're already invested in these technologies but want them managed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Decodable provides a hosted version of Debezium paired with Apache Flink. It’ll capture every change in your database with strong deliverability guarantees, monitoring, and schema management tools.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing guarantees
&lt;/li&gt;
&lt;li&gt;Kafka and Pulsar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium to low. The hard part is learning how to configure pipelines and apply SQL transforms in the Decodable dashboard. Flink pipelines definitely come with a learning curve - and the slow feedback loop as you wait for changes to apply doesn’t help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Medium. Decodable is fairly expensive. One pipeline can have several different “tasks” that can each cost several hundred dollars per month.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.confluent.io/kafka-connectors/debezium-postgres-source/current/overview.html" rel="noopener noreferrer"&gt;Confluent Debezium&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; A fully hosted version of Debezium with enterprise features. A reasonable choice if you're already using Confluent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; A fully hosted version of Debezium that comes with enterprise security, compliance, monitoring, and scaling tools built in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium. While you won’t need to directly manage the deployment, JVM, and dependencies - you’ll still be responsible for the complex configuration of Debezium and Kafka topics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; High, enterprise pricing. You’ll trade engineering time for an expensive enterprise infrastructure product that still requires Kafka expertise.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.confluent.io/cloud/current/connectors/cc-postgresql-source.html" rel="noopener noreferrer"&gt;Confluent Direct JDBC Postgres&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; If you need Kafka and are paying for Confluent, and you just need to replicate Postgres rows to a topic - this is for you!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Unlike every other solution thus far, this connector uses polling to capture changes. This means you won’t capture deletes (which may be a dealbreaker). It’s easier to set up but less powerful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Polling: &lt;code&gt;inserts&lt;/code&gt; and &lt;code&gt;updates&lt;/code&gt; only with delay
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;/ul&gt;
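&lt;p&gt;To see why polling-based capture misses deletes, here's a minimal JavaScript sketch of what a timestamp-based polling loop does under the hood (the table and column names are hypothetical, and a real connector would of course query the database rather than an in-memory array):&lt;/p&gt;

```javascript
// Minimal sketch of polling-based change capture. Deletes are invisible:
// a removed row simply stops appearing in query results, so no event is emitted.
function pollChanges(rows, lastSeen) {
  // Simulates: SELECT * FROM users WHERE updated_at > $lastSeen
  const changed = rows.filter((r) => r.updated_at > lastSeen);
  const maxSeen = changed.reduce(
    (m, r) => (r.updated_at > m ? r.updated_at : m),
    lastSeen
  );
  return { changed, lastSeen: maxSeen };
}

// The first poll picks up both rows...
const table = [
  { id: 1, name: "Ada", updated_at: 100 },
  { id: 2, name: "Grace", updated_at: 200 },
];
let state = pollChanges(table, 0);
console.log(state.changed.length); // 2

// ...but after a delete, the next poll reports nothing: the deletion
// of row 2 is never captured as an event.
const afterDelete = table.filter((r) => r.id !== 2);
state = pollChanges(afterDelete, state.lastSeen);
console.log(state.changed.length); // 0
```

&lt;p&gt;Because the poller only ever sees rows that still exist, log-based (WAL) capture is required for a complete change history.&lt;/p&gt;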

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium / low. The Postgres replication is simplified (but less powerful), and you’re still managing Kafka’s topic configurations, consumer groups, schema evolution, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Very High. As with other Confluent offerings, you’ll pay a high price for their enterprise tooling. You’ll pay individually for the Postgres connector in addition to your required Kafka instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.striim.com/" rel="noopener noreferrer"&gt;Striim&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Striim is known as an enterprise product with a high cost - but it delivers reliable CDC. It’s intended for very large companies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; Striim provides CDC with transformations, schema management, delivery guarantees, and security. It's designed for Fortune 1000 companies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Exactly-once processing
&lt;/li&gt;
&lt;li&gt;Multiple destinations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium. You'll need to learn Striim's proprietary TQL (Transform Query Language) for data transformations and their StreamApps framework for pipeline configuration. While well-documented, these tools have a steep learning curve and are unique to Striim's ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Very high. Striim requires an all-in contract designed for large enterprises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud provider tools
&lt;/h2&gt;

&lt;p&gt;The major infrastructure providers offer CDC products that work within their ecosystem. Tools like &lt;a href="https://aws.amazon.com/dms/" rel="noopener noreferrer"&gt;AWS DMS&lt;/a&gt;, &lt;a href="https://cloud.google.com/datastream/docs/overview" rel="noopener noreferrer"&gt;GCP Datastream&lt;/a&gt;, and &lt;a href="https://azure.microsoft.com/en-us/products/data-factory" rel="noopener noreferrer"&gt;Azure Data Factory&lt;/a&gt; can be configured to stream changes from Postgres to other infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Setting up CDC through your cloud provider can certainly work if you are all-in on one provider and comfortable with their tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; AWS, GCP, and Azure offer CDC capabilities integrated into their infrastructure. For instance, with AWS DMS you can stream changes from an RDS Postgres instance to a destination like Amazon Kinesis.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logical replication: &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt; in real-time
&lt;/li&gt;
&lt;li&gt;Variable by configuration and provider
&lt;/li&gt;
&lt;li&gt;Destinations within the provider&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Medium. Setting up this kind of CDC is all about navigating through your provider’s web console, permissions, tooling, and logging to set up pipelines. You’ll need to be familiar with all the potential settings and ready to guess and check with Claude Sonnet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; Medium. Often you’ll be paying for extra compute hours and data transfer which can add up quickly and are hard to predict. Additionally, these setups can be brittle and hard to maintain as settings and dependencies are strewn about.&lt;/p&gt;

&lt;h2&gt;
  
  
  ETL providers (Fivetran, Airbyte, Etc)
&lt;/h2&gt;

&lt;p&gt;If you need real-time CDC, then these platforms are immediately disqualified because they only offer batch ETL. It may take a minute or two for a change in your database to appear in your stream or queue. And even then, depending on your setup, you may not capture atomic changes.&lt;/p&gt;
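&lt;p&gt;Here's a small sketch of why batch extraction can't give you atomic changes - two updates landing between sync runs collapse into a single observed state:&lt;/p&gt;

```javascript
// Sketch: why scheduled batch extraction loses intermediate changes.
// A row is updated twice between two sync runs; the batch reader only
// ever sees the row's latest state, so the first update is never emitted.
const changeLog = []; // what real log-based CDC would capture
let row = { id: 1, status: "pending" };

function applyUpdate(next) {
  row = Object.assign({}, row, next);
  changeLog.push(Object.assign({}, row)); // logical replication sees every version
}

function batchSnapshot() {
  return Object.assign({}, row); // batch ETL sees only the current state
}

applyUpdate({ status: "processing" });
applyUpdate({ status: "complete" });

console.log(changeLog.length);       // 2 (both versions captured)
console.log(batchSnapshot().status); // "complete" (the "processing" state is lost)
```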

&lt;p&gt;That said, they are worth mentioning because they do provide CDC to a handful of streams and queues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; if you just need analytics, these are a good option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt; ETL tooling provides scheduled, batch delivery of changes in your database to streams like Kafka. It’s primarily intended for non-operational, analytics use cases.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Batch CDC on a schedule
&lt;/li&gt;
&lt;li&gt;Variable - often at-most-once guarantees.
&lt;/li&gt;
&lt;li&gt;Multiple destinations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise:&lt;/strong&gt; Low. ETL tools are easy to set up - but not very configurable when used for CDC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget:&lt;/strong&gt; High. You’ll pay for every row which can get expensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build your own
&lt;/h2&gt;

&lt;p&gt;Of course, you can build your own real-time CDC pipeline. We’ve written &lt;a href="https://blog.sequinstream.com/build-your-own-sqs-or-kafka-with-postgres/" rel="noopener noreferrer"&gt;several&lt;/a&gt; &lt;a href="https://blog.sequinstream.com/request-reply-in-postgres/" rel="noopener noreferrer"&gt;pieces&lt;/a&gt; on this &lt;a href="https://blog.sequinstream.com/at-most-once-at-least-once-and-exactly-once-delivery/" rel="noopener noreferrer"&gt;topic&lt;/a&gt; to help you get started.&lt;/p&gt;

&lt;p&gt;Creating a custom CDC solution offers the ultimate in flexibility and optimization for specific use cases. You can build exactly what you need, with precisely the delivery guarantees and transformation capabilities you require. And Postgres comes with some helpful capabilities to make this relatively easy.&lt;/p&gt;

&lt;p&gt;However, this path demands the highest level of technical expertise. You’ll be in the weeds of the WAL, Postgres, and some sort of approach to buffer and deliver changes. It’s honestly very fun work - but hard to get right. Especially if you require backfills, strict ordering, exactly-once processing, monitoring, and redundancy.&lt;/p&gt;
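&lt;p&gt;As a taste of what "in the weeds of the WAL" means, here's a sketch of one small piece: parsing the textual output of Postgres's built-in &lt;code&gt;test_decoding&lt;/code&gt; plugin (as returned by &lt;code&gt;pg_logical_slot_get_changes&lt;/code&gt;) into structured change events. A production pipeline would more likely use &lt;code&gt;pgoutput&lt;/code&gt; or &lt;code&gt;wal2json&lt;/code&gt; and handle many more cases:&lt;/p&gt;

```javascript
// Parse one line of test_decoding output, e.g.:
//   table public.users: INSERT: id[integer]:1 name[text]:'John Doe'
// Returns { table, op, columns } or null for BEGIN/COMMIT marker lines.
function parseChange(line) {
  const match = line.match(/^table (\S+): (INSERT|UPDATE|DELETE): (.*)$/);
  if (match === null) return null;
  const columns = {};
  // Each column is rendered as name[type]:value
  const colRe = /(\w+)\[([\w ]+)\]:('[^']*'|\S+)/g;
  let col;
  while ((col = colRe.exec(match[3])) !== null) {
    columns[col[1]] = col[3].replace(/^'|'$/g, ""); // strip quoting from text values
  }
  return { table: match[1], op: match[2], columns };
}

const event = parseChange(
  "table public.users: INSERT: id[integer]:1 name[text]:'John Doe'"
);
console.log(event.op);           // "INSERT"
console.log(event.columns.name); // "John Doe"
```

&lt;p&gt;Parsing is the easy part - the hard parts are buffering, acknowledging LSNs back to the slot, and surviving restarts without losing or duplicating events.&lt;/p&gt;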

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While CDC is not a new idea, it’s becoming more common as apps become more data intensive. The options are a little sparse (especially compared to other tooling), but the space is maturing. I’d recommend taking action from here by picking one solution in each category - try an open source solution (like Sequin!), start a free trial of a hosted option, and see if your cloud provider might do the trick. Then you’ll have working knowledge of your option space to make a good decision.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>eventdriven</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>How to invoke a lambda function from your database</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Fri, 27 Sep 2024 20:47:34 +0000</pubDate>
      <link>https://dev.to/sequin/how-to-invoke-a-lambda-function-from-your-database-5bjp</link>
      <guid>https://dev.to/sequin/how-to-invoke-a-lambda-function-from-your-database-5bjp</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; is a serverless compute service that lets you run code without provisioning or managing servers. It automatically scales your applications in response to incoming requests.&lt;/p&gt;

&lt;p&gt;Often, you want to trigger an AWS Lambda function when a database row changes. For example, you may want to trigger a function as a &lt;a href="https://dev.to/use-cases/side-effects"&gt;side-effect&lt;/a&gt; of a database change, or &lt;a href="https://dev.to/use-cases/fan-out"&gt;fan out work&lt;/a&gt; to multiple services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjqnpuxo3bvsm7y50uec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjqnpuxo3bvsm7y50uec.png" alt="How Sequin works" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this guide, you will learn how to set up Sequin to trigger an AWS Lambda function when a database row changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You are about to create a simple AWS Lambda function that logs a message to the console. Sequin will trigger this function by sending an HTTP POST request to the function's URL with the payload of the database row that changed.&lt;/p&gt;

&lt;p&gt;You'll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account&lt;/li&gt;
&lt;li&gt;A Sequin account&lt;/li&gt;
&lt;li&gt;A Postgres database (Sequin works with any Postgres database version 12+ - including &lt;a href="https://dev.to/guides/rds"&gt;RDS&lt;/a&gt;) with a basic &lt;code&gt;users&lt;/code&gt; table containing an &lt;code&gt;id&lt;/code&gt; column and a &lt;code&gt;name&lt;/code&gt; column.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create a Lambda function
&lt;/h2&gt;

&lt;p&gt;Start by creating a new AWS Lambda function that takes in a Sequin change event as a payload and logs the payload to the console.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a new Lambda function
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open the AWS Lambda console and click "Create function".&lt;/li&gt;
&lt;li&gt;Choose "Author from scratch".&lt;/li&gt;
&lt;li&gt;Give your function a name (e.g., "newUserHandler").&lt;/li&gt;
&lt;li&gt;Select Node.js as the runtime and whichever architecture you want to support (e.g., "arm64").&lt;/li&gt;
&lt;li&gt;Click "Create function":&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9t9zuz0f9b4s4o3m4llq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9t9zuz0f9b4s4o3m4llq.png" alt="Create a lambda" width="800" height="870"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Add function code
&lt;/h3&gt;

&lt;p&gt;Replace the default code in the Lambda function editor with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Verify the Sequin webhook secret&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;authHeader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;authorization&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;authHeader&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;authHeader&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="s2"&gt;`Bearer &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SEQUIN_WEBHOOK_SECRET&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unauthorized&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
          &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

          &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Hello &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;No name found in the payload.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;

          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Success&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
          &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error processing request:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Internal Server Error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
          &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function first checks the authorization header to make sure the request is coming from Sequin. Then it processes the payload (which contains the database row that changed) and logs the name from the payload to the console.&lt;/p&gt;
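&lt;p&gt;Before deploying, you can smoke-test this logic locally with Node. The sketch below inlines a condensed copy of the handler's auth check and payload handling, invoked with mock events (the event shape mirrors what a Function URL delivers, with a lowercased &lt;code&gt;authorization&lt;/code&gt; header):&lt;/p&gt;

```javascript
// Condensed copy of the handler's core logic for local smoke testing.
process.env.SEQUIN_WEBHOOK_SECRET = "test-secret";

const handler = async (event) => {
  const authHeader = event.headers.authorization;
  if (authHeader !== "Bearer " + process.env.SEQUIN_WEBHOOK_SECRET) {
    return { statusCode: 401, body: JSON.stringify("Unauthorized") };
  }
  const { record } = JSON.parse(event.body);
  if (record) {
    if (record.name) console.log("Hello " + record.name);
  }
  return { statusCode: 200, body: JSON.stringify("Success") };
};

// A request with the wrong secret is rejected...
handler({ headers: { authorization: "Bearer wrong" }, body: "{}" })
  .then((res) => console.log(res.statusCode)); // 401

// ...and a well-formed Sequin-style payload succeeds.
handler({
  headers: { authorization: "Bearer test-secret" },
  body: JSON.stringify({ record: { id: 1, name: "John Doe" } }),
}).then((res) => console.log(res.statusCode)); // 200
```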

&lt;p&gt;Click &lt;strong&gt;Deploy&lt;/strong&gt; to save the function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the SEQUIN_WEBHOOK_SECRET environment variable
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In the Lambda function &lt;strong&gt;Configuration&lt;/strong&gt; tab, scroll down to the "Environment variables" section.&lt;/li&gt;
&lt;li&gt;Click "Edit" and then "Add environment variable".&lt;/li&gt;
&lt;li&gt;Set the key as &lt;code&gt;SEQUIN_WEBHOOK_SECRET&lt;/code&gt; and the value to a secure secret of your choice.&lt;/li&gt;
&lt;li&gt;Click "Save".&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You will need to use this secret value in the Sequin dashboard when you create the push consumer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Lambda Function URL
&lt;/h3&gt;

&lt;p&gt;To make your Lambda function accessible via HTTP, you need to create a Function URL:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your Lambda function in the AWS Console.&lt;/li&gt;
&lt;li&gt;In the "Configuration" tab, click on "Function URL" in the left sidebar.&lt;/li&gt;
&lt;li&gt;Click "Create function URL".&lt;/li&gt;
&lt;li&gt;For "Auth type", select "NONE".&lt;/li&gt;
&lt;li&gt;Under "Configure cross-origin resource sharing (CORS)", check "Configure CORS".&lt;/li&gt;
&lt;li&gt;In the "Allowed origins" field, enter "*" (without quotes) to allow all origins for now. You can restrict this later.&lt;/li&gt;
&lt;li&gt;Click "Save".&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After saving, you'll see a Function URL. This is the URL you'll use to configure your Sequin consumer in the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Sequin Push Consumer
&lt;/h2&gt;

&lt;p&gt;Create a new Sequin push consumer that detects changes to the &lt;code&gt;users&lt;/code&gt; table and sends an HTTP POST request to the Lambda function's URL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connect Sequin to your database
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Login to your Sequin account and click the &lt;strong&gt;Add New Database&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Enter the connection details for your Postgres database.&lt;/li&gt;
&lt;li&gt;Follow the instructions to create a publication and a replication slot by running two SQL commands in your database:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="n"&gt;publication&lt;/span&gt; &lt;span class="n"&gt;sequin_pub&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;all&lt;/span&gt; &lt;span class="n"&gt;tables&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="n"&gt;pg_create_logical_replication_slot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'sequin_slot'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pgoutput'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Name your database and click the &lt;strong&gt;Connect Database&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sequin will connect to your database and ensure that it's configured properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Push Consumer
&lt;/h3&gt;

&lt;p&gt;Create a push consumer that will capture users from your database and deliver them to your Lambda function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;Consumers&lt;/strong&gt; tab and click the &lt;strong&gt;Create Consumer&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Select your &lt;code&gt;users&lt;/code&gt; table (i.e., &lt;code&gt;public.users&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;For this guide, you want to capture all changes to the &lt;code&gt;users&lt;/code&gt; table. To do this, select &lt;strong&gt;Changes&lt;/strong&gt; and click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select to capture &lt;code&gt;inserts&lt;/code&gt;, &lt;code&gt;updates&lt;/code&gt;, and &lt;code&gt;deletes&lt;/code&gt;. No need to add a filter for now. Click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the next screen, select &lt;strong&gt;Push&lt;/strong&gt; to have Sequin send the events to your webhook URL. Click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Now, give your consumer a name (e.g., &lt;code&gt;users_push_consumer&lt;/code&gt;) and in the &lt;strong&gt;HTTP Endpoint&lt;/strong&gt; section, click &lt;strong&gt;New HTTP Endpoint&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Enter the Lambda Function URL you obtained earlier. Then click to &lt;strong&gt;Add Encrypted Header&lt;/strong&gt; and add an encrypted header with the key &lt;code&gt;Authorization&lt;/code&gt; and the value &lt;code&gt;Bearer SEQUIN_WEBHOOK_SECRET&lt;/code&gt;, using the secret value you set in your Lambda function's environment variables. Click &lt;strong&gt;Create Endpoint&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Back in the tab where you were creating your consumer, click the refresh button by the &lt;strong&gt;Endpoints&lt;/strong&gt; section and select the endpoint you just created. Click &lt;strong&gt;Create Consumer&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Test end-to-end
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create a new user in your database
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;insert&lt;/span&gt; &lt;span class="k"&gt;into&lt;/span&gt;
&lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;values&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="s1"&gt;'John Doe'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Trace the change in the Sequin dashboard
&lt;/h3&gt;

&lt;p&gt;In the Sequin console, open the &lt;strong&gt;Messages&lt;/strong&gt; tab on your consumer and confirm that a message was delivered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15rwrpeo9fzwayk5g2wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15rwrpeo9fzwayk5g2wg.png" alt="Trace the change in Sequin" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Confirm the event was received by your Lambda function
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open the AWS Lambda console and navigate to your function.&lt;/li&gt;
&lt;li&gt;Click on the "Monitor" tab and then "View CloudWatch logs"&lt;/li&gt;
&lt;li&gt;In the most recent log stream, you should see a log entry: &lt;code&gt;Hello John Doe&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazqg76nhavviz5kisk8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazqg76nhavviz5kisk8p.png" alt="See the log in AWS - the lambda ran" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;Modify this example to suit your needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create Lambda functions to perform &lt;a href="https://dev.to/use-cases/side-effects"&gt;side-effects&lt;/a&gt;, &lt;a href="https://dev.to/use-cases/fan-out"&gt;fan out work&lt;/a&gt;, and more.&lt;/li&gt;
&lt;li&gt;If you need to run long-running jobs, consider using &lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions&lt;/a&gt; in tandem with Lambda functions.&lt;/li&gt;
&lt;li&gt;Tune your consumer configuration to suit your volume of work.&lt;/li&gt;
&lt;li&gt;Implement additional security measures, such as &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html" rel="noopener noreferrer"&gt;API Gateway request validation&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>postgres</category>
      <category>aws</category>
      <category>lambda</category>
      <category>webdev</category>
    </item>
    <item>
      <title>All the ways to react to changes in Supabase</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Wed, 11 Sep 2024 23:45:51 +0000</pubDate>
      <link>https://dev.to/thisisgoldman/all-the-ways-to-react-to-changes-in-supabase-3glh</link>
      <guid>https://dev.to/thisisgoldman/all-the-ways-to-react-to-changes-in-supabase-3glh</guid>
      <description>&lt;p&gt;This video covers all the ways to capture changes in Supabase. &lt;/p&gt;

&lt;p&gt;It's a video deep dive into all the topics covered in the original blog post: &lt;a href="https://blog.sequinstream.com/all-the-ways" rel="noopener noreferrer"&gt;https://blog.sequinstream.com/all-the-ways&lt;/a&gt;...&lt;/p&gt;

&lt;p&gt;Read about all the ways to capture changes in Postgres, more generally: &lt;a href="https://blog.sequinstream.com/all-the-ways" rel="noopener noreferrer"&gt;https://blog.sequinstream.com/all-the-ways&lt;/a&gt;...&lt;/p&gt;

</description>
      <category>supabase</category>
      <category>postgres</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Automatically creating Salesforce contacts when a user signs up</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Thu, 06 Jul 2023 21:23:16 +0000</pubDate>
      <link>https://dev.to/sequin/automatically-creating-salesforce-contacts-when-a-user-signs-up-2iki</link>
      <guid>https://dev.to/sequin/automatically-creating-salesforce-contacts-when-a-user-signs-up-2iki</guid>
      <description>&lt;p&gt;This guide will show you how to use &lt;a href="https://sequin.io" rel="noopener noreferrer"&gt;Sequin&lt;/a&gt; to automatically create a Salesforce contact when a user signs up for your app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases
&lt;/h2&gt;

&lt;p&gt;Creating a new Salesforce Contact when a user signs up for your app is a common use case. It allows you to keep your Salesforce instance up to date with the latest user data so your sales, support, and operations teams can work with reliable customer data.&lt;/p&gt;

&lt;p&gt;Building a reliable integration with Salesforce and your application using Sequin comes with several benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintainability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Simplifying your integration into a single transaction makes your code easier to maintain. You don't need to track down errors in your client-side code or investigate issues with your data pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guarantees&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A user missing its paired Salesforce contact is a regular, painful problem. There are many reasons why a user might not be synced to Salesforce, but a very common one is that your client-side code (often provided by an analytics tool like Segment) is being blocked by a browser extension. Or, your pipeline / ETL is hitting an error. By creating a direct server-side integration between Salesforce and your application you couple your logic for creating a user with your logic for creating a Salesforce contact. This ensures that a user is always synced to Salesforce when they are created.&lt;/p&gt;

&lt;p&gt;Also, when you create a user and a Salesforce contact in the same transaction, you can ensure that the data in your Salesforce instance is clean and consistent. Any error that would cause drift can be handled in-line. For instance, you can ensure that their email is valid before letting the user move on. You can catch duplicates, or, you can ensure that the user's first name and last name are synced to the contact to create personalization. This is especially important if you are using Salesforce as a source of truth for sales and support communications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;This tutorial will assume you've followed the Salesforce setup guide to connect your Salesforce instance and database to Sequin. If you haven't, check out our &lt;a href="https://dev.to/docs/guides/salesforce-setup"&gt;guide on syncing Salesforce with Sequin&lt;/a&gt;. Ensure you are syncing the &lt;code&gt;Contact&lt;/code&gt; collection to your database.&lt;/p&gt;

&lt;p&gt;This tutorial will use Node.js and the &lt;a href="https://node-postgres.com/" rel="noopener noreferrer"&gt;node-postgres library&lt;/a&gt; to automatically create a Salesforce contact when a user signs up for your app. We'll assume you already have a Node project set up with &lt;code&gt;pg&lt;/code&gt; installed. If you don't, you can follow the &lt;a href="https://node-postgres.com/guides/getting-started" rel="noopener noreferrer"&gt;node-postgres getting started guide&lt;/a&gt; to get up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Setup pg clients
&lt;/h2&gt;

&lt;p&gt;First, create a connection to your database using &lt;code&gt;pg&lt;/code&gt;. As a best practice, we recommend configuring two Postgres clients in your application so you can route Salesforce queries through the Sequin proxy and route other queries directly to your database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Direct client
&lt;/h3&gt;

&lt;p&gt;Configure a &lt;code&gt;pg&lt;/code&gt; client that connects directly to your database. Queries through this client will be executed directly against your database without any Sequin dependency. You should use this client for all read and write operations that don't touch your Salesforce data - for instance, creating a user account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dotenv&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;config&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Pool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;direct&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;your.database.host&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yourDatabaseName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DIRECT_DB_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Friendly reminder: never store your database credentials in your application code. Use environment variables to store your credentials and load them into your application at runtime. For node.js, you can use a &lt;code&gt;.env&lt;/code&gt; file and the &lt;a href="https://www.npmjs.com/package/dotenv" rel="noopener noreferrer"&gt;dotenv&lt;/a&gt; library.&lt;/p&gt;
&lt;/blockquote&gt;
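
&lt;p&gt;A minimal sketch of that pattern (the variable name and value below are placeholders, not credentials from this tutorial): read every credential from &lt;code&gt;process.env&lt;/code&gt; and fail fast when one is missing:&lt;/p&gt;

```javascript
// Sketch: keep credentials out of source code by reading them from
// process.env. With dotenv, entries in a local .env file are loaded into
// process.env at startup; here we set one inline so the sketch runs
// standalone.
process.env.DIRECT_DB_PASSWORD = "example-password"; // stand-in for a .env entry

function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined) {
    // Failing at startup beats a confusing connection error later.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

console.log(requireEnv("DIRECT_DB_PASSWORD")); // prints: example-password
```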

&lt;h3&gt;
  
  
  Sequin proxy client
&lt;/h3&gt;

&lt;p&gt;Configure another &lt;code&gt;pg&lt;/code&gt; client - this time connecting to the Sequin Postgres Proxy.&lt;/p&gt;

&lt;p&gt;You'll find the connection details for your Sequin proxy in the Sequin console. Select your sync and then navigate to the &lt;strong&gt;Connection instructions&lt;/strong&gt; tab to find your proxy user, host, and password. Then enter them into another &lt;code&gt;pg&lt;/code&gt; client configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;salesforceProxy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sequin_▒▒▒▒▒▒.▒▒▒▒▒▒&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;proxy.sequindb.io&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yourDatabaseName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SALESFORCE_DB_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Queries through this client will pass through Sequin to ensure any mutations are applied to Salesforce. You should use this client for write operations that touch your Salesforce data - in this tutorial, creating a new contact.&lt;/p&gt;
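&lt;p&gt;One way to keep this routing rule in one place is a small helper that picks the client by table name. This is just a sketch, not part of the Sequin API - the two pools are stubbed here so the routing logic runs standalone:&lt;/p&gt;

```javascript
// Sketch: route queries by schema so writes to synced Salesforce tables go
// through the Sequin proxy and everything else hits the database directly.
// The two pools are stubbed so this runs without a database.
const direct = { name: "direct" };
const salesforceProxy = { name: "salesforceProxy" };

function clientFor(table) {
  // Tables in the `salesforce` schema must be written through the proxy
  // so Sequin can apply the mutation to Salesforce.
  if (table.startsWith("salesforce.")) {
    return salesforceProxy;
  }
  return direct;
}

console.log(clientFor("salesforce.contact").name); // prints: salesforceProxy
console.log(clientFor("users").name); // prints: direct
```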

&lt;h2&gt;
  
  
  Step 2: Create contact
&lt;/h2&gt;

&lt;p&gt;Now that you have two &lt;code&gt;pg&lt;/code&gt; clients configured, you can write a query to create a new contact in Salesforce.&lt;/p&gt;

&lt;p&gt;You'll use the &lt;code&gt;salesforceProxy&lt;/code&gt; Postgres client to execute this query so that Sequin can sync the new contact to Salesforce. Here is a simple function that will create a new contact in Salesforce:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createNewContact&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;salesforceProxy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s2"&gt;`INSERT INTO salesforce.contact (first_name, last_name, email, user_id__c) VALUES ('&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;', '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;', '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;', '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;');`&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error creating contact:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;release&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function is designed to be called after a user is created. It takes in the &lt;code&gt;user&lt;/code&gt; object and the user's &lt;code&gt;id&lt;/code&gt;, opens a connection to the database, and inserts a new contact into Salesforce. The &lt;code&gt;user_id__c&lt;/code&gt; field is a custom field on the Salesforce contact object that links the Salesforce contact to the user. If there is an error - for instance, a duplicate contact is found - the error is logged and re-thrown so you can handle it in your application code - perhaps letting the new user know that they already have an account. Whether or not the insert succeeds, the &lt;code&gt;finally&lt;/code&gt; block releases the connection back to the pool.&lt;/p&gt;
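
&lt;p&gt;For example, here is one way the thrown error might surface in a signup handler. &lt;code&gt;createNewContact&lt;/code&gt; is stubbed to simulate a failure so the handling path runs without a database, and the error message is hypothetical:&lt;/p&gt;

```javascript
// Stub standing in for the real createNewContact above; it simulates a
// failure surfaced through the Sequin proxy, e.g. a duplicate detected
// by Salesforce validation rules.
async function createNewContact(user, id) {
  throw new Error("DUPLICATES_DETECTED");
}

async function handleSignup(user, id) {
  try {
    await createNewContact(user, id);
    return { ok: true };
  } catch (error) {
    // Map the Salesforce-side failure to a user-facing message instead of
    // letting signup continue with drifted data.
    return { ok: false, message: "An account with this email may already exist." };
  }
}

async function main() {
  const res = await handleSignup({ email: "pam@dundermifflin.com" }, 42);
  console.log(res.ok, res.message);
}

main();
// prints: false An account with this email may already exist.
```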

&lt;h2&gt;
  
  
  Step 3: Create user
&lt;/h2&gt;

&lt;p&gt;Now, incorporate the &lt;code&gt;createNewContact()&lt;/code&gt; function into your user creation flow. Here is a simple example of a user creation flow that uses the &lt;code&gt;createNewContact&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createNewUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;direct&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;BEGIN&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;INSERT INTO users (first_name, last_name, email) VALUES ($1, $2, $3) RETURNING id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createNewContact&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;newUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;COMMIT&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;New user created successfully!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ROLLBACK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error creating user:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;release&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function is designed to be called when a user signs up for your app. It takes in the &lt;code&gt;user&lt;/code&gt; object, opens a connection to the database, and starts a new transaction. It creates the new user, then calls &lt;code&gt;createNewContact&lt;/code&gt; to create the matching contact in Salesforce. If both succeed, the transaction is committed. If either fails - for instance, the user's email is invalid - the transaction is rolled back, so a user is never created without a matching Salesforce contact.&lt;/p&gt;
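
&lt;p&gt;To see the commit/rollback ordering without a database, here is a standalone sketch using a fake client that records each query. &lt;code&gt;createNewContact&lt;/code&gt; is stubbed to fail so the rollback path runs:&lt;/p&gt;

```javascript
// Fake pg client that records the first word of each query so the
// transaction flow can run standalone.
const queries = [];
const client = {
  async query(sql) {
    queries.push(sql.split(" ")[0]); // records BEGIN / INSERT / ROLLBACK / COMMIT
    return { rows: [{ id: 1 }] };
  },
  release() {},
};

// Stub standing in for the real createNewContact; it fails, which should
// roll back the user insert.
async function createNewContact(user, id) {
  throw new Error("contact creation failed");
}

async function createNewUser(user) {
  try {
    await client.query("BEGIN");
    const newUser = await client.query("INSERT INTO users ...");
    await createNewContact(user, newUser.rows[0].id);
    await client.query("COMMIT");
  } catch (error) {
    // The user insert is undone, so no user exists without a contact.
    await client.query("ROLLBACK");
  } finally {
    client.release();
  }
}

createNewUser({ email: "jim@dundermifflin.com" }).then(function () {
  console.log(queries.join(","));
});
// prints: BEGIN,INSERT,ROLLBACK
```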

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;You've successfully created a new Salesforce contact when a user signs up for your app. This approach ensures every user is paired with a Salesforce contact upon creation. It also allows you to leverage your business logic in Salesforce to ensure new users have valid emails and are not duplicated, for example.&lt;/p&gt;

&lt;p&gt;From here, you can also use this approach to create other Salesforce objects when a user signs up - for instance, mapping the user to a new or existing Salesforce account. Or automatically creating a new opportunity and assigning it to a sales rep.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>postgres</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Integrate with HubSpot using Prisma</title>
      <dc:creator>Eric Goldman</dc:creator>
      <pubDate>Thu, 06 Apr 2023 20:56:17 +0000</pubDate>
      <link>https://dev.to/sequin/integrate-with-hubspot-using-prisma-54a1</link>
      <guid>https://dev.to/sequin/integrate-with-hubspot-using-prisma-54a1</guid>
      <description>&lt;p&gt;Sequin lets you sync your HubSpot data to your database in real-time. You can then use ORMs, like &lt;a href="https://www.prisma.io/" rel="noopener noreferrer"&gt;Prisma&lt;/a&gt;, to rapidly build your HubSpot integration.&lt;/p&gt;

&lt;p&gt;In this playbook, you'll learn how to set up HubSpot to work with Prisma using Sequin. You'll then write your first queries using the Prisma client and explore the development lifecycle as you run migrations and scale your integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F001_hubspot_sequin_prisma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F001_hubspot_sequin_prisma.png" alt="Data flow from HubSpot to Postgres to Prisma" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting with an existing project
&lt;/h2&gt;

&lt;p&gt;You're likely already using Prisma as your ORM. So this playbook starts from an existing Prisma project and shows you how to add HubSpot data to your stack.&lt;/p&gt;

&lt;p&gt;We assume you've already followed &lt;a href="https://www.prisma.io/docs/getting-started/quickstart" rel="noopener noreferrer"&gt;Prisma's quickstart&lt;/a&gt; to create your TypeScript project, install Prisma, connect your database, introspect your schema, and query with the Prisma Client.&lt;/p&gt;

&lt;p&gt;Specifically, this playbook builds on top of an existing Prisma project connected to a PostgreSQL database with one schema called &lt;code&gt;public&lt;/code&gt; containing two tables: &lt;code&gt;users&lt;/code&gt; and &lt;code&gt;orgs&lt;/code&gt;. Each &lt;code&gt;user&lt;/code&gt; is a part of an &lt;code&gt;org&lt;/code&gt; — represented in the database with a foreign key relationship:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F002_initial_erd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F002_initial_erd.png" alt="Original schema" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This schema is represented as a Prisma model in the &lt;code&gt;schema.prisma&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;generator&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prisma-client-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;datasource&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;postgresql&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;url&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;orgs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;                  &lt;span class="nx"&gt;Int&lt;/span&gt;     &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;autoincrement&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;subscription_status&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;users&lt;/span&gt;               &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;         &lt;span class="nx"&gt;Int&lt;/span&gt;     &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;autoincrement&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="nx"&gt;first_name&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;last_name&lt;/span&gt;  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;email&lt;/span&gt;      &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;org_id&lt;/span&gt;     &lt;span class="nx"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;org&lt;/span&gt;        &lt;span class="nx"&gt;orgs&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;   &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;org_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;onDelete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NoAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onUpdate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NoAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;users_orgs_id_fk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each table has been translated into a model. And the foreign key &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-schema/relations" rel="noopener noreferrer"&gt;relationship&lt;/a&gt; between &lt;code&gt;users&lt;/code&gt; and &lt;code&gt;orgs&lt;/code&gt; has been defined with a &lt;code&gt;@relation&lt;/code&gt; attribute.&lt;/p&gt;

&lt;p&gt;From the foundation of this existing Prisma project, you'll now add your HubSpot schema to Prisma using Sequin.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup your HubSpot sync
&lt;/h2&gt;

&lt;p&gt;To build a sync between HubSpot and your database, Sequin will guide you through the process of authenticating with HubSpot, selecting the data you want to sync, and connecting to your database. Read our &lt;a href="https://dev.to/integrations/hubspot/setup"&gt;HubSpot setup guide&lt;/a&gt; for step-by-step instructions.&lt;/p&gt;

&lt;p&gt;For the purposes of this playbook, you'll want to sync at least two HubSpot objects mapped together with an &lt;code&gt;association&lt;/code&gt;. For instance, you can sync the &lt;code&gt;Contact&lt;/code&gt; object, &lt;code&gt;Deal&lt;/code&gt; object, and &lt;code&gt;Contact with Deal&lt;/code&gt; associations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F003_select_objects.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F003_select_objects.png" alt="Select objects" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get comfortable with your schema and Prisma's workflows, you don't need to sync every HubSpot property. In the Sequin Console, configure your sync to include just a handful of properties:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F004_select_properties.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F004_select_properties.png" alt="Select columns" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For instance, sync some standard properties for the &lt;code&gt;Contact&lt;/code&gt; object. For reference, these are the &lt;code&gt;Contact&lt;/code&gt; properties used in the remainder of this playbook:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;industry&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;associatedcompanyid&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;numemployees&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;website&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;company&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;jobtitle&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;lastname&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;firstname&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As a helpful example, we include a custom property called &lt;code&gt;user_id&lt;/code&gt;, which maps each HubSpot &lt;code&gt;contact&lt;/code&gt; to a &lt;code&gt;user&lt;/code&gt;. We'll explore this relationship more later in the playbook.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Do the same for the &lt;code&gt;Deal&lt;/code&gt; object. Here are the &lt;code&gt;Deal&lt;/code&gt; properties used in this playbook:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;createdate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;closedate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pipeline&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dealstage&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;amount&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dealname&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hs_priority&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; You can also edit and lock the column name for each property you sync. These tools allow you to create simple naming conventions in your database and buffer your database from breaking changes in HubSpot.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With HubSpot configured in Sequin, you can now &lt;a href="https://dev.to/self-hosted"&gt;connect your database&lt;/a&gt; to Sequin and create your sync.&lt;/p&gt;

&lt;p&gt;After you click &lt;strong&gt;Create&lt;/strong&gt;, Sequin will begin backfilling all the &lt;code&gt;Contacts&lt;/code&gt;, &lt;code&gt;Deals&lt;/code&gt;, and &lt;code&gt;Contact - Deal&lt;/code&gt; associations in your HubSpot instance to your database.&lt;/p&gt;

&lt;p&gt;Within a minute, you'll see the new &lt;code&gt;hubspot&lt;/code&gt; schema and tables in your database:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F005_full_schema.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F005_full_schema.png" alt="Complete schema" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now configure Prisma to work with your HubSpot data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add HubSpot to your Prisma Schema
&lt;/h2&gt;

&lt;p&gt;Next, you'll update your &lt;code&gt;schema.prisma&lt;/code&gt; file and then re-generate the Prisma Client to work with the HubSpot data in your database.&lt;/p&gt;

&lt;p&gt;Before you do, it's worth building a mental model of this process.&lt;/p&gt;

&lt;p&gt;Your database now contains two schemas:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A &lt;code&gt;public&lt;/code&gt; schema&lt;/strong&gt; that &lt;em&gt;you&lt;/em&gt; own and control. You're probably used to using &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-migrate/mental-model#what-is-prisma-migrate" rel="noopener noreferrer"&gt;Prisma's Migrate tools&lt;/a&gt; to make changes to this schema. In Prisma, these are called "model/entity-first migrations." (Some Prisma users prefer to &lt;code&gt;create&lt;/code&gt; new tables and &lt;code&gt;update&lt;/code&gt; columns in the database via SQL. These are called "database-first migrations.")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A &lt;code&gt;hubspot&lt;/code&gt; schema&lt;/strong&gt; that &lt;em&gt;Sequin&lt;/em&gt; owns and controls to maintain your HubSpot sync. All migrations in this schema are done directly to the database via SQL commands that originate from Sequin. So, these are database-first migrations &lt;em&gt;that Sequin runs&lt;/em&gt;. If you add or drop columns in this schema using Prisma, you'll break the sync. Hence, all migrations in this schema are performed in the Sequin console. After Sequin applies the migrations to your database, you'll add the changes to Prisma.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, when working with your &lt;code&gt;hubspot&lt;/code&gt; schema, you'll always follow a &lt;strong&gt;database-first migration&lt;/strong&gt; pattern in Prisma. This means you'll &lt;code&gt;pull&lt;/code&gt; the schema into Prisma models as opposed to pushing the schema from Prisma models into your database. Here's how.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F006_migration_flow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F006_migration_flow.png" alt="Flow diagram" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Turn on multiple schema support
&lt;/h3&gt;

&lt;p&gt;Your database now contains two schemas. To configure Prisma to work with more than one schema, you need to turn on &lt;a href="https://www.prisma.io/docs/guides/database/multi-schema" rel="noopener noreferrer"&gt;multi-schema support&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To do so, update your &lt;code&gt;schema.prisma&lt;/code&gt; file as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;generator&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prisma-client-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;previewFeatures&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;multiSchema&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;datasource&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;postgresql&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;url&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;schemas&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;orgs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;                  &lt;span class="nx"&gt;Int&lt;/span&gt;     &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;autoincrement&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;subscription_status&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;users&lt;/span&gt;               &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;         &lt;span class="nx"&gt;Int&lt;/span&gt;     &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;autoincrement&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="nx"&gt;first_name&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;last_name&lt;/span&gt;  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;email&lt;/span&gt;      &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;org_id&lt;/span&gt;     &lt;span class="nx"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;org&lt;/span&gt;        &lt;span class="nx"&gt;orgs&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;   &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;org_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;onDelete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NoAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onUpdate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NoAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;users_orgs_id_fk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add &lt;code&gt;previewFeatures = ["multiSchema"]&lt;/code&gt; in the &lt;code&gt;generator&lt;/code&gt; block. This turns on multi-schema support.&lt;/li&gt;
&lt;li&gt;List your schemas in the &lt;code&gt;datasource&lt;/code&gt; block. In this case: &lt;code&gt;schemas  = ["public", "hubspot"]&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Designate which schema each of your Prisma models belongs to with a &lt;code&gt;@@schema("public")&lt;/code&gt; attribute.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Prisma is now ready to handle multiple schemas when you begin to introspect your database in the next step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introspect your database
&lt;/h3&gt;

&lt;p&gt;The Prisma CLI provides &lt;a href="https://www.prisma.io/docs/concepts/components/introspection#introspection-with-an-existing-schema" rel="noopener noreferrer"&gt;introspection tools&lt;/a&gt; to automatically update your &lt;code&gt;schema.prisma&lt;/code&gt; models to reflect the schema in your database.&lt;/p&gt;

&lt;p&gt;To do so, ensure you have the &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-cli/installation" rel="noopener noreferrer"&gt;Prisma CLI installed&lt;/a&gt;, navigate to the root directory of your project, and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;prisma db pull
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Prisma will then retrieve the schema from your database and map your tables, columns, indexes, and constraints into Prisma models, fields, indexes, and attributes in your &lt;code&gt;schema.prisma&lt;/code&gt; file.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Caution:&lt;/strong&gt; If you've manually altered your &lt;code&gt;schema.prisma&lt;/code&gt; file, some changes will be &lt;a href="https://www.prisma.io/docs/concepts/components/introspection#introspection-with-an-existing-schema" rel="noopener noreferrer"&gt;overwritten&lt;/a&gt; when you run &lt;code&gt;db pull&lt;/code&gt;. This includes any &lt;code&gt;@relation&lt;/code&gt; fields you've manually defined in your models. You can instead introspect just your &lt;code&gt;hubspot&lt;/code&gt; schema to avoid these issues. Learn how in the &lt;a href="https://www.prisma.io/docs/concepts/components/introspection#introspecting-only-a-subset-of-your-database-schema" rel="noopener noreferrer"&gt;Prisma docs&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
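&lt;p&gt;One way to do that (a sketch, assuming you only want Sequin's tables refreshed): temporarily list only the &lt;code&gt;hubspot&lt;/code&gt; schema in your &lt;code&gt;datasource&lt;/code&gt; block before running &lt;code&gt;db pull&lt;/code&gt;, since introspection only touches the schemas you list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  // Only "hubspot" is introspected; restore ["public", "hubspot"] afterward
  schemas  = ["hubspot"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;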

&lt;p&gt;After introspecting your database, your &lt;code&gt;schema.prisma&lt;/code&gt; file will be updated to include your &lt;code&gt;hubspot&lt;/code&gt; schema and the underlying &lt;code&gt;contact&lt;/code&gt;, &lt;code&gt;deal&lt;/code&gt;, and &lt;code&gt;associations_contact_deal&lt;/code&gt; models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;generator&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prisma-client-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;previewFeatures&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;multiSchema&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;datasource&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;postgresql&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;url&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;schemas&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;orgs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;                  &lt;span class="nx"&gt;Int&lt;/span&gt;     &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;autoincrement&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;subscription_status&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;users&lt;/span&gt;               &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;         &lt;span class="nx"&gt;Int&lt;/span&gt;       &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;autoincrement&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="nx"&gt;first_name&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;last_name&lt;/span&gt;  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;email&lt;/span&gt;      &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;org_id&lt;/span&gt;     &lt;span class="nx"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;org&lt;/span&gt;        &lt;span class="nx"&gt;orgs&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;     &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;org_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;onDelete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NoAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onUpdate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NoAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;users_orgs_id_fk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;contact&lt;/span&gt;    &lt;span class="nx"&gt;contact&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;contact_id&lt;/span&gt;       &lt;span class="nb"&gt;String&lt;/span&gt;
  &lt;span class="nx"&gt;contact&lt;/span&gt;          &lt;span class="nx"&gt;contact&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;contact_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;deal_id&lt;/span&gt;          &lt;span class="nb"&gt;String&lt;/span&gt;
  &lt;span class="nx"&gt;deal&lt;/span&gt;             &lt;span class="nx"&gt;deal&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;deal_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;sync_hash&lt;/span&gt;        &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_hash&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_inserted_at&lt;/span&gt; &lt;span class="nx"&gt;DateTime&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_inserted_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_updated_at&lt;/span&gt;  &lt;span class="nx"&gt;DateTime&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;updated_at&lt;/span&gt;       &lt;span class="nx"&gt;DateTime&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;labels&lt;/span&gt;           &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;contact_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;deal_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;associations_contact_deal_pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;contact&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;                        &lt;span class="nb"&gt;String&lt;/span&gt;                      &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CONTACT_pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_inserted_at&lt;/span&gt;          &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_inserted_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_updated_at&lt;/span&gt;           &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;updated_at&lt;/span&gt;                &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;associatedcompanyid&lt;/span&gt;       &lt;span class="nx"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Decimal&lt;/span&gt;
  &lt;span class="nx"&gt;company&lt;/span&gt;                   &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;firstname&lt;/span&gt;                 &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;industry&lt;/span&gt;                  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;jobtitle&lt;/span&gt;                  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;lastname&lt;/span&gt;                  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;numemployees&lt;/span&gt;              &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;user_id&lt;/span&gt;                   &lt;span class="nx"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt;                      &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                      &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;website&lt;/span&gt;                   &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt; &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;index&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;updated_at&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at_idx&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;deal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;                        &lt;span class="nb"&gt;String&lt;/span&gt;                      &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DEAL_pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_inserted_at&lt;/span&gt;          &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_inserted_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_updated_at&lt;/span&gt;           &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;updated_at&lt;/span&gt;                &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;amount&lt;/span&gt;                    &lt;span class="nx"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Decimal&lt;/span&gt;
  &lt;span class="nx"&gt;closedate&lt;/span&gt;                 &lt;span class="nx"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                   &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;createdate&lt;/span&gt;                &lt;span class="nx"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                   &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;dealname&lt;/span&gt;                  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;dealstage&lt;/span&gt;                 &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;hs_priority&lt;/span&gt;               &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;pipeline&lt;/span&gt;                  &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt; &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Define relationships
&lt;/h3&gt;

&lt;p&gt;Out of the box, the schema generated via &lt;code&gt;prisma db pull&lt;/code&gt; is almost entirely workable. But because Sequin doesn't enforce foreign key constraints, Prisma can't detect the relationships that exist across your &lt;code&gt;contact&lt;/code&gt; and &lt;code&gt;deal&lt;/code&gt; tables via the &lt;code&gt;associations_contact_deal&lt;/code&gt; relation table. You'll add these relationships to your &lt;code&gt;schema.prisma&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;To define a many-to-many relationship using a &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-schema/relations/many-to-many-relations#relation-tables" rel="noopener noreferrer"&gt;relation table&lt;/a&gt;, you need to tell Prisma that the &lt;code&gt;deal_id&lt;/code&gt; and &lt;code&gt;contact_id&lt;/code&gt; fields in the &lt;code&gt;associations_contact_deal&lt;/code&gt; model relate to the &lt;code&gt;id&lt;/code&gt; field on the &lt;code&gt;deal&lt;/code&gt; and &lt;code&gt;contact&lt;/code&gt; models respectively. You'll do this by adding the two relation scalars:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;contact&lt;/span&gt;          &lt;span class="nx"&gt;contact&lt;/span&gt;  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;contact_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;contact_id&lt;/span&gt;       &lt;span class="nb"&gt;String&lt;/span&gt;
  &lt;span class="nx"&gt;deal&lt;/span&gt;             &lt;span class="nx"&gt;deal&lt;/span&gt;     &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;deal_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;deal_id&lt;/span&gt;          &lt;span class="nb"&gt;String&lt;/span&gt;
  &lt;span class="nx"&gt;sync_hash&lt;/span&gt;        &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_hash&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_inserted_at&lt;/span&gt; &lt;span class="nx"&gt;DateTime&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_inserted_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_updated_at&lt;/span&gt;  &lt;span class="nx"&gt;DateTime&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;updated_at&lt;/span&gt;       &lt;span class="nx"&gt;DateTime&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;labels&lt;/span&gt;           &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;contact_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;deal_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;associations_contact_deal_pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, on the &lt;code&gt;deal&lt;/code&gt; and &lt;code&gt;contact&lt;/code&gt; models, you need to define the other side of this many-to-many relationship by pointing a new field back to the &lt;code&gt;associations_contact_deal&lt;/code&gt; model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;deal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;               &lt;span class="nb"&gt;String&lt;/span&gt;                      &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DEAL_pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_inserted_at&lt;/span&gt; &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_inserted_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_updated_at&lt;/span&gt;  &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;updated_at&lt;/span&gt;       &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;amount&lt;/span&gt;           &lt;span class="nx"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Decimal&lt;/span&gt;
  &lt;span class="nx"&gt;closedate&lt;/span&gt;        &lt;span class="nx"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                   &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;createdate&lt;/span&gt;       &lt;span class="nx"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                   &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;dealname&lt;/span&gt;         &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;dealstage&lt;/span&gt;        &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;hs_priority&lt;/span&gt;      &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;pipeline&lt;/span&gt;         &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;contacts&lt;/span&gt;         &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;contact&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;                  &lt;span class="nb"&gt;String&lt;/span&gt;                      &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CONTACT_pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_inserted_at&lt;/span&gt;    &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_inserted_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_updated_at&lt;/span&gt;     &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;updated_at&lt;/span&gt;          &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;associatedcompanyid&lt;/span&gt; &lt;span class="nx"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Decimal&lt;/span&gt;
  &lt;span class="nx"&gt;company&lt;/span&gt;             &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;firstname&lt;/span&gt;           &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;industry&lt;/span&gt;            &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;jobtitle&lt;/span&gt;            &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;lastname&lt;/span&gt;            &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;numemployees&lt;/span&gt;        &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;user_id&lt;/span&gt;             &lt;span class="nx"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;website&lt;/span&gt;             &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;deals&lt;/span&gt;               &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;index&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;updated_at&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at_idx&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you may recall, you can also define a &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-schema/relations/one-to-one-relations" rel="noopener noreferrer"&gt;one-to-one relationship&lt;/a&gt; between the &lt;code&gt;user_id&lt;/code&gt; field in the &lt;code&gt;contact&lt;/code&gt; model and the &lt;code&gt;id&lt;/code&gt; field on the &lt;code&gt;users&lt;/code&gt; model. This makes it possible to query across your internal data model and your HubSpot data. To do so, add one more relation scalar to your &lt;code&gt;contact&lt;/code&gt; model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="nx"&gt;contact&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt;                  &lt;span class="nb"&gt;String&lt;/span&gt;                      &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CONTACT_pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_inserted_at&lt;/span&gt;    &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_inserted_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;sync_updated_at&lt;/span&gt;     &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_sync_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;updated_at&lt;/span&gt;          &lt;span class="nx"&gt;DateTime&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;associatedcompanyid&lt;/span&gt; &lt;span class="nx"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Decimal&lt;/span&gt;
  &lt;span class="nx"&gt;company&lt;/span&gt;             &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;firstname&lt;/span&gt;           &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;industry&lt;/span&gt;            &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;jobtitle&lt;/span&gt;            &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;lastname&lt;/span&gt;            &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;numemployees&lt;/span&gt;        &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;user_id&lt;/span&gt;             &lt;span class="nx"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt;                &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;                      &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;references&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;website&lt;/span&gt;             &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
  &lt;span class="nx"&gt;deals&lt;/span&gt;               &lt;span class="nx"&gt;associations_contact_deal&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;index&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;updated_at&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;_updated_at_idx&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;@@&lt;/span&gt;&lt;span class="nd"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hubspot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Generate your Prisma Client
&lt;/h3&gt;

&lt;p&gt;All the relationships in your data model are now defined in your &lt;code&gt;schema.prisma&lt;/code&gt; file. The last step before writing your first query is to re-generate your Prisma client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;prisma generate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Query HubSpot using the Prisma Client
&lt;/h2&gt;

&lt;p&gt;Prisma gives you a modern, intuitive API for querying your HubSpot data. For instance, you can return all your HubSpot deals in one simple query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// index.js
async function get_all_deals() {
  const deals = await prisma.deal.findMany();

  console.log(deals);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No pagination. No authentication token. No metering your requests through a rate limiter.&lt;/p&gt;

&lt;p&gt;Or, you can return all the deals at a certain stage as well as all the contacts associated with those deals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// index.js
async function get_qualified_deals() {
  const deals = await prisma.deal.findMany({
    where: {
      dealstage: "qualifiedtobuy",
    },
    include: {
      contacts: {
        include: {
          contact: true,
        },
      },
    },
  });

  console.log(deals);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You'll note that in your IDE, you get helpful type warnings and type-ahead support as you write these queries—niceties missing from the HubSpot API and SDK.&lt;/p&gt;

&lt;p&gt;More impactful to your productivity, you can query HubSpot and your internal data together. For instance, you can query for all the &lt;code&gt;deals&lt;/code&gt; associated to a specific &lt;code&gt;user&lt;/code&gt; in your &lt;code&gt;public&lt;/code&gt; schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;get_user_deals&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user_deals&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUnique&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;eric&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;sequin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;io&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;contact&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;deals&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;deal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user_deals&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query builds on the relationships you defined in your &lt;code&gt;schema.prisma&lt;/code&gt; file to return the deals related to &lt;code&gt;eric@sequin.io&lt;/code&gt;. In one Prisma query, you do the work of a SQL query paired with three nested calls to the HubSpot API.&lt;/p&gt;
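&lt;p&gt;Because the &lt;code&gt;include&lt;/code&gt; tree mirrors the relation table, the result of &lt;code&gt;get_user_deals()&lt;/code&gt; is nested several levels deep. If you only need the deal records, you can flatten the result in plain JavaScript. This is a sketch under stated assumptions: the &lt;code&gt;flattenUserDeals&lt;/code&gt; helper and the sample object are illustrative, and it assumes Prisma generated the &lt;code&gt;users&lt;/code&gt;-to-&lt;code&gt;contact&lt;/code&gt; back-relation as a list:&lt;/p&gt;

```javascript
// A minimal sketch of post-processing the nested result from
// get_user_deals(). The shape below is illustrative: it mirrors the
// include tree (users -> contact -> deals -> deal) and assumes the
// back-relation on users is a list of contacts.
function flattenUserDeals(userDeals) {
  if (!userDeals || !userDeals.contact) return [];
  return userDeals.contact.flatMap((contact) =>
    contact.deals.map((assoc) => assoc.deal)
  );
}

// Hypothetical sample data standing in for a real findUnique result.
const sample = {
  email: "eric@sequin.io",
  contact: [
    {
      id: "contact_1",
      deals: [
        { deal: { id: "deal_1", dealname: "Acme renewal" } },
        { deal: { id: "deal_2", dealname: "Initech expansion" } },
      ],
    },
  ],
};

console.log(flattenUserDeals(sample).map((d) => d.dealname));
```

&lt;p&gt;The same helper works no matter how many contacts back the user, since the relation table rows are collapsed into a flat array of deals.&lt;/p&gt;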

&lt;h2&gt;
  
  
  Migrations
&lt;/h2&gt;

&lt;p&gt;Inevitably, the data you need from HubSpot will change, and you'll need to migrate your Prisma schema. While you should consult a more &lt;a href="https://rtpg.co/2021/06/07/changes-checklist.html" rel="noopener noreferrer"&gt;comprehensive guide to migrations&lt;/a&gt; to avoid downtime in your application, here is the order of operations to consider when using Sequin with Prisma.&lt;/p&gt;

&lt;p&gt;As noted above, your &lt;code&gt;hubspot&lt;/code&gt; schema is managed by Sequin. For simplicity, all migrations start in the Sequin Console and are applied to Prisma as database-first migrations. As a result, you will not use Prisma Migrate when making changes to the HubSpot objects and properties syncing to your database.&lt;/p&gt;

&lt;p&gt;Here are some common scenarios:&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding or removing a HubSpot property
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; If you're about to remove a HubSpot property, first remove it from your app and your Prisma Client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Add/remove the property to/from your sync by editing your table and column mapping in the Sequin Console.&lt;/p&gt;

&lt;p&gt;When you click &lt;strong&gt;Update&lt;/strong&gt;, Sequin will immediately migrate your database to add the new property as a new column or drop the column related to the property you removed. In the case where you added a new property, Sequin will begin the backfill process, syncing the property to your database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; If you are adding a new property, you'll now &lt;strong&gt;manually&lt;/strong&gt; add the field associated with the new column to the appropriate model in your &lt;code&gt;schema.prisma&lt;/code&gt; file. Doing this by hand, rather than re-introspecting your database, preserves the relation scalars you defined earlier.&lt;/p&gt;
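&lt;p&gt;For instance, if you synced a hypothetical &lt;code&gt;hs_forecast_category&lt;/code&gt; property to the &lt;code&gt;deal&lt;/code&gt; table, the manual edit would be a single new field on the &lt;code&gt;deal&lt;/code&gt; model (the property name and type here are illustrative):&lt;/p&gt;

```prisma
model deal {
  // ...existing fields unchanged...
  hs_forecast_category String?   // hypothetical property newly synced by Sequin

  @@schema("hubspot")
}
```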

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Run the &lt;code&gt;prisma generate&lt;/code&gt; command in your terminal to update the Prisma Client.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding or removing a HubSpot Object
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; If you're about to remove a HubSpot object, first remove it from your app and your Prisma Client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Add/remove the object to/from your sync by editing your table and column mapping in the Sequin Console.&lt;/p&gt;

&lt;p&gt;When you click &lt;strong&gt;Update&lt;/strong&gt;, Sequin will run a migration to create or drop the tables related to your change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; If you are adding a new HubSpot object to your sync, you'll now add these new tables as models in your &lt;code&gt;schema.prisma&lt;/code&gt; file. You have two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Manually&lt;/strong&gt; add or remove the appropriate models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;⚠️ Partial Introspect Workaround ⚠️:&lt;/strong&gt; Partially introspect your database. First, save a copy of your existing &lt;code&gt;schema.prisma&lt;/code&gt; file. Then follow Prisma's guide for &lt;a href="https://www.prisma.io/docs/concepts/components/introspection#introspecting-only-a-subset-of-your-database-schema" rel="noopener noreferrer"&gt;introspecting just a subset of your database&lt;/a&gt;: create a new database user that can only access the new tables Sequin added, introspect with that user, and append the newly generated models to your saved &lt;code&gt;schema.prisma&lt;/code&gt; file. This is often the more straightforward path when adding an object to your HubSpot sync.&lt;/li&gt;
&lt;/ol&gt;
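&lt;p&gt;As a rough sketch of the workaround, the restricted user might be created like this (the user name, password, and &lt;code&gt;deal&lt;/code&gt; table are placeholders for whatever Sequin created in your &lt;code&gt;hubspot&lt;/code&gt; schema):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A user that can only see the newly added tables
CREATE USER introspect_only WITH PASSWORD 'replace-me';
GRANT USAGE ON SCHEMA hubspot TO introspect_only;
GRANT SELECT ON hubspot.deal TO introspect_only;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Point a temporary connection string at this user, run Prisma's introspection command (&lt;code&gt;prisma db pull&lt;/code&gt; in current versions), and append the generated models to your saved &lt;code&gt;schema.prisma&lt;/code&gt;.&lt;/p&gt;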

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Run the &lt;code&gt;prisma generate&lt;/code&gt; command in your terminal to update the Prisma Client.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development lifecycle
&lt;/h2&gt;

&lt;p&gt;You aren't using Prisma Migrate to track the migrations related to changes in your &lt;code&gt;hubspot&lt;/code&gt; schema and the underlying Sequin sync. So how does this all work across dev and staging environments?&lt;/p&gt;

&lt;p&gt;To support development and staging environments, you'll set up a dev HubSpot account and a dev Sequin sync. You'll then use Sequin's &lt;strong&gt;copy config&lt;/strong&gt; setting to copy changes from the dev sync and database to your production sync and database. Here is the flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F007_dev_flow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.sequin.io%2Fassets%2Fhubspot-prisma%2F007_dev_flow.png" alt="Migrations" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Set up a &lt;a href="https://developers.hubspot.com/" rel="noopener noreferrer"&gt;HubSpot development account&lt;/a&gt; to get a separate HubSpot instance you can use for development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Sync your HubSpot development account as a new sync in Sequin, following the steps at the start of this guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Make changes and migrations using this development account and sync.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; When you are ready to deploy these changes to production, copy the changes from your development sync in Sequin to your production sync:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, make sure any new properties or settings in your dev HubSpot instance have been applied to your production instance.&lt;/li&gt;
&lt;li&gt;Then, click to edit your production sync in Sequin. In the &lt;strong&gt;Select tables and columns&lt;/strong&gt; section, click the &lt;strong&gt;Copy config from...&lt;/strong&gt; button. Select your dev sync to apply your development sync configuration to your production sync.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Update&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sequin will run a migration to apply any schema changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; In Prisma, update your &lt;code&gt;schema.prisma&lt;/code&gt; file to reflect the model changes from your dev environment, then run the &lt;code&gt;prisma generate&lt;/code&gt; command to update your client.&lt;/p&gt;
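&lt;p&gt;Assuming you follow Prisma's usual convention of reading the connection string from an environment variable, the same &lt;code&gt;schema.prisma&lt;/code&gt; can serve both environments:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL") // dev or production Sequin-synced database
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Set &lt;code&gt;DATABASE_URL&lt;/code&gt; per environment (for example, in separate &lt;code&gt;.env&lt;/code&gt; files), then run &lt;code&gt;prisma generate&lt;/code&gt; as usual.&lt;/p&gt;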

</description>
      <category>prisma</category>
      <category>tutorial</category>
      <category>postgres</category>
    </item>
  </channel>
</rss>
