<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sue enuga</title>
    <description>The latest articles on DEV Community by sue enuga (@sueenuga).</description>
    <link>https://dev.to/sueenuga</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F964518%2F2ca60cbe-9a17-4e1c-bb89-b77620d4bc50.jpg</url>
      <title>DEV Community: sue enuga</title>
      <link>https://dev.to/sueenuga</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sueenuga"/>
    <language>en</language>
    <item>
      <title>Migrate Your Schema into CockroachDB Cloud with Prisma Migrate</title>
      <dc:creator>sue enuga</dc:creator>
      <pubDate>Fri, 28 Jul 2023 20:58:54 +0000</pubDate>
      <link>https://dev.to/sueenuga/migrate-your-schema-into-cockroachdb-cloud-with-prisma-migrate-lde</link>
      <guid>https://dev.to/sueenuga/migrate-your-schema-into-cockroachdb-cloud-with-prisma-migrate-lde</guid>
      <description>&lt;h1&gt;
  
  
  Migration to CockroachDB
&lt;/h1&gt;

&lt;p&gt;Great!! You are either in the process of realizing that CRDB is the best distributed relational database out there, or you've already figured that out and are now migrating to CRDB.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There are three steps to migrating to CRDB.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Migrate your Schema into CRDB&lt;/li&gt;
&lt;li&gt;Migrate your Data &lt;/li&gt;
&lt;li&gt;Switch your application&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this article, I would like to focus entirely on how you can migrate your schema into CRDB using Prisma Migrate. If you would like to understand more about migration in general, our docs are an excellent resource; please follow along with the &lt;a href="https://www.cockroachlabs.com/docs/stable/migration-overview.html"&gt;Official CRDB Migration Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating Schema into CockroachDB
&lt;/h2&gt;

&lt;p&gt;Migrating your schema into CRDB can be done in multiple ways.&lt;br&gt;
The first option is to use the Schema Conversion Tool. You can use this option when you have a &lt;code&gt;.sql&lt;/code&gt; dump file of your schema: upload the file in the &lt;code&gt;Migrations&lt;/code&gt; tab of your cloud console, and an interactive walkthrough of the process lets you fix any errors you run into and retry the migration afterwards.&lt;/p&gt;

&lt;p&gt;An alternative to this approach is to use Prisma Migrate.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Prisma Migrate?
&lt;/h2&gt;

&lt;p&gt;Prisma Migrate is a recently released feature that enables you to migrate your schema to the database seamlessly. It also keeps your database schema in sync with your Prisma schema as it evolves, by maintaining the history of schema changes applied to your database. Prisma Migrate generates a history of &lt;code&gt;.sql&lt;/code&gt; migration files and tracks the migration state. The workflow when using Prisma Migrate in your project is iterative and looks something like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Local development environment&lt;br&gt;
Evolve your Prisma schema&lt;br&gt;
Use either prisma migrate dev or prisma db push to sync your Prisma schema with the database schema of your local development database&lt;/p&gt;

&lt;p&gt;Preview/ staging environment &lt;br&gt;
Push your changes to the feature pull request&lt;br&gt;
Use a CI system (e.g. GitHub Actions) to sync your Prisma schema and migration history with your preview database using prisma migrate deploy&lt;/p&gt;

&lt;p&gt;Production&lt;br&gt;
Merge your application code from the feature branch to your main branch&lt;br&gt;
Use a CI system (e.g. GitHub Actions) to sync your Prisma schema and migration history with your production database using prisma migrate deploy&lt;/p&gt;
&lt;/blockquote&gt;
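
&lt;p&gt;To make "evolve your Prisma schema" concrete, here is a minimal sketch of a model you might add to &lt;code&gt;schema.prisma&lt;/code&gt; (the &lt;code&gt;User&lt;/code&gt; model and its fields here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model User {
  id    String  @id @default(uuid())
  email String  @unique
  name  String?
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;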

&lt;p&gt;For further information, please check out the &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-migrate"&gt;Official Prisma Documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Migrating using Prisma Migrate
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Pre-requisites&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CockroachDB cloud cluster (Dedicated or Serverless)&lt;/li&gt;
&lt;li&gt;Node.js v16 or higher&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Prisma Project Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a project directory and navigate into it.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir hello-prisma
cd hello-prisma
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Initialize a TypeScript project and add the Prisma CLI as a development dependency. This should also create a &lt;code&gt;package.json&lt;/code&gt; file with an initial setup ready for your TypeScript app.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init -y
npm install prisma typescript ts-node @types/node --save-dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt;Now initialize TypeScript and the Prisma CLI.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx tsc --init
npx prisma init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;As part of the above initialization, a new directory called &lt;code&gt;prisma&lt;/code&gt; is created, containing a file called &lt;code&gt;schema.prisma&lt;/code&gt;, which holds all your schema models &amp;amp; database connection information. It should also create a &lt;code&gt;.env&lt;/code&gt; file in the root of the project directory. &lt;/p&gt;
&lt;h3&gt;
  
  
  Connecting your DB
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;schema.prisma&lt;/code&gt; file under the &lt;code&gt;prisma&lt;/code&gt; directory, you should find the &lt;code&gt;datasource&lt;/code&gt; block; see the code sample below for reference. Here you can find information like the &lt;code&gt;provider&lt;/code&gt; and the &lt;code&gt;url&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the image below (and the sample snippet after it) for how the URL should look in your &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Fo2Jwf6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3nbxechwviccg1x1z2l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Fo2Jwf6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3nbxechwviccg1x1z2l.png" alt="envfile" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also look at Prisma's official docs for more information on how to connect to other databases. &lt;br&gt;
Once the connection has been made, you'll be able to migrate the schema directly into CockroachDB without any other tools.&lt;/p&gt;
&lt;h2&gt;
  
  
  Migration using Prisma Migrate
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;schema.prisma&lt;/code&gt; file created under the &lt;code&gt;prisma&lt;/code&gt; folder should contain all the changes you want to make in your new database. To map your data model to the database schema, you first need to initialize Prisma Migrate in the environment, in our case &lt;code&gt;dev&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx prisma migrate dev --name init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This command does two things:&lt;br&gt;
  It creates a new SQL migration file for this migration&lt;br&gt;
  It runs the SQL migration file against the database.&lt;/p&gt;
&lt;/blockquote&gt;
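
&lt;p&gt;As a sketch, for a hypothetical &lt;code&gt;User&lt;/code&gt; model like the one above, the generated &lt;code&gt;migration.sql&lt;/code&gt; might contain something like the following (the exact SQL depends on your models and provider):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE "User" (
    "id" TEXT NOT NULL,
    "email" TEXT NOT NULL,
    "name" TEXT,

    CONSTRAINT "User_pkey" PRIMARY KEY ("id")
);

CREATE UNIQUE INDEX "User_email_key" ON "User"("email");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;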

&lt;p&gt;Please see below image for reference&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Af49mt3_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjohzg2xulgejzu67e9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Af49mt3_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjohzg2xulgejzu67e9h.png" alt="migrate" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Any changes made to the schema file from here on will be picked up, and a new migration file will be created under the &lt;code&gt;migrations&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lKC4qilK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qd7uszeig4lzprcgggn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lKC4qilK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qd7uszeig4lzprcgggn9.png" alt="folder" width="570" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, we have several tools to migrate your schema to CockroachDB, and my favorite is MOLT. But if you are a Prisma-native ORM shop and you'd like to leverage Prisma Migrate to manage and apply your schema changes to the database, this approach is effective, very simple, and straightforward to use.&lt;/p&gt;

</description>
      <category>prisma</category>
      <category>cockroachdb</category>
      <category>migration</category>
    </item>
    <item>
      <title>CockroachDB integration with Kafka Connectors</title>
      <dc:creator>sue enuga</dc:creator>
      <pubDate>Fri, 28 Jul 2023 20:19:37 +0000</pubDate>
      <link>https://dev.to/sueenuga/cockroachdb-integration-with-kafka-connectors-1c2f</link>
      <guid>https://dev.to/sueenuga/cockroachdb-integration-with-kafka-connectors-1c2f</guid>
      <description>&lt;h2&gt;
  
  
  CockroachDB and Kafka Connectors
&lt;/h2&gt;

&lt;p&gt;In this article I want to go over the steps to integrate CockroachDB with Kafka Connectors to import data into Kafka topics. CockroachDB has a built-in feature, CDC/changefeeds, that pushes data to any sink it can talk to natively, without having to integrate with a third-party connector. CDC is a robust and mature feature of CockroachDB, and it lets you stream data in a distributed architecture more efficiently. For instance, you can stream data changes to Kafka by simply enabling rangefeeds and creating a changefeed on a table, and the batched changes are streamed to the Kafka broker without another hop. For more information on how to do this, please read our official docs: &lt;a href="https://www.cockroachlabs.com/docs/stable/change-data-capture-overview" rel="noopener noreferrer"&gt;CDC&lt;/a&gt;&lt;/p&gt;
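
&lt;p&gt;As a quick sketch, streaming a table's changes straight to Kafka looks roughly like this (the table name and broker address are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Enable rangefeeds once at the cluster level
SET CLUSTER SETTING kv.rangefeed.enabled = true;

-- Stream row-level changes from the users table to a Kafka broker
CREATE CHANGEFEED FOR TABLE users INTO 'kafka://localhost:9092';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;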

&lt;h3&gt;
  
  
  Disclaimer
&lt;/h3&gt;

&lt;p&gt;Although the official docs say that CRDB is not supported, I was able to connect and stream changes. So, please test thoroughly when using this feature. See the connector's &lt;a href="https://docs.confluent.io/cloud/current/connectors/cc-postgresql-source.html#limitations" rel="noopener noreferrer"&gt;Limitations&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka Connectors
&lt;/h3&gt;

&lt;p&gt;If you don't already know what a Kafka connector is, here is the official explanation from their docs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Kafka connectors are ready-to-use components, which can help us to import data from external systems into Kafka topics and export data from Kafka topics into external systems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, as we all know, Apache Kafka is a framework for event streaming. Events are contained in streams that can be produced, consumed, or processed. Connectors act as a bridge and ensure compatibility between event-streaming and non-event-streaming pieces of technology. Thanks to connectors, data can be easily streamed into Kafka from a variety of sources and out of Kafka to a variety of targets.&lt;/p&gt;

&lt;p&gt;So, today let's look at how to integrate CRDB with Kafka's PostgreSQL source connector.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL source connector
&lt;/h3&gt;

&lt;p&gt;The Kafka Connect PostgreSQL Source connector for Confluent Cloud can obtain a snapshot of the existing data in a compatible PostgreSQL database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that deleted records are not captured and that the changes are pulled from the DB. &lt;br&gt;
Please refer to the &lt;a href="https://docs.confluent.io/cloud/current/connectors/cc-postgresql-source.html#postgresql-source-jdbc-connector-for-ccloud" rel="noopener noreferrer"&gt;official docs&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Integrating the PostgreSQL Source connector with CRDB Dedicated
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Pre-requisites:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Working CockroachDB connection with DB console access&lt;/li&gt;
&lt;li&gt;Working Confluent Cloud console access&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Note: you can always sign up for trial clusters for both of the above if you'd like to test any feature.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As a first step, get all the connection information from your CockroachDB console.&lt;/p&gt;

&lt;p&gt;Go to your cluster, click on &lt;code&gt;Connect&lt;/code&gt;, get the connection info (similar to the screenshot below), and save it. You'll also need the SSL cert to upload for the Confluent connector authentication. On the same pop-up page, under the &lt;code&gt;Command Line&lt;/code&gt; section, you should see an option to download the CA cert. This is your SSL cert.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkqoltkw0c4455z5o6lu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkqoltkw0c4455z5o6lu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, go to Confluent Cloud and create a cluster by clicking on the &lt;code&gt;+ Add a cluster&lt;/code&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5o3bb49s99gqhpyrk4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5o3bb49s99gqhpyrk4m.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once provisioning finishes, you should have a cluster similar to the one above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on Connectors on the cluster overview page to create a connector.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzeonk57pjtor4c1wn5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzeonk57pjtor4c1wn5q.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the connector marketplace, search for PostgreSQL and you should see the option to integrate the cluster with it. Continue by selecting the topic prefix and providing Kafka credentials; you can either create new credentials or provide an existing Kafka API key &amp;amp; secret. &lt;/p&gt;

&lt;p&gt;Now you should see the Authentication pane, which asks for the Postgres DB information, in our case CRDB's connection info that we captured in the earlier step. See the image below for reference. Note: for the SSL cert, you can copy the cert to any folder on your machine and upload it here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixiiiwmb55pld1xcr7cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixiiiwmb55pld1xcr7cx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pick the configuration that you'd like for the messages and the table information for this connector.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic1e5j1ev26g92lg3ba5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic1e5j1ev26g92lg3ba5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, you can make advanced changes to the connector and apply any transformations to the messages. Once you click on &lt;code&gt;Finish&lt;/code&gt;, your connector should be connected and start streaming the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasjwuglz8o8r9byk6zxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasjwuglz8o8r9byk6zxx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please refer to the above image for the final overview of a created connector. For a quick review of the messages being streamed from CRDB to Kafka, you can go to the topic that was created and select the &lt;code&gt;Messages&lt;/code&gt; tab. You should be able to see all the current messages being streamed, as in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kaq4fqy885u5uce2ykc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kaq4fqy885u5uce2ykc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for going through the article. &lt;/p&gt;

</description>
      <category>crdb</category>
      <category>postgres</category>
      <category>kafkaconnectors</category>
      <category>datastreaming</category>
    </item>
    <item>
      <title>CockroachDB Integration with Superset</title>
      <dc:creator>sue enuga</dc:creator>
      <pubDate>Wed, 26 Apr 2023 21:13:24 +0000</pubDate>
      <link>https://dev.to/sueenuga/superset-integration-with-cockroachdb-6l9</link>
      <guid>https://dev.to/sueenuga/superset-integration-with-cockroachdb-6l9</guid>
      <description>&lt;h2&gt;
  
  
  What is Superset??
&lt;/h2&gt;

&lt;p&gt;Superset is a fast, lightweight, modern web application that is loaded with options that make it easy for users of all skill sets to explore and visualize their data, from simple pie charts to highly detailed deck.gl geospatial charts. So, in simple words, it's used for analyzing your data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Integrate it with CockroachDB??
&lt;/h2&gt;

&lt;p&gt;CockroachDB is a highly available &amp;amp; reliable database that makes consistency a priority and is best suited for all your OLTP needs. However, to analyze this data you can use any intelligent visualization tool. One such easy, lightweight tool is Apache Superset. Together, Superset &amp;amp; CockroachDB give you a datastore with analytical capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Superset with docker
&lt;/h3&gt;

&lt;p&gt;Pre-requisite: Docker&lt;br&gt;
There are multiple ways to install Superset. I wanted to try the installation using Docker, as it's the most recommended approach. You can follow the installation instructions from their &lt;a href="https://superset.apache.org/docs/installation/installing-superset-using-docker-compose"&gt;docs&lt;/a&gt;.&lt;br&gt;
See the images below for reference.&lt;/p&gt;
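
&lt;p&gt;The install boils down to roughly the following commands (a sketch assuming you clone the official repo; branches and compose file names may differ by version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/apache/superset.git
cd superset
docker-compose -f docker-compose-non-dev.yml pull
docker-compose -f docker-compose-non-dev.yml up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;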

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uieFf5MK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4vd3x01ntxdgqyw5s5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uieFf5MK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4vd3x01ntxdgqyw5s5t.png" alt="docker-terminal" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YQQu9Z0M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3vnze1gwgbu7g6mddeeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YQQu9Z0M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3vnze1gwgbu7g6mddeeo.png" alt="docker-container" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E-2E0Dnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c959gjsh8uuij3vc7iqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E-2E0Dnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c959gjsh8uuij3vc7iqq.png" alt="superset-intro" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Superset is not officially supported on Windows as of 04/23, so if you need to test this out, quickly spin up a Linux/Mac instance on Hyper-V or something similar.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Integrating Superset with CockroachDB
&lt;/h3&gt;

&lt;p&gt;Pre-requisite: a working instance of CockroachDB and its connection parameters.&lt;br&gt;
CockroachDB has multiple offerings &amp;amp; you can use any of these to integrate with Superset. I used a serverless CRDB instance for this exercise.&lt;br&gt;
&lt;em&gt;For folks who are familiar with &lt;code&gt;cockroach workload&lt;/code&gt;, I loaded my serverless instance with the &lt;code&gt;MOVR&lt;/code&gt; workload, roughly as sketched below.&lt;/em&gt;&lt;/p&gt;
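
&lt;p&gt;A minimal sketch of loading MOVR, assuming a placeholder connection string:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cockroach workload init movr "postgresql://sue:samplepassword@sue-sampletest-host:26257/movr?sslmode=verify-full"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;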

&lt;p&gt;According to the Superset &lt;a href="https://superset.apache.org/docs/databases/installing-database-drivers"&gt;documentation&lt;/a&gt;, integrating Superset with CockroachDB requires a Python DB-API database driver and a SQLAlchemy dialect for CockroachDB. Now, let's see how to install the driver.&lt;/p&gt;
&lt;h4&gt;
  
  
  Installing the database driver to support cockroachDB
&lt;/h4&gt;

&lt;p&gt;1. Create &lt;code&gt;requirements-local.txt&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# From the repo root...
touch ./docker/requirements-local.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Add the &lt;code&gt;cockroachdb&lt;/code&gt; driver&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "cockroachdb" &amp;gt;&amp;gt; ./docker/requirements-local.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Rebuild your local Docker image with the new driver&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose build --force-rm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. Start Superset via Docker Compose&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose -f docker-compose-non-dev.yml pull
docker-compose -f docker-compose-non-dev.yml up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, Superset should have all the necessary packages installed for connecting to CockroachDB. &lt;/p&gt;

&lt;h4&gt;
  
  
  Adding Cockroach DB as a data store in Superset
&lt;/h4&gt;

&lt;p&gt;1. Go to the Superset UI at &lt;code&gt;localhost:8088&lt;/code&gt; and log in.&lt;br&gt;
2. In the top right corner, next to &lt;code&gt;Settings&lt;/code&gt;, you'll see a &lt;code&gt;+&lt;/code&gt; icon. Click on it &amp;amp; select &lt;code&gt;Data&lt;/code&gt;, then navigate to &lt;code&gt;Connect database&lt;/code&gt;. See the image below for reference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rpfJs2Jz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gz79zbiaxmsnz7dwxbz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rpfJs2Jz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gz79zbiaxmsnz7dwxbz0.png" alt="susperset-UI" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. In the window you'll be prompted to &lt;code&gt;Select a database to connect&lt;/code&gt;; here you'll need to go to &lt;code&gt;Supported Databases&lt;/code&gt; and choose &lt;code&gt;CockroachDB&lt;/code&gt;. See the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wqkVxSpl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zhote8rlk3frnhmik6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wqkVxSpl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zhote8rlk3frnhmik6c.png" alt="select-db" width="800" height="1376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the second screen, after selecting CockroachDB, you need to provide the connection details. Enter the primary credentials: select &lt;code&gt;Basic&lt;/code&gt; and enter the &lt;code&gt;SQLALCHEMY URI&lt;/code&gt;, which should be formatted specifically for CockroachDB. See below for how to format it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cockroachdb://root:pass@{hostname}:{port}/{database}?sslmode=verify-full

example: cockroachdb://sue:samplepassword@sue-sampletest-host:26257/movr?sslmode=verify-full
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gH33dk5p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3789qjomh69vmle3ko7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gH33dk5p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3789qjomh69vmle3ko7x.png" alt="Connect-db" width="800" height="1338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I'm using a secure serverless cluster, I'm required to provide my &lt;code&gt;root.crt&lt;/code&gt; in the &lt;code&gt;Security&lt;/code&gt; section, under &lt;code&gt;Root Certificate&lt;/code&gt;. So, I copied the root cert, which by default is located at &lt;code&gt;$HOME/.postgresql/root.crt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h_vHb--7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g2ne28ath0mxbgveepz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h_vHb--7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g2ne28ath0mxbgveepz8.png" alt="adv-sec" width="800" height="1356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. Once the connection is successful, you'll see &lt;code&gt;CockroachDB&lt;/code&gt; in the Superset &lt;code&gt;Databases&lt;/code&gt; list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ISJpQ-4---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xc7tj4vhoxtj8lvhc5e6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ISJpQ-4---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xc7tj4vhoxtj8lvhc5e6.png" alt="integration-db" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Dashboards in Superset
&lt;/h3&gt;

&lt;p&gt;Creating dashboards in Superset is super simple; you can follow their &lt;a href="https://superset.apache.org/docs/creating-charts-dashboards/creating-your-first-dashboard"&gt;docs&lt;/a&gt; for creating your first dashboard. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I created a bunch of dashboards for my dataset, see below image for reference. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gcke2mik--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kh9ebsh6qqm8k892ztyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gcke2mik--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kh9ebsh6qqm8k892ztyp.png" alt="dashboard-sue" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yay!! There you go: you have now successfully integrated CockroachDB with Superset and created some dashboards as well.&lt;/p&gt;

</description>
      <category>cockroachdb</category>
      <category>superset</category>
      <category>database</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Online Schema Changes with CRDB</title>
      <dc:creator>sue enuga</dc:creator>
      <pubDate>Mon, 24 Apr 2023 08:51:19 +0000</pubDate>
      <link>https://dev.to/sueenuga/online-schema-changes-with-crdb-3ah2</link>
      <guid>https://dev.to/sueenuga/online-schema-changes-with-crdb-3ah2</guid>
      <description>&lt;h3&gt;
  
  
  What exactly is the Online Schema Change Problem ?
&lt;/h3&gt;

&lt;p&gt;Traditionally, changing a table's schema on the go has been one of the most challenging problems in relational databases. In today's fast-paced applications &amp;amp; agile dev cycles, database crews often find that they need to change the schema frequently, sometimes on a weekly basis. Running an &lt;code&gt;ALTER TABLE&lt;/code&gt; or any other DDL statement traditionally puts the table in an inaccessible state, blocking even reads. This is the kind of downtime most applications simply cannot afford, and it drives the need for a no-downtime online schema change solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  CockroachDB's Online Schema Change Overview
&lt;/h3&gt;

&lt;p&gt;CockroachDB was architected from the ground up to be highly available &amp;amp; resilient for all OLTP needs, and when we had the chance to address the online schema change problem, we agreed that we wanted to provide a simple way to update the schema using any DDL statement while making sure the system doesn't suffer any negative consequences during the change. The schema change engine is a built-in feature requiring no additional tools, resources, or ad-hoc sequencing of operations.&lt;br&gt;
For instance, when you run an &lt;code&gt;ALTER&lt;/code&gt; on a table in CRDB, a background job is kicked off and the CRDB engine seamlessly takes care of converting the old schema to the new one without holding locks on the underlying table data.&lt;br&gt;
So basically, your application's queries run normally without any impact on read/write latency, and the data is in a consistent state throughout the entire schema change process, allowing you to access the old data until the new changes are rolled out. &lt;br&gt;
For detailed information on how this is done internally, you can check out our&lt;br&gt;
&lt;a href="https://www.cockroachlabs.com/docs/stable/online-schema-changes.html"&gt;Official documentation&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Testing out Online Schema Changes with CRDB's Dedicated cloud
&lt;/h3&gt;

&lt;p&gt;Now, let's see how CockroachDB handles online schema changes in reality. The goal of this exercise is to make schema changes in an active system and see whether the CRDB engine can handle them gracefully with no downtime.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pre-work before the exercise&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created a Dedicated Cluster in CRDB with the below configuration
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--92n3lYOh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6wjop83iepf0bfdkfcw.png" alt="crdb_cluster" width="800" height="642"&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jADnm9sV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbj6omiwr2hwpksdacul.png" alt="consoleOverview" width="800" height="388"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Created a new user management schema with some relational tables like User, Appointments, Locations, etc., and loaded the tables with at least 1M records per table.&lt;br&gt;
We inserted data using the patterns adopted from the &lt;a href="https://dev.to/jhatcher9999/making-queries-sargable-2e4c"&gt;reference blog&lt;/a&gt;  by my co-author &lt;a class="mentioned-user" href="https://dev.to/jhatcher9999"&gt;@jhatcher9999&lt;/a&gt; :&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To simulate an active system with constant reads/writes, we used CRDB's open-source workload tool, &lt;a href="https://www.cockroachlabs.com/docs/stable/cockroach-workload.html"&gt;QueryBench&lt;/a&gt; (tailored to this schema).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Oll5ALHc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9ypfv7tpoc670ivjxrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Oll5ALHc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9ypfv7tpoc670ivjxrb.png" alt="query-bench" width="800" height="624"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tkestWW0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87x7fakdos2czoksivl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tkestWW0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87x7fakdos2czoksivl2.png" alt="DB-console-workload" width="800" height="227"&gt;&lt;/a&gt;&lt;br&gt;
In the Database Console above, you can see that the QPS is about 10K &amp;amp; the system is now active, accepting both reads/writes through the workload tool, QueryBench.&lt;/p&gt;
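
&lt;p&gt;For reference, a QueryBench run looks roughly like this (a sketch; the query file, concurrency, and connection string are placeholders for our setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cockroach workload run querybench \
  --query-file=queries.sql \
  --concurrency=16 \
  "postgresql://sue:samplepassword@sue-sampletest-host:26257/defaultdb?sslmode=verify-full"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;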

&lt;p&gt;After the pre-work is completed, you can connect to the cluster's SQL shell in one of many ways to test the different scenarios for altering the schema.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alter Table to Add a New Column&lt;/strong&gt;&lt;br&gt;
For the first scenario, let's add a column to the existing &lt;code&gt;users&lt;/code&gt; table and see how the DB handles it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER TABLE USERS ADD COLUMN user_marital_status CHAR(1) NULL;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Observations: the total time to execute the &lt;code&gt;ALTER&lt;/code&gt; was around 120ms. As soon as the command was issued, a background job was created under the &lt;code&gt;Jobs&lt;/code&gt; section in the console &amp;amp; you could monitor the status of this DDL statement from there.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1M5y3gN_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilt0bbs9ocisbyxmjjvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1M5y3gN_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilt0bbs9ocisbyxmjjvx.png" alt="alter-column-sql" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gwwlFQS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3enx5v832pmpwuj71vmt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gwwlFQS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3enx5v832pmpwuj71vmt.png" alt="job-status" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tZLOOakr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4yqadxet1el9tz8su3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tZLOOakr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4yqadxet1el9tz8su3w.png" alt="statement-time" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alter Table to Add a New Column with Backfill&lt;/strong&gt;&lt;br&gt;
For this second scenario, we are adding a column to the &lt;code&gt;users&lt;/code&gt; table with a backfill.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER TABLE USERS ADD COLUMN user_subscribed_status CHAR(1) NOT NULL DEFAULT 'y';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Observations: the &lt;code&gt;users&lt;/code&gt; table has 6M records, and adding a column with a backfill took a little longer than the previous scenario. A background job was created for this statement as well, and I could monitor/manage its status through the console. Also, the active reads/writes to the system were never affected during this change, as you can see in the DB console image below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7az-2m84--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhdud1vurgsmzqtt6sir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7az-2m84--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhdud1vurgsmzqtt6sir.png" alt="job-status" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LUdazLw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9pz3ayf89tdavo5pvr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LUdazLw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9pz3ayf89tdavo5pvr2.png" alt="job-stat1-sql" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ro6HAWoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/opcblgqrpg2g9dfegq86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ro6HAWoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/opcblgqrpg2g9dfegq86.png" alt="metric-job" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alter the Table's Primary Key&lt;/strong&gt;&lt;br&gt;
For this scenario, we'll test how the system reacts to changing the table's primary key while it is actively serving read/write requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER TABLE appointments ALTER PRIMARY KEY USING COLUMNS (id, userid);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Observations: altering the primary key was almost instantaneous &amp;amp; seamless. The total execution time was under 120ms. A background job took care of the changes under the covers &amp;amp; none of the reads/writes were affected while doing so. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q4N_EgwI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfna4kwz8l1thwu2v6e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q4N_EgwI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfna4kwz8l1thwu2v6e1.png" alt="pk-change-sql" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QLMNYH66--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ezv9o1v4lps3g93vmdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QLMNYH66--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ezv9o1v4lps3g93vmdi.png" alt="metrics-pk" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drop the newly added columns in the table&lt;/strong&gt;&lt;br&gt;
This scenario is just to test dropping unused columns from the table. In our case, we wanted to drop the two columns we just added, one without a backfill &amp;amp; the other with a backfill.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER TABLE USERS DROP COLUMN user_marital_status;
ALTER TABLE USERS DROP COLUMN user_subscribed_status;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Observations: we ran both commands back to back &amp;amp; the system created a background job for each statement. We did not notice any errors while the columns were being dropped, and requests were still served by the DB without any issues. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hgiJiex_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esnqojtkozv9snwf7f4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hgiJiex_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esnqojtkozv9snwf7f4z.png" alt="db-exec" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TcVVGoWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jlvkl4hnw4x301a2gulz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TcVVGoWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jlvkl4hnw4x301a2gulz.png" alt="console-db" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revert the primary key to the original&lt;/strong&gt;&lt;br&gt;
For the final scenario, we reverted the primary key of the &lt;code&gt;appointments&lt;/code&gt; table to go back to using just the &lt;code&gt;id&lt;/code&gt; column.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER TABLE appointments ALTER PRIMARY KEY USING COLUMNS (id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Observations: after running the &lt;code&gt;ALTER&lt;/code&gt;, I immediately noticed that a background job was created to process the changes &amp;amp; the SQL shell returned as soon as the changes were completed. There were no errors serving reads/writes in this scenario either. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5nQWQ1dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1od6g0v55qjos8s6nxrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5nQWQ1dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1od6g0v55qjos8s6nxrl.png" alt="sql-pk" width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--diRiRjB6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sp4hont7al7msxvmis41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--diRiRjB6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sp4hont7al7msxvmis41.png" alt="pk-revert" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Overall, we ran the above scenarios and the database handled them exceptionally well. We did not run into any errors or retries for the reads/writes from the workload, so the system was continuously able to handle requests during the schema changes. Also, the jobs created for these DDL statements in the background can be both monitored &amp;amp; managed, so you can pause/resume them on the go (see the sketch after the screenshots below). &lt;br&gt;
You can refer to the screenshots below for the full list of statements with their execution timings, and notice zero SQL errors during execution. So, that's how the online schema change problem was resolved for good by CockroachDB &amp;amp; the entire DB kingdom lived happily ever after. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0v90LbQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2blywl9kbgk07tftfdwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0v90LbQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2blywl9kbgk07tftfdwm.png" alt="Execution_all" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FmeTzpNJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90df5ngw7rh2iil3pkg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FmeTzpNJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90df5ngw7rh2iil3pkg6.png" alt="erros-none" width="800" height="685"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Special Thanks to my friend &lt;a class="mentioned-user" href="https://dev.to/jhatcher9999"&gt;@jhatcher9999&lt;/a&gt; for collaborating on this article with me and making this exercise more fun &amp;amp; exciting. &lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>cockroachdb</category>
      <category>database</category>
    </item>
  </channel>
</rss>
