<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Amit Chauhan</title>
    <description>The latest articles on DEV Community by Amit Chauhan (@akscjo).</description>
    <link>https://dev.to/akscjo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1054208%2Ff57e07df-6afa-4bbc-8311-4e90b4b6e36b.png</url>
      <title>DEV Community: Amit Chauhan</title>
      <link>https://dev.to/akscjo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akscjo"/>
    <language>en</language>
    <item>
      <title>How to capture the statements from one database cluster and run them on another YugabyteDB cluster</title>
      <dc:creator>Amit Chauhan</dc:creator>
      <pubDate>Fri, 18 Aug 2023 06:18:12 +0000</pubDate>
      <link>https://dev.to/akscjo/how-to-capture-the-statements-from-one-cluster-and-run-them-on-another-yugabytedb-cluster-44i1</link>
      <guid>https://dev.to/akscjo/how-to-capture-the-statements-from-one-cluster-and-run-them-on-another-yugabytedb-cluster-44i1</guid>
      <description>&lt;h3&gt;What is the problem we are trying to solve here?&lt;/h3&gt;

&lt;p&gt;Often you may want to simulate production transactions in your pre-prod/dev environment to check that changes like database upgrades or configuration tweaks won't cause any regression. Essentially, we want to record the behavior in one environment and replay it in another environment.&lt;/p&gt;

&lt;h3&gt;What are we doing here?&lt;/h3&gt;

&lt;p&gt;This is a step-by-step guide to capturing statements from one YugabyteDB database cluster and replaying them on another YugabyteDB database cluster. We are going to use &lt;a href="https://github.com/laurenz/pgreplay"&gt;pgreplay&lt;/a&gt;. This guide uses &lt;a href="https://www.yugabyte.com/"&gt;YugabyteDB&lt;/a&gt;, but similar steps can be used for a &lt;a href="https://www.postgresql.org/"&gt;Postgres&lt;/a&gt; database as well.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I will run a &lt;a href="https://github.com/YugabyteDB-Samples/yb-workload-simulator"&gt;workload simulator&lt;/a&gt; to create 3 tables and seed them with 10K rows. You can instead run your own app to carry out a bunch of database transactions.&lt;/li&gt;
&lt;li&gt;Then we will capture the Postgres log file, which will contain all the statements (selects, inserts, etc.).
&lt;/li&gt;
&lt;li&gt;We will install pgreplay on an app server and push this log file to that server.&lt;/li&gt;
&lt;li&gt;Spin up a new cluster, then run pgreplay with the Postgres log file to replay the transactions on the new database cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;High level flow&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GGhTM0M_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hk52ikwxfbfawl9namik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GGhTM0M_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hk52ikwxfbfawl9namik.png" alt="Architecture" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Capture statement logs from database cluster 1&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We need to ensure that all statements are logged.&lt;/li&gt;
&lt;li&gt;Create a tserver.flagfile like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--ysql_pg_conf_csv=log_destination=csvlog,log_statement=all,log_min_messages=error,log_min_error_statement=log,log_connections=on,log_disconnections=on 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Start yugabyted with the flag file (you can also set these as gflags from the YBA UI):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yugabyted start --tserver_flags=flagfile=/Users/amitchauhan/Downloads/pgreplay/tserver.flagfile 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The logs in the directory /var/logs/tserver/postgres* will now start capturing the statements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MkPI2W99--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ybtehcj19sig8gw3xp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MkPI2W99--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ybtehcj19sig8gw3xp3.png" alt="logs" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run some transactions. I ran a workload simulator, which created 3 tables and added 10K records.&lt;/li&gt;
&lt;li&gt;For simplicity, I combined a couple of postgresql-*.csv files into a single amit.csv file.&lt;/li&gt;
&lt;/ul&gt;
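&lt;p&gt;As a rough sketch, the rotated csvlog files can be combined with a simple concatenation in filename order (the directory and file names below are illustrative placeholders, not the exact ones from my run):&lt;/p&gt;

```shell
# Combine rotated Postgres csvlog files into one file for pgreplay.
# Directory and file names are placeholders; in practice, point them
# at your tserver log directory.
rm -rf /tmp/pgreplay-demo
mkdir -p /tmp/pgreplay-demo
cd /tmp/pgreplay-demo

# Stand-ins for two rotated postgresql-*.csv log files:
echo 'line-from-first-log' > postgresql-2023-08-18_000000.csv
echo 'line-from-second-log' > postgresql-2023-08-18_010000.csv

# Shell glob expansion sorts by filename, which keeps the
# timestamp-named logs in chronological order:
cat postgresql-*.csv > amit.csv
wc -l amit.csv
```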

&lt;h3&gt;Create a new database cluster and use pgreplay to replay the logs from the previous cluster&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;I will go ahead and create a new cluster in AWS.&lt;/li&gt;
&lt;li&gt;Spin up a new machine (or use an existing VM) and install pgreplay. You can build pgreplay from source; I am going to use their Docker image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/laurenz/pgreplay.git
cd pgreplay
docker build -t laurenz/pgreplay -f Dockerfile .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;I have copied my Postgres log file over to this machine and will now run the pgreplay container, passing in the new database's connection details.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker run --rm -ti -v /home/centos:/app -w /app laurenz/pgreplay pgreplay -h DATABASE-IP-ADDRESS -p 5433 -W XXXXPASSXXXX  -d 3 -c /app/amit.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;pgreplay will replay all the statements in my log file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OdSCEXcv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrne4d3bsrbzxcdhz42h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OdSCEXcv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrne4d3bsrbzxcdhz42h.png" alt="pgreplay-output-1" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HxhgEfNy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3f44rkwdcxccx5odd7dx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HxhgEfNy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3f44rkwdcxccx5odd7dx.png" alt="pgreplay-output-2" width="800" height="704"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify that the same number of rows has been inserted into the tables:&lt;/li&gt;
&lt;/ul&gt;
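&lt;p&gt;One way to spot-check this is to run the same count on both clusters with ysqlsh and compare the results. This is only a sketch: the host addresses and the table name below are placeholders, not values from my setup.&lt;/p&gt;

```shell
# Compare row counts between the source and target clusters.
# SOURCE-CLUSTER-IP, TARGET-CLUSTER-IP and my_table are placeholders.
ysqlsh -h SOURCE-CLUSTER-IP -p 5433 -U yugabyte -c "SELECT count(*) FROM my_table;"
ysqlsh -h TARGET-CLUSTER-IP -p 5433 -U yugabyte -c "SELECT count(*) FROM my_table;"
```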

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h8OgT_JC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ck3yng4f3yiyq4vps3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h8OgT_JC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ck3yng4f3yiyq4vps3b.png" alt="table-record-count" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>pgreplay</category>
      <category>yugabytedb</category>
      <category>postgres</category>
      <category>postgresql</category>
    </item>
    <item>
      <title>Using pgcli with YugabyteDB</title>
      <dc:creator>Amit Chauhan</dc:creator>
      <pubDate>Tue, 27 Jun 2023 23:27:17 +0000</pubDate>
      <link>https://dev.to/akscjo/using-pgcli-with-yugabytedb-69c</link>
      <guid>https://dev.to/akscjo/using-pgcli-with-yugabytedb-69c</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/dbcli/pgcli"&gt;pgcli&lt;/a&gt; is a popular Postgres client that offers auto-completion and syntax highlighting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/yugabyte/yugabyte-db"&gt;YugabyteDB&lt;/a&gt; is a popular open-source distributed database. As the architecture diagram below shows, YugabyteDB has a Postgres-compatible YSQL API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oBvI9_Jw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ces9si6ww3uhnowdsay7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oBvI9_Jw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ces9si6ww3uhnowdsay7.png" alt="YugabyteDB Architecture" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means that most clients that work with Postgres work out of the box with YugabyteDB. Let's take pgcli as an example.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install pgcli. On macOS, if you are using Homebrew, you can simply do this (for Linux, see the instructions &lt;a href="https://github.com/dbcli/pgcli#quick-start"&gt;here&lt;/a&gt;):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install pgcli 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once installed, let's connect to YugabyteDB. Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pgcli -h 127.0.0.1 -p 5433 -U yugabyte

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;That's it: enjoy all the goodies offered by pgcli on YugabyteDB.&lt;/li&gt;
&lt;/ul&gt;
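&lt;p&gt;If you prefer a single connection string, pgcli also accepts a libpq-style connection URI. The user, database name and port below are YugabyteDB defaults; adjust them to your cluster.&lt;/p&gt;

```shell
# Same connection expressed as a URI; user/database "yugabyte"
# and port 5433 are YugabyteDB defaults.
pgcli postgres://yugabyte@127.0.0.1:5433/yugabyte
```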

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wOi8En0Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8j9u78t1amq7p3z6eda4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wOi8En0Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8j9u78t1amq7p3z6eda4.png" alt="Screenshot showing PGCLI in action using YugabyteDB" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>pgcli</category>
      <category>yugabytedb</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Automating Jar File Creation and Release Artifact Uploads with GitHub Actions</title>
      <dc:creator>Amit Chauhan</dc:creator>
      <pubDate>Tue, 27 Jun 2023 23:03:56 +0000</pubDate>
      <link>https://dev.to/akscjo/automating-jar-file-creation-and-release-artifact-uploads-with-github-actions-1h7b</link>
      <guid>https://dev.to/akscjo/automating-jar-file-creation-and-release-artifact-uploads-with-github-actions-1h7b</guid>
      <description>&lt;p&gt;You have this great Java project, and every time you change the codebase you have to manually create the jar file and upload it as a release. Let's see how you can automate this release workflow. For this post, I am assuming that your repo is hosted on GitHub.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We are going to use &lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;github actions&lt;/a&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a .github/workflows directory in your repository on GitHub if this directory does not already exist.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the .github/workflows directory, create a file named release.yml.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the following code into release.yml:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: Java CI to create and upload release on pull request
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

env:
  build-number: ${{ github.run_number }}

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: 'maven'
      - run: mvn clean package -DskipTests
      - run: mkdir staging &amp;amp;&amp;amp; mv target/yb-workload-sim.jar target/yb-workload-sim-${{ env.build-number }}.jar &amp;amp;&amp;amp; cp target/*.jar staging
      - uses: actions/upload-artifact@v3
        with:
          name: Package
          path: staging
          retention-days: 1
      - uses: marvinpinto/action-automatic-releases@latest
        with:
          repo_token: "${{ secrets.GITHUB_TOKEN }}"
          automatic_release_tag: "${{ github.run_number }}"
          title: "Automated Build ${{ github.run_number }}"
          prerelease: true
          files: staging/*.jar



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's discuss what we are doing in the above code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;on:&lt;/code&gt; section indicates when this action should be triggered. In this example, it triggers when someone creates a pull request against the main branch or pushes code to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;jobs:&lt;/code&gt;&lt;br&gt;
This section defines the various steps involved in the job. We will use the ubuntu-latest runner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then we use the prebuilt checkout action to fetch the code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After that, we set up Java for compiling the jar.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We use Maven to build the jar file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the jar file is created, we do some massaging to add the release number and then upload it as an artifact using the upload-artifact action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Last, we use the marvinpinto/action-automatic-releases action to create and upload the new release. There are multiple actions available in the &lt;a href="https://github.com/marketplace?type=actions" rel="noopener noreferrer"&gt;marketplace&lt;/a&gt; for creating releases.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
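&lt;p&gt;The rename-and-stage step (step 6 above) can be tried locally before wiring it into the workflow. Here the Maven output jar is faked with touch, and BUILD_NUMBER stands in for the workflow's build number:&lt;/p&gt;

```shell
# Simulate the workflow's jar-staging step locally.
# BUILD_NUMBER stands in for the build number the workflow
# derives from the GitHub run number.
rm -rf /tmp/release-demo
mkdir -p /tmp/release-demo/target
cd /tmp/release-demo
BUILD_NUMBER=42
touch target/yb-workload-sim.jar   # stand-in for the Maven build output
mkdir staging
mv target/yb-workload-sim.jar "target/yb-workload-sim-${BUILD_NUMBER}.jar"
cp target/*.jar staging
ls staging                         # -> yb-workload-sim-42.jar
```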

&lt;p&gt;That's it! Now every time you push new code, GitHub Actions will create and upload the jar for you.&lt;/p&gt;

&lt;p&gt;You can see the working example &lt;a href="https://github.com/YugabyteDB-Samples/yb-workload-simulator/actions/workflows/release.yml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfpfk9cdqy7ylsiwtn7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfpfk9cdqy7ylsiwtn7o.png" alt="Screenshot showcasing the automated CI build and release process"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>springboot</category>
      <category>java</category>
      <category>cicd</category>
      <category>githubactions</category>
    </item>
  </channel>
</rss>
