<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kiran (AK) Adapa</title>
    <description>The latest articles on DEV Community by Kiran (AK) Adapa (@akaak).</description>
    <link>https://dev.to/akaak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F192562%2Fb0a08a66-b86b-4ce0-872b-52e2d9c82d60.jpeg</url>
      <title>DEV Community: Kiran (AK) Adapa</title>
      <link>https://dev.to/akaak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akaak"/>
    <language>en</language>
    <item>
      <title>Refresh Your Neon Cloud Database from Local Postgres (Without the Data API)</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Sat, 07 Feb 2026 16:46:58 +0000</pubDate>
      <link>https://dev.to/akaak/refresh-your-neon-cloud-database-from-local-postgres-without-the-data-api-361f</link>
      <guid>https://dev.to/akaak/refresh-your-neon-cloud-database-from-local-postgres-without-the-data-api-361f</guid>
      <description>&lt;p&gt;If you need to load your Neon Postgresql database with data, then is simple and you can make it easy for yourself on one of two ways.&lt;/p&gt;

&lt;p&gt;Here is a situation: You’ve been iterating on your app against a local PostgreSQL database and now you want to push that data up to your &lt;a href="https://neon.tech" rel="noopener noreferrer"&gt;Neon&lt;/a&gt; cloud instance. Your first thought might be: “Neon has a Data API—I’ll use that.” For a one-off or periodic &lt;strong&gt;bulk refresh&lt;/strong&gt;, there’s a simpler and faster way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Skip the Data API for This?
&lt;/h2&gt;

&lt;p&gt;Neon’s &lt;a href="https://neon.com/docs/data-api/get-started" rel="noopener noreferrer"&gt;Data API&lt;/a&gt; is great for application-driven CRUD over HTTP—think frontends or serverless functions talking to the database with optional JWT auth and Row-Level Security. For that, it’s the right tool.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;syncing an entire table&lt;/strong&gt; (or several) from local to cloud, you’d end up reading every row locally, then sending them over the wire via many HTTP requests. You’d also need to handle auth, batching, and rate limits. That’s a lot of moving parts for a job that’s really “copy data from A to B.”&lt;/p&gt;

&lt;p&gt;Neon speaks standard PostgreSQL. You can use normal connection strings and standard tooling. No Data API required for bulk refresh.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Better Options
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Option 1: pg_dump + psql&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Export from local (data-only, if the schema already exists in Neon):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pg_dump &lt;span class="nt"&gt;-h&lt;/span&gt; localhost &lt;span class="nt"&gt;-U&lt;/span&gt; your_user &lt;span class="nt"&gt;-d&lt;/span&gt; your_db &lt;span class="nt"&gt;-t&lt;/span&gt; your_table &lt;span class="nt"&gt;--data-only&lt;/span&gt; &lt;span class="nt"&gt;-F&lt;/span&gt; p &lt;span class="nt"&gt;-f&lt;/span&gt; data.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then import into Neon using your Neon connection string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;psql &lt;span class="s2"&gt;"postgresql://user:password@your-project.neon.tech/neondb?sslmode=require"&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; data.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Truncate the target table in Neon first if you’re replacing data to avoid duplicate-key errors. For multiple tables with foreign keys, export in dependency order or adjust the dump/restore sequence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2: A small script with two connections&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use your language of choice (e.g. Python with &lt;code&gt;psycopg2&lt;/code&gt;) to open two connections: one to local Postgres, one to Neon. Read from the local table(s), truncate the target table(s) in Neon, then bulk-insert (e.g. &lt;code&gt;execute_values&lt;/code&gt; or COPY). No HTTP layer, no JWT—just two Postgres connections and a straightforward copy. This approach is easy to rerun and fits well into scripts or small CLI tools.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Option 2 is what I use.&lt;/strong&gt;&lt;/em&gt; There are times when I need to refresh the schema as well as load the data, so I keep a generic script that I use from time to time. When the schema changes or a data refresh is required, I point my &lt;em&gt;friendly coding agent&lt;/em&gt; to it with a few instructions and my local .env, and voilà, it is all taken care of!&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;For &lt;strong&gt;bulk refresh&lt;/strong&gt; from local Postgres to Neon, use &lt;strong&gt;pg_dump/psql&lt;/strong&gt; or a &lt;strong&gt;script with two DB connections&lt;/strong&gt;. Reserve the Data API for runtime CRUD from your app. You’ll get the job done with less setup and better performance.&lt;/p&gt;

&lt;p&gt;Hope this is helpful. Let me know how you have used the Neon Data API in your projects.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>database</category>
      <category>postgres</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Note taking using CLI. I like to stay in the terminal or at the command line. I use `jrnl` now. With genai powered coding tools galore, wanted to build something that would better suit my needs. Lets see how it goes. I will write in here with my updates.</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Fri, 02 Jan 2026 23:30:28 +0000</pubDate>
      <link>https://dev.to/akaak/note-taking-using-cli-i-like-to-stay-in-the-terminal-or-at-the-command-line-i-use-jrnl-now-2g13</link>
      <guid>https://dev.to/akaak/note-taking-using-cli-i-like-to-stay-in-the-terminal-or-at-the-command-line-i-use-jrnl-now-2g13</guid>
      <description></description>
      <category>ai</category>
      <category>cli</category>
      <category>devjournal</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Slide decks with Markdown</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Tue, 18 Nov 2025 13:04:37 +0000</pubDate>
      <link>https://dev.to/akaak/slide-decks-with-markdown-g5d</link>
      <guid>https://dev.to/akaak/slide-decks-with-markdown-g5d</guid>
      <description>&lt;p&gt;If you are a developer or developer minded and creating presentations or slides with Powerpoint or Google Slides, then give Marp.app a try. Once you use it, you will never go back.&lt;/p&gt;

&lt;p&gt;Write and version all your presentations in git!&lt;/p&gt;

&lt;p&gt;If you love writing in Markdown, Marp.app lets you create beautiful slide decks quickly and easily using Markdown syntax. Here's how to get started and see your changes in real-time:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write Your Slides in Markdown&lt;/strong&gt;&lt;br&gt;
Use --- (three dashes) to separate slides. Marp supports all common Markdown features like headers, lists, images, and code blocks. Add a YAML frontmatter at the top with marp: true to enable slide rendering, plus options like theme and pagination.&lt;/p&gt;
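
&lt;p&gt;For illustration, a minimal deck might look like this (the theme and slide content below are just examples):&lt;/p&gt;

```markdown
---
marp: true
theme: gaia
paginate: true
---

# My Talk

Opening slide content

---

## Second Slide

- Plain Markdown bullets
- Code blocks and images work too
```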

&lt;p&gt;&lt;strong&gt;Preview Your Slides Instantly&lt;/strong&gt;&lt;br&gt;
Use the Marp CLI or the Visual Studio Code Marp extension for live preview. With the CLI, run marp -w yourslides.md to regenerate the output whenever you edit your Markdown, or marp -s . to serve a preview in your browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apply Themes and Custom Styles&lt;/strong&gt;&lt;br&gt;
Marp comes with built-in themes like default, gaia, and uncover. You can set the theme in the frontmatter (theme: uncover). For custom styling, Marp supports adding CSS styles directly in your Markdown or creating your own theme.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Export to Multiple Formats&lt;/strong&gt;&lt;br&gt;
When ready, export your slides to PDF, HTML, or PowerPoint with one command or click, making it easy to share or present.&lt;/p&gt;

&lt;p&gt;Marp.app turns your Markdown documents into presentation-ready slides with minimal fuss, perfect for developers who prefer editing text over slide design tools! &lt;/p&gt;

&lt;p&gt;Try it out: create a .md file, add --- between slides, open in VS Code with Marp extension or use the CLI, and start presenting from Markdown!&lt;/p&gt;

&lt;p&gt;Great Video on how to work with Marp by &lt;a href="https://www.dougmercer.dev/" rel="noopener noreferrer"&gt;Doug Mercer&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=EzQ-p41wNEE" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=EzQ-p41wNEE&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>productivity</category>
      <category>tooling</category>
      <category>writing</category>
    </item>
    <item>
      <title>Excalidraw - Browser based app for hand drawn like diagrams</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Sat, 01 Nov 2025 19:46:23 +0000</pubDate>
      <link>https://dev.to/akaak/excalidraw-browser-based-app-for-hand-drawn-like-diagrams-248m</link>
      <guid>https://dev.to/akaak/excalidraw-browser-based-app-for-hand-drawn-like-diagrams-248m</guid>
      <description>&lt;p&gt;Online Whiteboard made simple. That's what Excalidraw promotes its app as and it is true. Excalidraw is a great app to create and share diagrams for your next project. &lt;/p&gt;

&lt;p&gt;I use it locally on my Mac and love its simplicity.&lt;/p&gt;

&lt;p&gt;Excalidraw positions itself as a "simple online whiteboard," and its architecture reflects that philosophy. Built with React and TypeScript, Excalidraw provides a minimal, collaborative environment for diagramming—ideal for technical teams, architects, and anyone who needs to quickly visualize system designs, workflows, or data models.&lt;/p&gt;

&lt;p&gt;As an open-source project, Excalidraw can be deployed locally, giving teams full control over data privacy and customization. Here’s how to set it up for local use:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clone the repository&lt;/strong&gt;&lt;br&gt;
Execute &lt;code&gt;git clone https://github.com/excalidraw/excalidraw.git&lt;/code&gt; to download the source code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install dependencies&lt;/strong&gt;&lt;br&gt;
Navigate to the project directory and run &lt;code&gt;npm install&lt;/code&gt; to install all required Node.js packages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launch the development server&lt;/strong&gt;&lt;br&gt;
Start the app with &lt;code&gt;npm start&lt;/code&gt;. By default, Excalidraw will be available at &lt;code&gt;http://localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Begin diagramming and sharing&lt;/strong&gt;&lt;br&gt;
Use the web interface to create diagrams and export them in multiple formats. &lt;/p&gt;

&lt;p&gt;Right out of the base installation, you can even use &lt;strong&gt;Mermaid&lt;/strong&gt; (&lt;a href="https://mermaid.js.org/" rel="noopener noreferrer"&gt;https://mermaid.js.org/&lt;/a&gt;) to generate diagrams from text.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>opensource</category>
      <category>tooling</category>
    </item>
    <item>
      <title>How to Push Changes from a Cloned GitHub Repo to Your Own Repository</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Tue, 08 Jul 2025 13:15:58 +0000</pubDate>
      <link>https://dev.to/akaak/how-to-push-changes-from-a-cloned-github-repo-to-your-own-repository-47hn</link>
      <guid>https://dev.to/akaak/how-to-push-changes-from-a-cloned-github-repo-to-your-own-repository-47hn</guid>
      <description>&lt;p&gt;You starred a great github (open source) project. You love this project and wanted to work on it on your own.&lt;/p&gt;

&lt;p&gt;You clone that project from GitHub to your local machine.&lt;br&gt;
Now what?&lt;/p&gt;

&lt;p&gt;Make a brand new GitHub &lt;code&gt;repository&lt;/code&gt; under your own account and start using that repo.&lt;/p&gt;

&lt;p&gt;For many, it is a very basic activity and requires just a couple of steps. But how do you do it?&lt;/p&gt;

&lt;p&gt;When you clone a repository from GitHub, the remote named &lt;code&gt;origin&lt;/code&gt; points to the source repository. If you want to make changes and push them to your own GitHub repository (not the original), follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new repo on GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Change the remote URL in your local clone.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push your changes to your new repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm everything on GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That second step is the key step.&lt;/p&gt;

&lt;p&gt;When you clone a repo, the project's git remote is still pointed to the original source repository: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ git remote -v&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;origin  https://github.com/open-source-org/great-project.git (fetch)
origin  https://github.com/open-source-org/great-project.git (push)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You update &lt;code&gt;origin&lt;/code&gt; to your newly created GitHub repository URL!  &lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ git remote set-url origin https://github.com/your-gh-profile/great-project.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, if you display the git remote using the following command &lt;br&gt;
&lt;code&gt;$ git remote -v&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You see this...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;origin  https://github.com/your-gh-profile/great-project.git (fetch)
origin  https://github.com/your-gh-profile/great-project.git (push)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! &lt;/p&gt;

&lt;p&gt;From now on, whenever you make changes and push from your local clone, those updates go to your GitHub repository.&lt;/p&gt;

&lt;p&gt;Reference(s):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/get-started/git-basics/managing-remote-repositories" rel="noopener noreferrer"&gt;https://docs.github.com/en/get-started/git-basics/managing-remote-repositories&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>ER Diagrams for your Database</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Sat, 07 Jun 2025 15:19:10 +0000</pubDate>
      <link>https://dev.to/akaak/er-diagrams-for-your-database-11pk</link>
      <guid>https://dev.to/akaak/er-diagrams-for-your-database-11pk</guid>
      <description>&lt;p&gt;DbSchema is a good tool to generate ER Diagrams for your database&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dbschema.com/" rel="noopener noreferrer"&gt;https://dbschema.com/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A screenshot of Table Properties within the DbSchema&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmyv5qe3qy4oowthxa1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmyv5qe3qy4oowthxa1o.png" alt=" " width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tools</category>
      <category>database</category>
    </item>
    <item>
      <title>Working with Parquet files</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Sat, 05 Apr 2025 17:11:31 +0000</pubDate>
      <link>https://dev.to/akaak/working-with-parquet-files-1b30</link>
      <guid>https://dev.to/akaak/working-with-parquet-files-1b30</guid>
      <description>&lt;p&gt;Parquet files offer significant advantages over traditional formats like CSV or JSON. This is more relevant in analytical workloads and processing. &lt;/p&gt;

&lt;p&gt;Tools like &lt;code&gt;parquet-tools&lt;/code&gt; and DuckDB make it easy to create, manipulate, and query these files.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;parquet-tools&lt;/code&gt; &lt;a href="https://github.com/hangxie/parquet-tools" rel="noopener noreferrer"&gt;https://github.com/hangxie/parquet-tools&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DuckDB &lt;a href="https://duckdb.org/docs/stable/data/parquet/overview" rel="noopener noreferrer"&gt;https://duckdb.org/docs/stable/data/parquet/overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1) parquet-tools&lt;/strong&gt;&lt;br&gt;
Display data in the terminal as JSON (default), JSONL, or CSV&lt;/p&gt;

&lt;p&gt;&lt;code&gt;parquet-tools cat data_file.parquet | jq .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;&lt;em&gt;display only two lines/records in &lt;code&gt;jsonl&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;parquet-tools cat --format jsonl --limit 2 data_file.parquet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Get the metadata about the parquet file&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;parquet-tools meta data_file.parquet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) DuckDB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DuckDB is an embedded SQL database that supports reading and writing Parquet files. &lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Generate a Parquet file:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="s1"&gt;'example'&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;col1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="s1"&gt;'data_file.parquet'&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;FORMAT&lt;/span&gt; &lt;span class="s1"&gt;'parquet'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Read a Parquet file:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;read_parquet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'data_file.parquet'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lately, I have been using DuckDB for most of my analytics (dealing with Gigabytes of data) and it can handle both local and cloud-based files efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the Parquet file format?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read the Apache project's Overview/Motivation page &lt;a href="https://parquet.apache.org/docs/overview/motivation/" rel="noopener noreferrer"&gt;https://parquet.apache.org/docs/overview/motivation/&lt;/a&gt; and the project Documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Parquet is built to support very efficient compression and encoding schemes. Multiple projects have demonstrated the performance impact of applying the right compression and encoding scheme to the data. Parquet allows compression schemes to be specified on a per-column level, and is future-proofed to allow adding more encodings as they are invented and implemented.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;For a good, easy read, see this great blog article: &lt;a href="https://blog.matthewrathbone.com/2019/12/20/parquet-or-bust.html" rel="noopener noreferrer"&gt;https://blog.matthewrathbone.com/2019/12/20/parquet-or-bust.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>data</category>
      <category>tools</category>
      <category>parquet</category>
      <category>duckdb</category>
    </item>
    <item>
      <title>DevOps with AWS CodePipeline</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Fri, 14 Mar 2025 01:34:23 +0000</pubDate>
      <link>https://dev.to/akaak/devops-with-aws-codepipeline-29ij</link>
      <guid>https://dev.to/akaak/devops-with-aws-codepipeline-29ij</guid>
      <description>&lt;h1&gt;
  
  
  Automating DevOps with GitLab and AWS CodePipeline Integration
&lt;/h1&gt;

&lt;p&gt;As a developer or DevOps engineer, streamlining your workflow is critical for efficient delivery of high-quality software. Integrating GitLab/GitHub repositories with AWS CodePipeline gives you a robust solution for automating the CI/CD (Continuous Integration/Continuous Deployment) process. I use this setup for my DevOps workflows and highly recommend it to others looking to simplify their software development lifecycle.&lt;/p&gt;

&lt;p&gt;My latest illustration of AWS CodePipeline Stages:&lt;/p&gt;

&lt;p&gt;A visual representation of the AWS CodePipeline CI/CD process, divided into three key stages: &lt;strong&gt;Source&lt;/strong&gt;, &lt;strong&gt;Build&lt;/strong&gt;, and &lt;strong&gt;Deploy&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibedw3nmp5aw5rdc9ujk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibedw3nmp5aw5rdc9ujk.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most powerful features of this integration is its ability to automatically trigger pipeline actions whenever code changes are pushed to GitLab. I configure a webhook between GitLab and AWS CodePipeline, and any new commit to the 'develop'/'release' branch in the repository initiates the pipeline workflow. Super convenient, and it increases your iteration speed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get the benefits of automation and scalability using GitLab with AWS CodePipeline for DevOps.
&lt;/h3&gt;

&lt;p&gt;I’ve found this setup invaluable for managing backend repositories efficiently in my DevOps workflows. If you’re looking to simplify your CI/CD process while maintaining high reliability, integrating GitLab with AWS CodePipeline is an excellent choice.&lt;/p&gt;

&lt;p&gt;By adopting this approach, you can focus more on writing great code while letting automation handle the heavy lifting of building and deploying your applications.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>cicd</category>
    </item>
    <item>
      <title>How to Sync two GIT repositories?</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Tue, 04 Mar 2025 12:43:50 +0000</pubDate>
      <link>https://dev.to/akaak/mar4-how-to-sync-two-git-repos-23nb</link>
      <guid>https://dev.to/akaak/mar4-how-to-sync-two-git-repos-23nb</guid>
      <description>&lt;p&gt;&lt;strong&gt;Steps for synching one Github repository with another Github repository using your local Macbook&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before you start…&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's say, on your local MacBook, you have a folder called “CODE”. &lt;/li&gt;
&lt;li&gt;You already have the first or source repo (ak-repo1) on your local machine (on a feature branch — ‘feat/sync-1’ in the example below)&lt;/li&gt;
&lt;li&gt;You don't have a second or target repo created yet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyznpkf5tprq56huulvdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyznpkf5tprq56huulvdp.png" alt="Sync two git repos - use your local" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 0) Create a Target (blank repository)
&lt;/h2&gt;

&lt;p&gt;Create a new repository (ak-repo2) on GitHub.&lt;br&gt;
For this step, ak-repo2 can stay a blank repo.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1) Clone target repository to your local
&lt;/h2&gt;

&lt;p&gt;Clone ak-repo2 from GitHub to your local CODE folder&lt;br&gt;
&lt;code&gt;git clone https://github.com/akaak/ak-repo2.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Verify the ‘remote’ for this target repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CODE/ak-repo2 - &amp;lt;main&amp;gt; $ git remote -v
origin  https://github.com/akaak/ak-repo2.git (fetch)
origin  https://github.com/akaak/ak-repo2.git (push)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2) Set Repo2 (target) as “remote” for  Repo1 (source) on your local
&lt;/h2&gt;

&lt;p&gt;From Repo1 (ak-repo1), set remote for repo2&lt;br&gt;
&lt;code&gt;CODE/ak-repo1 - &amp;lt;main&amp;gt; $ git remote add ak-repo2 https://github.com/akaak/ak-repo2.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify the remotes on your Source repo&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CODE/ak-repo1 - &amp;lt;feat/sync-1&amp;gt; $ git remote -v
ak-repo2    https://github.com/akaak/ak-repo2.git (fetch)
ak-repo2    https://github.com/akaak/ak-repo2.git (push)
origin  https://github.com/akaak/ak-repo1.git (fetch)
origin  https://github.com/akaak/ak-repo1.git (push)
CODE/ak-repo1 - &amp;lt;feat/sync-1&amp;gt; $
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there are two remotes on this source (ak-repo1): one for the newly added ‘ak-repo2’ and the other for ‘origin’.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3) Push from repo1 to repo2
&lt;/h2&gt;

&lt;p&gt;From Repo1, execute the “git push” command with the new remote's name to push to Repo2.&lt;/p&gt;

&lt;p&gt;While on the branch ‘feat/sync-1’ of repo1, push repo1 branch contents to repo2&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CODE/ak-repo1 - &amp;lt;feat/sync-1&amp;gt; $ git push ak-repo2 feat/sync-1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here the remote name &lt;code&gt;ak-repo2&lt;/code&gt; is recognized by the ‘git push’ command, which pushes the code to Repo2 on GitHub. That remote was set up in Step 2 above.&lt;/p&gt;

&lt;p&gt;The above command gives output similar to the one below…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CODE/ak-repo1 - &amp;lt;feat/sync-1&amp;gt; $ git push ak-repo2 feat/sync-1
Enumerating objects: 213, done.
Counting objects: 100% (213/213), done.
Delta compression using up to 8 threads
Compressing objects: 100% (155/155), done.
Writing objects: 100% (210/210), 365.70 KiB | 121.90 MiB/s, done.
Total 210 (delta 38), reused 210 (delta 38), pack-reused 0
remote: Resolving deltas: 100% (38/38), done.
remote:
remote: Create a pull request for 'feat/sync-1' on GitHub by visiting:
remote:      https://github.com/akaak/ak-repo2/pull/new/feat/sync-1
remote:
To https://github.com/akaak/ak-repo2.git
 * [new branch]      feat/sync-1 -&amp;gt; feat/sync-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4) Verify Repo 2 on Github
&lt;/h2&gt;

&lt;p&gt;Go to Repo2 on GitHub and you should see &lt;code&gt;feat/sync-1&lt;/code&gt; show up as a new branch. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5) Refresh (Fetch) Repo2 on your local
&lt;/h2&gt;

&lt;p&gt;Go to your Repo2 folder on your local&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CODE/ak-repo2 - &amp;lt;main&amp;gt; $ git fetch
remote: Enumerating objects: 214, done.
remote: Counting objects: 100% (214/214), done.
remote: Compressing objects: 100% (156/156), done.
remote: Total 211 (delta 38), reused 210 (delta 38), pack-reused 0 (from 0)
Receiving objects: 100% (211/211), 366.55 KiB | 4.17 MiB/s, done.
Resolving deltas: 100% (38/38), done.
From https://github.com/akaak/ak-repo2
   1f31f5b..592155f  main        -&amp;gt; origin/main
 * [new branch]      feat/sync-1 -&amp;gt; origin/feat/sync-1
CODE/ak-repo2 - &amp;lt;main&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Switch to the feature branch on your local&lt;/strong&gt;&lt;br&gt;
You can switch to the pushed feature branch on your local:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CODE/ak-repo2 - &amp;lt;main&amp;gt; $ git checkout feat/sync-1
branch 'feat/sync-1' set up to track 'origin/feat/sync-1'.
Switched to a new branch 'feat/sync-1'
CODE/ak-repo2 - &amp;lt;feat/sync-1&amp;gt; $ 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may also use the ‘git log’ command to check and compare the logs on both repositories and ensure that they look the same.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CODE/ak-repo1 - &amp;lt;feat/sync-1&amp;gt; $ git log --oneline&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That’s it. Now, both Repo1 and Repo2 are synced, both on GitHub and on your local machine!&lt;/p&gt;
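
&lt;p&gt;For reference, the whole sequence can be rehearsed as a runnable sketch, with two local bare repositories standing in for the GitHub repos (the paths, names, and commit below are placeholders):&lt;/p&gt;

```shell
set -e
code=$(mktemp -d)                       # stands in for your local CODE folder
cd "$code"
git init -q --bare ak-repo1.git         # stand-in for the source repo on GitHub
git init -q --bare ak-repo2.git         # Step 0: blank target repo

git clone -q ak-repo1.git ak-repo1      # the source repo on your local
cd ak-repo1
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "work"
git checkout -q -b feat/sync-1

git remote add ak-repo2 "$code/ak-repo2.git"   # Step 2: target as second remote
git push -q ak-repo2 feat/sync-1               # Step 3: push the branch to repo2

cd "$code"
git clone -q ak-repo2.git ak-repo2             # Steps 1 and 5 folded together
git -C ak-repo2 checkout -q feat/sync-1        # the pushed branch arrived
git -C ak-repo2 log --oneline                  # same history as repo1's branch
```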

</description>
      <category>development</category>
      <category>devops</category>
      <category>git</category>
    </item>
    <item>
      <title>Work with data files from your terminal - visidata</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Thu, 27 Feb 2025 01:23:26 +0000</pubDate>
      <link>https://dev.to/akaak/work-with-data-files-from-your-terminal-visidata-550d</link>
      <guid>https://dev.to/akaak/work-with-data-files-from-your-terminal-visidata-550d</guid>
      <description>&lt;p&gt;If you are working with data files and want to wrangle data then you should add 'visidata' to your arsenal.&lt;/p&gt;

&lt;p&gt;I have been using it for some time and really love this tool. &lt;/p&gt;

&lt;p&gt;From the &lt;a href="https://www.visidata.org/" rel="noopener noreferrer"&gt;https://www.visidata.org/&lt;/a&gt; website:&lt;/p&gt;

&lt;h2&gt;
  
  
  Data exploration at your fingertips.
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;VisiData is an interactive multitool for tabular data. It combines the clarity of a spreadsheet, the efficiency of the terminal, and the power of Python, into a lightweight utility which can handle millions of rows with ease.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>data</category>
      <category>datascience</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Javascript projects and optimize space on your laptop</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Mon, 24 Feb 2025 02:22:29 +0000</pubDate>
      <link>https://dev.to/akaak/javascript-project-and-optimize-space-on-your-laptop-421j</link>
      <guid>https://dev.to/akaak/javascript-project-and-optimize-space-on-your-laptop-421j</guid>
      <description>&lt;p&gt;&lt;em&gt;(last updated Apr 5, 2025)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you are a JavaScript/Node.js developer, then chances are the &lt;code&gt;node_modules&lt;/code&gt; folders in your projects are growing out of control. &lt;/p&gt;

&lt;p&gt;To reclaim space on your MacBook, or to better manage the space consumed by &lt;code&gt;node_modules&lt;/code&gt; across your various Node.js and React.js applications, try one of the two approaches below:&lt;/p&gt;

&lt;h2&gt;
  
  
  Remove Unused node_modules
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Use npkill
&lt;/h3&gt;

&lt;p&gt;This tool allows you to easily identify and remove unnecessary node_modules folders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npx npkill
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command scans all your projects/directories and lists every &lt;code&gt;node_modules&lt;/code&gt; directory with its size, along with the total "Releasable Space" in GB. Use the up/down arrow keys to select a project and hit the space bar to delete that project's &lt;code&gt;node_modules&lt;/code&gt;. Deleted folders are marked, and the "Space saved" total in GB is updated as you go.&lt;/p&gt;

&lt;p&gt;See the screenshot here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfu2xke2eihmzd9wsfoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfu2xke2eihmzd9wsfoi.png" alt="npkill screenshot" width="788" height="664"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use a shell script
&lt;/h3&gt;

&lt;p&gt;Create a script to automatically delete node_modules folders in inactive projects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
   find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"node_modules"&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; d &lt;span class="nt"&gt;-mtime&lt;/span&gt; +30 &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; +
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script removes node_modules folders that haven't been modified in the last 30 days.&lt;/p&gt;
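&lt;p&gt;Since &lt;code&gt;rm -rf&lt;/code&gt; is destructive, it can be worth previewing which folders would be deleted first (paths in the output depend on your own project layout):&lt;/p&gt;

```shell
# Dry run: list node_modules directories older than 30 days without deleting.
# -prune stops find from descending into each node_modules once it matches.
find . -name "node_modules" -type d -prune -mtime +30 -print
```

&lt;p&gt;Once the list looks right, swap &lt;code&gt;-print&lt;/code&gt; for the &lt;code&gt;-exec rm -rf {} +&lt;/code&gt; from the script above.&lt;/p&gt;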

</description>
      <category>tools</category>
      <category>node</category>
      <category>devex</category>
    </item>
    <item>
      <title>Useful tool to work with your JSON files - jq</title>
      <dc:creator>Kiran (AK) Adapa</dc:creator>
      <pubDate>Thu, 26 Dec 2024 16:42:00 +0000</pubDate>
      <link>https://dev.to/akaak/useful-too-to-work-with-your-json-files-jq-4eb4</link>
      <guid>https://dev.to/akaak/useful-too-to-work-with-your-json-files-jq-4eb4</guid>
      <description>&lt;p&gt;If you are working with JSON files then &lt;code&gt;jq&lt;/code&gt; could be a very valuable tool. &lt;code&gt;jq&lt;/code&gt; has proven to be a valuable tool for JSON processing. I've integrated it into several projects to streamline API response handling and data transformation tasks. &lt;br&gt;
In our CI/CD pipelines, jq helps automate configuration updates across environments. If you have JSON-formatted log files, then you can greatly benefit from using 'jq' to search those log files.&lt;/p&gt;

&lt;p&gt;jq has its own query language, and it is quite powerful. Mastering it takes some time, but even a few basic queries get you a long way and significantly improve your productivity when working with JSON data. I have added it to my toolkit, where it complements other command-line utilities for data manipulation and analysis.&lt;/p&gt;
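&lt;p&gt;A couple of basic queries to get a feel for it (the JSON here is a made-up sample):&lt;/p&gt;

```shell
# Extract a single field as raw text (no quotes):
echo '{"name": "ak", "tags": ["cli", "json"]}' | jq -r '.name'
# prints: ak

# Transform an array of objects into an array of values:
echo '[{"id": 1}, {"id": 2}]' | jq -c 'map(.id)'
# prints: [1,2]
```

&lt;p&gt;The &lt;code&gt;-r&lt;/code&gt; flag outputs raw strings instead of JSON-quoted ones, and &lt;code&gt;-c&lt;/code&gt; prints compact single-line output, both handy when piping jq into other tools.&lt;/p&gt;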

&lt;p&gt;"jq is a lightweight and flexible command-line JSON processor" from the jq &lt;a href="https://stedolan.github.io/jq/" rel="noopener noreferrer"&gt;https://stedolan.github.io/jq/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[1] jq manual&lt;br&gt;
&lt;a href="https://jqlang.github.io/jq/manual/" rel="noopener noreferrer"&gt;https://jqlang.github.io/jq/manual/&lt;/a&gt;&lt;/p&gt;


</description>
      <category>jq</category>
      <category>json</category>
    </item>
  </channel>
</rss>
