<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karan Jagtiani</title>
    <description>The latest articles on DEV Community by Karan Jagtiani (@karanjagtiani).</description>
    <link>https://dev.to/karanjagtiani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1034146%2F0dece5b8-a091-487c-a5f3-a7d2e0c01e66.png</url>
      <title>DEV Community: Karan Jagtiani</title>
      <link>https://dev.to/karanjagtiani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karanjagtiani"/>
    <language>en</language>
    <item>
      <title>Skyflo: AI agent for Cloud &amp; DevOps</title>
      <dc:creator>Karan Jagtiani</dc:creator>
      <pubDate>Wed, 31 Dec 2025 15:13:44 +0000</pubDate>
      <link>https://dev.to/karanjagtiani/skyflo-ai-agent-for-cloud-devops-3a7f</link>
      <guid>https://dev.to/karanjagtiani/skyflo-ai-agent-for-cloud-devops-3a7f</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1zbae1xat6l0unt42km.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1zbae1xat6l0unt42km.png" alt="Hero banner showing Skyflo.ai as an AI agent for Cloud and DevOps, positioned as a centralized command interface that brings control, visibility, and safety to Kubernetes and CI/CD operations." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cloud and DevOps work is not hard because the commands are hard.&lt;/p&gt;

&lt;p&gt;It is hard because your context is fragmented.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terminals, dashboards, logs, CI, rollouts&lt;/li&gt;
&lt;li&gt;Different auth models per system&lt;/li&gt;
&lt;li&gt;Different failure modes per workflow&lt;/li&gt;
&lt;li&gt;No shared audit trail of what happened and why&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://skyflo.ai" rel="noopener noreferrer"&gt;Skyflo.ai&lt;/a&gt; is my attempt to reduce that fragmentation.&lt;/p&gt;

&lt;p&gt;Skyflo is an open-source AI agent for Cloud and DevOps.&lt;br&gt;
It unifies Kubernetes operations and CI/CD behind a natural language interface, with approvals built in.&lt;/p&gt;

&lt;p&gt;The important part is not “chat with prod”.&lt;/p&gt;

&lt;p&gt;The important part is control.&lt;/p&gt;
&lt;h2&gt;
  
  
  What Skyflo is
&lt;/h2&gt;

&lt;p&gt;Skyflo is a command center, an agent engine, and a standardized tool layer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Command Center (UI)&lt;/strong&gt;: a chat interface that streams every step in real time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engine&lt;/strong&gt;: a service that runs a LangGraph workflow and turns intent into safe tool calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP Server&lt;/strong&gt;: a tool server exposing standardized integrations for Kubernetes and CI/CD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You express intent in plain English.&lt;/p&gt;

&lt;p&gt;What executes is always a validated tool call.&lt;/p&gt;
&lt;h2&gt;
  
  
  What Skyflo is not
&lt;/h2&gt;

&lt;p&gt;Skyflo is not “give an LLM kubectl and pray”.&lt;/p&gt;

&lt;p&gt;If you want autonomous mutation in production without approvals, you are optimizing for demos.&lt;br&gt;
You are not optimizing for reliability.&lt;/p&gt;

&lt;p&gt;Skyflo is designed for operators who want automation without losing control.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SREs&lt;/li&gt;
&lt;li&gt;DevOps engineers&lt;/li&gt;
&lt;li&gt;cloud architects&lt;/li&gt;
&lt;li&gt;platform teams&lt;/li&gt;
&lt;li&gt;security-minded operators&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The safety model
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxpd24ur8wxavd1a7cbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxpd24ur8wxavd1a7cbp.png" alt="Diagram illustrating Skyflo’s safety model, showing the agent workflow phases Plan, Execute, and Verify, with feedback loops and explicit verification to enforce controlled, human-approved operations." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Skyflo is built around a simple policy:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The agent can propose and prepare. You approve mutations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That policy shows up in four places.&lt;/p&gt;
&lt;h3&gt;
  
  
  1) Human in the loop for mutations
&lt;/h3&gt;

&lt;p&gt;Any &lt;strong&gt;WRITE&lt;/strong&gt; operation requires explicit approval.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apply&lt;/code&gt;, rollout promote or cancel&lt;/li&gt;
&lt;li&gt;Helm upgrade or rollback&lt;/li&gt;
&lt;li&gt;actions that stop or cancel builds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read only workflows can run end to end.&lt;/p&gt;
&lt;h3&gt;
  
  
  2) Plan → Execute → Verify
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuoehlze7bst1s23sqd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuoehlze7bst1s23sqd5.png" alt="LangGraph workflow diagram showing Skyflo’s Plan → Execute → Verify loop, from input sanitization and tool selection through gated execution and final result verification with retry paths.&amp;lt;br&amp;gt;
" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the only loop that matters in operations.&lt;/p&gt;

&lt;p&gt;Skyflo runs an iterative workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Plan&lt;/strong&gt;: interpret intent and perform lightweight discovery when needed
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute&lt;/strong&gt;: call tools, then pause for approval if the next step is a write
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify&lt;/strong&gt;: validate outcomes against intent, then continue or stop
&lt;/li&gt;
&lt;/ol&gt;
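&lt;p&gt;The same gate can be sketched in plain shell. This is an illustration of the pattern, not Skyflo's implementation (the real loop is a LangGraph workflow); the kubectl commands stand in for validated tool calls:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Plan -> Execute -> Verify with a human approval gate.
# Illustrative only: kubectl stands in for Skyflo's tool layer.
set -euo pipefail

plan()    { kubectl diff -f "$1" || true; }  # kubectl diff exits non-zero when changes exist
approve() { read -r -p "Apply these changes? [y/N] " a; [ "$a" = "y" ]; }
execute() { kubectl apply -f "$1"; }
verify()  { kubectl rollout status "deployment/$1" --timeout=60s; }

gated_apply() {
  local manifest="$1" deploy="$2"
  plan "$manifest"                 # Plan: show the proposed mutation
  if approve; then                 # Gate: a human must say yes
    execute "$manifest"            # Execute: the approved write
    verify "$deploy"               # Verify: did the rollout converge?
  else
    echo "aborted: no approval given"
    return 1
  fi
}
```

&lt;p&gt;Read-only steps need no gate; the prompt only appears when the next step mutates state.&lt;/p&gt;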
&lt;h3&gt;
  
  
  3) Everything streams in real time
&lt;/h3&gt;

&lt;p&gt;Operators do not trust black boxes.&lt;/p&gt;

&lt;p&gt;Skyflo streams what it is doing as it does it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;model output&lt;/li&gt;
&lt;li&gt;tool progress&lt;/li&gt;
&lt;li&gt;tool results&lt;/li&gt;
&lt;li&gt;workflow events&lt;/li&gt;
&lt;li&gt;approvals and decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is operational safety.&lt;br&gt;
It is also how you debug the agent.&lt;/p&gt;
&lt;h3&gt;
  
  
  4) Standardized tool execution via MCP
&lt;/h3&gt;

&lt;p&gt;Integration work is where most agent projects fail.&lt;/p&gt;

&lt;p&gt;Skyflo uses MCP so tools are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;discoverable&lt;/li&gt;
&lt;li&gt;typed&lt;/li&gt;
&lt;li&gt;validated&lt;/li&gt;
&lt;li&gt;documented&lt;/li&gt;
&lt;li&gt;separable from the agent logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Engine does not know kubectl.&lt;br&gt;
It knows tools.&lt;/p&gt;
&lt;h2&gt;
  
  
  Supported tools (today)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcrnpvj921dtoczlz4dy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcrnpvj921dtoczlz4dy.png" alt="Unified agent diagram showing Skyflo’s MCP server exposing standardized integrations for Kubernetes, Jenkins, Helm, and Argo, allowing a single agent to safely operate across core Cloud and DevOps tools." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Skyflo ships with standardized tools for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes&lt;/strong&gt;: discovery, get and describe, logs and exec, safe apply and diff flows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Argo Rollouts&lt;/strong&gt;: status, pause and resume, promote and cancel, progressive delivery visibility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt;: search, install, upgrade, rollback with dry run and diff first safety&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jenkins&lt;/strong&gt;: jobs, builds, logs, SCM context, identity, secure auth, CSRF handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Write operations always require approval.&lt;/p&gt;
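&lt;p&gt;The "dry run and diff first" flow for Helm, written out as the underlying commands. The release and chart names here are placeholders, and &lt;code&gt;helm diff&lt;/code&gt; assumes the helm-diff plugin is installed:&lt;/p&gt;

```shell
# Diff-first Helm upgrade; 'myapp' and './chart' are placeholders.
helm_upgrade_gated() {
  helm upgrade myapp ./chart --dry-run   # render templates, apply nothing
  helm diff upgrade myapp ./chart        # needs the helm-diff plugin
  # Everything below is a write; Skyflo pauses here for approval.
  helm upgrade myapp ./chart
}
```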
&lt;h2&gt;
  
  
  What real workflows look like
&lt;/h2&gt;

&lt;p&gt;These prompts map to real on call muscle memory.&lt;/p&gt;
&lt;h3&gt;
  
  
  Debug a production issue
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Show me the last 200 lines of logs for checkout in production. If there are errors, summarize them. Then check if any rollout is in progress.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What you should see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;discovery of namespace, deployment, and pods&lt;/li&gt;
&lt;li&gt;logs and events&lt;/li&gt;
&lt;li&gt;rollout state inspection&lt;/li&gt;
&lt;li&gt;a structured summary&lt;/li&gt;
&lt;/ul&gt;
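&lt;p&gt;For comparison, the manual version of that workflow looks roughly like this. The names come from the example prompt, and the rollout check assumes the Argo Rollouts kubectl plugin is installed:&lt;/p&gt;

```shell
# Manual equivalent of the debugging prompt above, wrapped in a
# function so it can be pointed at a real cluster.
debug_checkout() {
  kubectl -n production logs deploy/checkout --tail=200
  kubectl -n production get events --sort-by=.lastTimestamp
  kubectl argo rollouts get rollout checkout -n production  # needs the plugin
}
```

&lt;p&gt;Skyflo's value is doing this discovery for you and summarizing the result, with nothing to approve because every step is read only.&lt;/p&gt;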
&lt;h3&gt;
  
  
  Progressive delivery with guardrails
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Canary rollout auth-backend in dev through 10/25/50/100 steps. Pause at 25% if error rate increases.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What you should see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a rollout plan and read only checks first&lt;/li&gt;
&lt;li&gt;an approval gate before any mutation&lt;/li&gt;
&lt;li&gt;verification after each step&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Jenkins investigation
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Pull logs for the last failed build of job X. Extract the first failing stage and tell me what changed since the last green build.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What you should see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build discovery&lt;/li&gt;
&lt;li&gt;log retrieval&lt;/li&gt;
&lt;li&gt;structured extraction&lt;/li&gt;
&lt;li&gt;a concrete next step&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Quick start
&lt;/h2&gt;

&lt;p&gt;Install Skyflo.ai into a Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://raw.githubusercontent.com/skyflo-ai/skyflo/main/deployment/install.sh &lt;span class="nt"&gt;-o&lt;/span&gt; install.sh
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x install.sh
./install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Skyflo supports multiple LLM providers via LiteLLM, including self hosted models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contributing: step into the operator’s seat
&lt;/h2&gt;

&lt;p&gt;The best way to contribute to Skyflo is to first use it the way it is meant to be used.&lt;/p&gt;

&lt;p&gt;Not by reading issues.&lt;br&gt;
Not by scanning PRs.&lt;br&gt;
By running it, observing it, and tracing how decisions flow through the system.&lt;/p&gt;

&lt;p&gt;Think of this as stepping into the contributor’s shoes.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Run Skyflo locally and watch it work
&lt;/h3&gt;

&lt;p&gt;Start by cloning the repo and running Skyflo on your own machine or cluster.&lt;/p&gt;

&lt;p&gt;Do not rush to change anything yet.&lt;/p&gt;

&lt;p&gt;Use it as an operator would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;issue a few realistic prompts&lt;/li&gt;
&lt;li&gt;watch how intent becomes a plan&lt;/li&gt;
&lt;li&gt;observe where approvals are enforced&lt;/li&gt;
&lt;li&gt;see how tools are discovered and executed&lt;/li&gt;
&lt;li&gt;follow the streamed events in the UI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this stage, the goal is intuition, not contribution.&lt;br&gt;
You should be able to explain what the agent is doing at each step without reading the code.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Trace the agentic loop end to end
&lt;/h3&gt;

&lt;p&gt;Once you are comfortable with the surface, go deeper.&lt;/p&gt;

&lt;p&gt;Add logs and traces to the engine.&lt;/p&gt;

&lt;p&gt;Follow a single request through the entire lifecycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;intent parsing&lt;/li&gt;
&lt;li&gt;planning and discovery&lt;/li&gt;
&lt;li&gt;tool selection and execution&lt;/li&gt;
&lt;li&gt;approval gates&lt;/li&gt;
&lt;li&gt;verification and termination&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where most understanding is built.&lt;/p&gt;

&lt;p&gt;You will see where state lives, how LangGraph nodes transition, and why certain steps are deliberately slow or blocked.&lt;br&gt;
You will also see why “just let it run” is not acceptable in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) Study closed issues and merged PRs
&lt;/h3&gt;

&lt;p&gt;Before picking something new, look at what has already shipped.&lt;/p&gt;

&lt;p&gt;Read closed issues and their corresponding PRs.&lt;/p&gt;

&lt;p&gt;Focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what problem was being solved&lt;/li&gt;
&lt;li&gt;what safety constraints shaped the solution&lt;/li&gt;
&lt;li&gt;how tools were extended or restricted&lt;/li&gt;
&lt;li&gt;how streaming, approvals, and verification were preserved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives you a strong signal of project standards.&lt;br&gt;
You will quickly see what kinds of changes are welcomed and which ones are rejected.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) Pick a good first issue, or create one
&lt;/h3&gt;

&lt;p&gt;At this point, picking an issue becomes straightforward.&lt;/p&gt;

&lt;p&gt;There are almost always good first issues available.&lt;br&gt;
If you cannot find one that matches your understanding, create one yourself.&lt;/p&gt;

&lt;p&gt;Good issues usually come from observations like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a workflow that feels clunky when used&lt;/li&gt;
&lt;li&gt;a missing verification step&lt;/li&gt;
&lt;li&gt;a tool that exposes too much power&lt;/li&gt;
&lt;li&gt;a safety check that should exist but does not&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open the issue.&lt;br&gt;
Propose the shape of the solution.&lt;br&gt;
Then start implementing it.&lt;/p&gt;

&lt;p&gt;If you follow this path once, you will not just contribute to Skyflo.&lt;/p&gt;

&lt;p&gt;You will understand how production grade agent systems are built, debugged, and kept safe.&lt;br&gt;
That experience is far more valuable than shipping another isolated feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get involved
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7elysceb97xir51bgmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7elysceb97xir51bgmz.png" alt="Call-to-action banner inviting readers to get involved with Skyflo.ai, encouraging contributors to join the open-source mission and help shape a safer AI agent for Cloud and DevOps." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pick an issue and start contributing. Or create a new issue and start a discussion.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/skyflo-ai/skyflo" rel="noopener noreferrer"&gt;https://github.com/skyflo-ai/skyflo&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Issues: &lt;a href="https://github.com/skyflo-ai/skyflo/issues" rel="noopener noreferrer"&gt;https://github.com/skyflo-ai/skyflo/issues&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/kCFNavMund" rel="noopener noreferrer"&gt;https://discord.gg/kCFNavMund&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Connect with me
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbqvrnntvw1zum2n13j1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbqvrnntvw1zum2n13j1.png" alt="Personal banner highlighting karan.social, directing readers to connect with Karan Jagtiani, the founder of Skyflo.ai, and follow his work in Cloud, DevOps, and agentic systems." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am Karan Jagtiani, founder of Skyflo.ai.&lt;br&gt;
You can find me at &lt;a href="https://karan.social" rel="noopener noreferrer"&gt;karan.social&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Drop a comment if you have a good first issue or runbook idea for Skyflo.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>cloud</category>
      <category>opensource</category>
    </item>
    <item>
      <title>QuickNode: Supercharging Your Node.js, TypeScript, and PostgreSQL Projects</title>
      <dc:creator>Karan Jagtiani</dc:creator>
      <pubDate>Sun, 18 Jun 2023 06:49:35 +0000</pubDate>
      <link>https://dev.to/karanjagtiani/quicknode-supercharging-your-nodejs-typescript-and-postgresql-projects-1130</link>
      <guid>https://dev.to/karanjagtiani/quicknode-supercharging-your-nodejs-typescript-and-postgresql-projects-1130</guid>
      <description>&lt;p&gt;Hello, fellow developers!&lt;/p&gt;

&lt;p&gt;Today, I'm thrilled to introduce &lt;a href="https://github.com/KaranJagtiani/quick-node" rel="noopener noreferrer"&gt;QuickNode&lt;/a&gt;, an open-source starter kit for Node.js, TypeScript, and PostgreSQL projects. This comprehensive pack is designed to kickstart your backend development, setting up a robust, scalable, and secure server environment in no time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is QuickNode?
&lt;/h2&gt;

&lt;p&gt;QuickNode is not just another starter kit; it's a complete ecosystem that provides a solid foundation for your backend projects, combining the power of Node.js, TypeScript, PostgreSQL, Express.js, Sequelize, Docker, and a host of other cutting-edge technologies.&lt;/p&gt;

&lt;p&gt;The project's primary aim is to streamline the initial setup process, letting you focus on what truly matters: writing impactful application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Highlights of QuickNode
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Express.js&lt;/strong&gt;: Provides a ready-to-go HTTP server with predefined /user routes, giving your project a running start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sequelize ORM&lt;/strong&gt;: Simplifies PostgreSQL database interactions, making it easier for developers to manage database operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speedy Compilation&lt;/strong&gt;: Employs SWC, the fastest TypeScript compiler available, accelerating your development process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Integration&lt;/strong&gt;: Comes pre-configured with an alpine image for efficient, production-grade deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced Security&lt;/strong&gt;: Integrates JWT and Bcrypt for encrypted user authentication and password storage, ensuring the secure handling of sensitive user data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Testing&lt;/strong&gt;: Utilizes Jest and Supertest to ensure code integrity and comprehensive test coverage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Who can benefit from QuickNode?
&lt;/h2&gt;

&lt;p&gt;Whether you're a seasoned backend developer looking to save time on project setup or a beginner diving into the world of Node.js, TypeScript, and PostgreSQL, QuickNode is built for you. By abstracting away the complex setup process, QuickNode provides a clean slate to start building your applications straight away.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to get started?
&lt;/h2&gt;

&lt;p&gt;Getting started with QuickNode is as simple as cloning the repository, installing the dependencies, and building the project. And just like that, you're ready to start developing!&lt;/p&gt;
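&lt;p&gt;In practice, that boils down to a handful of commands. The npm script names below are my assumption; check the repository's package.json for the exact scripts:&lt;/p&gt;

```shell
# Clone, install, and build QuickNode. The script names are assumptions.
setup_quicknode() {
  git clone https://github.com/KaranJagtiani/quick-node.git
  cd quick-node
  npm install      # install dependencies
  npm run build    # compile TypeScript via SWC
}
```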

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;QuickNode aims to be a constantly evolving project that grows with the community's needs. I welcome contributions, suggestions, and feedback to improve and expand the project. If you find QuickNode useful, please consider starring ⭐️ the repository on GitHub - your support means a lot!&lt;/p&gt;

&lt;p&gt;Check out QuickNode on GitHub: &lt;a href="https://github.com/KaranJagtiani/quick-node" rel="noopener noreferrer"&gt;QuickNode&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy coding, and may QuickNode serve you well in your coding journey!&lt;/p&gt;

</description>
      <category>node</category>
      <category>typescript</category>
      <category>postgres</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Building a GitHub Repository Cloner and Commit Crawler with Go</title>
      <dc:creator>Karan Jagtiani</dc:creator>
      <pubDate>Sat, 10 Jun 2023 06:10:49 +0000</pubDate>
      <link>https://dev.to/karanjagtiani/building-a-github-repository-cloner-and-commit-crawler-with-go-219o</link>
      <guid>https://dev.to/karanjagtiani/building-a-github-repository-cloner-and-commit-crawler-with-go-219o</guid>
      <description>&lt;p&gt;Hello everyone!&lt;/p&gt;

&lt;p&gt;In this post, I'm excited to share a project I've been working on: a GitHub Repository Cloner and Commit Crawler. This Go application is designed to clone a user-provided list of repositories and then crawl through the commit history of each, all without utilizing GitHub APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does It Do?
&lt;/h2&gt;

&lt;p&gt;Our application has a set of specific features that make it both versatile and easy to use:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Repository Cloning&lt;/strong&gt;: Clone multiple GitHub repositories using SSH. This is a secure and efficient way to fetch repositories for local analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Commit Crawling&lt;/strong&gt;: Traverse the commit history of each repository, providing valuable insight into past code changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt;: You can specify how many days in the past you want to crawl and for which author.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Uses your personal SSH keys for secure operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why Did I Build This?
&lt;/h2&gt;

&lt;p&gt;When working with open-source projects or conducting codebase analysis, you often need to examine the commit history of multiple repositories. The GitHub APIs can provide this data, but they come with rate limits and the added complexity of handling paginated API responses.&lt;/p&gt;

&lt;p&gt;Building a tool that uses Git directly to clone repositories and crawl commit history bypasses these restrictions and offers greater flexibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;Here's a quick rundown of the steps involved in using the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Installation&lt;/strong&gt;: First, you need to clone the repository and build the project.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone git@github.com:KaranJagtiani/go-git-cloner.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setup SSH Key&lt;/strong&gt;: Copy the SSH key that has access to the repositories you wish to crawl into the &lt;code&gt;ssh_key&lt;/code&gt; folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration&lt;/strong&gt;: The &lt;code&gt;config.yaml&lt;/code&gt; file is your control center. Here, you specify the repositories to clone, the author email, and the days you wish to crawl in the past.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt;: Build the project as a binary.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go build &lt;span class="nt"&gt;-o&lt;/span&gt; out/go-git-cloner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Execution&lt;/strong&gt;: Run the built binary.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./out/go-git-cloner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila! Your specified repositories are cloned, and the commit history is crawled.&lt;/p&gt;
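&lt;p&gt;For reference, a &lt;code&gt;config.yaml&lt;/code&gt; might look like the sketch below. The field names are illustrative assumptions on my part; check the sample config in the repository for the exact schema:&lt;/p&gt;

```shell
# Write an illustrative config.yaml. Field names are assumptions,
# not the project's documented schema.
printf '%s\n' \
  'repositories:' \
  '  - git@github.com:KaranJagtiani/go-git-cloner.git' \
  'author_email: you@example.com' \
  'days_in_past: 30' \
  > config.yaml
```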

&lt;h2&gt;
  
  
  Open Source Contribution
&lt;/h2&gt;

&lt;p&gt;The project is open-source and contributions are always welcome! To contribute, simply fork the project, create your feature branch, commit your changes, and open a pull request.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;The GitHub Repository Cloner and Commit Crawler offers an efficient and secure method to clone and crawl GitHub repositories, providing a flexible tool for codebase analysis. I hope it helps in your development journey!&lt;/p&gt;

&lt;p&gt;The project is open-source and I welcome any contributions, suggestions, and feedback. You can find the project &lt;a href="https://github.com/KaranJagtiani/go-git-cloner" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you have any questions, want to connect with me, or are interested in checking out my other work, feel free to visit my website: &lt;a href="https://karanjagtiani.com" rel="noopener noreferrer"&gt;https://karanjagtiani.com&lt;/a&gt;. I'm always excited to connect with fellow developers and open-source enthusiasts. Looking forward to hearing from you!&lt;/p&gt;

</description>
      <category>go</category>
      <category>github</category>
      <category>git</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Supercharge Your Web Development with NextJS: 9 Key Advantages</title>
      <dc:creator>Karan Jagtiani</dc:creator>
      <pubDate>Fri, 07 Apr 2023 15:45:43 +0000</pubDate>
      <link>https://dev.to/karanjagtiani/supercharge-your-web-development-with-nextjs-9-key-advantages-4921</link>
      <guid>https://dev.to/karanjagtiani/supercharge-your-web-development-with-nextjs-9-key-advantages-4921</guid>
      <description>&lt;p&gt;Since Facebook (now Meta) first released React in 2013, it's been a go-to choice for building web applications. But recently, a challenger has emerged – NextJS! Developed by Vercel, NextJS is a powerful React-based framework that comes packed with awesome features to make your web applications highly scalable. Ready for nine reasons why NextJS leaves React in the dust for creating scalable web apps with exceptional user experiences, lightning-fast load times, and killer SEO features? Let's dive in!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. SEO Magic Out of the Box&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NextJS is a superhero when it comes to Search Engine Optimization (SEO). It's got features like server-side rendering (SSR) and static site generation (SSG) in its arsenal, helping your web app rank higher and score more organic traffic. Plus, NextJS makes managing the head tag, meta tags, and structured data a breeze! React does support SEO, but you'll need third-party libraries and extra configs to get the job done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. SSR: Your Web App's Secret Weapon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NextJS has built-in support for server-side rendering (SSR), a game-changer that lets web applications render pages on the server before sending them to the client. This slashes the initial load times and boosts SEO performance. Meanwhile, React apps by default render the HTML pages client-side, which can lead to longer load times and subpar SEO.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. SSG: The Need for Speed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Static Site Generation (SSG) is where NextJS really shines. With SSG, your web pages are pre-rendered as static HTML files, giving you blazingly fast load times, which results in better SEO performance. React also has SSG support with 3rd party tools, but it's just not as smooth and integrated as it is in NextJS!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. ISG: Keep Your Content Fresh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Incremental Site Generation (ISG) is a unique NextJS feature that lets you update static pages incrementally, so you don't have to rebuild your entire site. It's a game-changer for big web apps with frequently updated content, cutting down on build times and ensuring users see the latest and greatest. Sadly, React doesn't have a built-in ISG solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Auto Code Splitting for the Win&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NextJS automatically breaks your app's code into smaller chunks, so only the necessary code is loaded for each page. This means faster page loads and happier users! React supports code splitting too, but you'll have to set it up manually – and who has time for that?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. TypeScript Support: No Sweat&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NextJS is all about making life easier with first-class TypeScript support. Just a minimal config, and you're ready to enjoy TypeScript's benefits like better code quality and refactoring. React supports TypeScript too, but it requires extra setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. File-System Based Routing: Easy Peasy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NextJS's file-system based routing system is a breeze, automatically generating routes based on your app's file structure. With React, you'll need third-party libraries like React Router to handle routing – adding complexity to your development process.&lt;/p&gt;
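&lt;p&gt;The convention is easy to picture on disk. With the pages router, each file under &lt;code&gt;pages/&lt;/code&gt; becomes a route; the file names below are illustrative:&lt;/p&gt;

```shell
# NextJS pages-router convention: files map directly to routes.
mkdir -p pages/blog
touch pages/index.js            # serves /
touch pages/about.js            # serves /about
touch "pages/blog/[slug].js"    # serves /blog/:slug (dynamic segment)
```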

&lt;p&gt;&lt;strong&gt;8. Auto Performance Optimization: Just Works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NextJS auto-optimizes your web app's performance using techniques like prefetching, inlining critical CSS, and lazy loading images. This means a smoother user experience, faster loads, and less resource consumption. React has some performance optimization features, but they're not as comprehensive or automatic as NextJS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Experimental Features? More Like Cutting-Edge&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NextJS has a faster release cycle than React, which means experimental features often become stable sooner. That's great news for developers who want to stay ahead of the curve and keep their web apps fresh and competitive. React does release new features, but its slower release cycle can be a bit of a downer for devs who crave the latest and greatest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Still not Convinced?
&lt;/h2&gt;

&lt;p&gt;If you are still not convinced by the points mentioned above, the React team themselves list NextJS as the #1 choice for creating new React applications. They recently replaced the notorious &lt;code&gt;create-react-app&lt;/code&gt; with NextJS, Remix, Gatsby, and Expo as the recommended options for creating a brand new React app on the &lt;a href="https://react.dev/learn/start-a-new-react-project" rel="noopener noreferrer"&gt;Start a New React Project&lt;/a&gt; page!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Verdict: NextJS Takes the Crown&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There's no denying it – NextJS takes the crown when it comes to building scalable web applications. With its impressive arsenal of features, including built-in SSR, automatic code splitting, SSG, ISG, TypeScript support, file-system based routing, automatic performance optimization, rapid adoption of experimental features, and out-of-the-box SEO support, NextJS makes creating cutting-edge web apps a breeze. So if you're considering developing a scalable, high-performance, React-based web application, it's time to give NextJS a shot – you won't be disappointed!&lt;/p&gt;

&lt;p&gt;If there are any points I missed that make NextJS a compelling and go-to framework to use for production applications, please do mention them in the comments.&lt;/p&gt;

&lt;p&gt;Please don't hesitate to reach out to me directly. I'm always happy to hear from readers and help in any way I can.&lt;/p&gt;

&lt;p&gt;Thanks again for taking the time to read this article, see you next time :)&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Send Logs from Go to Logstash in the ELK Stack with Docker Setup</title>
      <dc:creator>Karan Jagtiani</dc:creator>
      <pubDate>Sun, 26 Feb 2023 19:55:31 +0000</pubDate>
      <link>https://dev.to/karanjagtiani/send-logs-from-go-to-logstash-in-the-elk-stack-with-docker-setup-16eo</link>
      <guid>https://dev.to/karanjagtiani/send-logs-from-go-to-logstash-in-the-elk-stack-with-docker-setup-16eo</guid>
      <description>&lt;p&gt;Logging is a critical component of any application, allowing developers to easily identify and troubleshoot issues that arise during runtime. However, as applications become more complex, it can be challenging to manage and analyze logs in a way that is both efficient and effective. This is where Logstash comes in. Logstash is a powerful tool that simplifies the process of collecting, processing, and storing logs in a centralized location. In this blog post, we will explore how to push logs from a Go app to Logstash using the go-logstash package.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Logstash?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/logstash/" rel="noopener noreferrer"&gt;Logstash&lt;/a&gt; is an open-source tool that allows developers to easily ingest, process, and store logs. It is part of the Elastic Stack (also known as ELK), which includes Elasticsearch and Kibana. Logstash provides a variety of input and output plugins that allow it to collect logs from a wide range of sources, including files, TCP/UDP sockets, and messaging systems like Kafka and RabbitMQ. Once collected, Logstash can process and enrich the logs before forwarding them to Elasticsearch for storage and analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing go-logstash
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/KaranJagtiani/go-logstash" rel="noopener noreferrer"&gt;go-logstash&lt;/a&gt; is a Golang package that provides a simple interface for pushing logs to Logstash. It supports both TCP and UDP protocols and can output logs in either JSON or string format. go-logstash is easy to use and provides customizable options for configuring the Logstash connection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Logstash
&lt;/h2&gt;

&lt;p&gt;Before we can start pushing logs to Logstash, we need to set up a Logstash instance. This can be done by downloading and installing Logstash from the Elastic website or setting it up using Docker Compose.&lt;/p&gt;

&lt;p&gt;Create a new directory, let's call it &lt;code&gt;go-logstash-demo&lt;/code&gt;, and create the directories and files we'll need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;go-logstash-demo &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;go-logstash-demo
&lt;span class="nb"&gt;touch &lt;/span&gt;docker-compose.yml
&lt;span class="nb"&gt;touch &lt;/span&gt;docker-setup/logstash/Dockerfile
&lt;span class="nb"&gt;touch &lt;/span&gt;docker-setup/logstash/logstash.conf
&lt;span class="nb"&gt;touch &lt;/span&gt;docker-setup/go-logger/Dockerfile
&lt;span class="nb"&gt;touch &lt;/span&gt;main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the &lt;code&gt;docker-compose.yml&lt;/code&gt; file that creates the ELK stack (Elasticsearch, Logstash, and Kibana, each in its own container), along with a container for the Go application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker-compose.yml

version: "3.9"

services:
  elasticsearch:
    image: elasticsearch:7.1.0
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "discovery.type=single-node"
    networks:
      - elk

  logstash:
    build:
      context: .
      dockerfile: docker-setup/logstash/Dockerfile
    ports:
      - 9600:9600
      - 5228:5228
    environment:
      LOGSTASH_PORT: 5228
      LOGSTASH_INDEX: "test-index"
      ELASTIC_HOST: "elasticsearch:9200"
      ELASTIC_USERNAME: "elastic"
      ELASTIC_PASSWORD: "elastic"
    networks:
      - elk
    depends_on:
      - elasticsearch
    links:
      - elasticsearch

  kibana:
    image: kibana:7.1.0
    hostname: kibana
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
    environment:
      ELASTIC_HOST: "http://elasticsearch:9200"
      ELASTIC_USERNAME: "elastic"
      ELASTIC_PASSWORD: "elastic"

  go-app:
    container_name: go-app
    build:
      context: .
      dockerfile: docker-setup/go-logger/Dockerfile
    networks:
      - elk

networks:
  elk:
    driver: bridge

volumes:
  esdata:
    driver: local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a custom Dockerfile for the Logstash container because we want to provide a custom &lt;code&gt;logstash.conf&lt;/code&gt; file to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker-setup/logstash/Dockerfile

FROM docker.elastic.co/logstash/logstash-oss:7.1.0

COPY ./docker-setup/logstash/logstash.conf /etc/logstash/conf.d/

CMD logstash -f /etc/logstash/conf.d/logstash.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's an example of a basic &lt;code&gt;logstash.conf&lt;/code&gt; file that creates a Logstash pipeline to collect logs from a TCP or UDP socket and output them to Elasticsearch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker-setup/logstash/logstash.conf

input {
  tcp {
    host =&amp;gt; "0.0.0.0"
    port =&amp;gt; "${LOGSTASH_PORT}"
    codec =&amp;gt; json_lines
  }
  udp {
    host =&amp;gt; "0.0.0.0"
    port =&amp;gt; "${LOGSTASH_PORT}"
    codec =&amp;gt; json_lines
  }
}

output {
  stdout { codec =&amp;gt; json_lines }
  elasticsearch {
      hosts =&amp;gt; [ "${ELASTIC_HOST}" ]
      user =&amp;gt; "${ELASTIC_USERNAME}"
      password =&amp;gt; "${ELASTIC_PASSWORD}"
      codec =&amp;gt; json_lines
      index =&amp;gt; "${LOGSTASH_INDEX}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;logstash.conf&lt;/code&gt; config file supports environment variables as well, which we provide through our &lt;code&gt;docker-compose.yml&lt;/code&gt; file. This pipeline listens for logs on TCP and UDP port 5228 (set via &lt;code&gt;LOGSTASH_PORT&lt;/code&gt;), expects them as newline-delimited JSON, and outputs them to Elasticsearch in JSON.&lt;/p&gt;

&lt;p&gt;We also need to create a Dockerfile for the Go application, since it will use the internal Docker network to reach Logstash and push logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker-setup/go-logger/Dockerfile

FROM golang:1.19-alpine

WORKDIR /go-logstash-json

COPY . .

RUN go build -o out/logger *.go

CMD [ "./out/logger" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pushing logs from a Go app to Logstash
&lt;/h2&gt;

&lt;p&gt;Now that our ELK stack is set up locally (start it with &lt;code&gt;docker-compose up --build&lt;/code&gt; from the project root), let's take a look at how we can push logs to it from a Go application using go-logstash.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Go application
&lt;/h3&gt;

&lt;p&gt;In the root directory of &lt;code&gt;go-logstash-demo&lt;/code&gt;, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go mod init example.com/go-logstash-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Install &lt;a href="https://github.com/KaranJagtiani/go-logstash" rel="noopener noreferrer"&gt;go-logstash&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Next, install go-logstash using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go get github.com/KaranJagtiani/go-logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Test the Library
&lt;/h3&gt;

&lt;p&gt;Next, we need to import the go-logstash package in our Go application and test the library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.go

package main

import (
    logstash_logger "github.com/KaranJagtiani/go-logstash"
)

func main() {
    logger := logstash_logger.Init("logstash", 5228, "tcp", 5)

    payload := map[string]interface{}{
        "message": "TEST_MSG",
        "error":   false,
    }

    logger.Log(payload) // Generic log
    logger.Info(payload) // Adds "severity": "INFO"
    logger.Debug(payload) // Adds "severity": "DEBUG"
    logger.Warn(payload) // Adds "severity": "WARN"
    logger.Error(payload) // Adds "severity": "ERROR"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logger := logstash_logger.Init("logstash", 5228, "tcp", 5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This line creates a new logger that sends logs to the Logstash instance at host &lt;code&gt;logstash&lt;/code&gt; (the Docker Compose service name) over TCP port 5228, with a connection timeout of 5 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, sending logs from a Go application to Logstash in the ELK stack is a powerful way to centralize and manage your application logs. With the help of the &lt;a href="https://github.com/KaranJagtiani/go-logstash" rel="noopener noreferrer"&gt;go-logstash&lt;/a&gt; library, it's easy to push logs to Logstash using either TCP or UDP. Logstash can then process and enrich the logs, making it easier to analyze and troubleshoot issues in your application.&lt;/p&gt;

&lt;p&gt;Thank you for reading this beginner's guide on sending logs from a Go application to Logstash in the ELK stack with Docker setup. I hope that this guide has provided you with a useful overview of the process and has helped you get started with integrating Go and Logstash.&lt;/p&gt;

&lt;p&gt;If you have any questions or feedback, please don't hesitate to leave a comment or reach out to me directly. I'm always happy to hear from readers and help in any way I can.&lt;/p&gt;

&lt;p&gt;Thanks again for reading, and happy logging!&lt;/p&gt;

</description>
      <category>bitcoin</category>
      <category>blockchain</category>
      <category>cryptocurrency</category>
    </item>
  </channel>
</rss>
