<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ricards Taujenis</title>
    <description>The latest articles on DEV Community by Ricards Taujenis (@mozes721).</description>
    <link>https://dev.to/mozes721</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F588915%2F9ce2bb62-c020-4524-b62c-d54cd86b5c2f.png</url>
      <title>DEV Community: Ricards Taujenis</title>
      <link>https://dev.to/mozes721</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mozes721"/>
    <language>en</language>
    <item>
      <title>Why I'm Heading to Warsaw for NBX 2026</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Thu, 19 Mar 2026 11:56:47 +0000</pubDate>
      <link>https://dev.to/mozes721/why-im-heading-to-warsaw-for-nbx-2026-29fp</link>
      <guid>https://dev.to/mozes721/why-im-heading-to-warsaw-for-nbx-2026-29fp</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vr9qif41hpbihfrdiid.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vr9qif41hpbihfrdiid.jpg" alt=" " width="474" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's no shortage of Web3 conferences these days.&lt;/p&gt;

&lt;p&gt;But every now and then, an event stands out - not because of the hype, but because of the people and opportunities it brings together.&lt;/p&gt;

&lt;p&gt;That's why I'm excited to share that I'll be joining &lt;a href="https://nextblockexpo.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Next Block Expo 2026&lt;/strong&gt;&lt;/a&gt; as an Official Ambassador.&lt;/p&gt;

&lt;h2&gt;
  
  
  Warsaw Is Becoming a Web3 Hotspot
&lt;/h2&gt;

&lt;p&gt;On March 24–25, Warsaw will turn into one of the most important meeting points for the European Web3 ecosystem.&lt;/p&gt;

&lt;p&gt;We're talking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;2,000+ attendees&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;140+ speakers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Builders, investors, founders - all in one place&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But scale isn't what makes an event valuable.&lt;/p&gt;

&lt;p&gt;What really matters is what happens between people.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Underrated Advantage: Smart Networking
&lt;/h2&gt;

&lt;p&gt;One thing that really stood out to me this year is the networking experience.&lt;/p&gt;

&lt;p&gt;As part of the event, attendees get access to a dedicated app designed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Schedule 1:1 meetings&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Discover relevant people&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate the event efficiently&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because the difference between a "good" conference and a truly valuable one usually comes down to one thing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who you meet - and how intentionally you meet them.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Decided to Get Involved
&lt;/h2&gt;

&lt;p&gt;Joining as an ambassador isn't just about attending.&lt;/p&gt;

&lt;p&gt;It's about being part of something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connects the right people&lt;/li&gt;
&lt;li&gt;Supports early-stage builders&lt;/li&gt;
&lt;li&gt;Pushes the ecosystem forward&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Web3 is at a stage where signal matters more than noise.&lt;/p&gt;

&lt;p&gt;And events like this help surface that signal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohr3tiifbuqp33bcl3du.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohr3tiifbuqp33bcl3du.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  See You There?
&lt;/h2&gt;

&lt;p&gt;If you're planning to attend Next Block Expo 2026, I'd love to connect.&lt;/p&gt;

&lt;p&gt;🎟️ You can register here: &lt;a href="https://lnkd.in/d4m4Y4pV" rel="noopener noreferrer"&gt;https://lnkd.in/d4m4Y4pV&lt;/a&gt;&lt;br&gt;
 Use promo code: &lt;strong&gt;Mozes721&lt;/strong&gt;&lt;br&gt;
If not this time - hopefully we'll cross paths at a future edition.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>blockchain</category>
      <category>networking</category>
      <category>startup</category>
    </item>
    <item>
      <title>OpenClaw AI Agent on Raspberry Pi</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Tue, 24 Feb 2026 11:47:23 +0000</pubDate>
      <link>https://dev.to/mozes721/openclaw-ai-agent-on-raspberry-pi-4o4n</link>
      <guid>https://dev.to/mozes721/openclaw-ai-agent-on-raspberry-pi-4o4n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu0roxx81c04wkqhs0m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu0roxx81c04wkqhs0m4.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenClaw is an open-source personal AI assistant that runs locally on hardware like a Raspberry Pi, keeping your data private and under your control. If you enjoy self-hosting and experimenting with AI agents, this setup is surprisingly straightforward.&lt;/p&gt;

&lt;p&gt;You can find the related video below 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/GO5WVQ-lds8" rel="noopener noreferrer"&gt;https://youtu.be/GO5WVQ-lds8&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Raspberry Pi Prep
&lt;/h1&gt;

&lt;p&gt;SSH into your Pi and update the system for stability:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Node.js 22+ if missing (required for OpenClaw), plus Python for potential dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install nodejs npm python3 python3-pip -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the one-liner installer, which handles Node detection and onboarding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://openclaw.ai/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Post-install, check setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw doctor
openclaw status
openclaw dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  OpenClaw Overview
&lt;/h1&gt;

&lt;p&gt;In a nutshell, it acts as a 24/7 AI agent with persistent memory.&lt;/p&gt;

&lt;p&gt;It integrates with apps like Telegram, WhatsApp, Discord, etc. You can choose from multiple LLM providers depending on your needs and setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4qsmcut4zy8ei475dgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4qsmcut4zy8ei475dgp.png" alt=" " width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Onboarding process
&lt;/h2&gt;

&lt;p&gt;After installation, launch the guided wizard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw onboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  Process Breakdown
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 1: LLM Selection&lt;/strong&gt; - Prompts for a model (e.g., Gemini), then API key entry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 2: Channel Setup&lt;/strong&gt; - Picks an interface like Telegram; inputs the bot token from &lt;a class="mentioned-user" href="https://dev.to/botfather"&gt;@botfather&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 3: Provider Link&lt;/strong&gt; - Links accounts (Google for Gemini, GitHub, etc.) for skills/tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Telegram Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Message &lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/botfather"&gt;@botfather&lt;/a&gt;&lt;/strong&gt; on Telegram.&lt;/li&gt;
&lt;li&gt;Send &lt;code&gt;/newbot&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Name your bot (e.g., &lt;code&gt;MyOpenClawBot&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Copy the API token provided by BotFather.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Approve pairing
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In Telegram, search for your new bot.&lt;/li&gt;
&lt;li&gt;Send &lt;code&gt;/start&lt;/code&gt; (or any message) to generate a pairing code.&lt;/li&gt;
&lt;li&gt;On your Pi, run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw pairing approve telegram [pairing-code]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now test by messaging the bot - responses should sync via your selected LLM across both the dashboard and Telegram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o9n0bl6uh56qg1adav5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o9n0bl6uh56qg1adav5.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;OpenClaw is still very new and evolving quickly. Some users are already running it on devices like the Mac mini. I'll continue testing it on my Mac and plan to link Google Calendar, Mail, and Drive to explore its full potential.&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>raspberrypi</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Kickstart OpenCode with OpenRouter</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Sun, 11 Jan 2026 16:32:16 +0000</pubDate>
      <link>https://dev.to/mozes721/kickstart-opencode-with-openrouter-32o7</link>
      <guid>https://dev.to/mozes721/kickstart-opencode-with-openrouter-32o7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgl38qnc5kwgopg2gg5oe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgl38qnc5kwgopg2gg5oe.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI coding tools evolve fast, and OpenCode stands out as an open-source agent rivaling Cursor or Claude Code CLI - fully terminal-native with LSP support, multi-session handling, and 75+ LLM providers via Models.dev.&lt;/p&gt;

&lt;p&gt;Compared to locked-in, subscription-based alternatives, you can pick models pay-as-you-go, prioritizing privacy and flexibility.&lt;br&gt;
Check my 5-minute setup video for a quick demo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/VSN3fCJcoIc" rel="noopener noreferrer"&gt;https://youtu.be/VSN3fCJcoIc&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Installation
&lt;/h1&gt;

&lt;p&gt;Start with the one-liner curl install - no Docker or complex deps needed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://opencode.ai/install | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This adds the &lt;code&gt;opencode&lt;/code&gt; CLI globally. Verify with &lt;code&gt;opencode --version&lt;/code&gt;, then launch via &lt;code&gt;opencode&lt;/code&gt;. For VS Code integration, it pairs seamlessly as an LSP client. Full docs: opencode.ai/docs.&lt;/p&gt;
&lt;h1&gt;
  
  
  OpenRouter Setup
&lt;/h1&gt;

&lt;p&gt;OpenRouter powers model access via its OpenAI-compatible API - sign up at openrouter.ai, grab a key, and fund credits (pay-per-token, with free tiers to start).&lt;br&gt;
Export these env vars (add to ~/.zshrc for persistence):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export OPENAI_API_BASE=https://openrouter.ai/api/v1
export OPENAI_API_KEY=your-openrouter-api-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source your shell &lt;code&gt;(source ~/.zshrc)&lt;/code&gt; and relaunch. This unlocks 300+ models without switching providers.&lt;br&gt;
I added some credits to OpenRouter for the more advanced models I use across different projects.&lt;/p&gt;
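&lt;p&gt;To make the wiring concrete, here is a minimal Python sketch (illustrative only, not OpenCode's internals) of how an OpenAI-compatible chat request to OpenRouter is composed from those two env vars; the model id is just an example:&lt;/p&gt;

```python
import os

# Compose an OpenAI-compatible chat request against OpenRouter
# from the env vars exported above. Sketch only: nothing is
# actually sent over the network here.
base = os.environ.get("OPENAI_API_BASE", "https://openrouter.ai/api/v1")
url = base.rstrip("/") + "/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    "Content-Type": "application/json",
}
payload = {
    "model": "anthropic/claude-3.5-sonnet",  # example OpenRouter model id
    "messages": [{"role": "user", "content": "Hello"}],
}
```

&lt;p&gt;Any client that speaks the OpenAI API shape (endpoint, bearer key, model id) can point at OpenRouter this way, which is why the two exports are all OpenCode needs.&lt;/p&gt;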

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz35tdanhq779aip3cw6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz35tdanhq779aip3cw6f.png" alt=" " width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl6o9skeyy5myrqla39i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl6o9skeyy5myrqla39i.jpg" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Basic Usage
&lt;/h1&gt;

&lt;p&gt;Run &lt;code&gt;opencode&lt;/code&gt; in a repo to init: it prompts for project context, then use slash commands in the TUI.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/connect&lt;/code&gt;: Link OpenRouter (auto-detects env vars).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/models&lt;/code&gt;: List/select (e.g., Claude 3.5 Sonnet or GPT-4o; filter for free ones).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/init&lt;/code&gt;: Scans repo, suggests plan (e.g., "Analyze main.rs and create layers").&lt;/li&gt;
&lt;li&gt;Core flow: &lt;code&gt;/plan&lt;/code&gt; for an outline, &lt;code&gt;/build&lt;/code&gt; to generate and run code, &lt;code&gt;/improve&lt;/code&gt; for refinements.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>automation</category>
      <category>llm</category>
    </item>
    <item>
      <title>Self-Hosting n8n in 5 Minutes: Local Docker or VPS Setup</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Fri, 12 Dec 2025 12:57:10 +0000</pubDate>
      <link>https://dev.to/mozes721/self-hosting-n8n-in-5-minutes-local-docker-or-vps-setup-m80</link>
      <guid>https://dev.to/mozes721/self-hosting-n8n-in-5-minutes-local-docker-or-vps-setup-m80</guid>
      <description>&lt;p&gt;As a powerful open-source workflow automation tool running locally or in own server free of charge brings huge value to build future projects!&lt;br&gt;
For more info check the website &lt;a href="https://docs.n8n.io/" rel="noopener noreferrer"&gt;https://docs.n8n.io/&lt;/a&gt; otherwise lets get started! 🏁🏁🏁&lt;/p&gt;
&lt;h1&gt;
  
  
  Docker Setup
&lt;/h1&gt;

&lt;p&gt;To run locally you just need a simple Docker command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build image
docker build -t my-n8n .

# Run container
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;--name&lt;/code&gt; sets the container name, &lt;code&gt;-p&lt;/code&gt; maps the port, and &lt;code&gt;-v&lt;/code&gt; ensures your workflows are saved persistently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fony2k2y669rkp9qu01w7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fony2k2y669rkp9qu01w7.png" alt=" " width="690" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After some time (the image is about 1.3 GB), it should be running, and on the localhost port you will see the sign-in screen shown above.&lt;/p&gt;

&lt;p&gt;And that's practically it - you can start building your workflows!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Just don't rebuild the image without preserving the volumes - you may delete your workflows, as I have before!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Webhooks &amp;amp; ngrok
&lt;/h2&gt;

&lt;p&gt;Locally, if you want n8n to communicate with external applications, the &lt;strong&gt;Webhook&lt;/strong&gt; node needs to work, and for that ngrok is required.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ngrok exposes your local n8n instance to the internet, so external apps can reach your webhooks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiubz4b9s23umlorscsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiubz4b9s23umlorscsi.png" alt=" " width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once a domain has been created and the API key is set, it's better to replace the plain Docker command with a &lt;strong&gt;docker-compose.yml&lt;/strong&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'
services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5555:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=your-secure-password
      - N8N_HOST=0.0.0.0
      - N8N_PORT=5678
    volumes:
      - n8n_data:/home/node/.n8n

  ngrok:
    image: ngrok/ngrok:latest
    command: ["http", "n8n:5678"]
    environment:
      - NGROK_AUTHTOKEN=your-ngrok-authtoken
    ports:
      - "4040:4040"
    depends_on:
      - n8n

volumes:
  n8n_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done and rebuilt, you can use Webhooks. 💪&lt;/p&gt;

&lt;h1&gt;
  
  
  Server setup
&lt;/h1&gt;

&lt;p&gt;On a VPS with a public IP or domain, you don't need ngrok. Just point your DNS to the server and configure n8n with your domain.&lt;/p&gt;

&lt;p&gt;Let's again use a compose file, &lt;code&gt;docker-compose.server.yml&lt;/code&gt; (tweak &lt;code&gt;N8N_HOST&lt;/code&gt; to your domain):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'
services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5555:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=your-secure-password
      - N8N_HOST=0.0.0.0
      - N8N_PORT=5678
    volumes:
      - n8n_data:/home/node/.n8n

  ngrok:
    image: ngrok/ngrok:latest
    command: ["http", "n8n:5678"]
    environment:
      - NGROK_AUTHTOKEN=your-ngrok-authtoken
    ports:
      - "4040:4040"
    depends_on:
      - n8n

volumes:
  n8n_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;restart: always&lt;/code&gt; ensures n8n restarts automatically if the server reboots.&lt;/p&gt;

&lt;p&gt;And run &lt;code&gt;docker compose -f docker-compose.server.yml up -d&lt;/code&gt;.&lt;/p&gt;




&lt;p&gt;Whether you're experimenting locally or deploying on a VPS, n8n gives you full control over your workflows. With Docker Compose, you can scale easily, secure your instance, and connect to external apps. Start building your automations today - without subscription costs.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>tooling</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How AI slop impacted content creators</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Mon, 06 Oct 2025 07:44:23 +0000</pubDate>
      <link>https://dev.to/mozes721/how-ai-slop-impacted-content-creators-1feg</link>
      <guid>https://dev.to/mozes721/how-ai-slop-impacted-content-creators-1feg</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhyz22qps1xoqa88t62v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhyz22qps1xoqa88t62v.jpg" alt=" " width="626" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a content creator across different media types (blogging, YouTube), I have been “gutted” by how much of an impact it has made on me personally, both in view count and financially.&lt;/p&gt;

&lt;p&gt;While AI does help tremendously, at the same time it diminishes the importance of most ‘white collar’ jobs.&lt;/p&gt;

&lt;p&gt;📉 View Count &amp;amp; Revenue Decline&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is no shocker that AI-generated content flooded a lot of platforms, pushing authentic creators down in search and recommendation algorithms due to sheer volume.&lt;/li&gt;
&lt;li&gt;Lower visibility leads to reduced engagement, watch time, and ad revenue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Thus, these platforms have also cut our earnings in half, as it’s tough to compete and fewer people visit these platforms to gain insight on different topics.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;😞 Emotional Toll on Creators&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creators feel demoralized, questioning the value of their work.
The joy of creating is replaced by frustration and disillusionment.&lt;/li&gt;
&lt;li&gt;Any level of authenticity feels diminished, even if AI can construct arguments a lot better.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧭 Why Creators Are Going Silent&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some creators step back due to the overwhelming noise and lack of reward.&lt;/li&gt;
&lt;li&gt;Others pause to reassess their purpose and whether authenticity still has a place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While I have kept YouTube at some decent level of consistency, with blogging I have not, due to this ‘AI slop’ and the platforms cutting rewards of any sort.&lt;/p&gt;

&lt;p&gt;🧠 Oversaturation &amp;amp; Algorithmic Noise&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creators who invest time and emotion into their work get buried under generic output.&lt;/li&gt;
&lt;li&gt;Platforms are overwhelmed with mass-produced, low-effort content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even before AI there was already competition in the field, rising across all “white collar” types of jobs, but now it really disrupts it all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6ri2r34q6fu82gfqoor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6ri2r34q6fu82gfqoor.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;My drop in view count and earnings left me too demotivated to post an article for around 3 months. Whether this is my final post or just a pause, one thing is clear — authenticity has to find new ways to shine in a noisy, AI-saturated world.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>contentwriting</category>
      <category>automation</category>
      <category>writing</category>
    </item>
    <item>
      <title>How to setup RAG with VectorDB</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Sat, 21 Jun 2025 14:08:04 +0000</pubDate>
      <link>https://dev.to/mozes721/how-to-setup-rag-with-vectordb-m85</link>
      <guid>https://dev.to/mozes721/how-to-setup-rag-with-vectordb-m85</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnfqsbu7r85fgbch9z7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnfqsbu7r85fgbch9z7t.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may have come across the term RAG by now - introduced in 2020, it's being rapidly adopted.&lt;br&gt;
 &lt;br&gt;
Mainly it's heavily used in LLM-based apps: chatbots, AI customer support, internal knowledge assistants, etc.&lt;/p&gt;
&lt;h1&gt;
  
  
  📊 Brief overview
&lt;/h1&gt;

&lt;p&gt;Let's create a ticker-specific RAG database table example using Pinecone. In my project I needed to map stock and crypto names to ticker symbols and extract just the ticker symbol. I also have a YouTube video you can check below. 👇️&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/QDe3Gi2tXR8" rel="noopener noreferrer"&gt;https://youtu.be/QDe3Gi2tXR8&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Database setup
&lt;/h1&gt;

&lt;p&gt;As mentioned, I used Pinecone, but there are other options too, like Redis, and even Postgres has some support.&lt;br&gt;
So once you have created an account, create a new index (table).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1a2t6prxornnijafsy3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1a2t6prxornnijafsy3q.png" alt=" " width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are many options, and it depends of course on the LLM model you use. If you use GPT then by all means use one of the predefined GPT embeddings. There are also Llama and Microsoft configurations; for my use case I chose "Manual configuration", because my embeddings are based on the all-MiniLM-L6-v2 model from HuggingFace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" rel="noopener noreferrer"&gt;https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; It's really important to set up the config correctly, as otherwise embeddings won't work. Mine is Metric: cosine, Dimensions: 384, Type: Dense; yours, however, may differ.&lt;/p&gt;
&lt;/blockquote&gt;
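&lt;p&gt;As a quick sanity check on those settings, this is all the cosine metric means (a standalone sketch, not Pinecone code) - and why the index dimension must match the 384-dimensional vectors that all-MiniLM-L6-v2 produces:&lt;/p&gt;

```python
import math

EXPECTED_DIM = 384  # all-MiniLM-L6-v2 output size; the index must match

def cosine_similarity(a, b):
    # Cosine metric: dot(a, b) / (|a| * |b|), ranges over [-1, 1]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

&lt;p&gt;Identical directions score 1.0 and orthogonal vectors score 0, which is why near-duplicate aliases land next to each other in the index.&lt;/p&gt;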

&lt;p&gt;Once created and running, extract your API key and add it to your .env for the embedding steps that follow.&lt;/p&gt;
&lt;h1&gt;
  
  
  📊 Prepare Dataset
&lt;/h1&gt;

&lt;p&gt;Below you can see a fraction of my .csv, just to get a glimpse of what I am embedding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;text,label
TSLA,Tesla
AAPL,Apple
MSFT,Microsoft
BABA,Alibaba Group Holding Limited
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For LLM work I recommend using Jupyter or Google Colab rather than a regular IDE.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
from dotenv import load_dotenv
from sentence_transformers import SentenceTransformer
from datasets import load_dataset
from pinecone import Pinecone

load_dotenv()
pc_api_key= os.getenv("PINECONE_API_KEY")

dataset = load_dataset("Mozes721/stock-crypto-weather-dataset", data_files="crypto_mapppings.csv")
df = dataset["train"].to_pandas()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code I import the required packages - pinecone, and sentence_transformers, which is used for the embeddings. &lt;/p&gt;

&lt;p&gt;I stored my training data on &lt;a href="https://huggingface.co/new-dataset" rel="noopener noreferrer"&gt;https://huggingface.co/new-dataset&lt;/a&gt; rather than locally, since it's LLM-related (the same goes for fine-tuning), but that is an individual choice.&lt;/p&gt;

&lt;h1&gt;
  
  
  Build Alias Map
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step 2: Create alias map
alias_to_ticker = {}

for _, row in df.iterrows():
    ticker = row['text'].upper()
    name = row['label'].lower()
    alias_to_ticker[ticker] = ticker
    alias_to_ticker[name] = ticker
    # Optional: add lowercase ticker too
    alias_to_ticker[ticker.lower()] = ticker

# Prepare for embedding
aliases = list(alias_to_ticker.keys())
tickers = [alias_to_ticker[a] for a in aliases]

# Embed
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(aliases, convert_to_numpy=True)

# Load the Pinecone index
pc = Pinecone(api_key=pc_api_key)
index = pc.Index("stock-index")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So the alias map is created: the for loop iterates over each row's text and label, and both the company name and the ticker are added as keys pointing to the ticker (in my mappings it works both ways: given AAPL it returns AAPL, and given Apple it also returns AAPL). &lt;/p&gt;

&lt;p&gt;Then we fetch the embedding model, encode the aliases (converting to numpy), and for now just load the index.&lt;/p&gt;
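&lt;p&gt;To see the two-way behaviour in isolation, here is a tiny standalone sketch of the same mapping with a hypothetical &lt;code&gt;resolve&lt;/code&gt; helper (not from the notebook above), on a two-row sample:&lt;/p&gt;

```python
# Minimal sketch of the alias map: both the ticker and the
# company name resolve to the same ticker symbol.
alias_to_ticker = {}
for ticker, name in [("TSLA", "Tesla"), ("AAPL", "Apple")]:
    alias_to_ticker[ticker] = ticker          # AAPL -> AAPL
    alias_to_ticker[name.lower()] = ticker    # apple -> AAPL
    alias_to_ticker[ticker.lower()] = ticker  # aapl -> AAPL

def resolve(query: str) -> str:
    # Exact-match lookup, hypothetical fallback before any vector search
    return alias_to_ticker.get(query.lower(), "UNKNOWN")
```

&lt;p&gt;An exact-match dictionary like this is cheap to check first; the vector index then handles the fuzzy queries the dictionary misses.&lt;/p&gt;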

&lt;h1&gt;
  
  
  Embed &amp;amp; Store in Pinecone
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prepare vectors in correct format
vectors = []
for i in range(len(aliases)):
    vectors.append({
        "id": f"stock_{i}",
        "values": embeddings[i].tolist(),
        "metadata": {"ticker": tickers[i], "alias": aliases[i]}
    })

# Batch upsert to avoid 2MB limit
batch_size = 50
total_batches = (len(vectors) + batch_size - 1) // batch_size

for i in range(0, len(vectors), batch_size):
    batch = vectors[i:i + batch_size]
    index.upsert(vectors=batch)
    batch_num = i // batch_size + 1
    print(f"Batch {batch_num}/{total_batches} has been embedded and uploaded ({len(batch)} vectors)")

print("All stock batches completed!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The vectors are prepared as a list: we loop over the aliases and append a record with an id, the embedding values, and metadata.&lt;/p&gt;

&lt;p&gt;Uploading to Pinecone is done in batches to stay under the 2 MB upsert limit. Each batch of batch_size vectors is upserted to the stock-index index.&lt;/p&gt;
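&lt;p&gt;The batch math can be checked in isolation; the ceiling division below is the same formula the script uses for total_batches (the 120-vector count is just an example):&lt;/p&gt;

```python
# Ceiling division: how many batches of size 50 cover 120 vectors?
num_vectors = 120  # example count, not from the real dataset
batch_size = 50

total_batches = (num_vectors + batch_size - 1) // batch_size
print(total_batches)  # 3 -> batches of 50, 50, and 20

# Slicing produces exactly the batches the upsert loop sends
sizes = [len(range(num_vectors)[i:i + batch_size])
         for i in range(0, num_vectors, batch_size)]
print(sizes)  # [50, 50, 20]
```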

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdjwc91mcacwi5696w6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdjwc91mcacwi5696w6a.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  🤖 Querying with RAG
&lt;/h1&gt;

&lt;p&gt;The testing phase should be quite simple, as long as the data has been embedded properly and the same embedding model is used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
from dotenv import load_dotenv
from sentence_transformers import SentenceTransformer
from pinecone import Pinecone

class EmbeddingStockMapper:
    def __init__(self, model_name: str, pinecone_api_key: str):
        # Initialize the embedding model
        self.model = SentenceTransformer(model_name)

        pc = Pinecone(api_key=pinecone_api_key)
        self.index = pc.Index("stock-index")

    def get_stock_ticker(self, query):
        # Get embedding for the query
        query_embedding = self.model.encode(query, convert_to_numpy=True)

        # Search in Pinecone
        results = self.index.query(
            vector=query_embedding.tolist(),
            top_k=1,
            include_metadata=True
        )

        if results.matches:
            return results.matches[0].metadata['ticker']
        return None

# Initialize the mapper
load_dotenv()
pc_api_key = os.getenv("PINECONE_API_KEY")
mapper = EmbeddingStockMapper(model_name="all-MiniLM-L6-v2", pinecone_api_key=pc_api_key)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So we initialize the model with all-MiniLM-L6-v2, the same one used earlier for the embeddings. The &lt;strong&gt;&lt;em&gt;get_stock_ticker&lt;/em&gt;&lt;/strong&gt; method encodes the query passed to it, searches Pinecone, and returns results.matches[0].metadata['ticker'] from the closest match.&lt;br&gt;
&lt;/p&gt;
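&lt;p&gt;Pinecone ranks matches by vector similarity. As a rough intuition for why "apple" can land on AAPL once both are embedded, here is cosine similarity in plain Python with toy 3-dimensional vectors (invented for illustration; all-MiniLM-L6-v2 actually produces 384-dimensional vectors):&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors: imagine "apple" and "AAPL" embed close together
apple = [0.9, 0.1, 0.0]
aapl  = [0.8, 0.2, 0.1]
tesla = [0.0, 0.2, 0.9]

print(cosine_similarity(apple, aapl))   # high -> same entity
print(cosine_similarity(apple, tesla))  # low  -> different entity
```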

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test_queries = ["AAPL", "Apple Inc.", "apple", "What is the current stock price of Tesla.", "Google", "google", "TSLA", "Tesla", "tesla", "Microsoft Corporation", "microsoft"]

for query in test_queries:
    ticker = mapper.get_stock_ticker(query)
    print(f"Query: {query} -&amp;gt; Ticker: {ticker}")

# Output
Query: AAPL -&amp;gt; Ticker: AAPL
Query: Apple Inc. -&amp;gt; Ticker: AAPL
Query: apple -&amp;gt; Ticker: AAPL
Query: What is the current stock price of Tesla. -&amp;gt; Ticker: TSLA
Query: Google -&amp;gt; Ticker: GOOGL
Query: google -&amp;gt; Ticker: GOOGL
Query: TSLA -&amp;gt; Ticker: TSLA
Query: Tesla -&amp;gt; Ticker: TSLA
Query: tesla -&amp;gt; Ticker: TSLA
Query: Microsoft Corporation -&amp;gt; Ticker: MSFT
Query: microsoft -&amp;gt; Ticker: MSFT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Above you can see how it gracefully returned the ticker symbols for every query!&lt;/p&gt;




&lt;p&gt;In all honesty I was astonished by the results. RAG is steadily gaining traction, and I think it's a much better approach, even with its learning curve, compared to just using ChatGPT API calls. Most of us have simple needs for an AI implementation, so using a whole AI model can be deemed "overkill".&lt;br&gt;
 &lt;br&gt;
You can find my repo &lt;a href="https://github.com/Mozes721/RAGxTune" rel="noopener noreferrer"&gt;here&lt;/a&gt;; feel free to ask any questions.&lt;/p&gt;

</description>
      <category>rag</category>
      <category>ai</category>
      <category>vectordatabase</category>
      <category>automation</category>
    </item>
    <item>
      <title>Vibe Coding: The Good and the Ugly</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Thu, 08 May 2025 13:27:40 +0000</pubDate>
      <link>https://dev.to/mozes721/vibe-coding-the-good-and-the-ugly-20n4</link>
      <guid>https://dev.to/mozes721/vibe-coding-the-good-and-the-ugly-20n4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4y3sjq3r9b0dbs4n26l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4y3sjq3r9b0dbs4n26l.jpg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have been around the dev block recently, you've most likely heard of the new trend "vibe coding". My friend roped me into trying it with Cursor, and let me tell you, I have mixed feelings about it. Here's why.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Good
&lt;/h1&gt;

&lt;p&gt;When it comes to building a new project, it feels liberating to vibe yourself to success! It's not the same with legacy code, however, though not impossible.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Real-time feedback and rapid iteration&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Cursor (like Windsurf) updates on the fly, making development a lot faster: you don't have to switch between writing and running, since it not only updates scripts but can create new ones as well!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44mo71rweqnh4wbqwa60.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44mo71rweqnh4wbqwa60.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the Cursor screenshot above, it first asked for permission to run mkdir and then automatically created the schema.sql script.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Fresh start&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Vibe coding on a fresh project carries less baggage than legacy code. It also gives you a sense of momentum, exploration, and blazingly fast builds! &lt;br&gt;
It can even deploy to your cloud provider if SSH keys are present and manage it from the terminal (so not just code).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Great for ideation and brainstorming&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Stuck on what tools to use? Postgres or Mongo? RabbitMQ? Kubernetes? Which API style, or even which language? Cursor and other AI IDEs can help with that by leveraging their deep understanding of all the tools at your disposal. &lt;/p&gt;

&lt;p&gt;You just have to be explicit and ask it to scaffold based on your goals; it then gives you a full stack with languages, reasoning, and tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vnwj1azwkblj8xw3w7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vnwj1azwkblj8xw3w7a.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I described what I intended to build and asked it to follow up with a README.md, a tech stack, and an architecture tree, including the language of choice and other tools that would be optimal. ☝️&lt;/p&gt;

&lt;h1&gt;
  
  
  The Ugly
&lt;/h1&gt;

&lt;p&gt;Even though these AI IDEs have access to your whole repo, they can still duplicate logic and cause the other issues mentioned below. Extensive review, and NOT committing before checking, is a must. 🕵🏻&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Code duplication&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While code-assistant IDEs have access to the whole repo, they still sometimes create the same function in different directories, like in my Rust project below. 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tpicorlrm22cm6axdg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tpicorlrm22cm6axdg5.png" alt=" " width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you pay attention you can see that the verify_pin function was created by Cursor at the plugin, service, and utils module level. When some code doesn't work, it can recreate similar logic elsewhere and forget that something is already implemented.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tooling mismatch&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This hits hardest in JavaScript and Python. With massive ecosystems, AI assistants often install libraries that don't play well together, or that are outdated.&lt;br&gt;
 &lt;br&gt;
With React it can install incompatible client- and server-side dependencies, like the Supabase packages, and the same goes for TypeScript.&lt;/p&gt;

&lt;p&gt;As we all know, we cannot always use the latest packages, and for some packages the documentation or methods are missing or stale, with the AI not knowing this and pasting them in anyway.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Too fast, lack of reasoning&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sometimes it feels too easy to "vibe yourself to success", but the repo's context is often misunderstood, and the explanation of WHY a particular approach was taken falls short. It's easy to get lost in the deep woods when you see 10 scripts generated and, instead of reasoning through and understanding them, you turn a blind eye and hope for the AI gods to fix any issues.&lt;/p&gt;




&lt;p&gt;To conclude, I must say Cursor is undeniably a game-changer. But just because something becomes 3x easier doesn't mean it can't rack up "technical debt" down the line, especially if you're not fully familiar with the tech, like me when it comes to Rust. &lt;/p&gt;

&lt;p&gt;The $20/month price is a bit steep, especially when WindSurf offers a free tier and GitHub Copilot is cheaper. Still, for now, I'm sticking with Cursor.&lt;/p&gt;

&lt;p&gt;That said, I'll be more cautious. Just because the assistant offers changes doesn't mean you should accept them blindly. Review them. Test them. Because duplicated functions, unnecessary utils, and mismatched packages can snowball fast.&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>ai</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Local Llama Setup: A Python Developer's Guide</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Thu, 23 Jan 2025 00:42:53 +0000</pubDate>
      <link>https://dev.to/mozes721/local-llama-setup-a-python-developers-guide-56fb</link>
      <guid>https://dev.to/mozes721/local-llama-setup-a-python-developers-guide-56fb</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6sd347d9k01zfbw3f51.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6sd347d9k01zfbw3f51.jpg" alt=" " width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As using ChatGPT's API becomes more and more expensive and the number of tokens is limited, there comes a point when you have to look for alternatives. That's where Llama comes in!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Alternatively, you can use smaller models (3B parameters instead of 7B).&lt;br&gt;
Use bitsandbytes for 8-bit quantization, which reduces memory usage significantly.&lt;br&gt;
If you don't have a strong GPU, you can always outsource to cloud options like Google Colab, the Hugging Face Inference API, or RunPod.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Accessing Llama Models
&lt;/h1&gt;

&lt;p&gt;To start off Hugging Face is the primary platform used for accessing Llama models(e.g., &lt;code&gt;meta-llama/Llama-2-7b-chat-hf&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co" rel="noopener noreferrer"&gt;https://huggingface.co&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create your account on &lt;strong&gt;Hugging Face&lt;/strong&gt; 👆 to start using the LLM models provided by Llama. &lt;br&gt;
If you're ambitious you can even create your own model; if not, there are plenty of models to choose from. 🤖&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgobielcejrm1grvwqo2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgobielcejrm1grvwqo2q.png" alt=" " width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most people, including me, just need a text-to-text model, so the typical choice would be &lt;em&gt;meta-llama/Llama-2-7b-chat-hf&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;Once the model has been selected, be sure to request access to it by adding your credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;huggingface-cli login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you will have to log in via the terminal to use the models. In your Hugging Face profile, go to Settings &amp;gt; Access Tokens and generate an access token to paste in.&lt;/p&gt;

&lt;h1&gt;
  
  
  Using the Model
&lt;/h1&gt;

&lt;p&gt;For your Python app, use &lt;strong&gt;conda&lt;/strong&gt; instead of a regular &lt;strong&gt;venv&lt;/strong&gt;; be sure to install it first. Activating it works much like venv.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.anaconda.com/working-with-conda/environments/" rel="noopener noreferrer"&gt;https://docs.anaconda.com/working-with-conda/environments/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Required instalation for conda PyTorch
conda install pytorch torchvision torchaudio cpuonly -c pytorch
//Required python packages for huggingface etc
pip install transformers accelerate sentencepiece huggingface_hub
//To reduce memory usage you can as well install
pip install bitsandbytes
//Activate conda
conda activate myenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this demonstration we'll keep it as simple as possible in main.py; the real power lies in implementing &lt;strong&gt;RAG&lt;/strong&gt; (Retrieval-Augmented Generation) or fine-tuning the model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import transformers
import torch

def main():
    # Load Llama model using transformers pipeline
    pipeline = transformers.pipeline(
        "text-generation",
        model="meta-llama/Llama-2-7b-chat-hf",  # Replace with your model path if using a local model
        model_kwargs={"torch_dtype": torch.bfloat16},
        device_map="auto"
    )

    # Start the Llama pipeline
    while True:
        # Get user input
        query = input("\nYou: ")

        # Exit condition
        if query.lower() in ["exit", "quit"]:
            print("Goodbye!")
            break

        # Handle the query
        try:
            # Construct the prompt
            messages = [
                {"role": "user", "content": query},
            ]

            # Generate a response using the Llama model
            outputs = pipeline(
                messages,
                max_new_tokens=256,  # Adjust as needed
            )

            # Extract and print the response
            response = outputs[0]["generated_text"][-1]["content"]
            print(f"Bot: {response}")
        except Exception as e:
            print(f"Error handling query: {e}")

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: The model I used is computation-heavy (CPU, GPU) due to its enormous size: this particular one has 7 billion parameters. So if it hangs or crashes, it may be due to a weak PC.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43ajlojh10ql6nmmvnrj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43ajlojh10ql6nmmvnrj.png" alt=" " width="800" height="953"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;As AI doesn't seem to be fading and the hype keeps going, it's good to get more familiar with it. If you're not building your own model, you might as well use one in your own project through fine-tuning or RAG. &lt;br&gt;
Of course, IF your PC can handle it. 😉&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Can Pinterest Help Build Your Personal Brand?</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Tue, 10 Dec 2024 21:33:04 +0000</pubDate>
      <link>https://dev.to/mozes721/can-pinterest-help-build-your-personal-brand-39b0</link>
      <guid>https://dev.to/mozes721/can-pinterest-help-build-your-personal-brand-39b0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb476pcpfc3jqq6y4yxsc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb476pcpfc3jqq6y4yxsc.jpeg" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today's age, personal branding is more important than ever. Most of us focus on YouTube, blogging, a personal website, GitHub (as a developer), or a link-in-bio page (like Linktree).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pinterest works similarly to affiliate links, directing users to external content where the shared content actually resides.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While it's true that Pinterest is better suited for creative, visually-driven fields like art, tattoos, and fashion, it can also be a valuable tool for IT professionals. Although results may not be instantaneous, men and women alike can leverage Pinterest for personal branding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Pinterest
&lt;/h2&gt;

&lt;p&gt;Once you log in with email, Google, or Facebook, you can start by creating different boards based on the content you want to share.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx99mkk52pz3hqqfup4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx99mkk52pz3hqqfup4w.png" alt=" " width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above you can see the different boards I have created over time (the last one I was invited to). &lt;/p&gt;

&lt;p&gt;Some social platforms like &lt;strong&gt;YouTube&lt;/strong&gt; will automatically suggest the option to create a Pin on Pinterest. If not, I suggest creating one manually through Canva by selecting the Pinterest Pin template. 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.canva.com/" rel="noopener noreferrer"&gt;https://www.canva.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pinterest offers SEO benefits that can drive traffic to your website or content. Additionally, there are paid options for promoting pins, but I use the free version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Statistics
&lt;/h2&gt;

&lt;p&gt;Out of the many pins I've created, some have performed exceptionally well. For instance, my article '&lt;strong&gt;How to Stop Vaping for Good&lt;/strong&gt;' received 25 clicks, with 8 coming from Pinterest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7c9eed11tza7abivm1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7c9eed11tza7abivm1t.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the picture above, while most pins don't get any traction, one of my articles did; it's listed below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://richard-taujenis.medium.com/how-to-stop-vaping-for-good-bb3e0563648e" rel="noopener noreferrer"&gt;https://richard-taujenis.medium.com/how-to-stop-vaping-for-good-bb3e0563648e&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So while success doesn't occur often, it also doesn't take much time to create a Pin.&lt;/p&gt;

&lt;p&gt;Below you can see how many Pins I have created; reality often paints a different picture compared to what we expect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pl.pinterest.com/richardtaujenis/" rel="noopener noreferrer"&gt;https://pl.pinterest.com/richardtaujenis/&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Branding yourself in the digital landscape has never been as competitive and saturated as it is now. People say "you have to stand out from the rest", but that has never been more difficult: when everyone is doing the same thing, how do you stand out?&lt;/p&gt;

&lt;p&gt;While Pinterest may not always bring immediate results, its potential for long-term brand building is significant. Don't overlook this powerful platform in your personal branding strategy.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Effortlessly Deploy Your GCP Cloud Run App Using Terraform</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Fri, 01 Nov 2024 14:37:15 +0000</pubDate>
      <link>https://dev.to/mozes721/effortlessly-deploy-your-gcp-cloud-run-app-using-terraform-22mb</link>
      <guid>https://dev.to/mozes721/effortlessly-deploy-your-gcp-cloud-run-app-using-terraform-22mb</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk301g43jhmfux5719d2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk301g43jhmfux5719d2q.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform is gaining popularity for a reason: it provides a high level of control and flexibility as IaC (Infrastructure as Code). &lt;/p&gt;

&lt;p&gt;It supports modules, keeps track of the state of your infrastructure, and is helpful if your project is complex, multi-cloud, or spans hybrid environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To start off, be sure to follow &lt;a href="https://developer.hashicorp.com/terraform/install" rel="noopener noreferrer"&gt;this&lt;/a&gt; guide for Terraform installation if you haven't done so, and be sure to have a &lt;a href="https://cloud.google.com/gcp?&amp;amp;gad_source=1" rel="noopener noreferrer"&gt;GCP&lt;/a&gt; account already set up.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You should have already deployed the app through other means, like the CLI, to understand the deployment process, baseline configuration, incremental transition, etc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A related blog post covering manual deployment is linked below. 👇📖&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.stackademic.com/how-to-deploy-a-go-service-to-gcp-cloud-run-694d01cab5b5" rel="noopener noreferrer"&gt;https://blog.stackademic.com/how-to-deploy-a-go-service-to-gcp-cloud-run-694d01cab5b5&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;My project uses the following files and directory structure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform/
  ├── modules/
  │   ├── docker/
  │   │   ├── docker-artifact.tf
  │   │   └── variables.tf
  │   ├── gcp/
  │   │   ├── cloud-run.tf
  │   │   └── variables.tf
  ├── main.tf
  ├── set-prod.env.sh
  ├── terraform.tfvars
  ├── variables.tf
  └── account_key.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;main.tf&lt;/code&gt;: Includes the required providers and the Google provider configuration.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt;: Defines the input variables for the project.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform.tfvars&lt;/code&gt;: Sets variable values specific to your environment.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;set-prod.env.sh&lt;/code&gt;: Exports the environment variables for Terraform with the TF_VAR prefix.&lt;/li&gt;
&lt;li&gt;Modules: The &lt;code&gt;docker&lt;/code&gt; and &lt;code&gt;cloud-run&lt;/code&gt; modules, each with its own resources and variables, which the parent configuration wires together.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  IaC Scripts
&lt;/h2&gt;

&lt;p&gt;I will showcase the scripts from parent to child modules, as more of a high-level guide.&lt;br&gt;
Most likely you will have environment variables; the most convenient way for me is a shell script that exports them with the &lt;code&gt;TF_VAR_&lt;/code&gt; prefix, which Terraform will recognize and use once initialized (more on that later).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

#server 
export TF_VAR_redis_url="redis_url"
export TF_VAR_firebase_account_key="your_account_key.json"
export TF_VAR_client_url="client_url"
export TF_VAR_gcp_account_key="your_gcp_account_key.json"

echo "Environment variables for Terraform GCP set."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These variables are also defined at the module level; the parent usually contains all of them, while each module declares only the ones it needs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "project_id" {
  description = "The ID of the Google Cloud project."
  type        = string
}

variable "project_name" {
  description = "The project name of the Google Cloud Run project."
  type        = string
}

variable "region" {
  description = "The Google Cloud region."
  type        = string
}

variable "redis_url" {
  description = "The URL for the Redis instance."
  type        = string
}

variable "client_url" {
  description = "The URL for the client application."
  type        = string
}

variable "gcp_account_key" {
  description = "Path to the Google Cloud service account key file."
  type        = string
}

variable "firebase_account_key_location" {
  description = "Firebase account key location in Docker container."
  type        = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is also another file I created that does NOT contain private or secret values, can be easily modified, and is handy for default values: that's your &lt;code&gt;terraform.tfvars&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project_id = "recepies-6e7c0"
project_name = "recipe-service"
region     = "europe-north1"
gcp_account_key = "./account_key.json"
firebase_account_key_location = "/app/config/account_key.json"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's talk about the 🐘 in the room: our &lt;code&gt;main.tf&lt;/code&gt; script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "&amp;gt;= 4.0.0"
    }
  }
  required_version = "&amp;gt;= 0.12"
}

provider "google" {
  credentials = file(var.gcp_account_key)
  project     = var.project_id
  region      = var.region
}

# Get project information
data "google_project" "project" {
  project_id = var.project_id
}

module "docker" {
  source      = "./modules/docker"
  project_id  = var.project_id
}

module "cloud_run" {
  source      = "./modules/gcp"
  project_id  = var.project_id
  region      = var.region
  redis_url   = var.redis_url
  client_url  = var.client_url
  firebase_account_key_location = var.firebase_account_key_location
  cloudrun_image = "gcr.io/${var.project_id}/recipe-server:latest"

  depends_on = [
    module.docker
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the beginning I define the cloud provider; since I use GCP, google is added, but you can add AWS, Azure, or other providers. Credentials are essential for any cloud provider to approve your requests: the gcp_account_key is passed as a JSON file, which I keep in the parent terraform directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4yjjn6pbf0jjw8038om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4yjjn6pbf0jjw8038om.png" alt=" " width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the screenshot above you can see I have created a service account key in GCP and assigned the right IAM access rights.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's crucial to assign the correct IAM (Identity and Access Management) roles to the service account behind account_key.json, as otherwise you will run into various permission issues when trying to run Terraform. Roles: Viewer, Editor, Storage Admin, Cloud Run Admin, and Artifact Registry (Docker artifacts).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There is also an alternative: assigning roles and permissions through IaC itself, but for me that's more of a hassle, at least until I get more familiar with it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:YOUR_SERVICE_ACCOUNT_EMAIL" \
  --role="roles/editor"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above illustrates how it could be done.&lt;/p&gt;
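
&lt;p&gt;If you do want the IaC route instead, the same binding can be expressed in Terraform. A minimal sketch, assuming a &lt;em&gt;service_account_email&lt;/em&gt; variable you would add yourself:&lt;/p&gt;

```hcl
# Hypothetical Terraform equivalent of the gcloud command above.
resource "google_project_iam_member" "terraform_editor" {
  project = var.project_id
  role    = "roles/editor"
  member  = "serviceAccount:${var.service_account_email}" # assumed variable
}
```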

&lt;p&gt;The next step is running the modules. I start off with docker, as I need to create the Docker artifact registry in GCP, and after that completes I do the same with Cloud Run. Keep in mind I access the directory with &lt;code&gt;"./modules/docker"&lt;/code&gt; and pass the needed variables from parent to child via &lt;code&gt;modules/docker/variables.tf&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_project_service" "container_registry_api" {
  project = var.project_id
  service = "containerregistry.googleapis.com"
  disable_on_destroy = false
}

resource "null_resource" "docker_build_push" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = &amp;lt;&amp;lt;-EOT
      # Build the Docker image
      docker build -t gcr.io/${var.project_id}/recipe-server:latest .

      # Configure docker to authenticate with GCP
      gcloud auth configure-docker --quiet

      # Push the image
      docker push gcr.io/${var.project_id}/recipe-server:latest
    EOT
  }

  depends_on = [
    google_project_service.container_registry_api
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;docker-artifact.tf&lt;/em&gt; is quite short, as the only thing we need is to define the resources used: first &lt;strong&gt;container_registry_api&lt;/strong&gt;, and secondly &lt;strong&gt;docker_build_push&lt;/strong&gt;, which adds a provisioner for local execution and ends by building and pushing the GCR Docker image with the passed-in &lt;em&gt;var.project_id&lt;/em&gt;. It also declares that it depends on &lt;em&gt;container_registry_api&lt;/em&gt;, as that API is required first.&lt;/p&gt;

&lt;p&gt;Lastly, in our IaC we deploy the service by running our last module, &lt;code&gt;"./modules/gcp"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_project_service" "required_apis" {
  for_each = toset([
    "run.googleapis.com",
    "containerregistry.googleapis.com"
  ])

  project = var.project_id
  service = each.key
  disable_on_destroy = false
}

resource "google_cloud_run_service" "recipe_service" {
  name     = var.project_name
  location = var.region
  project  = var.project_id

  template {
    spec {
      containers {
        image = var.cloudrun_image

        env {
          name  = "REDIS_URL"
          value = var.redis_url
        }
        env {
          name  = "CLIENT_URL"
          value = var.client_url
        }
        env {
          name  = "FIREBASE_ACCOUNT_KEY"
          value = var.firebase_account_key_location
        }
      }
    }
  }

  depends_on = [
    google_project_service.required_apis
  ]
}

resource "google_cloud_run_service_iam_member" "public_access" {
  location = google_cloud_run_service.recipe_service.location
  project  = google_cloud_run_service.recipe_service.project
  service  = google_cloud_run_service.recipe_service.name
  role     = "roles/run.invoker"
  member   = "allUsers"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just as in the docker module, we define the required resources. For &lt;code&gt;"google_cloud_run_service"&lt;/code&gt; we set the name, region, and project_id, then select the image that has been passed in from main.&lt;br&gt;
If you have required environment variables, pass them as well. &lt;br&gt;
The IAM member resource grants &lt;em&gt;roles/run.invoker&lt;/em&gt; to &lt;em&gt;allUsers&lt;/em&gt;, making the Cloud Run service publicly accessible.&lt;/p&gt;
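
&lt;p&gt;One small addition that can help here: an output that prints the deployed service URL after &lt;code&gt;terraform apply&lt;/code&gt;. A sketch of what that could look like in the gcp module (not part of the original code):&lt;/p&gt;

```hcl
# Hypothetical output exposing the Cloud Run URL after apply.
output "service_url" {
  value = google_cloud_run_service.recipe_service.status[0].url
}
```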
&lt;h2&gt;
  
  
  Deploying Your Application
&lt;/h2&gt;

&lt;p&gt;Now that the architecture is set and done, we do the following steps.&lt;/p&gt;

&lt;p&gt;1. Initialize Terraform&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the Shell Script or manually set your env variables
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source set-prod.env.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is needed for Terraform to access the environment variables.&lt;/p&gt;
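
&lt;p&gt;Terraform automatically maps environment variables prefixed with &lt;code&gt;TF_VAR_&lt;/code&gt; to its input variables, so a hypothetical sketch of what &lt;em&gt;set-prod.env.sh&lt;/em&gt; could contain (the values here are placeholders) is:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical sketch of set-prod.env.sh - Terraform picks up
# TF_VAR_-prefixed environment variables as input variables.
export TF_VAR_project_id="recepies-6e7c0"                # placeholder value
export TF_VAR_region="europe-west1"                      # placeholder value
export TF_VAR_client_url="https://your-app.vercel.app/"  # placeholder value
```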

&lt;ol&gt;
&lt;li&gt;Preview the changes in Terraform or directly deploy it.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan //Helps you preview the changes that Terraform will make to your infrastructure. 

terraform apply //Run the terraform script to deploy your app through IaC.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If all is good you will end up with something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd1p4ofzrtwaftnhko6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd1p4ofzrtwaftnhko6r.png" alt=" " width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If committing to GitHub, it's worth adding some files to .gitignore, as Terraform generates artifacts, backups, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform/set-prod-env.sh
terraform/account_key.json
terraform/.terraform
terraform/.terraform.lock.hcl
terraform/.terraform.tfstate.lock.info

# Ignore Terraform working directory
terraform/.terraform/

# Ignore tfstate files and backups
*.tfstate
*.tfstate.backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While IaC adds some complexity compared to a manual setup, it also adds leverage, as mentioned before: more maintainability and automation, especially when interacting with multiple cloud providers. Personally, it also gives me more power as a developer! &lt;br&gt;
You can find the repo &lt;a href="https://github.com/Mozes721/RecipesApp" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>terraform</category>
      <category>devops</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>How to Deploy a Go service to GCP Cloud Run</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Wed, 02 Oct 2024 15:04:32 +0000</pubDate>
      <link>https://dev.to/mozes721/how-to-deploy-a-go-service-to-gcp-cloud-run-23ng</link>
      <guid>https://dev.to/mozes721/how-to-deploy-a-go-service-to-gcp-cloud-run-23ng</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72pgvaewlsm0e5gvmg9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72pgvaewlsm0e5gvmg9g.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deploying a Go service to GCP Cloud Run involves several steps, including setting up a Dockerfile and configuring environment variables. &lt;/p&gt;

&lt;p&gt;This guide will walk you through the process.&lt;/p&gt;

&lt;p&gt;If you would like to follow along in video format, it's also on &lt;em&gt;YouTube&lt;/em&gt; &lt;a href="https://youtu.be/mKXIVCkW2-8" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up your GCP Project
&lt;/h2&gt;

&lt;p&gt;Start off by going to &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;GCP&lt;/a&gt; and creating an account if you haven't done so yet.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a GCP Project.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to the GCP console and create a new project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note the project ID for deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvonk2ozjnl2ucn9tnaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvonk2ozjnl2ucn9tnaq.png" alt=" " width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable required APIs.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Enable the Cloud Run API and Container Registry API.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Install Google Cloud SDK&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Initialize your repository with gcloud init.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create your Go Service
&lt;/h2&gt;

&lt;p&gt;Ensure your Go app can run locally and set up a Dockerfile.&lt;/p&gt;

&lt;p&gt;cmd/main.go&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// cmd/main.go
func main() {
 flag.Parse()

 a := app.Application{}

 if err := a.LoadConfigurations(); err != nil {
        log.Fatalf("Failed to load configurations: %v", err)
    }

    if err := runtime.Start(&amp;amp;a); err != nil {
        log.Fatalf("Failed to start the application: %v", err)
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;runtime/base.go&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func Start(a *app.Application) error {
    router := gin.New()

    router.Use(cors.New(md.CORSMiddleware()))

    api.SetCache(router, a.RedisClient)
    api.SetRoutes(router, a.FireClient, a.FireAuth, a.RedisClient)

    log.Printf("Starting server on port: %s", a.ListenPort)
    err := router.Run(":" + a.ListenPort) // blocks until the server stops
    if err != nil {
        return err
    }

    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the official Go image as the base image
FROM golang:1.18

WORKDIR /app

# Copy the Go module files
COPY go.mod go.sum ./

RUN go mod download

# Copy the rest of the application code
COPY . .

RUN go build -o main ./cmd/main.go

CMD ["./main"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set Up Environment Variables&lt;/p&gt;

&lt;p&gt;Use a shell script to automate setting environment variables for GCP, &lt;/p&gt;

&lt;p&gt;saved as &lt;strong&gt;env-variables.sh&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// env-variables.sh
#!/bin/bash

# Environment variables
export PROJECT_ID=recepies-6e7c0
export REGION=europe-west1

export REDIS_URL="rediss://default:AVrvA....-lemur-23279.u....:6379"
export FIREBASE_ACCOUNT_KEY="/app/config/account_key.json"
export CLIENT_URL="https://.....vercel.app/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deployment script as &lt;strong&gt;deploy-with-yaml.sh&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

source env-variables.sh

#Comment if correctly deployed
docker build -t gcr.io/$PROJECT_ID/recipe-server:latest .
docker push gcr.io/$PROJECT_ID/recipe-server:latest

#Uncomment if json needs to be added to GCP 
# gcloud secrets create firebase-account-key --data-file=/mnt/c/own_dev/RecipesApp/server/config/account_key.json --project=recepies-6e7c0

#Add permission IAM
gcloud projects add-iam-policy-binding recepies-6e7c0 \
    --member="serviceAccount:service-988443547488@serverless-robot-prod.iam.gserviceaccount.com" \
    --role="roles/artifactregistry.reader"

gcloud run deploy recipe-service \
  --image gcr.io/$PROJECT_ID/recipe-server:latest \
  --region $REGION \
  --platform managed \
  --set-env-vars REDIS_URL=$REDIS_URL,CLIENT_URL=$CLIENT_URL,FIREBASE_ACCOUNT_KEY=$FIREBASE_ACCOUNT_KEY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployment to your GCP Cloud Run
&lt;/h3&gt;

&lt;p&gt;Run the deployment script, which sources &lt;strong&gt;env-variables.sh&lt;/strong&gt; itself:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./deploy-with-yaml.sh&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Issues and Troubleshooting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Permission Issues: Ensure the Cloud Run Service Agent has permission to read the image.&lt;/li&gt;
&lt;li&gt;Environment Variables: Verify that all required environment variables are set correctly.&lt;/li&gt;
&lt;li&gt;Port Configuration: Ensure the PORT environment variable is set correctly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oi5ft9v9nljhuc5fh7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oi5ft9v9nljhuc5fh7e.png" alt=" " width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When everything is set up as needed, you will see the image being built and pushed to your GCP project's Artifact Registry. In the end I got this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;a9099c3159f5: Layer already exists
latest: digest: sha256:8c98063cd5b383df0b444c5747bb729ffd17014d42b049526b8760a4b09e5df1 size: 2846
Deploying container to Cloud Run service [recipe-service] in project [recepies-6e7c0] region [europe-west1]
✓ Deploying... Done.
  ✓ Creating Revision...
  ✓ Routing traffic...
Done.
Service [recipe-service] revision [recipe-service-00024-5mh] has been deployed and is serving 100 percent of traffic.
Service URL: https://recipe-service-819621241045.europe-west1.run.app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a standard error that I came across multiple times 👇&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Deploying container to Cloud Run service [recipe-service] in project [recepies-6e7c0] region [europe-west1] X Deploying… - Creating Revision… . Routing traffic… Deployment failed ERROR: 
(gcloud.run.deploy) Revision 'recipe-service-00005-b6h' 
is not ready and cannot serve traffic. Google Cloud Run Service Agent service-819621241045@serverless-robot-prod.iam.gserviceaccount.com must have permission to read the image, 
gcr.io/loyal-venture-436807-p7/recipe-server:latest. Ensure that the provided container image URL is correct and that the above account has permission to access the image. If you just enabled the Cloud Run API, the permissions might take a few minutes to propagate. Note that the image is from project [loyal-venture-436807-p7], which is not the same as this project [recepies-6e7c0]. Permission must be granted to the Google Cloud Run Service 
Agent service-819621241045@serverless-robot-prod.iam.gserviceaccount.com from this project. See https://cloud.google.com/run/docs/deploying#other-projects
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Often it states that PORT=8080 could not be set, but the main issue is usually something else, like an environment variable not being set or, in my case, the Firebase account_key.json being configured incorrectly for deployment.&lt;/p&gt;




&lt;p&gt;When all is set you can test the connection and do requests.&lt;/p&gt;

&lt;p&gt;I have my frontend deployed on Vercel, and below you can see my Cloud Run logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs44tmxxz9564olgq7f28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs44tmxxz9564olgq7f28.png" alt=" " width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deploying a Go service to GCP Cloud Run can be streamlined with a few key configurations and automation scripts. &lt;/p&gt;

&lt;p&gt;Although there might be some common errors, such as permission issues or incorrect environment variables, understanding how to troubleshoot them through Cloud Run logs ensures a smooth deployment.&lt;br&gt;
My repo you can find &lt;a href="https://github.com/Mozes721/RecipesApp" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>docker</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>gRPC Communication Between Go and Python</title>
      <dc:creator>Ricards Taujenis</dc:creator>
      <pubDate>Thu, 22 Aug 2024 09:55:32 +0000</pubDate>
      <link>https://dev.to/mozes721/grpc-communication-between-go-and-python-40i3</link>
      <guid>https://dev.to/mozes721/grpc-communication-between-go-and-python-40i3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopi8jstkbi7l22jn7uqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopi8jstkbi7l22jn7uqk.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;gRPC is a powerful, high-performance Remote Procedure Call (RPC) framework that, despite being less commonly used than REST, offers significant advantages in certain scenarios.&lt;/p&gt;

&lt;p&gt;In addition it's language agnostic and can run in any environment, making it an ideal choice for server-to-server communication.&lt;/p&gt;

&lt;p&gt;I will not delve into a full explanation of it here; instead, I'll provide a hands-on tutorial.&lt;/p&gt;

&lt;p&gt;A related video you can find here: &lt;a href="https://www.youtube.com/watch?v=BXY1-BJc3js" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=BXY1-BJc3js&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go gRPC client 
&lt;/h2&gt;

&lt;p&gt;Let's imagine our Go app acts as the gRPC client here, even though it is itself a server for a frontend app (React, Svelte, etc.).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func getFirstArg() (string, error) {
    if len(os.Args) &amp;lt; 2 {
        return "", fmt.Errorf("expected 1 argument, but got none")
    }
    return os.Args[1], nil
}

func main() {
    filePath, err := getFirstArg()
    if err != nil {
        log.Fatalf("Failed to get file path from arguments: %v", err)
    }

    fileData, err := ioutil.ReadFile(filePath)
    if err != nil {
        log.Fatalf("Failed to read file: %v", err)
    }

 ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m8mly55ixuiy9fg5c5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m8mly55ixuiy9fg5c5c.png" alt=" " width="632" height="300"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;As an example, the React frontend uploads a file and Go processes it, but since we need answers for the questions in the Excel file, we will use the GPT API. While it could be done in Go, Python has more packages that can ease our lives, like langchain_openai, pandas for Excel, and so forth.&lt;/p&gt;




&lt;p&gt;Let's start with the installation of the Go gRPC code generators:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
$ go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
$ export PATH="$PATH:$(go env GOPATH)/bin"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next up you should install the protocol buffer compiler (protoc) for your OS.&lt;br&gt;
Let's create a proto dir where you will store your protocol buffer file. I will name it &lt;em&gt;excel.proto&lt;/em&gt; and paste this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;syntax = "proto3";
option go_package = "client-gRPC/proto";
service ExcelService {
    rpc UploadFile(FileRequest) returns (FileResponse);
}
message FileRequest {
    string file_name = 1;
    bytes file_content = 2;
}
message FileResponse {
    bytes file_content = 1;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gRPC service, ExcelService, allows clients to upload a file by sending its name and content. The server responds with the same file content. &lt;/p&gt;

&lt;p&gt;For Go it's essential to set &lt;em&gt;go_package&lt;/em&gt;; in Python that line is not needed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;vscode-proto3 is a good extension to download if you use VSCode.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After all of this you can generate your proto files. I prefer to generate them at the same level as the proto dir; for that, run this command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative proto/excel.proto&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If successful, two files should be generated. Optionally, if there will be a lot of adjustments, add a Makefile and define a proto target that runs the command above.&lt;br&gt;
&lt;/p&gt;
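
&lt;p&gt;The Makefile mentioned could look something like this (a sketch, not from the repo):&lt;/p&gt;

```makefile
# Hypothetical Makefile target wrapping the protoc command above.
proto:
	protoc --go_out=. --go_opt=paths=source_relative \
		--go-grpc_out=. --go-grpc_opt=paths=source_relative \
		proto/excel.proto
```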

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import (
    ....

    "google.golang.org/grpc"
    pb "client-gRPC/proto"
    "github.com/xuri/excelize/v2"
)

func main() {
    ....

    conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("Failed to connect to gRPC server: %v", err)
    }
    defer conn.Close()

    client := pb.NewExcelServiceClient(conn)

    req := &amp;amp;pb.FileRequest{
        FileName:    filePath,
        FileContent: fileData,
    }

    res, err := client.UploadFile(context.Background(), req)
    if err != nil {
        log.Fatalf("Failed to upload file: %v", err)
    }

    outputFile := "output.xlsx"
    err = saveBytesAsExcel(outputFile, res.FileContent)
    if err != nil {
        log.Fatalf("Failed to save bytes as Excel file: %v", err)
    }

    fmt.Printf("Excel file saved as: %s\n", outputFile)
}

func saveBytesAsExcel(filePath string, fileContent []byte) error {
    f, err := excelize.OpenReader(bytes.NewReader(fileContent))
    if err != nil {
        return fmt.Errorf("failed to open Excel file: %v", err)
    }

    if err := f.SaveAs(filePath); err != nil {
        return fmt.Errorf("failed to save Excel file: %v", err)
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We open a connection to port 50051, which will be our Python server. &amp;amp;pb.FileRequest was generated earlier by the protoc command, and now we are importing its types. If you run the client now you will receive 👇 because the Python server is not established yet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Failed to upload file: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:50051: connect: connection refused"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Python gRPC server
&lt;/h2&gt;

&lt;p&gt;As Python will act as the server, the approach will be slightly different, but in essence it uses the same proto file, apart from the go_package option, which is not required. Let's start by creating a base &lt;em&gt;main.py&lt;/em&gt; without the gRPC parts, just to give a glance at how GPT will populate the answers in the Excel file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import openai
import pandas as pd
from dotenv import load_dotenv

def get_answer_from_gpt(apikey: str, question: str):
    openai.api_key = apikey
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question}
        ]
    )
    return response['choices'][0]['message']['content'].strip()

def answer_questions_df(df: pd.DataFrame, apikey: str):
    answers = []

    for question in df.iloc[:, 0]: 
        answer = get_answer_from_gpt(apikey, question)
        answers.append(answer)
    return answers

if __name__ == "__main__":
    load_dotenv()

    openai_api_key = os.getenv("OPENAI_API_KEY", "OpenAI API key hasn't been set.")

    df = pd.read_excel('Book1.xlsx')

    df['Answer'] = answer_questions_df(df, openai_api_key)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;It's a simple script that will answer the questions Go sends us; the LOC count is lower thanks to the dedicated openai library, which makes it easier.&lt;/p&gt;




&lt;p&gt;We start by adding a proto dir with the same file as above; the option line can be removed, as discussed. Install gRPC, preferably in your virtualenv, and for the proto generation I ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m grpc_tools.protoc --proto_path=proto --python_out=proto --grpc_python_out=proto proto/excel.proto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This keeps the generated files at the same level as my proto directory. Remember to add &lt;em&gt;__init__.py&lt;/em&gt;!&lt;/p&gt;

&lt;p&gt;Once the files have been generated, let's continue on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import io
import grpc
from proto import excel_pb2_grpc as excel_grpc
from proto import excel_pb2

class ExcelService(excel_grpc.ExcelServiceServicer):
    def UploadFile(self, request, context):
        try:
            # Convert bytes to a file-like object
            file_like_object = io.BytesIO(request.file_content)

            # Load the workbook from the file-like object
            workbook = openpyxl.load_workbook(file_like_object)

            # Access the first sheet (or use appropriate logic to get the sheet you need)
            sheet = workbook.active

            # Convert the sheet to a DataFrame
            data = sheet.values
            columns = next(data)  # Get the header row
            df = pd.DataFrame(data, columns=columns)

            print("Loaded DataFrame:")
            print(df.head())

            # Ensure that the DataFrame is not empty and has questions
            if df.empty or df.shape[1] &amp;lt; 1:
                print("DataFrame is empty or does not have the expected columns.")
                return excel_pb2.FileResponse(file_content=b'')

            # Get answers and add them to the DataFrame
            answers = answer_questions_df(df, openai_api_key)
            df['Answer'] = answers

            # Write the updated DataFrame back to a BytesIO object
            output = io.BytesIO()
            with pd.ExcelWriter(output, engine='openpyxl') as writer:
                df.to_excel(writer, index=False, sheet_name='Sheet1')

            # Reset the buffer's position to the beginning
            output.seek(0)

            # Return the modified file content
            response = excel_pb2.FileResponse(file_content=output.read())
            return response
        except Exception as e:
            print(f"Error processing file: {e}")
            return excel_pb2.FileResponse(file_content=b'')

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    excel_grpc.add_ExcelServiceServicer_to_server(ExcelService(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    print("Server running on port 50051.")
    server.wait_for_termination()

if __name__ == "__main__":
    load_dotenv()

    openai_api_key = os.getenv("OPENAI_API_KEY", "OpenAI API key hasn't been set.")

    serve()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define the server and add the ExcelService class, which implements the methods generated from the proto file. Because we receive the file as bytes, we have to use an in-memory byte reader before further processing the file and populating the second column.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = excel_pb2.FileResponse(file_content=output.read())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the end we return ☝️ for our Go client to receive.&lt;/p&gt;
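
&lt;p&gt;The byte round-trip is the core trick here, and it's easy to test in isolation. A minimal stdlib-only sketch, with a stand-in transform in place of the openpyxl/GPT processing:&lt;/p&gt;

```python
import io

def process_bytes(file_content: bytes) -> bytes:
    # Wrap the raw request bytes, like io.BytesIO(request.file_content).
    file_like = io.BytesIO(file_content)
    data = file_like.read()          # stand-in for openpyxl/pandas processing
    output = io.BytesIO()
    output.write(data.upper())       # stand-in for writing the answered sheet
    output.seek(0)                   # rewind before reading, as the server does
    return output.read()             # bytes for FileResponse(file_content=...)

print(process_bytes(b"questions"))   # b'QUESTIONS'
```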

&lt;blockquote&gt;
&lt;p&gt;For Python to be able to find the generated proto files, however, you should define an export path:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;export PYTHONPATH=$PYTHONPATH:mnt/c/own_dev/gRPC/server/proto&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Running Client and Server
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If all is good you can run

#First comes server

python3 -m main

#Then client

go run client.go Book1.xlsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you should get the updated .xlsx file back on the Go client side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article we explored the fundamentals of setting up gRPC communication between a Python server and a Go client. By leveraging gRPC, we established a seamless way to send an Excel file from a Go application to a Python server, process the file using OpenAI's GPT API, and return the modified file back to the Go client.&lt;/p&gt;

</description>
      <category>go</category>
      <category>python</category>
      <category>backend</category>
      <category>chatgpt</category>
    </item>
  </channel>
</rss>
