<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ama</title>
    <description>The latest articles on DEV Community by Ama (@amals367).</description>
    <link>https://dev.to/amals367</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3745339%2Fc5b18e65-62d1-4923-b31e-7940ab6eb0b3.jpeg</url>
      <title>DEV Community: Ama</title>
      <link>https://dev.to/amals367</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/amals367"/>
    <language>en</language>
    <item>
      <title>What You Actually Get from Google AI Pro (Beyond the Marketing)</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Thu, 05 Mar 2026 12:11:01 +0000</pubDate>
      <link>https://dev.to/amals367/what-you-actually-get-from-google-ai-pro-beyond-the-marketing-17c2</link>
      <guid>https://dev.to/amals367/what-you-actually-get-from-google-ai-pro-beyond-the-marketing-17c2</guid>
      <description>&lt;p&gt;Everyone lists the 2 TB and Gemini access, but that's just the box 📦. The real value is in the workflows it quietly unlocks—if you know where to look.&lt;/p&gt;




&lt;h2&gt;
  
  
  The real problem
&lt;/h2&gt;

&lt;p&gt;Most productivity tools add more steps to your day. You subscribe for one shiny feature, but end up managing another app, learning new shortcuts, and fighting a different UI 😮‍💨. The promise of "AI assistance" often feels like more work, not less.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I changed
&lt;/h2&gt;

&lt;p&gt;I stopped treating it as a chatbot subscription and started treating it as a &lt;strong&gt;context engine&lt;/strong&gt; 🧠. The goal isn't to have conversations with AI; it's to offload the mental overhead of starting, summarizing, and connecting information scattered across Google's ecosystem. The trade-off? You have to lean into Google's apps (Docs, Drive, Gmail) to get the full benefit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real workflow (step-by-step)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Dump everything into a dedicated Drive folder.&lt;/strong&gt; I have a folder called &lt;code&gt;_context&lt;/code&gt;. In goes every project brief, meeting note, spec sheet, and useful snippet I find. This isn't for organization—it's raw material 🗃️.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Start every new Doc with a command, not a thought.&lt;/strong&gt; Instead of staring at a blank page, I write a prompt like: "Based on the files in the &lt;code&gt;_context&lt;/code&gt; folder about [Project X], draft a technical approach focusing on error handling." It uses my past work as a template 🚀.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Use Gmail's "Help me write" for triage, not composition.&lt;/strong&gt; I don't have it write full emails. I use it on dense threads with: "Summarize the key decisions and list action items for me." In 3 seconds, I know if I need to engage or not 📬.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Mistakes and gotchas
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Mistake 1: Treating Gemini like a search engine.&lt;/strong&gt; It's terrible for factual lookups 🔍. Its strength is synthesizing &lt;em&gt;your&lt;/em&gt; content. Avoid asking "what is X?"; ask "based on my document Y, how would I approach X?"&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Mistake 2: Ignoring the family plan structure.&lt;/strong&gt; If you're paying solo, you're overpaying 💸. The 6-person family sharing is the secret. You can split the cost with trusted colleagues (everyone keeps their own account/data), making it one of the cheapest per-person AI tools available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More about the family plan &lt;a href="https://dev.to/amals367/20-subscription-6-people-google-one-ai-family-sharing-explained-65a"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final take
&lt;/h2&gt;

&lt;p&gt;Don't buy it for the AI. Buy it for the 2 TB of unified storage, then use the AI to make that storage actively useful. The intelligence is a bonus that only works if you feed it your own context.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR 🧾
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  The core benefit is using 2 TB of Drive storage as your team's or your own active, AI-accessible memory.&lt;/li&gt;
&lt;li&gt;  Real value comes from prompting Gemini with &lt;em&gt;your&lt;/em&gt; documents and emails, not general knowledge.&lt;/li&gt;
&lt;li&gt;  Always use the Family Plan; share with 5 others to drop the effective cost to ~$3.33/person/month.&lt;/li&gt;
&lt;li&gt;  It's a system for reducing startup friction on tasks, not a magic idea generator.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's the one annoying task it's actually helped you automate?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>context</category>
      <category>google</category>
      <category>programming</category>
    </item>
    <item>
      <title>Stop paying OpenAI to transcribe your voice notes (My offline Telegram bot stack) 🎙️</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Wed, 04 Mar 2026 13:45:16 +0000</pubDate>
      <link>https://dev.to/amals367/stop-paying-openai-to-transcribe-your-voice-notes-my-offline-telegram-bot-stack-3a65</link>
      <guid>https://dev.to/amals367/stop-paying-openai-to-transcribe-your-voice-notes-my-offline-telegram-bot-stack-3a65</guid>
      <description>&lt;p&gt;Every tutorial on building an &lt;strong&gt;AI Telegram bot&lt;/strong&gt; right now uses the exact same lazy architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User sends a voice message.&lt;/li&gt;
&lt;li&gt;Bot downloads the .ogg file.&lt;/li&gt;
&lt;li&gt;Bot sends the file to OpenAI's Whisper API.&lt;/li&gt;
&lt;li&gt;You get billed per minute of audio.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is fine if you are building a quick prototype. But if you actually use your bot every single day, you are burning money on a task your own CPU can do for free. Not to mention the privacy nightmare of shipping all your personal audio logs to a third-party cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The local alternative ⚙️&lt;/strong&gt;&lt;br&gt;
I wanted to build a Telegram interface for the Nomi API. I heavily rely on voice messages, so I needed speech-to-text.&lt;/p&gt;

&lt;p&gt;Instead of defaulting to a paid API, I built the entire transcription pipeline locally using Vosk and FFmpeg.&lt;/p&gt;

&lt;h2&gt;
  
  
  The workflow is dead simple:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Telegram sends the .ogg voice note.&lt;/li&gt;
&lt;li&gt;FFmpeg runs a local process to convert it to a .wav file with the correct sample rate.&lt;/li&gt;
&lt;li&gt;The offline Vosk model reads the file and returns the text.&lt;/li&gt;
&lt;li&gt;Then the text is sent to the LLM.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;code Python

# The core logic is just wrapping the offline tools
# No network requests, no API keys for the transcription

async def process_voice(file_path):
    wav_path = convert_ogg_to_wav_ffmpeg(file_path)
    text = await run_vosk_transcription(wav_path)
    return text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
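&lt;p&gt;Those two helpers are where the work happens. Here's a minimal sketch of what they might look like, assuming the &lt;code&gt;vosk&lt;/code&gt; package and an &lt;code&gt;ffmpeg&lt;/code&gt; binary on PATH (the function bodies are illustrative, not the exact repo code, and sync for brevity):&lt;/p&gt;

```python
import json
import subprocess
import wave

def build_ffmpeg_cmd(src: str, dst: str) -> list:
    # Vosk wants 16 kHz mono PCM WAV, so we normalize the .ogg here.
    return ["ffmpeg", "-y", "-i", src, "-ar", "16000", "-ac", "1", dst]

def convert_ogg_to_wav_ffmpeg(src: str) -> str:
    dst = src.rsplit(".", 1)[0] + ".wav"
    subprocess.run(build_ffmpeg_cmd(src, dst), check=True, capture_output=True)
    return dst

def run_vosk_transcription(wav_path: str, model_dir: str = "model") -> str:
    # Lazy import: vosk is only needed when actually transcribing.
    from vosk import Model, KaldiRecognizer
    wf = wave.open(wav_path, "rb")
    rec = KaldiRecognizer(Model(model_dir), wf.getframerate())
    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        rec.AcceptWaveform(data)
    return json.loads(rec.FinalResult())["text"]
```

&lt;p&gt;The 16 kHz mono conversion is the part people skip and then wonder why accuracy tanks: most Vosk models expect that format, and Telegram's Opus voice notes don't arrive in it.&lt;/p&gt;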



&lt;p&gt;&lt;strong&gt;Why this matters 🧱&lt;/strong&gt;&lt;br&gt;
Network latency disappears. Your server isn't waiting on a cloud provider to process an audio file. The transcription happens instantly, offline, and costs absolutely zero dollars.&lt;/p&gt;

&lt;p&gt;More importantly, it forces you to understand your tools. Setting up async subprocesses for FFmpeg inside a Telegram bot (I use aiogram) teaches you way more about backend architecture than just making another HTTP request to OpenAI.&lt;/p&gt;

&lt;p&gt;You can find the entire implementation in my NomiAssistantTG repo. It handles the Telegram async loop, the offline voice processing, and the API connections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I build micro-tools (and a quick favor) 🤝&lt;/strong&gt;&lt;br&gt;
Developers default to paid SaaS tools because setting up local binaries like FFmpeg and Vosk feels like a headache.&lt;/p&gt;

&lt;p&gt;I spend my time taking those headaches, wrapping them into clean, reusable Python code, and dropping them on GitHub. My repositories aren't theoretical frameworks; they are the actual tools I use to bypass expensive API limits.&lt;/p&gt;

&lt;p&gt;If my NomiAssistantTG code just saved you a monthly OpenAI Whisper bill, or gave you a working async boilerplate for your next Telegram bot, consider dropping a sponsorship on my GitHub:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/sponsors/AmaLS367" rel="noopener noreferrer"&gt;Sponsor AmaLS367 on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your support directly buys the time I need to keep breaking things, reading awful documentation, and open-sourcing production-ready templates so you don't have to.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Stop sending basic audio transcription to the cloud.&lt;/li&gt;
&lt;li&gt;Use FFmpeg to normalize Telegram voice notes.&lt;/li&gt;
&lt;li&gt;Use Vosk for free, offline, instant speech-to-text.&lt;/li&gt;
&lt;li&gt;Keep your architecture lean.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Are you still using Whisper API for personal projects, or have you moved to local models? Let me know 👇&lt;/p&gt;

</description>
      <category>python</category>
      <category>telegram</category>
      <category>backend</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I deleted my entire AI microservice and just used Postgres (here is why) 🐘⚡</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Sun, 01 Mar 2026 11:04:22 +0000</pubDate>
      <link>https://dev.to/amals367/i-deleted-my-entire-ai-microservice-and-just-used-postgres-here-is-why-541m</link>
      <guid>https://dev.to/amals367/i-deleted-my-entire-ai-microservice-and-just-used-postgres-here-is-why-541m</guid>
      <description>&lt;p&gt;A few months ago, I needed to build a feature that everyone is asking for right now: &lt;em&gt;"Let our users chat with their messy data."&lt;/em&gt; (In my case, it was a massive dump of chaotic customer support tickets).&lt;/p&gt;

&lt;p&gt;If you Google how to build this, the internet will immediately try to sell you a 5-tier architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need a Vector DB for embeddings.&lt;/li&gt;
&lt;li&gt;You need a graph database for relationships.&lt;/li&gt;
&lt;li&gt;You need LangChain to glue it together.&lt;/li&gt;
&lt;li&gt;You need a separate Python microservice to run it all.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;I fell for it. I built it.&lt;/strong&gt;&lt;br&gt;
And two weeks later, I was dealing with the most annoying bug in modern engineering: State mismatch.&lt;/p&gt;

&lt;p&gt;A user would delete a ticket in our main database, but the vector representation of that ticket &lt;strong&gt;still lived&lt;/strong&gt; in our Vector DB. The AI kept hallucinating answers based on deleted data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "Aha" Moment 💡&lt;/strong&gt;&lt;br&gt;
Syncing data between a relational database and a dedicated vector database is a nightmare. You have to write custom webhooks, handle failed retries, and pay for two separate servers.&lt;/p&gt;

&lt;p&gt;So, I just threw it all away.&lt;/p&gt;

&lt;p&gt;I stopped treating AI like some magical entity that requires a bespoke ecosystem, and went back to the most reliable tool in the backend world: Postgres.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Postgres + pgvector = Peace of Mind 🧘‍♂️ (and it's not a joke)&lt;/strong&gt;&lt;br&gt;
Instead of sending data across the internet to a third-party vector store, &lt;strong&gt;I just enabled the pgvector extension&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;My data and my embeddings now live in the exact same table:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION vector;

CREATE TABLE support_tickets (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id uuid REFERENCES users(id) ON DELETE CASCADE,
  issue_text text,
  embedding vector(1536) -- OpenAI's embedding size
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Look at that ON DELETE CASCADE.&lt;/strong&gt;&lt;br&gt;
If a user deletes their account, their tickets disappear. Because the tickets disappear, the embeddings disappear. Instant, ACID-compliant state management without writing a single line of sync logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dumping the heavy frameworks 🗑️&lt;/strong&gt;&lt;br&gt;
Once everything was in Postgres, I realized I didn't need LangChain or LlamaIndex either.&lt;/p&gt;

&lt;p&gt;Most AI frameworks try to do too much. They hide the actual API calls behind layers of abstraction, making debugging impossible.&lt;/p&gt;

&lt;p&gt;Now, my entire "RAG" pipeline is just a raw SQL query and a standard API fetch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User asks a question.&lt;/li&gt;
&lt;li&gt;Turn the question into an embedding.&lt;/li&gt;
&lt;li&gt;Run a cosine similarity search directly in SQL:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT issue_text 
FROM support_tickets 
ORDER BY embedding &amp;lt;=&amp;gt; $1 
LIMIT 5;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Feed those 5 results into the OpenAI/Anthropic API with a strict JSON schema.&lt;/p&gt;
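&lt;p&gt;The "strict JSON schema" part is mostly prompt discipline. A minimal sketch of that last step, assuming &lt;code&gt;rows&lt;/code&gt; is the list of five &lt;code&gt;issue_text&lt;/code&gt; strings the similarity query returned (the schema shape is just an example):&lt;/p&gt;

```python
import json

def build_rag_prompt(question: str, rows: list) -> str:
    # rows: the five issue_text strings from the pgvector query.
    context = "\n---\n".join(rows)
    schema = json.dumps({"answer": "string", "ticket_refs": ["string"]})
    return (
        "Answer using ONLY the context below.\n"
        f"Respond as JSON matching this schema: {schema}\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

&lt;p&gt;Whatever string comes back, you &lt;code&gt;json.loads&lt;/code&gt; it and validate; if parsing fails, retry once and then fail loudly. No framework required.&lt;/p&gt;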

&lt;p&gt;That's it. No microservices. No $80/month Vector DB bills. Just a regular monolithic backend doing its job efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I'm sharing this (and how you can support) ☕&lt;/strong&gt;&lt;br&gt;
The tech industry loves hype. We are constantly pushed to adopt the newest, most complex tools, even when a boring 20-year-old technology does it better.&lt;/p&gt;

&lt;p&gt;I spend a lot of my time testing these architectures, making the mistakes, and stripping away the marketing fluff so you can just build things that actually work in production.&lt;/p&gt;

&lt;p&gt;If this post just saved you from an unnecessary architecture rewrite, a massive SaaS bill, or a weekend of debugging sync errors, consider sponsoring my work on GitHub:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/sponsors/AmaLS367" rel="noopener noreferrer"&gt;Sponsor AmaLS367 on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sponsorships give me the freedom to keep experimenting, breaking things, and open-sourcing production-ready boilerplates that save you time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;br&gt;
Don't over-engineer your AI features.&lt;br&gt;
Dedicated Vector DBs often introduce data-sync nightmares.&lt;br&gt;
pgvector keeps your embeddings ACID-compliant.&lt;br&gt;
Ditch heavy AI frameworks; raw API calls + SQL are easier to debug and maintain.&lt;/p&gt;

&lt;p&gt;I’m really curious — how are you managing your embeddings right now? Are you using a separate DB or keeping it monolithic? Let me know below! 👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>backend</category>
      <category>postgres</category>
      <category>architecture</category>
    </item>
    <item>
      <title>I ship a lot of API/webhook integrations. Here’s how I make them NOT hurt in production 🔥</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Fri, 27 Feb 2026 13:19:06 +0000</pubDate>
      <link>https://dev.to/amals367/i-ship-a-lot-of-apiwebhook-integrations-heres-how-i-make-them-not-hurt-in-production-50hb</link>
      <guid>https://dev.to/amals367/i-ship-a-lot-of-apiwebhook-integrations-heres-how-i-make-them-not-hurt-in-production-50hb</guid>
      <description>&lt;h1&gt;
  
  
  I ship a lot of API/webhook integrations. Here’s how I make them NOT hurt in production 🔥
&lt;/h1&gt;

&lt;p&gt;If you do freelance backend long enough, you start noticing a pattern:&lt;/p&gt;

&lt;p&gt;Clients don’t pay for “beautiful code”.&lt;br&gt;
They pay for &lt;strong&gt;it working tomorrow&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And webhook integrations are the fastest way to get random chaos:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;duplicate events&lt;/li&gt;
&lt;li&gt;out of order delivery&lt;/li&gt;
&lt;li&gt;retries that DDoS you&lt;/li&gt;
&lt;li&gt;and the classic “it worked yesterday 🤡”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So here’s my real-world baseline for building webhook/API integrations that don’t wake me up at 3AM.&lt;/p&gt;

&lt;p&gt;No theory. Just a practical checklist + a simple architecture that scales.&lt;/p&gt;




&lt;h2&gt;
  
  
  1) Assume the webhook will be duplicated. Because it will. ✅
&lt;/h2&gt;

&lt;p&gt;If you process every incoming request as “unique”, you’re cooked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule:&lt;/strong&gt; every webhook must be idempotent.&lt;/p&gt;

&lt;p&gt;That means you need an &lt;strong&gt;event id&lt;/strong&gt; or a &lt;strong&gt;hash&lt;/strong&gt; that lets you say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Seen it. Skipping.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Real workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extract &lt;code&gt;event_id&lt;/code&gt; from payload (or generate a hash from stable fields)&lt;/li&gt;
&lt;li&gt;store it with a status&lt;/li&gt;
&lt;li&gt;on repeat: return &lt;code&gt;200 OK&lt;/code&gt; and do nothing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because if you return 500, they will retry harder.&lt;/p&gt;
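&lt;p&gt;A minimal sketch of that gate, using SQLite for brevity (any store with a unique constraint works; table and column names are my own):&lt;/p&gt;

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE seen_events (event_id TEXT PRIMARY KEY, status TEXT)")

def handle_event(event_id: str) -> str:
    # INSERT OR IGNORE: the unique key makes duplicates a no-op.
    cur = db.execute(
        "INSERT OR IGNORE INTO seen_events (event_id, status) VALUES (?, 'received')",
        (event_id,),
    )
    db.commit()
    if cur.rowcount == 0:
        return "duplicate"   # seen it, skip, still return 200 OK
    return "accepted"        # first time: enqueue real processing here
```

&lt;p&gt;The key point: the database enforces uniqueness atomically, so two concurrent deliveries of the same event can't both pass the check.&lt;/p&gt;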




&lt;h2&gt;
  
  
  2) Acknowledge fast. Process async. ⚡
&lt;/h2&gt;

&lt;p&gt;A webhook handler that does real work inside the HTTP request is a trap.&lt;/p&gt;

&lt;p&gt;It feels fine until:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your DB is slow for 5 seconds&lt;/li&gt;
&lt;li&gt;the provider timeout hits&lt;/li&gt;
&lt;li&gt;retries begin&lt;/li&gt;
&lt;li&gt;now you’re processing the same event 5 times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My default:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Receive webhook&lt;/li&gt;
&lt;li&gt;Validate signature / basic checks&lt;/li&gt;
&lt;li&gt;Save event to DB (raw payload + metadata)&lt;/li&gt;
&lt;li&gt;Return &lt;code&gt;200 OK&lt;/code&gt; fast&lt;/li&gt;
&lt;li&gt;Process the event in a worker/job queue&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This makes your system calm.&lt;/p&gt;
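&lt;p&gt;A toy version of that shape with a stdlib queue, standing in for whatever real job runner you use (Celery, RQ, a Postgres-backed queue):&lt;/p&gt;

```python
import queue
import threading

jobs = queue.Queue()
processed = []

def webhook_handler(payload: dict) -> int:
    # Steps 1-4: validate and persist the raw payload (omitted here),
    # enqueue the event, and ACK immediately.
    jobs.put(payload)
    return 200  # returned before any real work happens

def worker():
    # Step 5: the slow part runs outside the HTTP request.
    while True:
        payload = jobs.get()
        if payload is None:
            break  # shutdown sentinel
        processed.append(payload["event_id"])
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
```

&lt;p&gt;The handler's only job is to get the event durably out of the request path; everything slow, flaky, or retryable lives in the worker.&lt;/p&gt;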




&lt;h2&gt;
  
  
  3) Store raw payloads. Future you will thank you 🧠
&lt;/h2&gt;

&lt;p&gt;When something breaks, the client will say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I don’t know, it just didn’t send.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you don’t store raw payloads, you have no evidence and no replay.&lt;/p&gt;

&lt;p&gt;I always store:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;full raw JSON payload&lt;/li&gt;
&lt;li&gt;headers (at least important ones)&lt;/li&gt;
&lt;li&gt;provider name&lt;/li&gt;
&lt;li&gt;received timestamp&lt;/li&gt;
&lt;li&gt;processing status&lt;/li&gt;
&lt;li&gt;error message if failed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;replay events&lt;/li&gt;
&lt;li&gt;debug edge cases&lt;/li&gt;
&lt;li&gt;prove what happened&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It turns “guessing” into “knowing”.&lt;/p&gt;
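&lt;p&gt;That checklist maps onto a single events table. A sketch in SQLite-flavored SQL (column names are my own, adapt freely):&lt;/p&gt;

```python
import sqlite3

DDL = """
CREATE TABLE webhook_events (
    id          INTEGER PRIMARY KEY,
    provider    TEXT NOT NULL,         -- which service sent it
    raw_payload TEXT NOT NULL,         -- full raw JSON, for replay
    headers     TEXT,                  -- at least the important ones
    received_at TEXT DEFAULT (datetime('now')),
    status      TEXT DEFAULT 'received',
    error       TEXT                   -- short human-readable failure note
)
"""

db = sqlite3.connect(":memory:")
db.execute(DDL)
```

&lt;p&gt;Replay then becomes a SELECT plus re-enqueue, and "prove what happened" becomes a query instead of an argument.&lt;/p&gt;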




&lt;h2&gt;
  
  
  4) Security: verify signatures or don’t pretend it’s secure 🔒
&lt;/h2&gt;

&lt;p&gt;If the provider supports signatures, verify them.&lt;/p&gt;

&lt;p&gt;Not later. Not “we’ll add it after MVP”.&lt;/p&gt;

&lt;p&gt;Right away.&lt;/p&gt;

&lt;p&gt;Because otherwise you’re basically running:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;public endpoint that triggers actions&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s how you get spam, abuse, or worse.&lt;/p&gt;




&lt;h2&gt;
  
  
  5) Rate limits and backoff: retries are not your enemy, your implementation is 😅
&lt;/h2&gt;

&lt;p&gt;When processing fails, don’t do instant retries like a maniac.&lt;/p&gt;

&lt;p&gt;Use backoff:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 min&lt;/li&gt;
&lt;li&gt;5 min&lt;/li&gt;
&lt;li&gt;30 min&lt;/li&gt;
&lt;li&gt;2 hours&lt;/li&gt;
&lt;li&gt;dead-letter queue (manual review)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most integrations fail because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;temporary provider downtime&lt;/li&gt;
&lt;li&gt;temporary DB issue&lt;/li&gt;
&lt;li&gt;network nonsense&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Backoff makes it survive like a tank.&lt;/p&gt;
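&lt;p&gt;That schedule fits in a few lines. Returning &lt;code&gt;None&lt;/code&gt; means "stop retrying, send it to the dead-letter queue" (the exact steps are just my defaults):&lt;/p&gt;

```python
BACKOFF_MINUTES = [1, 5, 30, 120]  # after these, dead-letter for manual review

def next_delay(attempt: int):
    """Delay in minutes before retry number `attempt` (0-based),
    or None when the event should go to the dead-letter queue."""
    try:
        return BACKOFF_MINUTES[attempt]
    except IndexError:
        return None
```

&lt;p&gt;Most transient failures resolve within the first two steps; the long tail is exactly what the dead-letter queue is for.&lt;/p&gt;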




&lt;h2&gt;
  
  
  6) Logging that actually helps, not “we logged something” 📝
&lt;/h2&gt;

&lt;p&gt;I log at two layers:&lt;/p&gt;

&lt;h3&gt;
  
  
  Request layer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;request id&lt;/li&gt;
&lt;li&gt;provider&lt;/li&gt;
&lt;li&gt;event id&lt;/li&gt;
&lt;li&gt;status returned&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Job layer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;event id&lt;/li&gt;
&lt;li&gt;job attempt&lt;/li&gt;
&lt;li&gt;result&lt;/li&gt;
&lt;li&gt;full error stack (if any)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And one extra rule:&lt;br&gt;
If job fails, save a &lt;strong&gt;short human-readable error&lt;/strong&gt; near the event record.&lt;/p&gt;

&lt;p&gt;So later I can scan the DB and instantly see patterns.&lt;/p&gt;




&lt;h2&gt;
  
  
  7) My minimal scalable structure (simple but powerful)
&lt;/h2&gt;

&lt;p&gt;I like separating responsibilities like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;webhook_controller&lt;/code&gt;&lt;br&gt;
accepts HTTP, validates, stores event, returns response fast&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;event_store&lt;/code&gt;&lt;br&gt;
saves raw payloads, dedup keys, statuses&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;processor&lt;/code&gt;&lt;br&gt;
contains business logic: “what do we do with this event”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;adapters&lt;/code&gt;&lt;br&gt;
provider-specific mapping (CRM A vs CRM B)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;queue/worker&lt;/code&gt;&lt;br&gt;
runs processing asynchronously with retry rules&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This lets you add new integrations without rewriting everything.&lt;/p&gt;

&lt;p&gt;You just add a new adapter.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common production “gotchas” (learned the annoying way) 🤝
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Out-of-order events
&lt;/h3&gt;

&lt;p&gt;You might receive “updated” before “created”.&lt;/p&gt;

&lt;p&gt;Solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;allow upserts&lt;/li&gt;
&lt;li&gt;store event history&lt;/li&gt;
&lt;li&gt;process based on current state&lt;/li&gt;
&lt;/ul&gt;
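&lt;p&gt;"Allow upserts" concretely: if "updated" arrives before "created", the upsert creates the row anyway, and a sequence check drops stale events. A sketch using SQLite's upsert syntax (Postgres is the same shape; names are illustrative):&lt;/p&gt;

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE tickets (id TEXT PRIMARY KEY, title TEXT, updated_seq INTEGER DEFAULT 0)"
)

UPSERT = """
INSERT INTO tickets (id, title, updated_seq) VALUES (?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
    title = excluded.title,
    updated_seq = excluded.updated_seq
WHERE excluded.updated_seq >= tickets.updated_seq  -- ignore stale events
"""

def apply_event(ticket_id, title, seq):
    db.execute(UPSERT, (ticket_id, title, seq))
    db.commit()
```

&lt;p&gt;The sequence (or provider timestamp) guard is what makes processing order-independent: late, stale events simply no-op.&lt;/p&gt;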

&lt;h3&gt;
  
  
  Provider sends partial data
&lt;/h3&gt;

&lt;p&gt;Sometimes they send only IDs and you must fetch details.&lt;/p&gt;

&lt;p&gt;Solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use a “hydration step” in the worker (API pull)&lt;/li&gt;
&lt;li&gt;cache if needed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Webhook timeouts
&lt;/h3&gt;

&lt;p&gt;If you process inside the request, you lose.&lt;/p&gt;

&lt;p&gt;Solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fast ACK, async processing&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  TL;DR 🧾
&lt;/h2&gt;

&lt;p&gt;If you want webhook integrations that behave in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;idempotency is mandatory&lt;/li&gt;
&lt;li&gt;acknowledge fast, process async&lt;/li&gt;
&lt;li&gt;store raw payloads&lt;/li&gt;
&lt;li&gt;verify signatures&lt;/li&gt;
&lt;li&gt;implement sane retries&lt;/li&gt;
&lt;li&gt;log like you’ll debug it later (because you will)&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you’ve ever shipped webhooks in production, you already know:&lt;/p&gt;

&lt;p&gt;it’s never “done”.&lt;/p&gt;

&lt;p&gt;it’s “stable enough to survive real traffic” 😄&lt;/p&gt;

&lt;p&gt;Drop your worst webhook horror story below 👇&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>What you can actually DO with Google AI Pro (real workflows) ⚡</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Wed, 25 Feb 2026 18:24:41 +0000</pubDate>
      <link>https://dev.to/amals367/what-you-can-actually-do-with-google-ai-pro-real-workflows-56g0</link>
      <guid>https://dev.to/amals367/what-you-can-actually-do-with-google-ai-pro-real-workflows-56g0</guid>
      <description>&lt;p&gt;Most posts about Google AI Pro sound like:&lt;/p&gt;

&lt;p&gt;“2 TB storage 🤖&lt;br&gt;
Gemini in Gmail 🤖&lt;br&gt;
Gemini in Docs 🤖”&lt;/p&gt;

&lt;p&gt;Cool.&lt;/p&gt;

&lt;p&gt;But none of that matters until it saves you from something annoying in real life.&lt;/p&gt;

&lt;p&gt;So here’s how I actually use it — not as a chatbot, but as a daily tool that quietly removes friction.&lt;/p&gt;




&lt;h2&gt;
  
  
  Gmail → from chaos to “ok I know what’s going on” 📬
&lt;/h2&gt;

&lt;p&gt;You know those email threads where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;12 replies&lt;/li&gt;
&lt;li&gt;half decisions&lt;/li&gt;
&lt;li&gt;someone changed the scope&lt;/li&gt;
&lt;li&gt;and you’re mentioned once in the middle&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before: reread everything like a detective 🕵️&lt;/p&gt;

&lt;p&gt;Now I just ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What was decided here and what do I need to do?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that’s it.&lt;/p&gt;

&lt;p&gt;I get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;decisions&lt;/li&gt;
&lt;li&gt;action items&lt;/li&gt;
&lt;li&gt;things waiting for me&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the first time an AI feature actually made my inbox calmer instead of louder.&lt;/p&gt;




&lt;h2&gt;
  
  
  Starting a project without staring at a blank page
&lt;/h2&gt;

&lt;p&gt;Blank Docs page = fake productivity.&lt;/p&gt;

&lt;p&gt;Now my flow is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a technical plan for a Playwright-based monitoring tool with retries, logging and alerts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It gives me a structure.&lt;/p&gt;

&lt;p&gt;Not something I copy.&lt;/p&gt;

&lt;p&gt;Something I react to.&lt;/p&gt;

&lt;p&gt;That’s the difference.&lt;/p&gt;

&lt;p&gt;It turns “ugh I need to start” into&lt;br&gt;
“ok this is already moving”.&lt;/p&gt;




&lt;h2&gt;
  
  
  The underrated use: reading long technical stuff for me 🧠
&lt;/h2&gt;

&lt;p&gt;Client specs.&lt;br&gt;
Random API docs.&lt;br&gt;
Some integration written in… a very creative way.&lt;/p&gt;

&lt;p&gt;Instead of scanning everything:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What are the risky parts?&lt;br&gt;
What is unclear?&lt;br&gt;
What will break in production?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That question alone saved me from writing the wrong thing more than once.&lt;/p&gt;

&lt;p&gt;And that’s hours of life.&lt;/p&gt;




&lt;h2&gt;
  
  
  Google Drive became my external memory 🗂️
&lt;/h2&gt;

&lt;p&gt;This part is low-key insane.&lt;/p&gt;

&lt;p&gt;I dump there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;old project notes&lt;/li&gt;
&lt;li&gt;architecture drafts&lt;/li&gt;
&lt;li&gt;random research&lt;/li&gt;
&lt;li&gt;useful snippets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Based on my previous automation projects, suggest a structure for this new one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You’re no longer prompting a model.&lt;/p&gt;

&lt;p&gt;You’re prompting &lt;strong&gt;your past self&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That feeling is wild.&lt;/p&gt;




&lt;h2&gt;
  
  
  Research without the 37-tab anxiety 🌐
&lt;/h2&gt;

&lt;p&gt;My normal research looked like:&lt;/p&gt;

&lt;p&gt;open tabs → open more tabs → forget why I opened the first tab.&lt;/p&gt;

&lt;p&gt;Now:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Compare these approaches for browser automation in production. Focus on stability and scaling.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I still open sources.&lt;/p&gt;

&lt;p&gt;But only the ones that matter.&lt;/p&gt;

&lt;p&gt;Less noise. Same depth.&lt;/p&gt;




&lt;h2&gt;
  
  
  Coding use (not the way people think) 💻
&lt;/h2&gt;

&lt;p&gt;I don’t use it for:&lt;/p&gt;

&lt;p&gt;“write me a function”.&lt;/p&gt;

&lt;p&gt;I use it for:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What kind of race condition can happen here?&lt;/p&gt;

&lt;p&gt;Is this module boundary bad?&lt;/p&gt;

&lt;p&gt;How would you refactor this safely?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s like having a second brain for architectural thinking.&lt;/p&gt;

&lt;p&gt;And it never gets tired.&lt;/p&gt;




&lt;h2&gt;
  
  
  2 TB storage changed one stupid habit ☁️
&lt;/h2&gt;

&lt;p&gt;I stopped deleting things.&lt;/p&gt;

&lt;p&gt;Seriously.&lt;/p&gt;

&lt;p&gt;Now I keep:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;datasets&lt;/li&gt;
&lt;li&gt;recordings&lt;/li&gt;
&lt;li&gt;project archives&lt;/li&gt;
&lt;li&gt;experiments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which means:&lt;/p&gt;

&lt;p&gt;my past work is always available → AI can use it → better outputs later.&lt;/p&gt;

&lt;p&gt;Before I optimized for space.&lt;/p&gt;

&lt;p&gt;Now I optimize for context.&lt;/p&gt;




&lt;h2&gt;
  
  
  Family sharing = the most practical “AI scaling” move
&lt;/h2&gt;

&lt;p&gt;Everyone has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;their own account&lt;/li&gt;
&lt;li&gt;their own chats&lt;/li&gt;
&lt;li&gt;their own workspace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the plan is shared.&lt;/p&gt;

&lt;p&gt;So it doesn’t feel like:&lt;/p&gt;

&lt;p&gt;“paying for a tool”&lt;/p&gt;

&lt;p&gt;It feels like:&lt;/p&gt;

&lt;p&gt;a small private AI environment for people you work or live with.&lt;/p&gt;




&lt;h1&gt;
  
  
  The real shift
&lt;/h1&gt;

&lt;p&gt;The biggest change is not the model.&lt;/p&gt;

&lt;p&gt;It’s this:&lt;/p&gt;

&lt;p&gt;AI is no longer a separate tab I visit.&lt;/p&gt;

&lt;p&gt;It lives inside:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gmail&lt;/li&gt;
&lt;li&gt;Docs&lt;/li&gt;
&lt;li&gt;Drive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which I already use all day.&lt;/p&gt;

&lt;p&gt;So instead of “using AI”&lt;br&gt;
I just… work.&lt;/p&gt;




&lt;h1&gt;
  
  
  Who this is actually for
&lt;/h1&gt;

&lt;p&gt;This setup makes sense if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you juggle multiple projects&lt;/li&gt;
&lt;li&gt;you read long messy things&lt;/li&gt;
&lt;li&gt;you plan systems before coding&lt;/li&gt;
&lt;li&gt;you reuse your own knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not if you just want:&lt;/p&gt;

&lt;p&gt;“write me a tweet”.&lt;/p&gt;




&lt;h1&gt;
  
  
  What I’m testing next 🧪
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Using Drive as structured long-term context&lt;/li&gt;
&lt;li&gt;Full project planning inside Docs&lt;/li&gt;
&lt;li&gt;Pushing it harder in real coding workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Will share what actually holds up in production life.&lt;/p&gt;




&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;p&gt;Google AI Pro became useful for me when I stopped treating it like a chatbot and started using it as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;inbox interpreter&lt;/li&gt;
&lt;li&gt;project starter&lt;/li&gt;
&lt;li&gt;context reader&lt;/li&gt;
&lt;li&gt;external memory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else is just features.&lt;/p&gt;




&lt;p&gt;If you’re using it in a different way — I’m genuinely curious.&lt;br&gt;
Always looking for workflows that remove friction ⚡&lt;/p&gt;




</description>
      <category>ai</category>
      <category>gemini</category>
      <category>google</category>
      <category>productivity</category>
    </item>
    <item>
      <title>$20 subscription 6 people: Google One AI family sharing explained 😈✨</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Wed, 25 Feb 2026 12:47:31 +0000</pubDate>
      <link>https://dev.to/amals367/20-subscription-6-people-google-one-ai-family-sharing-explained-65a</link>
      <guid>https://dev.to/amals367/20-subscription-6-people-google-one-ai-family-sharing-explained-65a</guid>
      <description>&lt;p&gt;If you’re paying for Google AI Pro alone, you might be leaving value on the table.&lt;/p&gt;

&lt;p&gt;Google lets you create a &lt;strong&gt;family group with up to 6 accounts total&lt;/strong&gt; (1 manager + 5 members) and share certain Google services across that group. So one person pays, and multiple people can benefit — &lt;strong&gt;without mixing chats&lt;/strong&gt;, because everyone still uses their &lt;strong&gt;own Google account&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This post explains what you actually get, what’s shared vs not shared, and how to set it up without drama 🙃&lt;/p&gt;




&lt;h2&gt;
  
  
  The idea (in one sentence)
&lt;/h2&gt;

&lt;p&gt;One person buys &lt;strong&gt;Google AI Pro ($19.99/month)&lt;/strong&gt;, enables &lt;strong&gt;family sharing&lt;/strong&gt;, invites up to 5 people, and each person uses Gemini from their &lt;strong&gt;own&lt;/strong&gt; Google account (so chats and history don’t collide).&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚠️ Quick honesty check (before the “invite whoever” part)
&lt;/h2&gt;

&lt;p&gt;Google calls it a &lt;strong&gt;family group&lt;/strong&gt; and markets it as sharing with “people you love” / household. You &lt;em&gt;can&lt;/em&gt; invite any Google account technically, but don’t be weird about it: invite people you trust, and assume Google expects it to be “family/household-style sharing.”&lt;/p&gt;

&lt;p&gt;That said… yes: it’s &lt;strong&gt;not a shared login&lt;/strong&gt;. Each member uses their own account, their own prompts, their own Drive, etc. ✅&lt;/p&gt;




&lt;h2&gt;
  
  
  What you get (the “why this is OP” list) 🔥
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Up to &lt;strong&gt;6 accounts&lt;/strong&gt; total
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;1 family manager&lt;/li&gt;
&lt;li&gt;up to 5 invited members&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even pending invites count toward the limit.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Gemini benefits for family members (no extra cost)
&lt;/h3&gt;

&lt;p&gt;Google’s own help docs literally say that &lt;strong&gt;family plan members on a Google AI Pro plan can enjoy AI benefits and features at no extra cost&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) &lt;strong&gt;2 TB&lt;/strong&gt; storage (shared pool)
&lt;/h3&gt;

&lt;p&gt;AI Pro includes &lt;strong&gt;2 TB&lt;/strong&gt; total storage across &lt;strong&gt;Google Drive / Gmail / Photos&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) Gemini inside Google apps (Gmail / Docs / etc.)
&lt;/h3&gt;

&lt;p&gt;Google has been rolling Gemini into apps like &lt;strong&gt;Gmail, Docs, Sheets, Slides, Meet&lt;/strong&gt; for these plans (availability can vary by country/language).&lt;/p&gt;

&lt;h3&gt;
  
  
  5) Coding goodies (yes, Antigravity is in the bundle)
&lt;/h3&gt;

&lt;p&gt;Per the official subscriptions page, Google AI Pro includes higher usage limits across a bunch of dev tools like &lt;strong&gt;Gemini Code Assist / Gemini CLI / Jules&lt;/strong&gt; and &lt;strong&gt;Google Antigravity&lt;/strong&gt;.&lt;br&gt;
(And yeah, Antigravity is their “agentic dev platform” thing — it’s getting real attention lately.)&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s shared vs what’s NOT shared 🧠
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ Shared
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;membership benefits&lt;/strong&gt; that are meant to be shareable across a family group (AI Pro perks)&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;storage pool&lt;/strong&gt; (2 TB) across the family group&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ❌ Not shared (important)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Your &lt;strong&gt;Gemini chats&lt;/strong&gt; are not merged (different accounts → different chat histories) ✅&lt;/li&gt;
&lt;li&gt;Your &lt;strong&gt;files are not visible&lt;/strong&gt; to the manager unless you explicitly share them. Google states that others in your family won’t have access to your files unless you share directly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So you get the benefits, but you don’t become roommates inside one account. Perfect.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step-by-step setup (the clean way) 🛠️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: One person becomes the “plan manager”
&lt;/h3&gt;

&lt;p&gt;They subscribe to &lt;strong&gt;Google AI Pro&lt;/strong&gt; (currently shown as &lt;strong&gt;$19.99/month&lt;/strong&gt; on Google’s subscription page).&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create / manage your Google family group
&lt;/h3&gt;

&lt;p&gt;As the family manager, you can invite up to 5 people.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Turn on Google One family sharing
&lt;/h3&gt;

&lt;p&gt;Google One supports sharing with up to 5 family members.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Invite people (and they accept)
&lt;/h3&gt;

&lt;p&gt;They accept the invite, and then they just use Gemini normally… &lt;strong&gt;on their own account&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common gotchas (aka “why doesn’t it work for my friend?”) 😵‍💫
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Country availability / rollout&lt;/strong&gt;&lt;br&gt;
AI features and integrations can vary by country/territory and rollout timing. Google AI Pro availability is listed as “150+ countries/territories” (check your region).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You hit the 6-person limit&lt;/strong&gt;&lt;br&gt;
It’s strict: &lt;strong&gt;5 members + 1 manager&lt;/strong&gt; max.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage confusion&lt;/strong&gt;&lt;br&gt;
Storage is a shared pool, but &lt;strong&gt;each person controls their own files&lt;/strong&gt;. If someone is eating all the storage, you can’t delete their stuff — they need to clean up their own account.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  My personal “best use” setup 😎
&lt;/h2&gt;

&lt;p&gt;If you’re doing coding + content + life stuff:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Gemini Pro for &lt;strong&gt;research + planning + drafting&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use Gemini inside &lt;strong&gt;Gmail/Docs&lt;/strong&gt; for annoying admin work&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Antigravity / Code Assist / CLI&lt;/strong&gt; for coding workflows (depending on what you like)&lt;/li&gt;
&lt;li&gt;Enjoy the &lt;strong&gt;2 TB&lt;/strong&gt; and stop playing storage Tetris forever&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  TL;DR 🧾
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Google family group = &lt;strong&gt;up to 6 accounts&lt;/strong&gt; (manager + 5)&lt;/li&gt;
&lt;li&gt;Google AI Pro = &lt;strong&gt;$19.99/month&lt;/strong&gt;, includes &lt;strong&gt;2 TB&lt;/strong&gt; + Gemini in apps + dev tools&lt;/li&gt;
&lt;li&gt;Family members can enjoy &lt;strong&gt;AI benefits at no extra cost&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chats don’t mix&lt;/strong&gt; (separate accounts), and &lt;strong&gt;files aren’t visible&lt;/strong&gt; unless shared &lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;p&gt;1: &lt;a href="https://support.google.com/googleplay/answer/6317858?hl=en&amp;amp;utm_source=chatgpt.com" rel="noopener noreferrer"&gt;https://support.google.com/googleplay/answer/6317858?hl=en&amp;amp;utm_source=chatgpt.com&lt;/a&gt; "Join or leave a family on Google"&lt;br&gt;
2: &lt;a href="https://gemini.google/subscriptions/" rel="noopener noreferrer"&gt;https://gemini.google/subscriptions/&lt;/a&gt; "Google AI Pro &amp;amp; Ultra — get access to Gemini 3.1 Pro &amp;amp; more"&lt;br&gt;
3: &lt;a href="https://families.google/intl/en_ca/families/?utm_source=chatgpt.com" rel="noopener noreferrer"&gt;https://families.google/intl/en_ca/families/?utm_source=chatgpt.com&lt;/a&gt; "Stay Connected with a Family Account"&lt;br&gt;
4: &lt;a href="https://support.google.com/googleone/answer/14534406?hl=en" rel="noopener noreferrer"&gt;https://support.google.com/googleone/answer/14534406?hl=en&lt;/a&gt; "Use Google AI Pro benefits - Google One Help"&lt;br&gt;
5: &lt;a href="https://blog.google/products-and-platforms/products/google-one/google-one-gemini-ai-gmail-docs-sheets/" rel="noopener noreferrer"&gt;https://blog.google/products-and-platforms/products/google-one/google-one-gemini-ai-gmail-docs-sheets/&lt;/a&gt; "Google One AI Premium: Gemini access in Gmail, Docs, Sheets and more"&lt;br&gt;
6: &lt;a href="https://www.theverge.com/news/822833/google-antigravity-ide-coding-agent-gemini-3-pro" rel="noopener noreferrer"&gt;https://www.theverge.com/news/822833/google-antigravity-ide-coding-agent-gemini-3-pro&lt;/a&gt; "Google Antigravity is an ‘agent-first’ coding tool built for Gemini 3 | The Verge"&lt;br&gt;
7: &lt;a href="https://support.google.com/googleone/answer/9004015?co=GENIE.Platform%3DAndroid&amp;amp;hl=en" rel="noopener noreferrer"&gt;https://support.google.com/googleone/answer/9004015?co=GENIE.Platform%3DAndroid&amp;amp;hl=en&lt;/a&gt; "Start or stop sharing with your family - Android - Google One Help"&lt;br&gt;
8: &lt;a href="https://support.google.com/googleplay/answer/6286986?co=GENIE.Platform%3DAndroid&amp;amp;hl=en&amp;amp;utm_source=chatgpt.com" rel="noopener noreferrer"&gt;https://support.google.com/googleplay/answer/6286986?co=GENIE.Platform%3DAndroid&amp;amp;hl=en&amp;amp;utm_source=chatgpt.com&lt;/a&gt; "Manage your family on Google - Android"&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>google</category>
      <category>gemini</category>
    </item>
    <item>
      <title>♻️ Persisting Login Sessions in Headless Playwright Automation</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Mon, 02 Feb 2026 16:24:37 +0000</pubDate>
      <link>https://dev.to/amals367/persisting-login-sessions-in-headless-playwright-automation-k07</link>
      <guid>https://dev.to/amals367/persisting-login-sessions-in-headless-playwright-automation-k07</guid>
      <description>&lt;p&gt;Logging into websites every single run is the fastest way to:&lt;/p&gt;

&lt;p&gt;🚫 trigger anti-bot systems&lt;br&gt;
🧩 face endless CAPTCHAs&lt;br&gt;
🔒 get accounts locked&lt;br&gt;
🐌 slow down your scraper&lt;/p&gt;

&lt;p&gt;The correct pattern is simple:&lt;/p&gt;

&lt;p&gt;Login once → persist browser profile → reuse it forever (until expiration).&lt;/p&gt;

&lt;p&gt;In this article I will show how this works in a real project:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/AmaLS367/parts_info_collector" rel="noopener noreferrer"&gt;https://github.com/AmaLS367/parts_info_collector&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This project automates data extraction from Gemini’s web UI and keeps authentication between runs using a persistent browser profile.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 The Core Idea
&lt;/h2&gt;

&lt;p&gt;Instead of exporting cookies manually, Playwright can launch Chromium with a persistent user profile directory.&lt;/p&gt;

&lt;p&gt;That directory stores:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cookies 🍪&lt;/li&gt;
&lt;li&gt;localStorage&lt;/li&gt;
&lt;li&gt;IndexedDB&lt;/li&gt;
&lt;li&gt;login tokens&lt;/li&gt;
&lt;li&gt;session metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once created, it becomes your reusable authenticated identity.&lt;/p&gt;

&lt;h2&gt;
  
  
  📂 How the Project Does It
&lt;/h2&gt;

&lt;p&gt;In parts_info_collector, authentication is handled via:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a user-data/ directory&lt;/li&gt;
&lt;li&gt;a first interactive run via first_start.bat&lt;/li&gt;
&lt;li&gt;manual login to Gemini&lt;/li&gt;
&lt;li&gt;all next runs reusing the same browser profile&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the flow is:&lt;/p&gt;

&lt;p&gt;1️⃣ Run once with UI&lt;br&gt;
2️⃣ Log in manually&lt;br&gt;
3️⃣ Browser profile is saved&lt;br&gt;
4️⃣ Next runs are headless and already authenticated&lt;/p&gt;

&lt;p&gt;No repeated login.&lt;br&gt;
No constant CAPTCHA hell 😌&lt;/p&gt;

&lt;h2&gt;
  
  
  🔐 Persistent Context in Playwright
&lt;/h2&gt;

&lt;p&gt;This is the key API:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;browser_context = playwright.chromium.launch_persistent_context(
    user_data_dir="user-data",
    headless=True
)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That single folder is everything.&lt;/p&gt;

&lt;p&gt;If it exists, Playwright loads it.&lt;br&gt;
If not, you run in visible mode and authenticate.&lt;/p&gt;
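&lt;p&gt;That decision can even be automated. A minimal sketch (choose_headless is a hypothetical helper, not code from the repo): run headless only when the profile directory already exists, otherwise open a visible window for the one-time login.&lt;/p&gt;

```python
from pathlib import Path

def choose_headless(profile_dir: str = "user-data") -> bool:
    """Run headless only when a saved profile already exists.

    If the directory is missing, we want a visible window so the
    user can log in once and create the session.
    """
    return Path(profile_dir).exists()

# Usage with Playwright's persistent context (sync API):
# with sync_playwright() as p:
#     ctx = p.chromium.launch_persistent_context(
#         user_data_dir="user-data",
#         headless=choose_headless(),
#     )
```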

&lt;h2&gt;
  
  
  🧑‍💻 First Run: Create the Session
&lt;/h2&gt;

&lt;p&gt;In the project, the first launch happens with UI enabled so you can log in:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;browser_context = playwright.chromium.launch_persistent_context(
    user_data_dir="user-data",
    headless=False
)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You open Gemini, authenticate manually, and close the browser.&lt;/p&gt;

&lt;p&gt;From that moment:&lt;/p&gt;

&lt;p&gt;📁 user-data/ contains your session.&lt;/p&gt;

&lt;h2&gt;
  
  
  ♻️ All Next Runs: Fully Headless
&lt;/h2&gt;

&lt;p&gt;Subsequent executions simply reuse the same folder:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;browser_context = playwright.chromium.launch_persistent_context(
    user_data_dir="user-data",
    headless=True
)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You are already logged in.&lt;/p&gt;

&lt;p&gt;No forms.&lt;br&gt;
No credentials.&lt;br&gt;
No redirects.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚠️ What Happens When the Session Expires?
&lt;/h2&gt;

&lt;p&gt;Eventually cookies die.&lt;/p&gt;

&lt;p&gt;The project handles this operationally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;delete user-data/ (or rerun first_start.bat)&lt;/li&gt;
&lt;li&gt;log in again&lt;/li&gt;
&lt;li&gt;the profile is recreated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple, manual, and reliable.&lt;/p&gt;

&lt;p&gt;You can extend this with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;detecting a redirect to /login&lt;/li&gt;
&lt;li&gt;checking DOM markers&lt;/li&gt;
&lt;li&gt;auto-relogin logic&lt;/li&gt;
&lt;li&gt;alerting&lt;/li&gt;
&lt;/ul&gt;
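&lt;p&gt;The first of those is trivial to sketch: if a run starts and ends up on the login page, the session is gone. (session_expired and the /login marker are illustrative assumptions, not logic from the repo.)&lt;/p&gt;

```python
def session_expired(current_url: str, login_marker: str = "/login") -> bool:
    # After navigating, a redirect to the login page means the
    # saved profile's cookies have expired.
    return login_marker in current_url

# In the automation loop (page is a Playwright Page):
# page.goto(TARGET_URL)
# if session_expired(page.url):
#     raise RuntimeError("Session expired - rerun first_start.bat and log in")
```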

&lt;h2&gt;
  
  
  🛡️ Why Persistent Profiles Beat Cookie Dumps
&lt;/h2&gt;

&lt;p&gt;Using a persistent profile directory is stronger than just exporting cookies:&lt;/p&gt;

&lt;p&gt;✅ keeps IndexedDB tokens&lt;br&gt;
✅ survives browser restarts&lt;br&gt;
✅ mimics real user Chrome profile&lt;br&gt;
✅ less suspicious than scripted logins&lt;br&gt;
✅ perfect for long-running monitors&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 When You Should Use This Pattern
&lt;/h2&gt;

&lt;p&gt;This approach is ideal for:&lt;/p&gt;

&lt;p&gt;📊 price monitors&lt;br&gt;
🤖 automation bots&lt;br&gt;
🧠 AI web scrapers&lt;br&gt;
📈 background workers&lt;br&gt;
🔁 periodic collectors&lt;/p&gt;

&lt;p&gt;Anything that runs for weeks.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔗 Real Repository
&lt;/h2&gt;

&lt;p&gt;Full project:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/AmaLS367/parts_info_collector" rel="noopener noreferrer"&gt;https://github.com/AmaLS367/parts_info_collector&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧾 TL;DR
&lt;/h2&gt;

&lt;p&gt;✅ use launch_persistent_context&lt;br&gt;
✅ store profile in user-data/&lt;br&gt;
✅ login once manually&lt;br&gt;
✅ reuse forever&lt;br&gt;
✅ refresh only when expired&lt;/p&gt;

</description>
      <category>programming</category>
      <category>playwright</category>
    </item>
    <item>
      <title>🔍 Monitoring Bybit P2P USDT/RUB prices with Playwright network interception + Telegram alerts (Python)</title>
      <dc:creator>Ama</dc:creator>
      <pubDate>Sun, 01 Feb 2026 13:25:49 +0000</pubDate>
      <link>https://dev.to/amals367/monitoring-bybit-p2p-usdtrub-prices-with-playwright-network-interception-telegram-alerts-4n01</link>
      <guid>https://dev.to/amals367/monitoring-bybit-p2p-usdtrub-prices-with-playwright-network-interception-telegram-alerts-4n01</guid>
      <description>&lt;p&gt;If you have ever tried to “scrape” a modern website, you know the pain: the UI looks simple, but the data is loaded dynamically, and the DOM changes every two minutes 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚡ What this tool does
&lt;/h2&gt;

&lt;p&gt;You set a threshold like 71.50 RUB.&lt;br&gt;
The script checks the current best USDT/RUB buy offer and notifies you in Telegram if the price is below that level.&lt;/p&gt;

&lt;p&gt;Core flow:&lt;/p&gt;

&lt;p&gt;🧭 open the Bybit P2P page&lt;br&gt;
🕵️ listen to network responses&lt;br&gt;
🧾 parse JSON from the P2P API response&lt;br&gt;
📉 compare the best price with your threshold&lt;br&gt;
📩 send Telegram message&lt;/p&gt;

&lt;p&gt;No fragile selectors. No DOM gymnastics. Just data.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧰 Setup
&lt;/h2&gt;

&lt;p&gt;Install dependencies:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;pip install -r requirements.txt
playwright install
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Create .env:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;BOT_TOKEN=your_telegram_bot_token
CHAT_ID=your_chat_id
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Run:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;python main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then enter your threshold when the script asks for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 The key idea: intercept the JSON response
&lt;/h2&gt;

&lt;p&gt;The Bybit P2P UI is basically a shell. The real data comes from an internal endpoint.&lt;/p&gt;

&lt;p&gt;So the strategy is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;open the page&lt;/li&gt;
&lt;li&gt;subscribe to Playwright response events&lt;/li&gt;
&lt;li&gt;filter responses by URL (the internal API endpoint)&lt;/li&gt;
&lt;li&gt;call response.json()&lt;/li&gt;
&lt;li&gt;extract offers and prices&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is usually more stable than DOM scraping because you rely on the same payload the app uses internally.&lt;/p&gt;

&lt;h2&gt;
  
  
  📦 Why network interception beats DOM scraping
&lt;/h2&gt;

&lt;p&gt;DOM scraping often breaks because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;class names change&lt;/li&gt;
&lt;li&gt;elements are lazy loaded&lt;/li&gt;
&lt;li&gt;content is rendered differently by region or language&lt;/li&gt;
&lt;li&gt;the site adds random wrappers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Network interception is cleaner:&lt;/p&gt;

&lt;p&gt;✅ data structure is consistent&lt;br&gt;
✅ you get structured JSON&lt;br&gt;
✅ fewer moving parts&lt;br&gt;
✅ easier debugging&lt;/p&gt;

&lt;p&gt;If the endpoint changes, you update one filter. Not 20 selectors.&lt;/p&gt;

&lt;h2&gt;
  
  
  📩 Telegram alerts in 10 seconds
&lt;/h2&gt;

&lt;p&gt;Telegram sending is just a POST request to:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://api.telegram.org/bot" rel="noopener noreferrer"&gt;https://api.telegram.org/bot&lt;/a&gt;/sendMessage&lt;/p&gt;

&lt;p&gt;Payload:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;chat_id&lt;/li&gt;
&lt;li&gt;text&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add a timeout so your monitor loop never hangs.&lt;/p&gt;
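&lt;p&gt;A stdlib-only sketch of that call (using urllib rather than any particular HTTP library; send_alert and build_payload are illustrative names, not from the repo):&lt;/p&gt;

```python
import json
import urllib.request

def build_payload(chat_id: str, text: str) -> dict:
    # The two fields the sendMessage method requires.
    return {"chat_id": chat_id, "text": text}

def send_alert(token: str, chat_id: str, text: str, timeout: float = 10.0) -> bool:
    # POST to the Bot API; the timeout keeps the monitor loop from hanging.
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{token}/sendMessage",
        data=json.dumps(build_payload(chat_id, text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```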

&lt;p&gt;Result: when the best price crosses your threshold, you get a nice message instantly 🚀&lt;/p&gt;

&lt;p&gt;If you want to evolve this tool, here are easy upgrades:&lt;/p&gt;

&lt;p&gt;🧽 deduplicate notifications so you do not spam yourself&lt;br&gt;
🧷 move threshold into .env or CLI args&lt;br&gt;
🧪 add a --once mode to run a single check&lt;br&gt;
📊 store history to a simple CSV for analysis&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Wrap up
&lt;/h2&gt;

&lt;p&gt;If you are scraping a dynamic website and the data comes from JSON calls, intercepting the network layer is often the most robust solution.&lt;/p&gt;

&lt;p&gt;This project is a tiny example of that approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Playwright for browser and network events&lt;/li&gt;
&lt;li&gt;JSON parsing for structured data&lt;/li&gt;
&lt;li&gt;Telegram for instant notifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repo again: &lt;a href="https://github.com/AmaLS367/bybit-p2p-price-monitor" rel="noopener noreferrer"&gt;https://github.com/AmaLS367/bybit-p2p-price-monitor&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have ideas for improvements or want to contribute, PRs are welcome 💪🔥&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
