<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexis</title>
    <description>The latest articles on DEV Community by Alexis (@alexis_gilgonzales_c5d9a).</description>
    <link>https://dev.to/alexis_gilgonzales_c5d9a</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3729852%2F5895e287-575f-4900-a639-324da093d61b.JPG</url>
      <title>DEV Community: Alexis</title>
      <link>https://dev.to/alexis_gilgonzales_c5d9a</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexis_gilgonzales_c5d9a"/>
    <language>en</language>
    <item>
      <title>The Lobster’s Shell: 5 Rules for Not Turning Your AI Assistant into a Botnet</title>
      <dc:creator>Alexis</dc:creator>
      <pubDate>Sun, 25 Jan 2026 09:50:53 +0000</pubDate>
      <link>https://dev.to/alexis_gilgonzales_c5d9a/the-lobsters-shell-5-rules-for-not-turning-your-ai-assistant-into-a-botnet-1n7i</link>
      <guid>https://dev.to/alexis_gilgonzales_c5d9a/the-lobsters-shell-5-rules-for-not-turning-your-ai-assistant-into-a-botnet-1n7i</guid>
<description>&lt;p&gt;So, you’ve decided to install &lt;a href="https://github.com/clawdbot/clawdbot" rel="noopener noreferrer"&gt;Clawdbot&lt;/a&gt;, the "lobster way" of life. You’ve got a personal AI assistant that can browse the web, control your Mac, chat on Signal, and probably knows your Spotify playlist better than you do. It’s local-first, fast, and incredibly powerful.&lt;/p&gt;

&lt;p&gt;I looked under the hood. The architecture is brilliant, but let’s be real: you’re basically giving a highly persuasive, occasionally hallucinating LLM a set of keys to your digital kingdom.&lt;/p&gt;

&lt;p&gt;If you don't want your assistant to accidentally leak your .env files to a random Telegram bot or execute an rm -rf / because a prompt-injection attack told it to "clean the room", follow these five essential recommendations.&lt;/p&gt;

&lt;h2&gt;1. Respect the pairing policy (Don’t Open the Door to Strangers)&lt;/h2&gt;

&lt;p&gt;Clawdbot has a dmPolicy="pairing" default for a reason. When a random account DMs your bot on Telegram or WhatsApp, it asks for a code. Do not change this to "open" unless you want to invite the entire internet into your shell.&lt;/p&gt;

&lt;p&gt;Treat every inbound DM as untrusted input. If you’re using the "open" policy for a public-facing bot, ensure your agent’s workspace is strictly isolated. A lobster in a glass box is a safe lobster.&lt;/p&gt;

&lt;h2&gt;2. Guard the Gateway (Tailscale is your BFF)&lt;/h2&gt;

&lt;p&gt;The Gateway is your control plane. Clawdbot makes it tempting to use Tailscale Funnel to access your dashboard from anywhere. While Funnel is cool, it’s public.&lt;/p&gt;

&lt;p&gt;Stick to tailscale serve (internal to your tailnet) whenever possible. If you must use Funnel, for the love of all things holy, use gateway.auth.mode: "password". An unauthenticated AI gateway is just an RCEaaS ("Remote Code Execution as a Service") endpoint.&lt;/p&gt;
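&lt;p&gt;In practice the difference is one word on the CLI. A sketch (the port number is a stand-in for wherever your Gateway listens, and exact subcommand syntax varies by Tailscale version):&lt;/p&gt;

```shell
# Private: proxy a local port to devices on your tailnet only.
tailscale serve 8080

# Public: Funnel exposes the same service to the entire internet.
# Only do this with gateway auth enabled.
tailscale funnel 8080

# Audit what is currently exposed, and to whom.
tailscale serve status
```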

&lt;h2&gt;3. Sandboxing: because "LLM-safe" is an oxymoron&lt;/h2&gt;

&lt;p&gt;Clawdbot supports a Docker-based sandbox (Dockerfile.sandbox). Use it. If you let your agent run scripts or "research" the web on your bare-metal host, you’re one clever prompt away from a disaster.&lt;/p&gt;

&lt;p&gt;Configure your agent to run tools inside the container. If the agent decides to download and run a "cool optimization script" it found on a shady forum, it only kills a disposable container, not your precious Mac.&lt;/p&gt;
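&lt;p&gt;What "run tools inside the container" can look like with stock Docker flags (the image name and paths below are made up; Clawdbot's own Dockerfile.sandbox may differ):&lt;/p&gt;

```shell
# Defense-in-depth flags for a disposable agent sandbox:
#   --network none      : no outbound network from tool runs
#   --read-only         : immutable root filesystem
#   --pids-limit        : no fork bombs
docker run --rm --network none --read-only \
  --memory 512m --cpus 1 --pids-limit 128 \
  --security-opt no-new-privileges \
  -v "$PWD/agent-work:/work" \
  my-sandbox-image /work/task.sh
```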

&lt;h2&gt;4. Audit your "Skills" (and their permissions)&lt;/h2&gt;

&lt;p&gt;The skills platform is what makes Clawdbot a powerhouse: Gmail access, browser control, system notifications. But every skill is a new attack vector.&lt;/p&gt;

&lt;p&gt;Periodically run clawdbot doctor. It’s not just for troubleshooting; it’s your security auditor. If a skill doesn't need "System Run" permissions to tell you the weather, don't give it any. Privilege escalation is much harder when there are no privileges to escalate.&lt;/p&gt;

&lt;h2&gt;5. Rotate your Secrets (.env is &lt;em&gt;not&lt;/em&gt; a Vault)&lt;/h2&gt;

&lt;p&gt;Clawdbot handles OAuth for Anthropic, OpenAI, and more. It even has a detect-secrets scan in the repo. Follow that lead.&lt;/p&gt;

&lt;p&gt;Don't hardcode API keys in plain text files that you might accidentally sync to a public gist. Use the managed clawdbot onboard flow to handle credentials. If you suspect your bot has been "convinced" to output its environment variables, rotate those keys immediately.&lt;/p&gt;
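&lt;p&gt;A minimal hygiene routine using the same detect-secrets tool the repo already ships a scan for (the .env path assumes your secrets live next to your config):&lt;/p&gt;

```shell
# Baseline the repo, then audit whatever the scanner flags:
pip install detect-secrets
detect-secrets scan > .secrets.baseline
detect-secrets audit .secrets.baseline

# Keep .env out of git and away from other local users:
chmod 600 .env
echo ".env" >> .gitignore
```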

&lt;p&gt;Final thought: Clawdbot is an incredible piece of engineering. It brings the power of an agentic future to your local machine today. Just remember: with great power comes the absolute certainty that someone, somewhere, will try to prompt-inject your lobster.&lt;/p&gt;

&lt;p&gt;Stay secure, stay hungry, and keep snapping those claws. 🦞&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
<title>OpenAI's testament to PostgreSQL reliability</title>
      <dc:creator>Alexis</dc:creator>
      <pubDate>Sat, 24 Jan 2026 11:54:21 +0000</pubDate>
      <link>https://dev.to/alexis_gilgonzales_c5d9a/openais-testament-to-postgresql-reliability-12o2</link>
      <guid>https://dev.to/alexis_gilgonzales_c5d9a/openais-testament-to-postgresql-reliability-12o2</guid>
      <description>&lt;p&gt;As someone who has seen the "Postgres vs The World" wars more times than I’ve seen successful schema migrations, I find &lt;a href="https://openai.com/index/scaling-postgresql/" rel="noopener noreferrer"&gt;OpenAI's engineering blog on scaling PostgreSQL&lt;/a&gt; to be a fascinating study in Brute Force meets Elegance.&lt;/p&gt;

&lt;p&gt;It is truly heartwarming to see a trillion-dollar AI powerhouse wrestling with the same VACUUM issues and connection storms that haunt a weekend Shopify plugin. Here's my analysis of their journey, contrasted with the "grass is greener" alternatives of the RDBMS world.&lt;/p&gt;

&lt;h2&gt;1. Cracks in the Initial Design ("Postgres is slow" phase)&lt;/h2&gt;

&lt;p&gt;The Challenge: OpenAI hit the classic wall of write amplification. In Postgres, every UPDATE is effectively an INSERT of a new row version (MVCC), which triggers index updates and bloat.&lt;/p&gt;

&lt;p&gt;The OpenAI Fix: Scaling up (bigger instances) and scaling out (more read replicas), then eventually surrendering and moving write-heavy shards to Azure Cosmos DB.&lt;/p&gt;

&lt;p&gt;How other engines compare: &lt;/p&gt;

&lt;p&gt;MySQL/MariaDB: These would have smirked. MySQL’s InnoDB uses an "Update-in-Place" mechanism with an Undo Log. It doesn't copy the whole row for every change, making it inherently more efficient for write-heavy workloads. OpenAI might have avoided moving to Cosmos DB so early if they’d started with a storage engine that doesn't treat an update like a funeral for the old row.&lt;/p&gt;
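&lt;p&gt;To be fair, Postgres does ship knobs for this before you reach for another engine. A sketch (the table name is hypothetical): lowering fillfactor leaves free space in each page so updates can stay "HOT" (heap-only tuples) and skip index maintenance entirely:&lt;/p&gt;

```shell
# Leave 20% of each page free so updated rows can stay on the same
# page (HOT updates bypass index writes for non-indexed columns):
psql -c "ALTER TABLE chat_messages SET (fillfactor = 80);"
psql -c "VACUUM (ANALYZE) chat_messages;"
```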

&lt;h2&gt;2. Reducing Load on the Primary ("One Writer to Rule Them All")&lt;/h2&gt;

&lt;p&gt;The Challenge: Relying on a single primary for writes for 800 million users is… bold. One "write storm" from a new feature, and the whole house of cards wobbles.&lt;/p&gt;

&lt;p&gt;The OpenAI Fix: Moving sharded workloads away and aggressively optimizing application-side writes.&lt;/p&gt;

&lt;p&gt;How other engines compare:&lt;/p&gt;

&lt;p&gt;TiDB: This (and other) NewSQL engines were built specifically to avoid this single-writer existential dread. They offer native horizontal write scaling. While OpenAI is busy manually sharding to Cosmos DB (a completely different API), a TiDB cluster would have just asked for more nodes and kept the SQL interface intact. Just ask Atlassian, Plaid, Bolt, Pinterest, and many others.&lt;/p&gt;

&lt;h2&gt;3. Query Optimization &amp;amp; ORM Sins&lt;/h2&gt;

&lt;p&gt;The Challenge: A 12-table join generated by an ORM nearly brought down ChatGPT.&lt;/p&gt;

&lt;p&gt;The OpenAI Fix: Moving join logic to the application layer and hunting down "idle in transaction" sessions.&lt;/p&gt;
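&lt;p&gt;The "idle in transaction" hunt, for reference, is a one-liner against pg_stat_activity, and there's a server setting that does the hunting for you:&lt;/p&gt;

```shell
# Find sessions holding a transaction open while doing nothing:
psql -c "SELECT pid, usename, now() - xact_start AS xact_age
         FROM pg_stat_activity
         WHERE state = 'idle in transaction'
         ORDER BY xact_age DESC;"

# Or let the server kill them automatically after 60 seconds:
psql -c "ALTER SYSTEM SET idle_in_transaction_session_timeout = '60s';"
psql -c "SELECT pg_reload_conf();"
```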

&lt;p&gt;How other engines compare:&lt;/p&gt;

&lt;p&gt;PostgreSQL actually has one of the best query planners in existence. If PostgreSQL couldn't handle that 12-table join, MySQL likely would have just timed out and retired. However, MariaDB has some interesting hash join optimizations that can occasionally handle ORM-induced trauma better than standard nested loops.&lt;/p&gt;

&lt;h2&gt;4. Workload Isolation ("Noisy Neighbor")&lt;/h2&gt;

&lt;p&gt;The Challenge: A new feature launch can starve the API of resources.&lt;/p&gt;

&lt;p&gt;The OpenAI Fix: Physically splitting high-priority and low-priority traffic onto different instances.&lt;/p&gt;

&lt;p&gt;How other engines compare:&lt;/p&gt;

&lt;p&gt;Oracle / SQL Server (The "Expensive" Engines): While we love open source, Oracle’s Resource Manager and SQL Server's Resource Governor are decades ahead here, allowing fine-grained CPU/IO caps within a single instance. In the open-source world, OpenAI's "separate instance" approach is the standard solution, though PostgreSQL's lack of native multi-tenancy resource controls makes this inevitable.&lt;/p&gt;

&lt;p&gt;There are a few open-source alternatives to resource governing (other than OpenAI's proxy layer), though:&lt;/p&gt;

&lt;h3&gt;Linux Control Groups (cgroups)&lt;/h3&gt;

&lt;p&gt;This is the most direct open-source equivalent, but it operates at the OS level rather than the SQL level.&lt;br&gt;
You create cgroups (CPU shares, memory limits, I/O bandwidth) and assign specific database processes to them.&lt;br&gt;
This works best for PostgreSQL because Postgres is process-based (one PID per connection). You can move a PID into a restricted cgroup.&lt;br&gt;
However, it’s a manual nightmare to automate at scale. You end up writing custom watcher scripts to catch rogue queries and shove them into the "penalty box" cgroup.&lt;/p&gt;
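&lt;p&gt;For the brave, the "penalty box" looks roughly like this with cgroup v2 (assumes cgroup2 is mounted at /sys/fs/cgroup; the PID would come from pg_backend_pid()):&lt;/p&gt;

```shell
sudo mkdir -p /sys/fs/cgroup/pg_penalty
# Cap the group at 50% of one CPU: 50ms of runtime per 100ms period.
echo "50000 100000" | sudo tee /sys/fs/cgroup/pg_penalty/cpu.max
# Move the rogue backend process into the box.
echo "$ROGUE_BACKEND_PID" | sudo tee /sys/fs/cgroup/pg_penalty/cgroup.procs
```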

&lt;h3&gt;MySQL 8.4 LTS &amp;amp; 9.x Resource Groups ("thread-pinning")&lt;/h3&gt;

&lt;p&gt;MySQL (the free one) actually introduced a native Resource Group feature to compete with the big boys.&lt;br&gt;
You can define resource groups and assign them to specific threads. You can even use SQL hints like SELECT /*+ RESOURCE_GROUP(Batch_Group) */ ... to force a query into a restricted lane.&lt;br&gt;
You can also assign nice-style priorities to threads (0 to 19, where lower numbers get more CPU face time under contention).&lt;br&gt;
Alas, it currently only manages CPU/vCPU affinity. It doesn’t yet have the sophisticated IOPS or memory throttling found in SQL Server’s Resource Governor or Oracle's Resource Manager.&lt;/p&gt;
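&lt;p&gt;For the record, the whole feature fits in two statements (the table name is made up, and vCPU numbering depends on your hardware):&lt;/p&gt;

```shell
# Pin batch work to vCPUs 2-3 at the lowest user priority (nice 19):
mysql -e "CREATE RESOURCE GROUP Batch_Group TYPE = USER VCPU = 2-3 THREAD_PRIORITY = 19;"
# Route an individual query into the restricted lane via hint:
mysql -e "SELECT /*+ RESOURCE_GROUP(Batch_Group) */ COUNT(*) FROM big_table;"
```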

&lt;h3&gt;MariaDB 11.x Thread-Pooling&lt;/h3&gt;

&lt;p&gt;MariaDB has historically rolled its eyes at MySQL’s Resource Groups. Instead, they’ve perfected the Thread Pool.&lt;/p&gt;

&lt;p&gt;Priority Queuing: Instead of pinning threads to CPUs, MariaDB’s Thread Pool (especially in the latest 11.x LTS releases) uses a sophisticated queuing system. You can set thread_pool_priority='high' for specific connections.&lt;/p&gt;

&lt;p&gt;The "LIFO" Trick: To prevent "clogging," MariaDB can prioritize existing transactions over new ones, ensuring that work already in progress finishes faster and releases resources—a "clean up your own mess" philosophy.&lt;/p&gt;

&lt;p&gt;Resource Limits (the old school way): MariaDB still leans heavily on MAX_USER_CONNECTIONS, MAX_QUERIES_PER_HOUR, and MAX_UPDATES_PER_HOUR. It’s less "dynamic throttling" and more "hard ceiling". It still lacks a "CREATE RESOURCE GROUP" similar to MySQL's.&lt;/p&gt;
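&lt;p&gt;Those hard ceilings look like this in practice (the account name and limits are illustrative):&lt;/p&gt;

```shell
# Per-account ceilings, enforced by the server:
mysql -e "CREATE USER 'batch'@'%' IDENTIFIED BY 'change_me'
          WITH MAX_QUERIES_PER_HOUR 500 MAX_USER_CONNECTIONS 10;"
# Per-connection thread pool priority:
mysql -e "SET SESSION thread_pool_priority = 'high';"
```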

&lt;h3&gt;pg_resgroup (Greenplum / CloudNativePG)&lt;/h3&gt;

&lt;p&gt;While standard Postgres doesn't have native resource groups, its specialized cousin Greenplum (open-source) does.&lt;/p&gt;

&lt;p&gt;Some extensions and operators for Kubernetes (like CloudNativePG) are trying to bridge this gap by using Kubernetes "Resource Quotas" to mimic governance.&lt;br&gt;
In standard Postgres, the closest you get without a specialized fork is setting statement_timeout or using work_mem limits per user, which is a blunt instrument compared to a governor.&lt;/p&gt;
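&lt;p&gt;Blunt, but here is what those instruments look like applied per role (the role name is hypothetical):&lt;/p&gt;

```shell
psql -c "ALTER ROLE batch_user SET statement_timeout = '30s';"
psql -c "ALTER ROLE batch_user SET work_mem = '32MB';"
psql -c "ALTER ROLE batch_user CONNECTION LIMIT 10;"
```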

&lt;h2&gt;5. Connection Pooling (The PgBouncer Tax)&lt;/h2&gt;

&lt;p&gt;The Challenge: Postgres creates a process per connection. 5,000 connections and the kernel starts sweating.&lt;/p&gt;

&lt;p&gt;The OpenAI Fix: Deploying fleets of PgBouncer in Kubernetes.&lt;/p&gt;
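&lt;p&gt;A minimal pgbouncer.ini in the same spirit: transaction pooling lets thousands of client connections share a handful of server processes (host and database names are placeholders):&lt;/p&gt;

```shell
# 5,000 clients multiplexed onto 20 Postgres backends:
printf '%s\n' \
  '[databases]' \
  'app = host=127.0.0.1 port=5432 dbname=app' \
  '' \
  '[pgbouncer]' \
  'listen_addr = 127.0.0.1' \
  'listen_port = 6432' \
  'pool_mode = transaction' \
  'default_pool_size = 20' \
  'max_client_conn = 5000' | tee pgbouncer.ini
```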

&lt;p&gt;How other engines compare:&lt;/p&gt;

&lt;p&gt;MySQL: Handles connections as threads, which is lighter, but still struggles at massive scale.&lt;/p&gt;

&lt;p&gt;MariaDB: Specifically implemented a Thread Pool plugin years ago to solve exactly what OpenAI is doing with PgBouncer. Using MariaDB, they might have avoided the complexity of managing a middle-tier proxy layer entirely.&lt;/p&gt;

&lt;h2&gt;6. Scaling Read Replicas (The WAL Avalanche)&lt;/h2&gt;

&lt;p&gt;The Challenge: The primary has to ship Write Ahead Logs (WAL) to 50+ replicas, eating its own bandwidth.&lt;/p&gt;

&lt;p&gt;The OpenAI Fix: Cascading replication (replicas of replicas).&lt;/p&gt;

&lt;p&gt;In this model, the Primary sends logs to level 1 replicas, which in turn act as masters for level 2 replicas.&lt;/p&gt;

&lt;p&gt;This strategy protects the Primary's bandwidth. If you have 50 replicas, the Primary only talks to 3 or 4; those 4 handle the traffic for the rest.&lt;br&gt;
However, every hop adds latency. By the time data reaches a level 2 replica, it might be hundreds of milliseconds (or seconds) behind the Primary. This makes "Read-Your-Own-Writes" nearly impossible.&lt;br&gt;
Worse, if a level 1 replica dies, all its children become "orphans" and stop updating. You need a very smart automation layer (like Patroni) to re-home those orphans.&lt;/p&gt;
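&lt;p&gt;Mechanically, a level-1 replica is just a standby that is itself allowed to ship WAL. A sketch of its settings (hostnames are placeholders; in real life these lines live in the standby's data directory):&lt;/p&gt;

```shell
# Level-1 replica: follows the primary, and can feed level-2 replicas
# because max_wal_senders is nonzero.
printf '%s\n' \
  "primary_conninfo = 'host=primary.internal port=5432 user=replicator'" \
  'max_wal_senders = 10' \
  'hot_standby = on' | tee -a postgresql.auto.conf
touch standby.signal
# A level-2 replica is identical, except its primary_conninfo points
# at the level-1 host instead of the true primary.
```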

&lt;p&gt;Other options:&lt;/p&gt;

&lt;h3&gt;Amazon Aurora PostgreSQL&lt;/h3&gt;

&lt;p&gt;Aurora laughed at the "Cascading" problem and deleted it entirely.&lt;br&gt;
Here, replicas do not have their own copy of the data. They all look at a single, massive, distributed storage volume.&lt;/p&gt;

&lt;p&gt;Since there is no "Log Shipping" between instances, there is zero replication lag in the traditional sense. You can have 15 replicas, and they all see the data almost instantly (typically &amp;lt;10ms).&lt;br&gt;
You don't need "Levels." You just add more readers to the cluster endpoint.&lt;br&gt;
So, you gain incredible scaling but lose the ability to move that database back to a standard on-prem server without a full export/import.&lt;/p&gt;

&lt;h3&gt;Google Cloud AlloyDB&lt;/h3&gt;

&lt;p&gt;AlloyDB is Google's answer to Aurora, but it adds an "Intelligent Cache" layer.&lt;/p&gt;

&lt;p&gt;AlloyDB uses a specialized "Log Processing Layer" that sits between the database and storage. It pre-digests the WALs so the replicas don't have to work as hard. Here's how:&lt;/p&gt;

&lt;p&gt;AlloyDB splits the database into a Compute Layer (where SQL lives) and an Intelligent Storage Layer. The "pre-digesting" happens in the middle, handled by the Log Processing Service (LPS).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: The Primary ships "thin" logs: When you update a row, the primary node only sends the WAL record to the storage layer. It doesn't worry about updating data blocks on disk.&lt;/li&gt;
&lt;li&gt;Step 2: The LPS takes over: The LPS is a fleet of regional workers that receive these WAL records. Their entire job is to materialize the logs. They take the stream of changes and apply them to the blocks in the distributed storage system &lt;em&gt;asynchronously&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Step 3: Replicas stay "fresh": Because the storage itself is being updated by the LPS, the read replicas don't have to replay anything. When a replica needs a data block, it simply fetches the already-updated block from the storage layer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is designed specifically for "Analytical-Lite" queries on read replicas, often outperforming standard PostgreSQL by 4x–5x for complex scans.&lt;br&gt;
But, like Aurora, it’s a "black box." You get the performance, but you pay the "Cloud Premium."&lt;/p&gt;

&lt;h2&gt;7. Schema Management&lt;/h2&gt;

&lt;p&gt;The Challenge: Table rewrites for column changes.&lt;/p&gt;

&lt;p&gt;The OpenAI Fix: A strict 5-second timeout and forbidding any change that triggers a rewrite.&lt;/p&gt;

&lt;p&gt;How other engines compare:&lt;/p&gt;

&lt;p&gt;MySQL 8.0+: Introduced "Instant DDL," allowing users to add columns in milliseconds regardless of table size. &lt;/p&gt;

&lt;p&gt;MariaDB introduced Instant Add Column in version 10.3 (back in 2018) and has been expanding it ever since. &lt;br&gt;
In MariaDB, adding a column at the end of a table, changing a default value, or renaming a column is a metadata-only operation. It doesn't matter if your table has 10 rows or 10 billion; the change happens in milliseconds because the engine doesn't touch the existing data files.&lt;br&gt;
By MariaDB 10.6 and 11.x, even more operations became "Instant," including dropping columns (which is handled by logically hiding them rather than physically deleting them immediately).&lt;/p&gt;

&lt;p&gt;Postgres is getting better at this (it no longer rewrites for many ALTER TABLE ops), but OpenAI’s trauma suggests they are still haunted by the ghosts of Postgres versions past.&lt;/p&gt;
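&lt;p&gt;The same "add a column" migration, phrased defensively on each engine (table and column names are made up):&lt;/p&gt;

```shell
# MySQL 8.0.12+ / MariaDB 10.3+: metadata-only, milliseconds at any size.
mysql -e "ALTER TABLE events ADD COLUMN source VARCHAR(32), ALGORITHM=INSTANT;"
# Modern Postgres: also metadata-only, but cap lock waits so a busy
# table can't wedge the migration (OpenAI's 5-second rule, in SQL):
psql -c "SET lock_timeout = '5s'; ALTER TABLE events ADD COLUMN source text;"
```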

&lt;h2&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;OpenAI's achievement is a testament to PostgreSQL's reliability. They have successfully pushed a general-purpose, single-primary database to do things it was never meant to do.&lt;/p&gt;

&lt;p&gt;However, looking at their "Road Ahead," they are slowly reinventing Vitess or TiDB one architectural SEV at a time. They chose the "Devil they knew" (PostgreSQL) and are paying the "Engineering Tax" to keep it alive. It’s a masterclass in stability over novelty, even if that stability requires 50 replicas and a separate NoSQL database just to handle the overflow.&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>mysql</category>
      <category>mariadb</category>
    </item>
  </channel>
</rss>
