<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aryan </title>
    <description>The latest articles on DEV Community by Aryan  (@aryan21888).</description>
    <link>https://dev.to/aryan21888</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3858696%2Fa4febb2d-9d46-489f-b92d-a4071e73ca18.png</url>
      <title>DEV Community: Aryan </title>
      <link>https://dev.to/aryan21888</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aryan21888"/>
    <language>en</language>
    <item>
      <title>I Gave My Codebase an AI Intern. Here's What Actually Happened.</title>
      <dc:creator>Aryan </dc:creator>
      <pubDate>Wed, 08 Apr 2026 16:00:21 +0000</pubDate>
      <link>https://dev.to/aryan21888/i-gave-my-codebase-an-ai-intern-heres-what-actually-happened-5ana</link>
      <guid>https://dev.to/aryan21888/i-gave-my-codebase-an-ai-intern-heres-what-actually-happened-5ana</guid>
      <description>&lt;p&gt;The first time I used Claude to help me write a FastAPI endpoint, I thought — okay, this is it. This is the thing that changes everything.&lt;br&gt;
And it did. Just not in the way I expected.&lt;br&gt;
I'm a backend engineer. I build APIs, design schemas, think about concurrency, deploy things to EC2 and hope they don't die at 2am. I use Claude and Cursor every day now. Genuinely can't imagine going back.&lt;br&gt;
But somewhere in the last few months, I quietly picked up a second job nobody put in my offer letter.&lt;br&gt;
AI output reviewer.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Drunk Intern Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here's the most accurate description of working with AI tools I've come across recently — it's like being handed an incredibly fast, highly enthusiastic, slightly drunk intern.&lt;br&gt;
They ship fast. Like, embarrassingly fast. You give them a task and they're back in 3 seconds with something that looks completely reasonable.&lt;br&gt;
And that's exactly the problem.&lt;br&gt;
Because "looks completely reasonable" and "is actually correct" are two very different things. And the intern can't tell the difference. So now you have to.&lt;br&gt;
I ran into this headfirst while building a seat-locking system for a real-time booking platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What I Was Building&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The platform was Uptown — a venue and event booking product. The core feature: users browse available seats, select one, complete payment.&lt;br&gt;
Simple enough until you think about what happens when two users try to book the same seat at the same time.&lt;br&gt;
The flow needed a locking mechanism. When a user selects a seat, you temporarily lock it — say for 10 minutes — so nobody else can grab it while they're in the payment flow. Lock expires? Seat goes back to available.&lt;br&gt;
Classic distributed systems problem. I'd read about it. Never actually had to build it under real pressure before.&lt;br&gt;
So I did what any reasonable engineer does in 2025. I opened Claude and explained the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Clean Implementation That Wasn't&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Claude came back fast. Clean endpoint, sensible schema, even handled the lock expiry with a background task. Looked great. Genuinely passed the vibe check.&lt;br&gt;
Missed the race condition entirely.&lt;br&gt;
Here's what happens at scale. Two users open the booking page. Same seat. Both hit the lock endpoint within milliseconds of each other.&lt;br&gt;
Both requests check availability at nearly the same time. Both see the seat as available. Both proceed. Both write a lock.&lt;br&gt;
Two users. One seat. Both think they won.&lt;br&gt;
The AI read the availability and wrote the lock as two separate operations. No atomicity. No database-level guarantee that only one request wins. It optimized for "looks correct" — not "survives production."&lt;br&gt;
The fix required rethinking the operation entirely — making the check and the write happen in a single atomic database call so only one request could ever win. Simple in hindsight. Not obvious if you haven't thought about what milliseconds actually mean in a live system.&lt;br&gt;
The AI didn't think about milliseconds. It's never had to.&lt;/p&gt;
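&lt;p&gt;A minimal sketch of both versions, assuming a hypothetical &lt;code&gt;seat_locks&lt;/code&gt; table (SQLite standing in for Postgres, one seat, not the platform's real schema):&lt;/p&gt;

```python
import sqlite3
import time

# Hypothetical seat_locks table; SQLite standing in for Postgres.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE seat_locks (seat_id INTEGER PRIMARY KEY,"
    " locked_by TEXT, locked_until REAL)"
)
conn.execute("INSERT INTO seat_locks VALUES (1, NULL, NULL)")

LOCK_SECONDS = 600  # 10-minute hold while the user pays

def is_free(now: float) -> bool:
    until = conn.execute(
        "SELECT locked_until FROM seat_locks WHERE seat_id = 1"
    ).fetchone()[0]
    return until is None or until < now

def write_lock(user: str, now: float) -> None:
    conn.execute(
        "UPDATE seat_locks SET locked_by = ?, locked_until = ? WHERE seat_id = 1",
        (user, now + LOCK_SECONDS),
    )

now = time.time()

# The naive interleaving: both requests check before either writes.
a_sees_free = is_free(now)   # request A: seat looks available
b_sees_free = is_free(now)   # request B: seat still looks available
write_lock("alice", now)     # A writes a lock...
write_lock("bob", now)       # ...and B overwrites it. Two "winners".

# Reset, then do it atomically: the availability check moves into the
# WHERE clause, so the database decides who wins, in one statement.
conn.execute("UPDATE seat_locks SET locked_by = NULL, locked_until = NULL")

def atomic_lock(user: str, now: float) -> bool:
    cur = conn.execute(
        "UPDATE seat_locks SET locked_by = ?, locked_until = ? "
        "WHERE seat_id = 1 AND (locked_until IS NULL OR locked_until < ?)",
        (user, now + LOCK_SECONDS, now),
    )
    return cur.rowcount == 1  # exactly one caller matches the WHERE

first = atomic_lock("alice", now)   # wins the seat
second = atomic_lock("bob", now)    # correctly rejected
```

&lt;p&gt;In Postgres the same idea works as a single &lt;code&gt;UPDATE ... WHERE&lt;/code&gt; (optionally with &lt;code&gt;RETURNING&lt;/code&gt;), or with &lt;code&gt;SELECT ... FOR UPDATE&lt;/code&gt; inside a transaction. The point is that the check and the write are one operation the database serializes.&lt;/p&gt;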

&lt;h2&gt;
  
  
  &lt;strong&gt;The Quiet Problem Nobody Talks About&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Race conditions were the dramatic failure. But there was a slower, quieter one that crept up alongside it.&lt;br&gt;
Database bloat.&lt;br&gt;
Every seat selection created a lock record. Expired locks accumulated silently. Nobody cleaned them up. The table kept growing. Availability queries got slower. Nothing broke immediately — it just degraded, the way things do in production when nobody's watching closely.&lt;br&gt;
AI didn't flag this either. It answered the question I asked — how do I lock a seat — not the question I should have also asked — what happens to this data six months from now.&lt;br&gt;
That's not a complaint. That's just the limit of the tool. It lives in the moment of the prompt. Systems live in time.&lt;/p&gt;
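&lt;p&gt;The fix is boring, which is exactly why it gets skipped: something has to own the expired rows. A sketch of a periodic sweep, again against a hypothetical &lt;code&gt;seat_locks&lt;/code&gt; table with SQLite standing in for Postgres:&lt;/p&gt;

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE seat_locks (seat_id INTEGER PRIMARY KEY,"
    " locked_by TEXT, locked_until REAL)"
)

now = time.time()
# Two lapsed holds and one that's still live.
conn.executemany(
    "INSERT INTO seat_locks VALUES (?, ?, ?)",
    [(1, "alice", now - 3600), (2, "bob", now - 60), (3, "cara", now + 600)],
)

def sweep_expired(now: float) -> int:
    """Delete lock rows whose hold has lapsed, so the table can't
    grow without bound and availability queries stay fast."""
    cur = conn.execute("DELETE FROM seat_locks WHERE locked_until < ?", (now,))
    return cur.rowcount

removed = sweep_expired(now)
remaining = conn.execute("SELECT COUNT(*) FROM seat_locks").fetchone()[0]
```

&lt;p&gt;In production this would run from something like pg_cron or a scheduled worker rather than inline. The mechanism barely matters; what matters is that the data's lifecycle has an owner.&lt;/p&gt;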

&lt;h2&gt;
  
  
  &lt;strong&gt;So Where Does This Leave Us&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I'm not writing this to dunk on AI tools. They've made me genuinely faster in ways that matter.&lt;br&gt;
Boilerplate? Gone. Getting unstuck on syntax? Seconds. Exploring design options? Way faster than it used to be.&lt;br&gt;
But there's a layer underneath all of that where the tools consistently fall short. Not because they're bad. Because that layer requires something they don't have — context about your specific system, judgment built from watching things break, and the experience of being responsible for something when it went wrong at 2am.&lt;br&gt;
System design lives there. Production intuition lives there. The decision of which correct-looking option is actually right for your constraints — that lives there too.&lt;br&gt;
AI gave me a fast, confident co-pilot. I still have to know where we're going.&lt;br&gt;
The engineers who'll struggle aren't the ones AI is supposedly replacing. They're the ones who leaned on the tool before they built the judgment underneath it.&lt;br&gt;
Still a backend engineer. Just with a weird new coworker and a much stronger opinion about atomic database operations.&lt;/p&gt;

&lt;p&gt;Built with FastAPI, PostgreSQL, and one slightly drunk AI intern.&lt;/p&gt;

</description>
      <category>python</category>
      <category>fastapi</category>
      <category>claude</category>
      <category>ai</category>
    </item>
    <item>
      <title>I Automated a Freelance Data Collection Gig in Under an Hour Using OpenClaw</title>
      <dc:creator>Aryan </dc:creator>
      <pubDate>Fri, 03 Apr 2026 04:34:18 +0000</pubDate>
      <link>https://dev.to/aryan21888/i-automated-a-freelance-data-collection-gig-in-under-an-hour-using-openclaw-23ho</link>
      <guid>https://dev.to/aryan21888/i-automated-a-freelance-data-collection-gig-in-under-an-hour-using-openclaw-23ho</guid>
      <description>&lt;p&gt;I had a freelance gig. Collect water management system data for a bunch of towns — infrastructure details, treatment plants, distribution networks, whatever was publicly available. Put it all together in one structured document.&lt;br&gt;
Sounds simple. It's not.&lt;br&gt;
If you've ever tried to manually collect government data from the web, you know the pain. Every town has a different website. Some have PDFs buried three links deep. Some have scanned documents from 2014 that no one has updated since. Some don't even have a proper site — just a PDF uploaded to a random state portal.&lt;br&gt;
So the workflow was basically: Google the town, find the relevant pages, open 15 tabs, copy-paste data into a doc, find a PDF, download it, read through it, extract the relevant numbers, repeat for the next town. For hours.&lt;br&gt;
I wasn't going to do that manually. So I built an automation pipeline using OpenClaw — an open-source autonomous AI agent tool — and got the whole thing working in under an hour.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Stack&lt;/strong&gt;&lt;br&gt;
Three pieces:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Brave Search API — for programmatic web search. I needed to find town-specific water management data without manually googling each one. Brave's API let me run structured queries and get back URLs with relevant results.&lt;/li&gt;
&lt;li&gt;OCR (via an OpenClaw skill) — a lot of the useful data was locked inside scanned PDFs. Government reports, water quality documents, infrastructure plans — stuff that's technically "public" but practically unreadable by machines. The OCR skill extracted text from these documents so the pipeline could actually use the content.&lt;/li&gt;
&lt;li&gt;Claude API — the brain of the operation. Once I had raw search results and extracted PDF text, Claude processed everything, pulled out the relevant water management data, and structured it into a clean, consistent format across all the towns.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How OpenClaw Tied It All Together&lt;/strong&gt;&lt;br&gt;
The thing about OpenClaw is that it isn't just running one API call. It's an autonomous agent — meaning I could describe the task, give it the tools (Brave Search, OCR, Claude), and let it figure out the sequencing.&lt;br&gt;
For each town, the agent would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search for water management data using Brave&lt;/li&gt;
&lt;li&gt;Identify which results were useful (web pages vs PDFs vs junk)&lt;/li&gt;
&lt;li&gt;For PDFs — download and run OCR to extract text&lt;/li&gt;
&lt;li&gt;For web pages — pull the relevant content&lt;/li&gt;
&lt;li&gt;Send everything to Claude to extract and structure the data&lt;/li&gt;
&lt;li&gt;Compile the output into a consistent format&lt;/li&gt;
&lt;/ul&gt;
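&lt;p&gt;Spelled out by hand, that per-town loop is something like the sketch below. Every helper here is a hypothetical stand-in for the real tool calls; the sequencing is what the agent worked out on its own:&lt;/p&gt;

```python
# Hand-written sketch of the agent's per-town loop.
# Every helper is a hypothetical stand-in for a real tool call.

def brave_search(query):            # stand-in for the Brave Search API
    return [
        {"url": "https://townsite.example/water-plan.pdf"},
        {"url": "https://townsite.example/utilities.html"},
        {"url": "https://ads.example/tracker"},
    ]

def is_junk(result):                # filter out irrelevant results
    return "ads." in result["url"]

def ocr_extract(url):               # stand-in for the OCR skill
    return f"OCR text from {url}"

def fetch_page_text(url):           # stand-in for plain page scraping
    return f"Page text from {url}"

def claude_structure(town, texts):  # stand-in for the Claude API call
    return {"town": town, "sources": len(texts)}

def collect_town(town):
    texts = []
    for result in brave_search(f"{town} water management infrastructure"):
        if is_junk(result):
            continue                                    # drop junk results
        if result["url"].lower().endswith(".pdf"):
            texts.append(ocr_extract(result["url"]))    # scanned PDFs -> OCR
        else:
            texts.append(fetch_page_text(result["url"]))
    return claude_structure(town, texts)                # one schema per town

record = collect_town("Springfield")
```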

&lt;p&gt;I didn't have to write the logic for "if it's a PDF, do this, if it's a webpage, do that." The agent handled those decisions. I just described what I needed and gave it the tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Part That Blew My Mind&lt;/strong&gt;&lt;br&gt;
Here's what I didn't expect.&lt;br&gt;
After the pipeline was working, OpenClaw didn't just run the task — it created a skill out of the entire workflow. A reusable, self-contained skill that combined Brave Search, OCR, and Claude into one thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;So after the initial setup, my workflow became:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Enter town name → get structured water management data.&lt;br&gt;
That's it. One input. The skill handled everything — searching, finding PDFs, scanning them, extracting the data, structuring the output. I didn't have to think about the pipeline anymore. It was just a tool now.&lt;br&gt;
Setup took maybe 40 minutes — configuring the individual pieces, testing on one town, making sure the output format was right. But once the agent built the skill, every town after that was basically instant.&lt;br&gt;
And the output was more consistent than what I would have produced manually. Because Claude was structuring every town's data the same way — same fields, same format. When I do it manually, I start strong and by town 15 I'm cutting corners. We all do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I'm Writing This&lt;/strong&gt;&lt;br&gt;
I see a lot of posts about AI agents being used for coding — Cursor, Claude Code, Copilot. And that's great. But I think the more underrated use case is stuff like this. Data collection. Research. Document processing. The boring, repetitive work that doesn't require you to be a senior engineer — just someone who knows how to connect a few APIs and let an agent do the manual labor.&lt;br&gt;
If you're a backend developer or honestly anyone who deals with data, look into autonomous agent tools. Not for the hype. For the hours you'll get back.&lt;br&gt;
OpenClaw is open-source. The Brave Search API has a free tier. Claude API costs are minimal for this kind of workload. The barrier to entry is lower than you think.&lt;/p&gt;
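&lt;p&gt;For a sense of scale, a raw Brave web-search request is about this small. Endpoint and header are as Brave documents them at the time of writing (verify against the current docs); the key comes from an environment variable here:&lt;/p&gt;

```python
import os
import urllib.parse
import urllib.request

# Brave's documented web-search endpoint (check current docs before relying on it).
BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def build_request(query: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated web-search request; the caller opens it."""
    url = f"{BRAVE_ENDPOINT}?{urllib.parse.urlencode({'q': query, 'count': 10})}"
    return urllib.request.Request(
        url,
        headers={"Accept": "application/json", "X-Subscription-Token": api_key},
    )

req = build_request(
    "Springfield water treatment plant",
    os.environ.get("BRAVE_API_KEY", ""),
)
# urllib.request.urlopen(req) would then return the JSON search results.
```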

&lt;p&gt;&lt;strong&gt;Tech used:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenClaw (autonomous AI agent)&lt;/li&gt;
&lt;li&gt;Brave Search API&lt;/li&gt;
&lt;li&gt;OCR skill (document scanning)&lt;/li&gt;
&lt;li&gt;Claude API (data extraction and structuring)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Time saved:&lt;/strong&gt; ~6-8 hours of manual work → under 1 hour of setup + automated execution&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you've used AI agents for non-coding automation, I'd love to hear what you built. Drop it in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>automation</category>
      <category>openclaw</category>
      <category>productivity</category>
      <category>webscraping</category>
    </item>
  </channel>
</rss>
