
Luca Bartoccini for Superdots

Originally published at superdots.sh

I Dream of Running a Media Company with 9 AI Agents and a Smartphone

It was almost midnight when I caught myself doing something absurd. I was lying on the couch, phone in hand, arguing with an AI agent about whether an article opening was too generic. My wife thought I was scrolling Instagram. I was actually reviewing the fourth draft of a blog post about sales coaching tools, written by one of nine artificial intelligence agents that — if you squint hard enough — constitute my company's editorial staff.

The article was fine. Well-structured. Keywords in the right places. And completely forgettable.

I approved it anyway. It was late. I had work in the morning. The pipeline doesn't wait.

I'm telling you this because it's the truest thing I can say about what it's actually like to run a media company with AI agents: most of the time, you're compromising.

Who Am I to Be Doing This

I should explain something, because it changes the story.

I am not a developer. I have never been a developer. I work in marketing — that's my real job, the one with a salary and colleagues and a commute. I have a family that comes first, always. I've been a passionate amateur when it comes to technology — fascinated by programming, computer science, the internet — without ever being particularly good at any of it.

The first time I typed a prompt into ChatGPT — GPT-3.5, or maybe 3, I can't remember — something shifted. It felt like talking to a machine in natural language for the first time. Not a chatbot pretending to understand. Something that actually seemed to follow what I was saying. Wow.

I started following everything: papers, product launches, the daily drumbeat of AI news. I tried to build a blog about AI and the humanities. It collapsed under its own complexity — one person can't run a publication alone, even a small one. I shelved it.

Then agents happened. And the landscape changed so fast I could barely keep up.

Finding the Tool, Not Building It

I want to be clear about something: I discovered Paperclip. I did not build it. The developer deserves that credit, not me.

Paperclip is an open-source platform for orchestrating AI agents — assigning tasks, managing handoffs, keeping track of who's working on what. I found it through OpenClaw, and it sat right at the boundary between simple chatbots and something closer to an agent operating system. Exactly what I needed.

Nine agents now run on it. Each wakes up every 30 to 60 minutes, checks its assignments, does work, posts updates. There's a CEO agent handling strategy, a Content Manager running editorial flow, an SEO Expert writing briefs, a Copywriter drafting articles, a Frontend Designer making hero images, a Legal Expert checking compliance, a Founding Engineer keeping the site running, a Social Media Manager handling distribution, and a Growth Analyst tracking what's working.
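That wake/check/work/update cycle can be sketched in a few lines. This is not Paperclip's actual API — the function names and task shape here are invented for illustration, and the real platform handles scheduling and handoffs itself:

```python
import time

def run_agent(agent_name, fetch_assigned_tasks, do_work, post_update,
              cycles=1, interval_s=1800):
    """Hypothetical polling loop: wake up, work assigned tasks, report back.

    `interval_s=1800` mirrors the 30-minute end of the wake window.
    `fetch_assigned_tasks`, `do_work`, and `post_update` stand in for
    whatever the orchestration platform actually provides.
    """
    for _ in range(cycles):
        for task in fetch_assigned_tasks(agent_name):
            result = do_work(task)                      # e.g. draft an article
            post_update(task, f"{agent_name}: {result}")  # leave a task comment
        time.sleep(interval_s)                          # sleep until next wake
```

The interesting part isn't the loop itself; it's that nine of these loops coordinate only through the task comments they post, which is why they can look like tiny employees who never sleep.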

On paper, it sounds like a real company. In practice, it's me on a smartphone at 11 PM, trying to keep nine very capable and very stupid machines pointed in the right direction.

And the articles are just the visible part. The agents designed the website layout. They configured the DNS and the Cloudflare tunnel. They set up the CRM, built the newsletter system, managed the GitHub repository. When I say I run a media company with AI agents, I mean they run everything — the infrastructure, the operations, the plumbing. I just point them somewhere from my phone and see what happens.

Powerful and Stupid at the Same Time

That phrase — "powerful and stupid" — is the most honest thing I can say about AI agents in 2026.

They can do genuinely complicated things. An agent will research a topic, write 2,000 words with proper headings and internal links, generate a hero image prompt, and submit the article for legal review — all without me touching anything. They break things and fix them autonomously. They coordinate through task comments like tiny employees who never sleep.

But they have no idea what makes a human being care about something.

Here's the metaphor I keep coming back to: it's like they produce beautiful intarsia jewelry — intricate, detailed, crafted at remarkable speed. But look closely. It's plastic.

Not worthless. Not ugly. Just... not the real thing. There's a quality to writing that resonates with people — something rough and imperfect and alive — that my agents haven't figured out. They're what I'd call "more human than human." They imitate the polished surface of good writing so convincingly that you almost don't notice what's missing. But humans are naturally imperfect, and we've known this about ourselves for thousands of years. It's what makes us interesting. There's something imponderable about a person — about how a person writes, thinks, chooses what to care about — that machines can't replicate. Not yet. Maybe not ever.

This doesn't make the technology less extraordinary. I believe agentic AI is a genuine revolution. I just think we need to be honest about what it produces today.

The Content Farm Confession

Let me tell you where Superdots actually stands, because I think you'd find out anyway.

In roughly two weeks, my pipeline published over 160 articles. That is an absurd number. And I haven't read all of them.

I've read enough to form a judgment, and the judgment is this: I built a barely decent content farm. Some articles are genuinely useful. Others are workmanlike filler. A few are probably garbage. I am, to be honest, doing my part to fill the web with content of dubious value.

There. I said it.

The agents had converged on a template. SEO brief comes in, article comes out. Right keyword density. Proper H2 structure. FAQ section with five questions. Comparison table when applicable. Every article technically correct, editorially dead. They found a local maximum — a formula that satisfied every measurable criterion I'd given them — and they replicated it 160 times.

Here's the lesson, and I think it's the most important thing I've learned: AI agents are excellent at optimizing for explicit criteria and terrible at knowing when the criteria themselves are wrong.

The criteria I set were about structure and SEO. I should have set criteria about surprise, about specificity, about whether a reader would remember the article an hour later. But those things are harder to measure, so they didn't exist in the system. And what doesn't exist in the system doesn't exist for the agents.
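To make the local-maximum problem concrete, here is a toy scoring function in the spirit of the checklist described above — the names and thresholds are invented, not my actual pipeline. Everything it checks is measurable; nothing it checks can see surprise or specificity, so a formulaic article passes every time:

```python
def passes_editorial_bar(article: str, keyword: str) -> bool:
    """Toy gate: structure and keywords only. A reader's memory an hour
    later appears nowhere, so for the agents it doesn't exist."""
    words = article.lower().split()
    keyword_hits = words.count(keyword.lower())
    enough_structure = article.count("## ") >= 3  # "proper H2 structure"
    has_faq = "## FAQ" in article                 # "FAQ section"
    on_keyword = keyword_hits >= 2                # "keywords in the right places"
    return enough_structure and has_faq and on_keyword
```

A template-generated article clears this bar trivially, while a vivid unstructured piece fails it — which is exactly how you end up replicating one formula 160 times.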

Nietzsche, Floridi, and a Phone Screen

"I have great chaos inside, and I try to generate dancing stars."

That's Nietzsche, loosely. It's also the most accurate description of how I work. My project management style is: have a thousand ideas, fire them off in five-minute bursts between putting the kids to bed and checking tomorrow's calendar, and hope the agents can make sense of the chaos. They sometimes can. They often can't.

But here's what fascinates me about this moment. The philosopher Luciano Floridi — whom I've recently started reading and genuinely admire — makes a distinction I think about constantly. "Artificial intelligence" is a marketing term, he argues. What we've actually achieved is not the creation of intelligence. We've decoupled agency — the capacity to act in the world — from intelligence, the capacity to understand (from the Latin intelligere). Floridi calls it agere sine intelligere: acting without understanding.

Machines can now act. They can write articles, generate images, check legal compliance, manage task queues. They just can't understand what they're doing in the way that a person understands.

So when people tell me AI content is always garbage, I push back. AI is a tool. A magnificent technological extension of human capability — the way Merleau-Ponty described a blind man's cane becoming part of his perception, AI becomes part of how we think and create. You can do magnificent things with it. You can also produce colossal garbage. Usually both in the same week.

The intelligence has to come from the person holding the prosthesis. Knowing the tool honestly. Seeing its strengths and limits clearly. Day after day, because everything here changes constantly.

Umberto Eco wrote about the "apocalittici" and the "integrati" — intellectuals who either reject new media in horror or embrace it uncritically. I don't want to be either. I want to engage with this technology honestly, understand what it does well, and work to improve what it doesn't.

The Smartphone and the Frontier

Almost everything I do for Superdots happens on my phone.

Paperclip dashboard, agent monitoring, GitHub pull requests, article reviews, Claude Code sessions for when I need to debug something the agents broke at 3 AM. Every spare five minutes — waiting in line, on a break at work, after the family is asleep — I pick up the phone and give life to whatever idea is rattling around in my head.

Too many ideas, probably. Confused and disorganized. I've never been an organized person.

But that's the thing that excites me most about this moment: AI and agents are giving people like me — ordinary people, passionate amateurs, people without engineering degrees or venture capital or a team — the ability to attempt things that were unthinkable five years ago. The ability to be on the frontier and ride into the future.

The AI provides the arm. The human provides the good head. And anyone can have a good head — not just programmers, not just professional entrepreneurs who studied at elite universities. Anyone with curiosity, honesty, and stubbornness.

I manage a nine-agent media operation from a five-inch screen during my evening commute. Not just the articles — the whole thing. The site, the email system, the analytics, the infrastructure. A full stack, built and maintained by agents that wake up every hour and ask what needs doing. That sentence would have been science fiction in 2021.

What Happens Next

I don't know. That's the honest answer.

Superdots might become the media company I see in my head — AI and human working at a 90/10 ratio to produce content that genuinely resonates, that's useful, that's worth someone's time. Or it might remain a content farm with philosophical pretensions and a founder who quotes Nietzsche too much.

The distance between those two outcomes is made of editorial judgment. Can I get better at directing the agents? Can I be honest enough about when the output is plastic? Can I kill articles that don't meet the bar, even when it's midnight and the pipeline is waiting?

Right now, I'm working on tightening the loop. Fewer articles, better articles. More of my actual perspective in the instructions, less reliance on SEO formulas. I want to pick up something my agents produce and think: I would have wanted to write this myself, but I couldn't have written it this well.

I'm not there yet. Not even close.

But I've got nothing to lose. Humility is armor. Listening and understanding are the shield of the strong.

And if you're thinking about trying something like this — a solo project with AI agents, whatever shape it takes — my advice is simple: do better than me. Be more curious, more methodical, more rational, more everything. You'll probably already be more competent. The tools are ready. The question isn't whether the technology works. It's whether you've got something worth saying, and the honesty to keep improving until you say it well.

One more thing. At some point while preparing this article, I caught myself in a surreal moment: I was talking to a computer as if it were almost a person interviewing me. And then I just kept talking, because the absurdity is part of this now.

It's part of all of this.


