Building a Content Publishing Bot: From Zero to 17 Dev.to Articles Automatically
I launched 17 articles in 48 hours and got zero engagement. Not a single comment, bookmark, or meaningful read. The articles were technically sound but utterly soulless—pure algorithmic output with no reason for humans to care. That failure taught me more about content automation than any success ever could.
This is the story of rebuilding that system from the ground up, and how version 2 actually works in production.
The Problem: Manual Content Creation Doesn't Scale
Last year I realized I was sitting on years of technical knowledge but shipping it to the void. Every tutorial I wrote took 4-6 hours. Every case study required research, testing, and iteration. I could write maybe two substantial pieces per month while maintaining my actual projects.
Meanwhile, I was building trading bots and automation systems that ran 24/7 without me. Why couldn't content work the same way?
The obvious answer: automate it.
I started researching content generation APIs, studied trending topics on dev.to, and built what seemed like a clever system. Claude would analyze trending tags, generate articles, post them via the dev.to API, and track engagement metrics.
V1 shipped 17 articles. Every single one bombed.
Looking back at the metrics: 47 total views across 17 articles. 0 comments. 3 reactions. The system worked technically—articles posted on schedule, the API integrations succeeded, the database tracked everything. But the content had no value.
The problem wasn't the architecture. It was that I treated content like a commodity, optimizable through pure algorithmic efficiency. I was wrong.
Understanding Why V1 Failed
The 17 articles followed a pattern: trending topic → AI generation → immediate publication. No review. No iteration. No human judgment. The system would pick tags like "javascript" and "productivity" (because they ranked high in the trending data), instruct Claude to write about JavaScript productivity hacks, and publish whatever came back.
The results read like this (actual example):
"JavaScript is a powerful language used by millions of developers worldwide. In this guide, we'll explore 5 productivity hacks that can boost your workflow. First, let's discuss VS Code extensions..."
Dead on arrival. No specificity. No unique insight. No reason to read instead of the 10,000 other JavaScript tutorials already on dev.to.
The engagement metrics confirmed it: articles published Monday morning got 2-3 views, then disappeared into the feed noise by Wednesday.
I killed the system and rebuilt it with different fundamentals:
- Quality over volume — Publish one great article per week instead of three mediocre ones
- Real experience first — Only write about projects I've actually built
- Unique angle always — If it's not something I have genuine insight on, don't publish
- Human review required — Claude generates a draft, I refine, rewrite, and validate
- Strategic distribution — Post at optimal times, engage in comments, cross-promote
V2 took longer to implement because it required me to be part of the process. But the results speak for themselves.
Architecture: The V2 System
The rebuilt system still automates, but differently. Instead of generating complete articles, it:
- Analyzes trending topics on dev.to via the API
- Cross-references with my completed projects in a local knowledge base
- Generates structured outlines using Claude, not full articles
- Creates alerts when there's alignment between trends and my experience
- Tracks engagement for published articles and learns from what performs
Here's the actual trending tag analyzer:
```python
import requests
from collections import Counter
from datetime import datetime
import json

class DevToTrendAnalyzer:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://dev.to/api"
        self.headers = {"api-key": api_key}

    def get_recent_articles(self, limit=100):
        """Fetch recent articles from dev.to"""
        url = f"{self.base_url}/articles?per_page={limit}&sort_by=recent"
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def extract_tags(self, articles):
        """Extract all tags and calculate frequency"""
        all_tags = []
        for article in articles:
            tags = article.get('tag_list', [])
            all_tags.extend(tags)
        tag_counts = Counter(all_tags)
        return tag_counts.most_common(20)

    def calculate_engagement_score(self, articles):
        """Score tags by average engagement metrics"""
        tag_engagement = {}
        for article in articles:
            tags = article.get('tag_list', [])
            reactions = article.get('positive_reactions_count', 0)
            comments = article.get('comments_count', 0)
            engagement = reactions + (comments * 2)  # Weight comments higher
            for tag in tags:
                if tag not in tag_engagement:
                    tag_engagement[tag] = {'score': 0, 'count': 0}
                tag_engagement[tag]['score'] += engagement
                tag_engagement[tag]['count'] += 1
        # Calculate averages
        for tag in tag_engagement:
            tag_engagement[tag]['avg_engagement'] = (
                tag_engagement[tag]['score'] / tag_engagement[tag]['count']
            )
        return sorted(
            tag_engagement.items(),
            key=lambda x: x[1]['avg_engagement'],
            reverse=True
        )[:15]

    def analyze_trends(self):
        """Run full trend analysis"""
        articles = self.get_recent_articles(limit=100)
        frequency = self.extract_tags(articles)
        engagement = self.calculate_engagement_score(articles)
        results = {
            'timestamp': datetime.now().isoformat(),
            'top_tags_by_frequency': [
                {'tag': tag, 'count': count}
                for tag, count in frequency
            ],
            'top_tags_by_engagement': [
                {'tag': tag, 'avg_engagement': data['avg_engagement']}
                for tag, data in engagement
            ],
            'total_articles_analyzed': len(articles)
        }
        return results

# Usage
analyzer = DevToTrendAnalyzer(api_key="your_dev_to_api_key")
trends = analyzer.analyze_trends()
print(json.dumps(trends, indent=2))
```
This gives you real data on what's trending and what's actually engaging readers. I run this weekly and store results in a database.
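The storage step isn't shown above. A minimal sketch of how that could look with SQLite, stashing each weekly run as a JSON blob keyed by timestamp (the `trend_snapshots` table name and one-blob-per-run schema are my assumptions, not the author's actual database):

```python
import sqlite3
import json

def store_trend_snapshot(results, db_path="trends.db"):
    """Persist one weekly analysis run, keyed by its timestamp."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trend_snapshots ("
        "  captured_at TEXT PRIMARY KEY,"
        "  payload TEXT NOT NULL)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO trend_snapshots VALUES (?, ?)",
        (results['timestamp'], json.dumps(results))
    )
    conn.commit()
    conn.close()

def load_latest_snapshot(db_path="trends.db"):
    """Fetch the most recent stored run for week-over-week comparison."""
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT payload FROM trend_snapshots ORDER BY captured_at DESC LIMIT 1"
    ).fetchone()
    conn.close()
    return json.loads(row[0]) if row else None
```

Keeping the whole result as one JSON column keeps the schema trivial; if you later want per-tag queries, normalize into a proper tags table.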
The second critical piece is the smart publishing filter. Instead of posting everything Claude generates, I built a curation layer:
```python
from anthropic import Anthropic
import json

class ContentCurator:
    def __init__(self, api_key):
        self.client = Anthropic(api_key=api_key)

    def generate_outline(self, topic, trending_tags, user_experience):
        """Generate outline only, not full article"""
        prompt = f"""You are a technical content strategist. A developer wants to write about: {topic}

Trending tags with engagement: {json.dumps(trending_tags[:5])}
Developer's relevant experience: {user_experience}

Generate a compelling outline for a dev.to article. Include:
1. Hook (why this matters RIGHT NOW)
2. Problem statement (specific, not generic)
3. Solution with real code examples
4. Results/metrics from actual implementation
5. CTA

Make this outline something a developer would actually want to read. Focus on SPECIFICITY and UNIQUE INSIGHT.
Format as structured JSON with sections."""
        response = self.client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1500,
            messages=[{"role": "user", "content": prompt}]
        )
        return response.content[0].text

    def evaluate_article_potential(self, outline, user_topics):
        """Score whether this is worth writing"""
        eval_prompt = f"""Evaluate this article outline for a dev.to post.

OUTLINE:
{outline}

AUTHOR EXPERTISE AREAS:
{user_topics}

Score this 1-10 on:
1. Specificity (is it niche enough to stand out?)
2. Audience match (will it resonate with dev.to?)
3. Uniqueness (could only this author write this?)
4. Actionability (can readers immediately use this?)

If score < 6 overall, reject it. Explain your reasoning in JSON.
Respond ONLY with valid JSON: {{"score": X, "recommendation": "publish|reject", "reasoning": "..."}}"""
        response = self.client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=500,
            messages=[{"role": "user", "content": eval_prompt}]
        )
        try:
            return json.loads(response.content[0].text)
        except json.JSONDecodeError:
            return {"score": 3, "recommendation": "reject", "reasoning": "Parse error"}

# Usage
curator = ContentCurator(api_key="your_claude_api_key")
outline = curator.generate_outline(
    topic="Building trading bots with Python",
    trending_tags=[("python", 45), ("trading", 28), ("automation", 32)],
    user_experience="Built 3 production trading bots scanning 15K+ markets"
)
evaluation = curator.evaluate_article_potential(
    outline=outline,
    user_topics=["Python automation", "Trading systems", "API integration", "Data pipelines"]
)
print(f"Score: {evaluation['score']}/10")
print(f"Recommendation: {evaluation['recommendation']}")
if evaluation['recommendation'] == 'publish':
    print("This is worth writing!")
else:
    print(f"Reason: {evaluation['reasoning']}")
```
This two-stage system is crucial. The outline generation gives me creative direction without committing to a full article. The evaluation filter ensures I only write about topics where I have genuine edge.
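Wired together, one weekly pass through both stages looks roughly like this. The `weekly_content_run` glue function and its print-based alert are my simplification of the actual alerting; the analyzer and curator are the classes above:

```python
def weekly_content_run(analyzer, curator, topic, my_experience, my_topics):
    """One end-to-end pass: trends -> outline -> evaluation -> go/no-go alert."""
    trends = analyzer.analyze_trends()
    # Rank tags by reader engagement rather than raw frequency
    top_tags = [(t['tag'], t['avg_engagement'])
                for t in trends['top_tags_by_engagement']]
    outline = curator.generate_outline(topic, top_tags, my_experience)
    verdict = curator.evaluate_article_potential(outline, my_topics)
    if verdict['recommendation'] == 'publish':
        print(f"ALERT: '{topic}' scored {verdict['score']}/10 - worth writing")
    return verdict
```

The key property is that nothing downstream of the verdict is automated: a "publish" recommendation produces an alert and an outline, not an article.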
The Publishing Pipeline
Once I approve an outline, the workflow is:
- I write a detailed draft (usually 2-3 hours, informed by Claude's structure)
- I validate all code examples actually run
- I verify all statistics and claims with source links
- Claude helps with final editing (readability, SEO optimization)
- I schedule publication for Tuesday or Wednesday morning (data showed best engagement)
- System auto-posts via dev.to API
- I monitor comments for the first 24 hours and respond to every question
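The auto-post step above is a single call to dev.to's articles endpoint. A minimal sketch (the `publish_article` helper name is mine; error handling is trimmed to the basics):

```python
import requests

def publish_article(api_key, title, body_markdown, tags, published=True):
    """Create an article via the dev.to API and return its JSON representation."""
    response = requests.post(
        "https://dev.to/api/articles",
        headers={"api-key": api_key, "Content-Type": "application/json"},
        json={
            "article": {
                "title": title,
                "body_markdown": body_markdown,
                "tags": tags,
                "published": published,  # False creates an unpublished draft
            }
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```

For the Tuesday/Wednesday scheduling, a cron job that calls this at the target time is enough; posting with `published=False` first also lets you do a final on-platform review before flipping it live.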
The engagement tracking continuously monitors views, reactions, and comments:
```python
import time

# Method on DevToTrendAnalyzer, which provides base_url and headers
def track_article_performance(self, article_id, days=7):
    """Monitor published article performance, sampling once per day"""
    url = f"{self.base_url}/articles/{article_id}"
    metrics = []
    for day in range(days):
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        data = response.json()
        metrics.append({
            'timestamp': datetime.now().isoformat(),
            'views': data.get('page_views_count', 0),
            'reactions': data.get('positive_reactions_count', 0),
            'comments': data.get('comments_count', 0),
            'day': day
        })
        time.sleep(86400)  # Wait 24 hours; in production a daily cron job is more robust
    return metrics
```
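Those daily snapshots are cumulative totals, so the "learn from what performs" part comes from the deltas. A small helper along these lines (the `daily_view_growth` name is mine) turns the snapshots into day-over-day growth:

```python
def daily_view_growth(metrics):
    """Day-over-day view deltas from cumulative daily snapshots."""
    ordered = sorted(metrics, key=lambda m: m['day'])
    return [
        {'day': curr['day'], 'new_views': curr['views'] - prev['views']}
        for prev, curr in zip(ordered, ordered[1:])
    ]
```

An article whose growth collapses after day two died in the feed; one that keeps adding views is being shared somewhere, which is the signal worth feeding back into topic selection.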
Results: What Actually Works
V2 has been running for 6 months. Here's the real data:
- Published articles: 26 total (vs. 17 in v1)
- Average views per article: 340 (vs. 3 in v1)
- Comment rate: 2-4 comments per article (vs. 0 in v1)
- Reaction rate: 15-25 reactions per article (vs. 0.15 in v1)
- Share rate: ~30% of articles get shared in newsletters or Twitter
My most successful articles in V2:
"Building a Trading Bot That Scans 15,000 Kalshi Markets" — 2,140 views, 47 comments, 89 reactions. This worked because I built the actual bot and had battle-tested code to share.
"Python Automation That Earned $3,500" — 1,890 views, 52 comments, 103 reactions. Real results, real money, real problem-solving.
Both of these started as automated outline generation, but became successful because I spent the time writing with specificity and authenticity.
The bot still does the heavy lifting:
- Weekly trend analysis (2 minutes automated)
- Outline generation (5 minutes automated)
- Evaluation filtering (1 minute automated)
But I do the irreplaceable human work:
- Writing with genuine insight (2-3 hours)
- Code validation and testing (1 hour)
- Comment engagement (30 minutes per day for first week)
- Cross-promotion and strategic distribution (1 hour)
The Honest Take
This isn't a fully automated content machine. It never will be, and that's the point.
The v1 failure taught me that you can't automate taste, authenticity, or genuine value. What you can automate is research, idea validation, and initial structuring. The human judgment layer is where the magic happens.
If you're trying to build a pure content generation bot that needs zero human intervention, you'll get the same result I did: 17 articles, zero engagement, and a system that wastes your time instead of saving it.
If you're willing to use automation strategically as a research and drafting tool while keeping your unique voice and judgment in the loop, you can ship meaningful content at 2-3x your normal pace.
The system runs 24/7, but it's not creating content 24/7. It's alerting me to opportunities, validating them, and giving me a structured starting point so I can focus on the actual writing.
Want This Built for Your Business?
I build custom Python automation systems, trading bots, and AI-powered tools that run 24/7 in production.
Currently available for consulting and contract work:
- Hire me on Upwork — Python automation, API integrations, trading systems
- Check my Fiverr gigs — Bot development, web scraping, data pipelines
DM me on dev.to or reach out on either platform. I respond within 24 hours.