🤖 I Automated My Entire Content Workflow: Here's Exactly How
Let me be real with you. I was spending 15+ hours per week just creating, formatting, scheduling, and publishing content across LinkedIn, Twitter, my blog, YouTube, and email. It was exhausting. And most of it was repetitive.
So I did what any sane developer would do: I automated the whole thing.
In this post, I'm going to walk you through exactly how I built a fully automated AI content pipeline using n8n, OpenAI's GPT API, Google Sheets, and Webhooks. No fluff. Just working workflows you can steal and customize.
By the end, you'll have a blueprint for automating:
- 🔗 LinkedIn posting
- 📰 Blog generation
- 📧 Email replies
- 🐦 Tweet threads
- 🎬 YouTube scripts
- 🔍 SEO metadata
Let's build.
🧰 The Tech Stack
Before diving into the workflows, here's every tool involved and what role it plays.
| Tool | Role | Free Tier? |
|---|---|---|
| n8n | Workflow automation engine (self-hosted) | ✅ Yes (self-host) |
| Make.com | Alternative visual automation builder | ✅ Limited |
| Zapier | Quick integrations for non-technical users | ✅ Limited |
| OpenAI API | GPT-4o / GPT-4o-mini for content generation | 💰 Pay-per-use |
| Webhooks | Trigger workflows from any external event | ✅ Yes |
| Google Sheets API | Central content database and tracking | ✅ Yes |
| Buffer / Typefully | Social scheduling (LinkedIn + Twitter) | ✅ Free tier |
| Ghost / Dev.to API | Blog publishing endpoints | ✅ Yes |
| Gmail API | Email reading and auto-reply | ✅ Yes |
I chose n8n as the core engine because it's open-source, self-hostable, and gives you way more control than Zapier or Make.com for complex AI workflows. You can run it on a $5/month VPS and never worry about per-task pricing.
🏗️ Architecture Overview: How Everything Connects
Here's the high-level picture of how data flows through the system:
+----------------+       +---------------+       +------------------+
|  Google Sheet  | ----> |   n8n Core    | ----> |  OpenAI GPT API  |
|  (Content DB)  | <---- |  (Workflows)  | <---- |   (Generation)   |
+----------------+       +-------+-------+       +------------------+
                                 |
              +------------------+------------------+
              |                  |                  |
              v                  v                  v
        +----------+       +----------+       +----------+
        | LinkedIn |       | Twitter  |       |   Blog   |
        |   API    |       |   API    |       |   API    |
        +----------+       +----------+       +----------+
              |                  |                  |
              v                  v                  v
        +----------+       +----------+       +----------+
        |  Gmail   |       | YouTube  |       |   SEO    |
        | Replies  |       | Scripts  |       | Metadata |
        +----------+       +----------+       +----------+
The Google Sheet acts as the single source of truth. Every content idea, status, generated output, and publish date lives there. n8n reads from it, calls GPT, and pushes results back into the sheet and out to platforms.
📋 Step 1: Set Up the Content Database (Google Sheets)
Everything starts with a well-structured Google Sheet. This is your content command center.
Sheet Structure
| Column | Description | Example |
|---|---|---|
| A – Content ID | Unique identifier | CTN-042 |
| B – Topic | Core topic or keyword | AI Automation for Devs |
| C – Platform | Target platform | LinkedIn, Twitter, Blog |
| D – Content Type | Format of the content | Post, Thread, Article |
| E – Status | Workflow stage | Idea → Drafted → Reviewed → Published |
| F – Generated Content | GPT output stored here | (auto-filled by n8n) |
| G – Publish Date | Scheduled date | 2026-02-25 |
| H – URL | Published link | (auto-filled after publish) |
Google Sheets API Setup
First, enable the Google Sheets API in your Google Cloud Console:
- Go to console.cloud.google.com
- Create a new project → Enable Google Sheets API and Google Drive API
- Create a Service Account → Download the JSON key
- Share your Google Sheet with the service account email
In n8n, add the Google Sheets credential using the service account JSON:
{
"type": "service_account",
"project_id": "your-project-id",
"private_key_id": "key-id",
"private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
"client_email": "your-sa@your-project.iam.gserviceaccount.com",
"client_id": "123456789",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token"
}
✍️ Step 2: Automate LinkedIn Posting
LinkedIn is a goldmine for developer content. But writing a post every single day? That's unsustainable manually.
The n8n Workflow
[Cron Trigger: Daily 8AM]
  ↓ [Google Sheets: Get next "LinkedIn" row with Status="Idea"]
  ↓ [OpenAI: Generate LinkedIn post]
  ↓ [Google Sheets: Update row with generated content + Status="Drafted"]
  ↓ [IF: Auto-publish enabled?]
      ↓ [HTTP Request: Post to LinkedIn API]
      ↓ [Google Sheets: Update Status="Published" + URL]
The GPT Prompt That Actually Works
The quality of your automation lives and dies by the prompt. Here's what I use:
You are a senior developer and LinkedIn content creator. Write a
LinkedIn post about the following topic.
Topic: {{$json["topic"]}}
Rules:
- Start with a bold hook (first line must stop the scroll)
- Use short paragraphs (1-2 sentences max)
- Include a personal angle or hot take
- Add 3-5 relevant hashtags at the end
- Keep it under 1300 characters
- Use line breaks for readability
- End with a question to drive engagement
- Do NOT use emojis in every line. Max 2-3 total.
- Tone: Professional but conversational. Like texting a smart coworker.
Output ONLY the post text. No explanations.
The OpenAI API Call in n8n
In your n8n HTTP Request node (or use the built-in OpenAI node):
{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a senior developer and LinkedIn ghostwriter."
},
{
"role": "user",
"content": "Write a LinkedIn post about: {{$json['topic']}}\n\nRules: Start with a hook. Short paragraphs. Include hashtags. Under 1300 chars."
}
],
"temperature": 0.8,
"max_tokens": 500
}
Pro Tip: Set `temperature` to `0.8` for creative content like social posts. Use `0.3` for structured outputs like SEO metadata. The difference is massive.
LinkedIn API Posting
LinkedIn's API requires an OAuth 2.0 access token. Here's the HTTP request structure:
// POST https://api.linkedin.com/v2/ugcPosts
{
"author": "urn:li:person:YOUR_PERSON_ID",
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"shareCommentary": {
"text": "{{$json['generated_content']}}"
},
"shareMediaCategory": "NONE"
}
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
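One guardrail worth adding before the HTTP Request node fires: validate the generated post against the prompt's own rules. This is a minimal sketch (plain JavaScript, runnable inside an n8n Function node); the helper name and thresholds are my own assumptions mirroring the prompt above, so tune them to taste:

```javascript
// Hypothetical helper: check a generated LinkedIn post against the
// rules the prompt asked for before it goes anywhere near the API.
function validateLinkedInPost(text) {
  const problems = [];
  if (text.length > 1300) problems.push("over 1300 characters");

  const hashtags = (text.match(/#\w+/g) || []).length;
  if (hashtags < 3 || hashtags > 5) problems.push("expected 3-5 hashtags");

  // Rough emoji count via Unicode property escapes (Node 10+)
  const emojis = (text.match(/\p{Extended_Pictographic}/gu) || []).length;
  if (emojis > 3) problems.push("too many emojis");

  return { ok: problems.length === 0, problems };
}
```

Posts that fail a check get routed back to "Drafted" status in my sheet instead of being published.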
📰 Step 3: Automate Blog Generation
This is the big one. Full blog posts, generated, formatted, and optionally published, all hands-free.
The Workflow
[Webhook Trigger: New "Blog" row in Sheet]
  ↓ [OpenAI: Generate blog outline]
  ↓ [OpenAI: Expand each section into full paragraphs]
  ↓ [OpenAI: Generate SEO title + meta description]
  ↓ [Merge all content]
  ↓ [Google Sheets: Save full draft]
  ↓ [Optional: HTTP Request → Publish to Dev.to / Ghost]
Why Two-Step Generation Beats Single-Prompt
Sending GPT a single prompt like "write a 2000-word blog post" produces generic, rambling content. The two-step method is radically better:
Step 1 – Generate the outline:
Create a detailed blog post outline for the topic: "{{topic}}"
Include:
- A compelling title
- 6-8 section headings
- 2-3 bullet points per section describing what to cover
- A conclusion section
Format as JSON:
{
"title": "...",
"sections": [
{ "heading": "...", "points": ["...", "..."] }
]
}
Step 2 – Expand each section (loop in n8n):
You are writing a section of a technical blog post.
Blog Title: {{title}}
Section Heading: {{section.heading}}
Key Points to Cover: {{section.points}}
Write 200-300 words for this section. Be specific. Include code
examples where relevant. Use a conversational but authoritative tone.
Do NOT repeat the heading in your output.
By looping through sections individually, each part gets full attention from the model. The result reads like a human wrote it, not a robot.
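After the section loop finishes, the pieces need to be stitched back into one markdown document. A minimal merge sketch, assuming the outline JSON shape from Step 1 and a hypothetical `body` field where the loop stores each section's generated text:

```javascript
// Stitch the outline and the per-section drafts back into one
// markdown document. The `body` field on each section is an
// assumption: it's where my loop stores GPT's expansion.
function mergeBlogSections(outline) {
  const parts = [`# ${outline.title}`];
  for (const section of outline.sections) {
    parts.push(`## ${section.heading}`);
    parts.push(section.body.trim());
  }
  return parts.join("\n\n");
}
```

The merged string is what lands in column F of the sheet and, optionally, in the Dev.to publish call below.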
Publishing to Dev.to via API
// n8n Function Node: Publish to Dev.to
const article = {
article: {
title: $json["seo_title"],
body_markdown: $json["full_content"],
published: false, // Set to true for auto-publish
tags: ["ai", "automation", "n8n", "openai"],
series: "AI Automation Series",
description: $json["meta_description"]
}
};
return [{ json: article }];
Then use an HTTP Request node:
POST https://dev.to/api/articles
Headers:
api-key: YOUR_DEV_TO_API_KEY
Content-Type: application/json
Body: {{$json}}
📧 Step 4: Automate Smart Email Replies
This one saves me at least 2 hours daily. GPT reads incoming emails, classifies them, drafts context-aware replies, and sends them (or queues them for my review).
The Workflow
[Trigger: New Email in Gmail (Label: "Auto-Reply")]
  ↓ [OpenAI: Classify email intent]
  ↓ [Switch Node: Route by category]
      ├─ Meeting Request → [GPT: Draft scheduling reply]
      ├─ Question → [GPT: Draft helpful response]
      ├─ Sales Pitch → [GPT: Draft polite decline]
      └─ Important → [Slack: Notify me to handle manually]
  ↓ [Gmail: Send draft / auto-reply]
  ↓ [Google Sheets: Log interaction]
The Classification Prompt
Classify this email into exactly one category.
Categories:
- MEETING_REQUEST: Wants to schedule a call or meeting
- QUESTION: Asking a technical or professional question
- SALES_PITCH: Selling a product or service
- IMPORTANT: Requires personal attention (partnerships, job offers, urgent)
- NEWSLETTER: Automated newsletter or notification
- SPAM: Obvious spam
Email Subject: {{$json["subject"]}}
Email Body: {{$json["body"]}}
Respond with ONLY the category name. Nothing else.
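GPT usually obeys "ONLY the category name", but not always; stray punctuation, lowercase, or an extra word will break the Switch node's routing. A small normalization sketch I run in between (the `IMPORTANT` fallback is my own choice so that misfires land in front of a human, not in auto-reply):

```javascript
const CATEGORIES = [
  "MEETING_REQUEST", "QUESTION", "SALES_PITCH",
  "IMPORTANT", "NEWSLETTER", "SPAM",
];

// Normalize the model's raw reply into one of the known categories.
// Anything unrecognized falls back to IMPORTANT so a human reviews it.
function normalizeCategory(raw) {
  const cleaned = raw.trim().toUpperCase().replace(/[^A-Z_]/g, "");
  return CATEGORIES.includes(cleaned) ? cleaned : "IMPORTANT";
}
```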
The Reply Prompt (for Questions)
Draft a professional email reply.
Original Email:
Subject: {{subject}}
Body: {{body}}
Instructions:
- Be helpful and concise
- Match the formality level of the original email
- If you don't have enough context to fully answer, acknowledge
the question and offer to hop on a quick call
- Sign off as "{{my_name}}"
- Keep under 150 words
Output ONLY the reply body. No subject line.
Safety Net: I keep auto-send disabled for the first 2 weeks. Everything goes to Drafts so I can review GPT's replies before they go out. Once you trust the patterns, flip the switch.
🐦 Step 5: Automate Tweet Threads
Twitter threads are incredible for reach, but writing a 10-tweet thread is a pain. Here's how I auto-generate them from blog posts.
The Workflow
[Trigger: Blog Status changed to "Published"]
  ↓ [Google Sheets: Fetch full blog content]
  ↓ [OpenAI: Convert blog to tweet thread]
  ↓ [Split: Parse into individual tweets]
  ↓ [Google Sheets: Store thread]
  ↓ [Typefully API: Schedule thread]
The Thread Generation Prompt
Convert this blog post into a Twitter/X thread of 8-10 tweets.
Blog Content:
{{blog_content}}
Rules:
- Tweet 1: Must be a powerful hook with no hashtags
- Each tweet: Max 280 characters
- Use "π§΅π" at the end of tweet 1
- Number each tweet (1/, 2/, etc.)
- Last tweet: Call to action + link placeholder [LINK]
- Make each tweet standalone-valuable (people will see
individual tweets in feeds)
- Use plain language. No jargon unless it's widely understood.
- Maximum 2 emojis per tweet
Output as JSON array:
["tweet1", "tweet2", ...]
Posting via Typefully API
Typefully is hands-down the best tool for scheduling threads:
// POST https://api.typefully.com/v1/drafts/
{
"content": "Tweet 1 content\n\n---\n\nTweet 2 content\n\n---\n\nTweet 3 content",
"schedule-date": "2026-02-25T14:00:00Z",
"threadify": true,
"auto_retweet_enabled": true
}
The `---` separator tells Typefully where to split tweets. Clean and simple.
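Going from GPT's JSON array to Typefully's separator format is one small Function node. A sketch (the function name is mine; it guards against the two failure modes I actually hit — invalid JSON and over-length tweets):

```javascript
// Parse the model's JSON array of tweets, enforce the 280-character
// limit, and join them with Typefully's "---" separator.
function buildTypefullyContent(rawJson) {
  const tweets = JSON.parse(rawJson);
  if (!Array.isArray(tweets)) {
    throw new Error("Expected a JSON array of tweets");
  }
  for (const t of tweets) {
    if (t.length > 280) {
      throw new Error(`Tweet too long (${t.length} chars): ${t.slice(0, 40)}...`);
    }
  }
  return tweets.join("\n\n---\n\n");
}
```

If it throws, my workflow re-prompts GPT with the error message appended, which almost always fixes it on the second try.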
🎬 Step 6: Automate YouTube Scripts
YouTube scripts need more structure than social posts. Here's the three-phase approach.
The Workflow
[Trigger: New "YouTube" row in Sheet]
β [Phase 1 β OpenAI: Generate script outline with hooks]
β [Phase 2 β OpenAI: Write full script per section]
β [Phase 3 β OpenAI: Generate title, description, tags]
β [Google Sheets: Store everything]
β [Slack: Notify "Script ready for recording"]
Phase 1: Script Outline
Create a YouTube video script outline.
Topic: {{topic}}
Target Length: 8-10 minutes
Audience: Developers and tech enthusiasts
Structure:
1. HOOK (first 30 seconds - must create curiosity gap)
2. INTRO (establish credibility + preview what they'll learn)
3. MAIN CONTENT (3-4 key sections)
4. DEMO/EXAMPLE (practical walkthrough)
5. RECAP + CTA (subscribe, comment prompt)
For each section include:
- Estimated duration
- Key talking points
- B-roll / screen recording suggestions
- Transition to next section
Output as JSON.
Phase 2: Full Script Expansion
Write the full script for this section of a YouTube video.
Video Title: {{title}}
Section: {{section_name}}
Talking Points: {{points}}
Duration Target: {{duration}}
Write in spoken language (not written). Include:
- [PAUSE] markers for dramatic effect
- [SHOW SCREEN] markers for demo moments
- [B-ROLL: description] for visual suggestions
- Natural transitions between points
This should sound like a knowledgeable friend explaining
something at a coffee shop. Not a lecture.
Phase 3: SEO Metadata for YouTube
Generate YouTube SEO metadata for this video.
Script Summary: {{script_summary}}
Target Keyword: {{primary_keyword}}
Generate:
1. Title (under 60 chars, keyword near front, curiosity-inducing)
2. Description (first 150 chars are crucial: front-load value)
3. Tags (15-20 relevant tags, mix of broad and specific)
4. 3 thumbnail text options (max 4 words each, high contrast)
Output as JSON.
🔍 Step 7: Automate SEO Metadata
Every piece of content needs SEO metadata. Doing it manually for every blog post, video, and page is absurd. Automate it.
The Workflow
[Trigger: Any content Status="Drafted"]
  ↓ [OpenAI: Analyze content + generate SEO package]
  ↓ [Google Sheets: Store metadata alongside content]
The All-in-One SEO Prompt
Generate a complete SEO metadata package for this content.
Content:
{{content}}
Generate ALL of the following:
1. SEO Title (50-60 chars, primary keyword near front)
2. Meta Description (150-160 chars, includes CTA)
3. Primary Keyword
4. Secondary Keywords (5-7)
5. URL Slug (lowercase, hyphens, under 60 chars)
6. Open Graph Title (for social sharing)
7. Open Graph Description (for social sharing)
8. Schema.org Type suggestion (Article, HowTo, FAQ, etc.)
9. Internal linking suggestions (3-5 related topic ideas)
10. Readability Score estimate (Flesch-Kincaid)
Output as JSON:
{
"seo_title": "...",
"meta_description": "...",
"primary_keyword": "...",
"secondary_keywords": ["..."],
"url_slug": "...",
"og_title": "...",
"og_description": "...",
"schema_type": "...",
"internal_links": ["..."],
"readability_score": "..."
}
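Models are notoriously loose about character limits, so I check the two length-sensitive fields before writing anything back to the sheet. A hedged sketch (field names match the JSON schema in the prompt above; it returns warnings rather than failing hard, since a title that's a few characters off is easy to fix by hand):

```javascript
// Check the two length-sensitive SEO fields against the limits
// the prompt asked for. Returns a list of human-readable warnings.
function checkSeoLengths(seo) {
  const warnings = [];
  if (seo.seo_title.length < 50 || seo.seo_title.length > 60) {
    warnings.push(`seo_title is ${seo.seo_title.length} chars (want 50-60)`);
  }
  if (seo.meta_description.length < 150 || seo.meta_description.length > 160) {
    warnings.push(`meta_description is ${seo.meta_description.length} chars (want 150-160)`);
  }
  return warnings;
}
```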
⚙️ Advanced: Webhook Triggers for Real-Time Automation
Instead of running workflows on a schedule, you can trigger them instantly with webhooks.
Setting Up a Webhook in n8n
Every n8n workflow can start with a Webhook node that gives you a unique URL:
https://your-n8n-instance.com/webhook/content-automation
Example: Trigger from a Notion Database
When you update a Notion page status to "Ready", a Notion automation sends a webhook to n8n:
// Webhook payload from Notion
{
"topic": "Building AI Agents with LangChain",
"platform": "LinkedIn,Twitter,Blog",
"content_type": "Full Pipeline",
"priority": "high",
"notes": "Focus on practical code examples"
}
Example: Trigger from Slack
Type `/generate AI tools for startups` in Slack, and a Slack bot sends the webhook:
{
"trigger": "slack_command",
"topic": "AI tools for startups",
"user": "john",
"platforms": ["linkedin", "twitter"]
}
n8n picks it up, generates content for both platforms, posts it back to a Slack channel for review, and publishes on approval.
🔐 Security & Best Practices
Automation is powerful, but you need guardrails. Here's what I learned the hard way.
API Key Management
Never hardcode API keys. Use n8n's built-in credential store or environment variables:
# docker-compose.yml for n8n
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
- GOOGLE_SHEETS_CREDENTIALS=${GOOGLE_CREDS}
- DEVTO_API_KEY=${DEVTO_API_KEY}
- LINKEDIN_ACCESS_TOKEN=${LINKEDIN_TOKEN}
Rate Limiting & Cost Control
| API | Rate Limit | Cost Management Tip |
|---|---|---|
| OpenAI GPT-4o | 10,000 RPM (Tier 5) | Use gpt-4o-mini for classification tasks, gpt-4o for content |
| LinkedIn | 100 posts/day | Batch and schedule, never burst |
| Dev.to | 30 req/30 sec | Add delay nodes in n8n between calls |
| Gmail | 500 emails/day | Track daily sends in Google Sheets |
| Twitter/X | App-dependent | Use Typefully as a buffer |
The "Human Review" Toggle
I built a simple toggle into every workflow:
// n8n Function Node: Review Gate
const autoPublish = $json["auto_publish"] === "true";
const contentQuality = $json["quality_score"];

if (autoPublish && contentQuality > 0.85) {
  // Auto-publish
  return [{ json: { ...$json, action: "publish" } }];
} else {
  // Send to review queue
  return [{ json: { ...$json, action: "review" } }];
}
The `quality_score` comes from a separate GPT call that grades the content on a 0-1 scale. Anything below 0.85 goes to my review queue.
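The grader's reply needs defensive parsing too: a model asked for "a number between 0 and 1" will occasionally wrap it in prose. A sketch of just the parsing step (the grading prompt itself lives elsewhere in my workflow; a missing number defaults to 0, which routes the content to review):

```javascript
// Extract a 0-1 quality score from the grader model's reply.
// Falls back to 0 (i.e., route to human review) if no number is found.
function parseQualityScore(reply) {
  const match = reply.match(/\d+(\.\d+)?/);
  if (!match) return 0;
  const score = parseFloat(match[0]);
  return Math.min(Math.max(score, 0), 1); // clamp into [0, 1]
}
```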
📈 Results After 3 Months
Here are the actual numbers from running this system:
| Metric | Before Automation | After Automation | Change |
|---|---|---|---|
| Weekly content pieces | 4-5 | 18-22 | +340% |
| Hours spent on content | 15 hrs/week | 3 hrs/week | -80% |
| LinkedIn impressions | ~2,000/week | ~12,000/week | +500% |
| Blog posts published | 2/month | 8/month | +300% |
| Email response time | 4-6 hours | <30 minutes | -90% |
| Monthly API cost | $0 | ~$45 | Worth every cent |
The 3 hours I still spend? That's reviewing drafts, adding personal stories, and tweaking content that needs a human touch. The robot does the heavy lifting. I do the finishing.
🚀 Getting Started: Your First Workflow in 30 Minutes
Don't try to build everything at once. Start with one workflow:
Recommended First Build: LinkedIn Auto-Poster
# 1. Install n8n locally
npx n8n
# 2. Open http://localhost:5678
# 3. Create workflow:
# - Cron Trigger (daily)
# - Google Sheets node (read row)
# - OpenAI node (generate post)
# - Google Sheets node (write back)
# 4. Test with 3 topics in your sheet
# 5. Once working, add LinkedIn API posting
Once that's running reliably, add the next workflow. I built the full system over 3 weekends, not in one sprint.
🧩 n8n vs Make.com vs Zapier: Which Should You Use?
| Feature | n8n | Make.com | Zapier |
|---|---|---|---|
| Pricing | Free (self-hosted) | Free tier, then $9+/mo | Free tier, then $20+/mo |
| Self-hosting | ✅ Full control | ❌ Cloud only | ❌ Cloud only |
| Code nodes | ✅ JavaScript + Python | ✅ Limited | ⚠️ Very limited |
| AI integration | ✅ Native OpenAI nodes | ✅ Good | ✅ Good |
| Complex logic | ✅ If/Switch/Loops | ✅ Routers/Iterators | ⚠️ Paths only |
| Webhook support | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| Learning curve | Medium | Easy | Very Easy |
| Best for | Developers | Makers | Non-technical users |
My recommendation: If you can run `docker-compose up`, use n8n. The flexibility and zero per-task cost at scale are unbeatable.
💡 Final Tips From 3 Months of AI Content Automation
Prompt engineering is 80% of the work. Spend time crafting prompts. A $0.003 GPT-4o-mini call with a great prompt beats a $0.03 GPT-4o call with a lazy prompt.
Always add a human review step first. Trust the system gradually. Auto-publish only after you've manually reviewed 50+ outputs.
Version your prompts. Store them in a separate Google Sheet tab. When you tweak a prompt, note the date and what changed. You'll thank yourself.
Monitor costs weekly. OpenAI charges add up fast if you're not careful with model selection. Use `gpt-4o-mini` for every task that doesn't need top-tier reasoning.
Build in public. Share your automation journey. The content about building automations becomes content itself: a beautiful recursive loop.
📚 Resources
- n8n Documentation
- OpenAI API Reference
- Google Sheets API Quickstart
- Dev.to API Docs
- LinkedIn Marketing API
- Typefully API
If this post helped you, drop a comment with which workflow you're building first. I read every single one.
And if you want the full n8n workflow JSON exports, follow me and I'll publish them in Part 2. 🔥
Enjoyed this? Hit ❤️ and follow for more AI automation deep-dives.