Daniel Nwaneri

I Almost Used LangGraph for Social Media Automation (Here's Why I Built an MCP Server Instead)

How choosing protocol over framework saved me $54/month and shipped in one day


I just posted my first AI-generated tweet to Twitter/X. It was generated by Groq's free-tier AI, processed by a custom MCP server I built, and posted through Ayrshare. The whole stack runs locally on free tiers (Groq + Ayrshare) and deploys to Cloudflare Workers for $5/month.

Total build time? Less than a day.

I almost used LangGraph. I studied LangChain's 2,000-line social-media-agent for days. It's production-ready, battle-tested, and impressive. But I chose a different path, and it saved me $54/month.
Here's why I built an MCP server instead of using LangGraph, and why the decision might matter for your next AI project.

The Context

I'm a full-stack developer from Port Harcourt, Nigeria, and I've been living in the Cloudflare Workers ecosystem for years. I run FPL Hub—a Fantasy Premier League platform serving 2,000+ users with 500K+ daily API calls and 99.9% uptime. Everything runs on Workers.

When I needed social media automation for client work, my instinct was to reach for the established solution: LangGraph. After all, LangChain has a production social-media-agent with 1.7k stars. Why reinvent the wheel?

But then I remembered: I'm not building a wheel. I'm building something simpler.

The LangGraph Temptation

What I Found

LangGraph is impressive. I spent days studying LangChain's social-media-agent repository (1.7k+ stars) and it's genuinely production-ready:

  • Human-in-the-loop workflows with approval gates
  • State checkpointing for complex processes
  • Multi-step agent orchestration
  • Battle-tested reference implementation

For social media automation, LangGraph offers everything: content generation, review cycles, posting workflows, and analytics—all with built-in state management.

So why didn't I use it?

The Aha Moment

I was halfway through setting up LangGraph Cloud when I realized: I was solving the wrong problem.

LangGraph is built for complex, multi-step agent workflows with state management. My use case was simpler:

  1. User gives me content
  2. AI optimizes it for platform
  3. Save as draft
  4. Post when ready

I didn't need a graph. I needed a function call.
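
To make that concrete: stripped of framework ceremony, the whole workflow is three awaited calls. Here's a minimal sketch; generateContent, saveDraft, and publish are hypothetical stand-ins for the Groq, storage, and Ayrshare layers covered later:

// Hypothetical helpers standing in for the AI, storage, and posting layers.
declare function generateContent(topic: string, platform: string): Promise<string>;
declare function saveDraft(text: string, platform: string): Promise<{ id: string; text: string }>;
declare function publish(draft: { id: string; text: string }): Promise<unknown>;

// The entire "workflow": no graph, no checkpointer, just function calls.
async function draftAndPost(topic: string, platform: string) {
  const text = await generateContent(topic, platform); // AI optimizes for the platform
  const draft = await saveDraft(text, platform);       // save for review
  return publish(draft);                               // post when ready
}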

That's when I remembered the Model Context Protocol (MCP) work I'd been doing. What if I just... built an MCP server?

The Problems I Saw With LangGraph

1. Complexity for Simple Use Cases

The reference implementation is 2,000+ lines of Python. For my use case (generate content, review, post), I needed maybe 500 lines of TypeScript. The rest was framework overhead.

2. Deployment Constraints

LangGraph requires:

  • Persistent runtime (LangGraph Cloud at $39/month minimum, or a VPS)
  • PostgreSQL/MongoDB for checkpointing
  • Always-on infrastructure

I wanted edge deployment on Cloudflare Workers with zero cold starts.

3. Vendor Lock-in

LangGraph code is LangGraph-specific. If I want to switch frameworks or use a different client, I'm rewriting everything.

4. Edge Incompatibility

LangGraph can't run on Cloudflare Workers. The entire paradigm assumes persistent processes and stateful checkpointing.

5. Learning Curve

To use LangGraph effectively, I need to learn:

  • Framework-specific concepts (graphs, nodes, edges)
  • Checkpointer management
  • LangChain ecosystem conventions

With MCP, I just need to understand a simple JSON-RPC protocol.
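
To see how small that surface is, here's what a single tool invocation looks like on the wire. This is an illustrative tools/call message written out as a TypeScript object; the field shapes follow the MCP spec, but the values are made up:

// An MCP tool call is one JSON-RPC 2.0 request.
const toolCallRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'draft_post',
    arguments: {
      content: 'edge computing benefits',
      platform: 'twitter',
      tone: 'professional',
    },
  },
};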

Why MCP Won

Here's what made me choose MCP:

Protocol Over Framework

MCP is a protocol, not a framework. Any MCP client (Claude Desktop, VS Code with Continue, custom UIs) can use my server without modification.

Edge-Native from Day One

I built this for Cloudflare Workers. The entire architecture assumes:

  • Stateless request/response
  • SQLite (D1) for persistence
  • No cold starts
  • Global distribution
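
For a sense of that shape, here's a minimal, hypothetical Workers handler reading drafts out of a D1 binding. The Env interface and the route are illustrative, not my production code:

// Stateless request/response with D1 persistence.
// D1Database comes from @cloudflare/workers-types.
interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (request.method === 'GET' && url.pathname === '/drafts') {
      const { results } = await env.DB.prepare('SELECT * FROM drafts').all();
      return Response.json(results);
    }
    return new Response('Not found', { status: 404 });
  },
};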

Lightweight Implementation

My complete implementation:

  • ~800 lines of TypeScript
  • 8 tools, 3 resources, 3 prompts
  • Full CRUD for drafts
  • AI content generation
  • Multi-platform posting

Future-Proof

As MCP adoption grows (Anthropic, Zed, Continue, Cody), my server's utility multiplies. It's not tied to any specific framework's lifecycle.

Cost Structure

MCP approach:

  • Groq API: $0 (free tier, rate-limited but plenty)
  • Ayrshare: $0 (10 posts/month free)
  • Cloudflare Workers: $5/month (includes D1, KV, Vectorize, R2)
  • Total: $5/month

LangGraph approach:

  • LangGraph Cloud: $39/month (or a VPS at $10-20/month)
  • OpenAI API: $20-30/month
  • Database hosting: $0-10/month
  • Total: $59-79/month

For a freelancer testing ideas, $5 vs $50 matters.

Architecture: The MCP Approach

Claude Desktop (Client)
  ↓ JSON-RPC over stdio
MCP Server (TypeScript)
  ├── Tools (LLM-controlled actions)
  │   ├── draft_post
  │   ├── schedule_post
  │   ├── post_immediately
  │   ├── generate_thread
  │   └── analyze_engagement
  ├── Resources (App-controlled data)
  │   ├── drafts://list
  │   ├── scheduled://posts
  │   └── stats://summary
  ├── Prompts (User-invoked templates)
  │   ├── write_tweet
  │   ├── linkedin_post
  │   └── thread_generator
  ├── Storage (SQLite → D1)
  └── Integrations
      ├── Ayrshare (multi-platform posting)
      └── Groq (AI content generation)

The beauty? Each layer is independent and replaceable.

Building the MCP Server

I'll walk through the key components.

Storage Layer: SQLite to D1

Started with SQLite for local development:

import Database from 'better-sqlite3';

// Draft and DraftData types are defined elsewhere in the project.
export class StorageService {
  private db: Database.Database;

  constructor(dbPath: string) {
    this.db = new Database(dbPath);
    this.initializeSchema();
  }

  private initializeSchema(): void {
    // Create the drafts table on first run
    this.db.exec(`
      CREATE TABLE IF NOT EXISTS drafts (
        id TEXT PRIMARY KEY, content TEXT NOT NULL, platform TEXT NOT NULL,
        tone TEXT, status TEXT NOT NULL, created_at TEXT NOT NULL
      )
    `);
  }

  createDraft(data: DraftData): Draft {
    const id = crypto.randomUUID();
    const now = new Date().toISOString();

    this.db.prepare(`
      INSERT INTO drafts (id, content, platform, tone, status, created_at)
      VALUES (?, ?, ?, ?, 'draft', ?)
    `).run(id, data.content, data.platform, data.tone ?? null, now);

    return this.getDraft(id);
  }

  getDraft(id: string): Draft {
    return this.db.prepare('SELECT * FROM drafts WHERE id = ?').get(id) as Draft;
  }
}

Migration to Cloudflare D1 is literally:

// Local
const db = new Database('./social_media.db');

// Edge
const db = env.DB; // Cloudflare D1 binding

Same SQL, different runtime. Beautiful.
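
One caveat worth knowing: D1's client is async and uses bind(), so while the SQL carries over unchanged, the call sites gain an await. A sketch of the same insert against a D1 binding:

// Same INSERT statement, D1-flavored: prepare/bind/run, awaited.
async function createDraftD1(
  env: { DB: D1Database },
  data: { content: string; platform: string; tone?: string },
) {
  const id = crypto.randomUUID();
  const now = new Date().toISOString();

  await env.DB.prepare(`
    INSERT INTO drafts (id, content, platform, tone, status, created_at)
    VALUES (?, ?, ?, ?, 'draft', ?)
  `).bind(id, data.content, data.platform, data.tone ?? null, now).run();

  return id;
}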

Social Media Integration: Ayrshare

Instead of managing OAuth for Twitter, LinkedIn, Facebook, Instagram, TikTok, YouTube, Pinterest, Reddit, Telegram, and Google My Business individually, I used Ayrshare:

export class SocialMediaAPI {
  private apiKey: string;
  private baseUrl = 'https://app.ayrshare.com/api';

  constructor(apiKey: string) {
    this.apiKey = apiKey;
  }

  async postImmediately(content: string, platforms: string[]) {
    const response = await fetch(`${this.baseUrl}/post`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        post: content,
        platforms,
      }),
    });

    return response.json();
  }
}

One API, ten platforms. Done.
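
Using it is equally short. A hypothetical call site, assuming the Ayrshare key lives in an environment variable:

// Post the same content to two platforms in one call.
const social = new SocialMediaAPI(process.env.AYRSHARE_API_KEY!);
const result = await social.postImmediately(
  'Shipped a new MCP server today!',
  ['twitter', 'linkedin'],
);
console.log(result); // Ayrshare responds with per-platform post IDs and status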

AI Content Generation: Groq (FREE)

Here's where I saved the most money. Instead of OpenAI or Claude, I used Groq's free tier with Llama 3.3 70B:

import Groq from 'groq-sdk';

// GenerateRequest and GenerateResponse types are defined elsewhere in the project.
export class ContentGenerator {
  private client: Groq;

  constructor(apiKey: string) {
    this.client = new Groq({ apiKey });
  }

  async generate(request: GenerateRequest): Promise<GenerateResponse> {
    const systemPrompt = this.buildSystemPrompt(request);

    const response = await this.client.chat.completions.create({
      model: 'llama-3.3-70b-versatile', // FREE
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: request.input },
      ],
      max_tokens: 1000,
      temperature: 0.7,
    });

    const text = response.choices[0]?.message?.content || '';
    return this.parseResponse(text, request);
  }

  private buildSystemPrompt(request: GenerateRequest): string {
    const platformPrompts = {
      twitter: `Create engaging tweets that:
- Stay under ${request.maxLength} characters (STRICT)
- Use ${request.tone} tone
- Hook readers in the first line
- End with engagement (question or CTA)`,

      linkedin: `Create professional posts that:
- Are detailed (1,300-1,500 characters)
- Use ${request.tone} tone
- Start with a compelling hook
- Include 3-5 key insights with takeaways`,
    };

    return platformPrompts[request.platform];
  }
}

Quality? Excellent. Cost? $0. Rate limits? 30 requests/minute (more than enough).

MCP Protocol Implementation

The MCP server itself is straightforward:

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'social-media-server', version: '1.0.0' },
  { capabilities: { tools: {}, resources: {}, prompts: {} } }
);

// Define tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'draft_post',
      description: 'Create AI-generated social media post',
      inputSchema: {
        type: 'object',
        properties: {
          content: { type: 'string', description: 'Topic or raw content' },
          platform: {
            type: 'string',
            enum: ['twitter', 'linkedin', 'facebook', 'instagram'],
          },
          tone: {
            type: 'string',
            enum: ['professional', 'casual', 'technical', 'engaging'],
          },
        },
        required: ['content', 'platform'],
      },
    },
    // ... 7 more tools
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  switch (name) {
    case 'draft_post': {
      const generated = await contentGen.generate({
        input: args.content,
        platform: args.platform,
        tone: args.tone || 'professional',
        maxLength: args.platform === 'twitter' ? 280 : 3000,
      });

      const draft = await storage.createDraft({
        content: generated.text,
        platform: args.platform,
        hashtags: generated.hashtags,
        metadata: { engagement_score: generated.score },
      });

      return {
        content: [{
          type: 'text',
          text: `✅ Draft created!\n\n` +
                `ID: ${draft.id}\n` +
                `Platform: ${args.platform}\n` +
                `Score: ${generated.score}/100\n\n` +
                `${generated.text}`,
        }],
      };
    }
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});

// Connect transport
const transport = new StdioServerTransport();
await server.connect(transport);

That's it. ~800 lines total for 8 tools, 3 resources, 3 prompts, storage, integrations, and AI generation.

Real-World Usage

Here's what it looks like in Claude Desktop:

Me: Create a professional tweet about edge computing benefits

Claude: [calls draft_post tool]

✅ Draft created successfully!

Draft ID: 61c5fa20-de39-458e-a198-98fbf077a3d4
Platform: twitter
Character Count: 115
Engagement Score: 70/100

Content:
[Sent with Free Plan] Edge computing reduces latency for IoT & AI. 
Where will you apply it first? #EdgeComputing

💡 Use schedule_post with this draft ID to schedule it.

Me: Post it immediately to Twitter

Claude: [calls post_immediately tool]

✅ Posted successfully!

Platforms: twitter
Post ID: lneLUiDHcrie5cr1ydGO

Content: [tweet content]

And it's live. 5 views within minutes. Just like that.

[Screenshot: my first AI-generated tweet, posted live via the MCP server, showing the edge computing content with 5 views on Twitter]

Performance Comparison

Metric           | MCP Server  | LangGraph Agent
-----------------|-------------|----------------
Lines of Code    | ~800        | ~2,000+
Setup Time       | 1 day       | 3-5 days
Monthly Cost     | $5          | $50-80+
Edge Compatible  | ✅ Yes      | ❌ No
Client Agnostic  | ✅ Yes      | ❌ No
Learning Curve   | Low         | Medium
State Management | Client-side | Built-in
Human-in-Loop    | Client-side | Built-in

Lessons Learned

What Worked Well

MCP is simpler than expected. The protocol has three primitives (tools, resources, prompts) and one message format (JSON-RPC), carried over simple transports like stdio. That's it.

TypeScript types catch errors early. Strict mode prevented countless runtime bugs.

Groq is underrated. Free tier, fast responses, good quality. Perfect for prototyping.

Ayrshare abstracts complexity. I didn't touch a single OAuth flow. Just API keys and posting.

Challenges Faced

MCP documentation is still evolving. I leaned heavily on examples and on reading the SDK source.

Ayrshare free tier limits. 10 posts/month is fine for testing, but scheduling requires a paid plan ($20/month).

Platform-specific optimization. Each platform has different character limits, best practices, and engagement patterns. My prompts needed per-platform tuning.

Error handling across async operations. Proper error boundaries took iteration to get right.
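
What eventually stuck was wrapping every tool handler in a thin boundary that converts exceptions into MCP error results instead of crashing the server. A sketch; safeToolCall is a hypothetical helper, not part of the SDK:

// Run a tool handler; on failure, return an MCP error result.
async function safeToolCall(
  handler: () => Promise<{ content: { type: 'text'; text: string }[] }>,
) {
  try {
    return await handler();
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return {
      isError: true,
      content: [{ type: 'text' as const, text: `❌ Tool failed: ${message}` }],
    };
  }
}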

What I'd Do Differently

Start with edge deployment. I built locally first, then migrated to Workers. Should've gone edge-native from day one.

Add semantic search earlier. Vectorize for content templates would've been useful from the start.
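
Something like this sketch is what I have in mind: embed the topic with Workers AI, then query a Vectorize index of saved templates. The AI and TEMPLATES bindings are hypothetical names you'd configure in wrangler.toml:

// Find the closest stored content templates for a topic.
// Ai and VectorizeIndex types come from @cloudflare/workers-types.
interface Env {
  AI: Ai;
  TEMPLATES: VectorizeIndex;
}

async function findTemplates(env: Env, topic: string) {
  // Embed the topic text with a Workers AI embedding model
  const { data } = await env.AI.run('@cf/baai/bge-base-en-v1.5', {
    text: [topic],
  });
  // Query the index for the three nearest templates
  const result = await env.TEMPLATES.query(data[0], { topK: 3 });
  return result.matches;
}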

Build a template library upfront. Reusable content patterns speed up generation significantly.

When to Choose Each Approach

Choose MCP if:

  • You want lightweight automation without framework overhead
  • Edge deployment is important (Cloudflare Workers, Deno Deploy)
  • You value client portability (any MCP client works)
  • You prefer protocol over framework thinking
  • You're building for the future (MCP adoption is growing fast)
  • Cost matters ($5/month vs $50+/month)

Choose LangGraph if:

  • You need complex multi-agent workflows with sophisticated orchestration
  • You have existing LangChain investment and team expertise
  • You want built-in human approval gates with checkpointing
  • Edge deployment isn't a priority
  • You prefer a mature ecosystem with extensive documentation
  • Your use case requires persistent state across long-running processes

Why This Matters for Freelancers

Beyond the technical wins, this project has been valuable for my freelance practice:

Learning investment: Understanding MCP took a week of deep work, but it's positioned me differently in the market. Instead of "Full Stack Developer," I can now say "MCP Specialist" - which attracts different (and higher-paying) projects.

Cost awareness: When pitching clients, being able to say "$5/month to run" vs "$50-80/month" matters. Especially for startups and small businesses watching their burn rate.

Content as learning: Writing forces you to clarify your thinking. I understood MCP much better after writing about it than after just building with it.

Portfolio depth: Having production code (with real integrations, real users, real costs) beats tutorial projects every time.

If you're freelancing or consulting, building tools like this - and writing about them - compounds over time.

What's Next

Immediate improvements:

  • Template library with Vectorize semantic search
  • Multi-account support (manage multiple brands)
  • Analytics dashboard (engagement tracking)
  • Webhook integrations (post on external triggers)

Edge deployment guide:
I'm planning a follow-up article on deploying to Cloudflare Workers:

  • D1 database setup
  • SSE transport for remote access
  • Environment variable management
  • Production deployment checklist

Open source:
Once I clean up the code and add comprehensive docs, I'll publish the full implementation on GitHub.

Conclusion

LangGraph is powerful, battle-tested, and feature-rich. For complex multi-agent systems with sophisticated state management, it's an excellent choice.

But for lightweight automation, edge deployment, and cost-conscious projects, MCP is the better fit. The protocol-based approach offers portability, simplicity, and future-proofing that framework-specific solutions can't match.

I built a production social media automation system in less than a day for $5/month. It runs on the edge with <100ms latency globally. It works with any MCP client. And it's simple enough that I can explain the entire architecture in one article.

Sometimes the newer approach is the better approach.


Try It Yourself

Want to build your own MCP server? Here's where to start:

  1. Read the MCP docs: modelcontextprotocol.io
  2. Install the SDK: npm install @modelcontextprotocol/sdk
  3. Start with a simple tool (calculator, weather, whatever) - see the sketch below
  4. Add complexity gradually (storage, integrations, AI)
  5. Deploy to edge (Cloudflare Workers, Deno Deploy)
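
To make step 3 concrete, here's a minimal single-tool server using the same SDK as the code above. A sketch to get you to a first working tool call, nothing more:

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'starter-server', version: '0.1.0' },
  { capabilities: { tools: {} } }
);

// Advertise one tool: uppercase the given text.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: 'shout',
    description: 'Uppercase the given text',
    inputSchema: {
      type: 'object',
      properties: { text: { type: 'string' } },
      required: ['text'],
    },
  }],
}));

// Handle calls to it.
server.setRequestHandler(CallToolRequestSchema, async (request) => ({
  content: [{
    type: 'text',
    text: String(request.params.arguments?.text ?? '').toUpperCase(),
  }],
}));

await server.connect(new StdioServerTransport());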

Questions? Drop them in the comments. I'm happy to help other developers explore MCP.

Want to see the code? It's open source: GitHub repo

Star the repo if you find it useful! ⭐

