lofder.issac

I Built an MCP Server to Automate Dropshipping Product Imports

The Problem That Wouldn't Go Away

I've been running dropshipping stores on and off for a couple of years. If you've done it, you know the drill: find a product on AliExpress or a supplier platform, copy the title, download images, tweak the description, set your margins, push it to your Shopify store. Repeat. Fifty times. Every week.

It's not hard work. It's just tedious work. And tedious work is where mistakes happen — wrong prices, missing variants, broken image links. I tried automating bits of it with scripts, browser extensions, even some Zapier flows. Nothing stuck. The workflows were too fragmented.

Then MCP happened.

If you haven't been following, the Model Context Protocol is Anthropic's open standard for connecting AI models to external tools and data. When I first read the spec, I had one of those "wait, this is exactly what I need" moments. Instead of building a full app with a UI, authentication flows, and all that overhead, I could build a tool server that any MCP-compatible client (Claude Desktop, Cursor, etc.) could talk to.

So I built dsers-mcp-product — an MCP server that handles the entire product import pipeline for DSers, from discovering products to pushing them to your connected stores.

Why MCP and Not Just a REST API?

Fair question. I could have built a CLI tool or a REST service. But MCP gives you something those don't: conversational orchestration.

With a traditional API, you write the glue code. You decide the order of operations, handle edge cases in your scripts, build retry logic. With MCP, the AI client becomes your orchestrator. You tell Claude "import this product and push it to my US store with a 2.5x markup" and it figures out which tools to call, in what order, and handles the back-and-forth.

That's not a gimmick. For a workflow that has real branching logic (what if the product has 47 variants? what if some images fail to load? what if the store has listing rules that reject certain categories?), having an intelligent orchestrator is genuinely useful.

The Architecture

The server is built in TypeScript using the official MCP SDK. Here's the high-level picture:

Claude / Cursor / Any MCP Client
        │
        ▼
┌─────────────────────────┐
│   dsers-mcp-product     │
│   (MCP Server)          │
│                         │
│  ┌─────────┐            │
│  │  Tools  │ ← 7 tools  │
│  └─────────┘            │
│  ┌─────────┐            │
│  │ Prompts │ ← 3 prompts│
│  └─────────┘            │
│                         │
│  Transport: stdio/SSE   │
└────────────┬────────────┘
             │
             ▼
       DSers Platform

Nothing fancy. stdio transport for local clients like Claude Desktop, SSE for remote/web integrations. The server itself is stateless per-request — all state lives on the DSers side.
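For the local stdio case, wiring a server like this into Claude Desktop is just a config entry. This is the usual shape of `claude_desktop_config.json`; the `"dsers-product"` key is my label, and the exact args depend on how the package exposes its binary:

```json
{
  "mcpServers": {
    "dsers-product": {
      "command": "npx",
      "args": ["-y", "@lofder/dsers-mcp-product"]
    }
  }
}
```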

I went with TypeScript because the MCP SDK has first-class TS support and because I wanted strong typing for the tool schemas. When you're defining tool parameters that an AI model will be calling, you really want those schemas to be precise. Vague schemas lead to vague tool calls.
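To make "precise schema" concrete, here's roughly what a tight input schema looks like as JSON Schema. The parameter names are illustrative, not the package's actual schema:

```typescript
// Hypothetical JSON Schema for a product-import tool's input.
// Tight constraints (format, min/max, additionalProperties: false)
// give the model far less room to produce a malformed call.
const productImportSchema = {
  type: "object",
  properties: {
    url: {
      type: "string",
      format: "uri",
      description: "Full product URL on the supplier platform",
    },
    includeVariants: {
      type: "boolean",
      description: "Whether to import the full variant matrix",
      default: true,
    },
    imageLimit: {
      type: "integer",
      minimum: 1,
      maximum: 20,
      description: "Maximum number of product images to fetch",
    },
  },
  required: ["url"],
  additionalProperties: false,
} as const;
```

Every `description` here is read by the model at call time, which is why they're written like instructions rather than internal comments.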

The 7 Tools

Each tool maps to a distinct step in the product import workflow. I spent a lot of time thinking about granularity here — too few tools and each one becomes a god function with too many parameters; too many and the AI wastes tokens figuring out which to use.

Here's what I landed on:

product.import

The core tool. Takes a product URL or identifier and imports it into your DSers workspace. This handles fetching product data, images, variants, and specs. The heavy lifting.

product.preview

Returns a structured preview of an imported product — title, price, images, variant matrix. This exists because I wanted the AI to be able to "look" at a product before deciding what to do with it. It's the equivalent of a human opening the product page and scanning it.

product.visibility

Controls which variants and options are visible/active. Dropshipping products often come with 30+ variants and you only want to list 5-6. This tool lets you toggle visibility without deleting anything.
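The non-destructive part is the whole point. A minimal sketch of that logic, with hypothetical types (not the server's actual internals):

```typescript
// Illustrative sketch: keep the full variant list intact and flip an
// `active` flag instead of deleting, so the choice is reversible.
interface Variant {
  sku: string;
  option: string; // e.g. "Red / XL"
  active: boolean;
}

// Activate only the SKUs the caller names; everything else is hidden
// but preserved.
function setVisibleVariants(variants: Variant[], visibleSkus: string[]): Variant[] {
  const visible = new Set(visibleSkus);
  return variants.map((v) => ({ ...v, active: visible.has(v.sku) }));
}
```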

store.discover

Lists your connected stores with their status and capabilities. Before pushing a product, the AI needs to know where it can push to and what each store supports.

store.push

Pushes a product to one or more connected stores. This is where pricing rules, category mapping, and store-specific formatting happen.

rules.validate

Validates a product against a store's listing rules before pushing. Does the title exceed the character limit? Are all required fields populated? Are the images the right dimensions? Better to catch this upfront than get a rejection.
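In spirit, the validator collects every problem rather than failing on the first, so the AI (or the user) sees the full picture in one pass. A simplified sketch with made-up rule fields:

```typescript
// Hypothetical pre-push validation. Field names are illustrative.
interface StoreRules {
  maxTitleLength: number;
  requiredFields: string[];
}

function validateListing(
  product: Record<string, unknown>,
  rules: StoreRules
): string[] {
  const errors: string[] = [];
  const title = product["title"];
  if (typeof title !== "string" || title.length === 0) {
    errors.push("title is missing");
  } else if (title.length > rules.maxTitleLength) {
    errors.push(`title exceeds ${rules.maxTitleLength} characters`);
  }
  for (const field of rules.requiredFields) {
    if (product[field] == null) errors.push(`missing required field: ${field}`);
  }
  return errors; // empty array means the product is safe to push
}
```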

job.status

Some operations (especially bulk imports) are async. This tool checks the status of running jobs. Simple but necessary.
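Client-side, the pattern is a bounded polling loop. This is a sketch of how a caller might wait on `job.status`, with `checkStatus` standing in for the actual tool call:

```typescript
// Poll a job until it reaches a terminal state, with a cap on attempts
// so a stuck job can't hang the conversation forever.
type JobState = "queued" | "running" | "done" | "failed";

async function waitForJob(
  checkStatus: (jobId: string) => Promise<JobState>,
  jobId: string,
  { intervalMs = 1000, maxAttempts = 30 } = {}
): Promise<JobState> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const state = await checkStatus(jobId);
    if (state === "done" || state === "failed") return state;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`job ${jobId} still running after ${maxAttempts} checks`);
}
```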

Why This Split?

The key insight was separating read operations from write operations. product.preview, store.discover, rules.validate, and job.status are all read-only. The AI can call them freely to gather information and make decisions. product.import, product.visibility, and store.push are the write operations that actually change state.

This matters because MCP clients can implement approval flows for write operations. You might want the AI to auto-discover and preview products but require your confirmation before actually importing or pushing.
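The MCP spec has a vocabulary for exactly this: tool annotations like `readOnlyHint`, which clients can use to decide which calls need human approval. The split above, written out as an annotation table (a sketch, not the server's actual source):

```typescript
// Read/write split expressed as MCP-style tool annotations.
// Read-only tools can be called freely; the rest can be gated
// behind an approval flow by the client.
const toolAnnotations: Record<string, { readOnlyHint: boolean }> = {
  "product.preview": { readOnlyHint: true },
  "store.discover": { readOnlyHint: true },
  "rules.validate": { readOnlyHint: true },
  "job.status": { readOnlyHint: true },
  "product.import": { readOnlyHint: false },
  "product.visibility": { readOnlyHint: false },
  "store.push": { readOnlyHint: false },
};
```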

The 3 Prompts

MCP prompts are pre-built conversation templates. Think of them as recipes that encode common workflows:

  • quick-import — Single product, single store. The "I just want this product in my store" flow. Calls import → preview → validate → push in sequence.

  • bulk-import — Multiple products from a search or category. Handles pagination, deduplication, and batch status tracking.

  • multi-push — One product to multiple stores with per-store pricing and customization. Useful when you run stores in different markets.

These aren't magic. They're just well-structured prompt templates that guide the AI to use the tools in the right order. But they save a ton of back-and-forth compared to starting from scratch every time.
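The quick-import recipe can be sketched as plain orchestration, with each tool call mocked as a function. In reality the AI client drives this sequence dynamically; the prompt template just nudges it toward this order:

```typescript
// Sketch of the quick-import flow: import → preview → validate → push.
// The Tools interface is a stand-in for the real MCP tool calls.
interface Tools {
  import(url: string): Promise<string>; // returns a product id
  preview(id: string): Promise<{ title: string }>;
  validate(id: string, store: string): Promise<string[]>; // rule errors
  push(id: string, store: string): Promise<void>;
}

async function quickImport(tools: Tools, url: string, store: string) {
  const id = await tools.import(url);
  const preview = await tools.preview(id);
  const errors = await tools.validate(id, store);
  if (errors.length > 0) {
    // Surface rule violations instead of pushing a broken listing.
    return { pushed: false, title: preview.title, errors };
  }
  await tools.push(id, store);
  return { pushed: true, title: preview.title, errors: [] as string[] };
}
```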

Publishing to 8 Platforms (and What I Learned)

Building the server took about two weeks of evenings. Publishing it took... also about two weeks. That surprised me.

Here's where dsers-mcp-product is listed now:

  1. npm — The obvious first step. npx @lofder/dsers-mcp-product just works. Getting the package.json right for an MCP server that supports both stdio and SSE took some fiddling.

  2. MCP Registry — Anthropic's official directory. Straightforward submission, but you need good documentation. They actually review what you submit.

  3. Smithery — Probably the most developer-friendly MCP directory right now. Their submission process is smooth and they have nice testing tools.

  4. Glama — This one has an automated quality scoring system. dsers-mcp-product scored AAA, which I'm pretty proud of. Their criteria push you to have good error handling, proper schema definitions, and comprehensive tool descriptions.

  5. mcp.so — Community-driven directory. Quick to list, good for visibility.

  6. mcpservers.org — Another community directory. Similar process.

  7. punkpeye/awesome-mcp-servers — The canonical awesome-list for MCP servers on GitHub. Opened a PR, got merged. The maintainers are responsive.

  8. Claude Connectors — Anthropic's marketplace for Claude integrations. Still pending review as of writing.

What I Wish I Knew Before Publishing

Documentation is everything. Each platform wants slightly different things. Some want a README focused on installation, others want use-case examples, others want GIFs showing it in action. I ended up writing three different versions of the docs.

Schema quality matters more than you think. Glama's AAA rating pushed me to tighten up every tool's input/output schema. Better descriptions, stricter validation, more helpful error messages. This wasn't busywork — it directly improved how well AI clients used the tools.

The npm package name matters. I went with @lofder/dsers-mcp-product under my scope. If I could redo it, I'd think harder about discoverability. People search for "mcp dropshipping" or "dsers mcp", not my username.

Cross-platform testing is a time sink. Claude Desktop on macOS, Cursor, and various web-based MCP clients all have slightly different behaviors. stdio transport works everywhere but SSE had quirks in some clients. Budget time for this.

What Actually Changed in My Workflow

Before this project, importing 20 products and pushing them to two stores was a full evening's work. Now it's a conversation:

"Import the top 5 products from [category], skip anything under 4 stars, apply 2.5x markup for the US store and 3x for the EU store, and push them."

Claude calls the tools, shows me previews, validates against store rules, and pushes. I review and approve. The whole thing takes maybe 15 minutes, and I'm mostly just reading summaries and saying "yes."
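For the pricing part, the markup rule itself is trivial. Here's a toy version; rounding up to a .99 ending is a common dropshipping convention and my assumption here, not something DSers prescribes:

```typescript
// Hypothetical per-store markup rule: multiply the supplier cost,
// then round up to the nearest x.99 "psychological" price point.
function applyMarkup(costUsd: number, multiplier: number): number {
  const raw = costUsd * multiplier;
  return Math.max(0.99, Math.ceil(raw) - 0.01); // e.g. 4.20 * 2.5 → 10.99
}
```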

Is it perfect? No. Sometimes the AI picks weird variants to feature. Sometimes the pricing logic needs a nudge. But it's a 10x improvement over clicking through web UIs.

Takeaways for MCP Server Builders

If you're thinking about building an MCP server, here's what I'd tell you:

  1. Start with the workflow, not the API. Map out what a human actually does, step by step. That's your tool list.

  2. Separate reads from writes. Let the AI gather info freely but gate mutations behind explicit tools.

  3. Invest in tool descriptions. The model reads them to decide which tool to call. Vague descriptions → wrong tool calls → bad UX.

  4. Ship to multiple directories early. Each platform has different audiences. Glama's scoring system alone made my server better.

  5. TypeScript + the official SDK is the path of least resistance. It works. Don't fight it.

The code is open source at github.com/lofder/dsers-mcp-product. If you're in the dropshipping space and want to try it, npx @lofder/dsers-mcp-product gets you running. Issues and PRs welcome.

And if you're building MCP servers for other e-commerce workflows, I'd genuinely love to hear about it. This ecosystem is still early and there's a lot of low-hanging fruit.


Find me on GitHub or my blog.
