productsights
Why Product Insights Belong in Your IDE


You are three hours into debugging a payment processing edge case. You have four windows open: your editor, the Stripe dashboard, your APM tool, the relevant GitHub issue. Then your PM pings you on Slack: "Hey, before you ship that fix, can you check if users are also reporting the retry logic failing?" Now you need window five -- the product feedback dashboard you log into twice a quarter.

You scan it, can't find the right filter, give up, and reply "I'll just fix what I can see in the logs." The fix ships. It addresses the symptom your monitoring caught, not the three other related issues that 40 customers reported last month.

This is the context-switching tax. Not the five seconds it takes to open a new tab, but the information you never look up because the friction is just high enough to skip it.

The problem with product feedback silos

In most SaaS teams, product feedback lives in a completely separate universe from the code that's supposed to address it. PMs aggregate it in spreadsheets, Productboard, or Notion. Engineers get a distilled version -- a Jira ticket that says "Users are having trouble with checkout" with no signal about how many users, which specific flows, or what they actually said.

The handoff is lossy by design. A PM reads 200 pieces of feedback, synthesizes it into a one-paragraph ticket description, and the engineer implements based on that summary. The raw signal -- the exact words customers used, the severity distribution, the related complaints -- gets compressed out.

This is not a people problem. It is an architecture problem. The feedback data and the development environment are disconnected systems. Engineers would use product insights if they were accessible without a workflow interrupt. They just never are.

What if feedback came to you?

Think about how you use your language server. You don't open a separate application to check type definitions. You hover over a symbol and the information appears. The data is there because the tooling brings it to your context.

Product intelligence should work the same way. When you are about to refactor the onboarding flow, you should be able to ask -- right in your editor -- "What are users saying about onboarding?" and get a structured answer in seconds. Not a Slack thread. Not a dashboard URL. An inline response with real data.

This is not hypothetical. The Model Context Protocol makes it possible today.

MCP as the bridge

If you have used Claude Code, Cursor, or GitHub Copilot in VS Code, you already interact with AI assistants that can call external tools. The Model Context Protocol (MCP) is an open standard that defines how these AI assistants discover and invoke tools exposed by external servers.

The architecture is straightforward: an MCP server is a lightweight process that exposes a set of typed tool definitions over stdio or HTTP. The AI assistant in your IDE discovers these tools, and when a query matches a tool's purpose, it calls the tool and incorporates the result into its response.
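In code terms, a tool definition boils down to a name, a description the assistant can match queries against, and a JSON Schema for the arguments. The interface and dispatcher below are an illustrative sketch, not the official MCP SDK types, and the registered tool is a stub mirroring the search_insights example later in this post:

```typescript
// Illustrative shape of an MCP-style tool definition (not the official SDK types).
interface ToolDefinition {
  name: string;
  description: string; // the assistant matches user queries against this
  inputSchema: object; // JSON Schema describing the tool's arguments
  handler: (args: Record<string, unknown>) => Promise<unknown>;
}

// A registry the server exposes for discovery, plus a dispatcher for invocation.
const tools = new Map<string, ToolDefinition>();

function registerTool(tool: ToolDefinition): void {
  tools.set(tool.name, tool);
}

async function callTool(name: string, args: Record<string, unknown>): Promise<unknown> {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}

// Hypothetical registration with a stub handler that echoes its input.
registerTool({
  name: "search_insights",
  description: "Search product feedback for a topic or feature area",
  inputSchema: { type: "object", properties: { query: { type: "string" } } },
  handler: async (args) => ({ results: [], total: 0, query: args.query }),
});
```

The real server's handlers would call the feedback API instead of returning stubs, but the discovery-then-dispatch shape is the whole trick: the assistant never needs a custom UI, only the registry.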

This matters for product intelligence because it eliminates the integration gap. Instead of building a bespoke VS Code extension with a custom UI, an MCP server exposes structured data that any MCP-compatible AI assistant can consume. One server, every IDE.

Concrete examples: what this looks like in practice

ProductSights ships an MCP server that exposes your product feedback data as tool calls. Here is what the tools look like and what they return.

Searching for insights about a specific feature

> search_insights("checkout crashes")
{
  "results": [
    {
      "summary": "Checkout page crashes on Safari 17.2 when applying discount code",
      "category": "Bug Report",
      "sentiment": "negative",
      "priority": 87,
      "source": "Intercom",
      "date": "2026-03-19",
      "companies": ["Acme Corp", "Northwind"]
    },
    {
      "summary": "Crash during checkout when switching payment methods on mobile",
      "category": "Bug Report",
      "sentiment": "negative",
      "priority": 79,
      "source": "Zendesk",
      "date": "2026-03-17",
      "companies": ["Contoso"]
    }
  ],
  "total": 12
}

Twelve reports, not the two your error tracker caught. Now you know the scope before you write the fix.
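Because the response is structured JSON rather than dashboard pixels, the blast radius can be computed mechanically. A sketch using the field names from the response above (the `Insight` interface and `summarizeScope` helper are illustrative, not part of the shipped server):

```typescript
interface Insight {
  summary: string;
  priority: number; // 0-100 score, as in the example response
  companies: string[];
}

// Summarize scope: distinct companies affected and the highest-priority report.
function summarizeScope(results: Insight[]): { companies: number; maxPriority: number } {
  const companies = new Set<string>();
  let maxPriority = 0;
  for (const r of results) {
    r.companies.forEach((c) => companies.add(c));
    maxPriority = Math.max(maxPriority, r.priority);
  }
  return { companies: companies.size, maxPriority };
}
```

Fed the two example results above, this returns `{ companies: 3, maxPriority: 87 }` -- the kind of one-line summary an assistant can drop straight into a PR description.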

Getting top problems across the product

> get_top_problems()
{
  "clusters": [
    {
      "title": "Checkout crashes on Safari and mobile browsers",
      "insightCount": 34,
      "companiesAffected": 18,
      "avgPriority": 82,
      "trend": "increasing",
      "status": "investigating"
    },
    {
      "title": "Onboarding wizard skips step 3 intermittently",
      "insightCount": 21,
      "companiesAffected": 12,
      "avgPriority": 74,
      "trend": "stable",
      "status": "proposed"
    },
    {
      "title": "CSV export times out for large datasets",
      "insightCount": 15,
      "companiesAffected": 9,
      "avgPriority": 68,
      "trend": "decreasing",
      "status": "accepted"
    }
  ]
}

ProductSights automatically groups similar feedback into clusters using vector embeddings. Each cluster represents a distinct problem with a count of how many customers reported it. This is the same data your PM looks at in the dashboard, now available inline in your editor.
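ProductSights' actual clustering pipeline is not public, but the core idea of embedding-based grouping can be as simple as cosine similarity with a threshold. A toy greedy version, purely to illustrate the mechanism:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy clustering: each item joins the first cluster whose seed is similar enough,
// otherwise it starts a new cluster. Returns clusters as lists of item indices.
function cluster(embeddings: number[][], threshold = 0.85): number[][] {
  const clusters: number[][] = [];
  for (let i = 0; i < embeddings.length; i++) {
    const home = clusters.find((c) => cosine(embeddings[c[0]], embeddings[i]) >= threshold);
    if (home) home.push(i);
    else clusters.push([i]);
  }
  return clusters;
}
```

Production systems typically use centroid updates or density-based methods rather than this first-match greedy pass, but the payoff is the same: "checkout crashes on Safari" and "crash during checkout on mobile" land in one cluster with one count.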

Checking stats

> get_insight_stats()
{
  "totalInsights": 1847,
  "thisWeek": 93,
  "topCategory": "Bug Report",
  "avgSentiment": -0.32,
  "topSource": "Intercom"
}

Finding related feedback for a feature area

> find_related_insights("onboarding")
{
  "results": [
    {
      "summary": "New onboarding flow is great but skips team invitation step",
      "category": "UX Issue",
      "sentiment": "neutral",
      "priority": 61
    },
    {
      "summary": "Would love a guided product tour after onboarding",
      "category": "Feature Request",
      "sentiment": "positive",
      "priority": 55
    },
    {
      "summary": "Onboarding email sequence doesn't mention API key setup",
      "category": "Feature Request",
      "sentiment": "neutral",
      "priority": 48
    }
  ],
  "total": 8
}

Getting the latest incoming feedback

> get_recent_insights()

Returns the most recent insights with optional category, sentiment, and date filters. Useful for a quick "what came in today" check before standup.
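Those optional filters compose as simple predicates over the same insight shape. A sketch -- the parameter names here are assumptions mirroring the fields in the earlier responses, not the server's documented signature:

```typescript
interface Insight {
  summary: string;
  category: string;
  sentiment: "positive" | "neutral" | "negative";
  date: string; // ISO date, e.g. "2026-03-19"
}

interface Filters {
  category?: string;
  sentiment?: Insight["sentiment"];
  since?: string; // inclusive lower bound, ISO date
}

// Apply optional filters; any filter left undefined matches everything.
// ISO date strings compare correctly with plain lexicographic ordering.
function filterInsights(insights: Insight[], f: Filters): Insight[] {
  return insights.filter(
    (i) =>
      (f.category === undefined || i.category === f.category) &&
      (f.sentiment === undefined || i.sentiment === f.sentiment) &&
      (f.since === undefined || i.date >= f.since)
  );
}
```

The "what came in today" check before standup is just `{ since: <today> }` with no other filters set.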

Real workflow scenarios

Before a sprint: scoping with signal

You are picking up a ticket to rework the settings page navigation. Before writing code, you ask your AI assistant:

"Find related insights about the settings page"

The MCP server returns 14 pieces of feedback. Seven mention confusion about finding billing settings. Three mention the notification preferences being buried. Two are about settings not persisting on mobile. Now you know which part of "rework settings navigation" actually matters to users and can scope accordingly. Your PM did not have to distill this for you -- you pulled the raw signal yourself.

During debugging: confirming the blast radius

You find a race condition in the webhook handler. Before deciding whether this is a quick fix or a P1, you ask:

"Search insights for webhook failures"

Twenty-three reports from 11 companies. Trend: increasing. That changes your priority call. You flag it in your PR description with the actual numbers.

In PR review: validating impact

A teammate submits a PR that refactors the CSV export. You want to know if this addresses real user pain:

"Get top problems related to CSV export"

Fifteen reports about export timeouts. The PR adds pagination to the export query. The review context just got a lot richer.

How to set it up

Installation takes about two minutes.

1. Get an API key

In the ProductSights dashboard, go to Settings > API Keys and create a key. Copy it.

2. Configure your IDE

Claude Code (~/.claude/claude_code_config.json):

{
  "mcpServers": {
    "productsights": {
      "command": "npx",
      "args": ["@productsights/mcp-server"],
      "env": {
        "PRODUCTSIGHTS_API_KEY": "ps_your_key_here"
      }
    }
  }
}

Cursor (.cursor/mcp.json in your project root):

{
  "mcpServers": {
    "productsights": {
      "command": "npx",
      "args": ["@productsights/mcp-server"],
      "env": {
        "PRODUCTSIGHTS_API_KEY": "ps_your_key_here"
      }
    }
  }
}

VS Code (.vscode/settings.json):

{
  "mcp": {
    "servers": {
      "productsights": {
        "command": "npx",
        "args": ["@productsights/mcp-server"],
        "env": {
          "PRODUCTSIGHTS_API_KEY": "ps_your_key_here"
        }
      }
    }
  }
}

3. Start using it

Once connected, your AI assistant automatically discovers the available tools. Ask a natural language question about your product feedback and the assistant calls the appropriate MCP tool.

No new UI to learn. No dashboard to keep open. The data flows through the conversational interface you already use.

What this means for PM-engineering collaboration

The traditional feedback loop is: users report issues, support logs them, PM triages and synthesizes, PM writes tickets, engineers implement based on the ticket. Each handoff compresses signal.

With product insights in the IDE, engineers can pull raw signal directly. This does not replace the PM -- it changes the collaboration model. PMs still set priorities and own the roadmap. But engineers can independently verify assumptions, check if a bug fix addresses the most-reported variant of a problem, and contribute signal back ("I found 23 reports about this while debugging, should we escalate?").

The teams that ship the right things are the ones where everyone -- not just PMs -- has access to what users are saying. Bringing that data into the IDE is the lowest-friction way to make that happen.

ProductSights clusters automatically group feedback using vector embeddings, track execution status from proposed through shipped, and measure before/after impact. The MCP server is the read path into all of that from your development environment.

Try it

npx @productsights/mcp-server

Full docs: ProductSights MCP Server documentation

If you are already using an AI coding assistant, adding product intelligence is one config block. The feedback your PM has been manually relaying to you is now one question away.
