My previous post covered how FORMLOVA classifies 127 MCP tools into 4 safety levels. That was about making a single MCP server safe. This post is about what happens when multiple MCP servers share the same client.
The premise is simple: if a user has FORMLOVA and Slack and Linear and GitHub all connected to the same MCP client, the LLM can pass data between them. A form response becomes a Slack message becomes a Linear issue becomes a GitHub PR. No webhooks, no Zapier, no integration code.
This is not theoretical. Every service mentioned here has a production MCP server. I tested the cross-service flows.
The Architecture: There Is No Integration
Traditional integrations look like this:
```
Form Service --webhook--> Middleware (Zapier/Make) --API--> Slack
                                                   --API--> HubSpot
                                                   --API--> Linear
```
You build connectors. You maintain them. When an API changes, your integration breaks.
MCP cross-service orchestration looks like this:
```
User: "Post the new bug report to Slack and create a Linear issue"

LLM:
1. Calls FORMLOVA MCP: get_responses(form_id, limit=1)
2. Calls Slack MCP: post_message(channel="#bugs", text=formatted_response)
3. Calls Linear MCP: create_issue(title=bug_title, description=bug_details)
```
The LLM is the integration layer. Each MCP call is independent. The services do not know about each other. The LLM reads the output of one call and uses it as input for the next.
This means: FORMLOVA does not need a Slack integration. Or a Linear integration. Or a HubSpot integration. The user's MCP client handles the orchestration, and each service only needs to expose its own tools well.
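In TypeScript terms, the client-side orchestration can be sketched as each tool's structured output feeding the next call. This is a minimal sketch, not a real MCP client: `callTool` is a hypothetical stand-in for a client's dispatch method, and the stubbed return values exist only to make the chain runnable. Tool names follow the examples above.

```typescript
// Hypothetical stand-in for an MCP client's tool-call dispatch.
// A real client would route each call to a different connected server.
type ToolResult = Record<string, unknown>;

function callTool(server: string, tool: string, args: ToolResult): ToolResult {
  // Stubbed responses so the chain below is runnable.
  if (tool === "get_responses") {
    return { responses: [{ id: "r_1", title: "Login fails on Safari", details: "Steps: ..." }] };
  }
  return { ok: true, received: args };
}

// The chain from the example: each step reads the previous step's output.
function triageLatestBug(formId: string): ToolResult {
  const { responses } = callTool("formlova", "get_responses", { form_id: formId, limit: 1 }) as {
    responses: { id: string; title: string; details: string }[];
  };
  const bug = responses[0];

  callTool("slack", "post_message", { channel: "#bugs", text: `New bug: ${bug.title}` });
  return callTool("linear", "create_issue", { title: bug.title, description: bug.details });
}
```

The point of the sketch: no step knows about the others. The glue is whatever reads one result and builds the next call's arguments, which in MCP is the LLM, not code you wrote.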
What Actually Works Today
I tested cross-service flows with the following MCP servers, all of which have official production implementations:
| Service | MCP Server | Key Capabilities |
|---|---|---|
| Slack | Official (GA Feb 2026) | Search, post messages, manage channels |
| Notion | Official (hosted) | Read/write pages and databases |
| Linear | Official (remote) | Create/update issues, projects, milestones |
| Google Workspace | Official | Calendar, Sheets, Gmail, Drive, Docs |
| HubSpot | Official | Contacts, deals, lists, workflows |
| Salesforce | Official | CRUD on leads, contacts, opportunities |
| GitHub | Official | Repos, issues, PRs, branches, file ops |
| Shopify | Official (default on all stores) | Products, orders, customers, inventory |
| Stripe | Official | Payments, refunds, invoices, subscriptions |
| Asana | Official | Tasks, projects, members |
| Atlassian | Official | Jira issues, Confluence pages |
| Twilio | Official (Alpha) | SMS, phone calls |
Each of these servers exposes tools that an MCP client can call. When multiple servers are active in the same session, the LLM can chain them.
Five Patterns That Actually Work
Pattern 1: Bug Report to Code Fix
FORMLOVA + GitHub + Linear + Slack
A user submits a bug report through a form. The response contains: description, reproduction steps, severity, and environment info.
From a single conversation:
1. `get_responses` -- pull the latest bug report from FORMLOVA
2. `search_code` -- GitHub MCP searches the repo for the relevant code path
3. `create_issue` -- Linear MCP creates a prioritized issue with reproduction steps
4. `post_message` -- Slack MCP posts to #bugs with the issue link
5. `create_branch` -- GitHub MCP creates a fix branch
6. `push_files` -- GitHub MCP commits the fix
7. `create_pull_request` -- GitHub MCP opens a PR referencing the Linear issue
Steps 5-7 are the dangerous part. The LLM is writing and committing code based on a form response. For simple bugs -- typos, config errors, obvious logic fixes -- this works remarkably well. For complex bugs, steps 1-4 alone save significant triage time.
The important nuance: the LLM decides at each step whether to continue. If the code search in step 2 returns ambiguous results, it can stop and ask the user. This is not a rigid automation pipeline. It is a conversational workflow where the human stays in the loop.
Pattern 2: Lead Capture to Sales Pipeline
FORMLOVA + HubSpot + Slack + Google Calendar
A prospect fills out a demo request form. The response includes: company name, role, use case, and preferred meeting time.
1. `get_responses` -- pull the demo request from FORMLOVA
2. `create_contact` -- HubSpot MCP creates or updates the contact with company and role
3. `create_deal` -- HubSpot MCP creates a deal in the sales pipeline
4. `post_message` -- Slack MCP posts to #sales: "New demo request from [Company], [Role]"
5. `create_event` -- Google Calendar MCP books the meeting at the requested time
Each step uses the output of previous steps. The HubSpot contact ID from step 2 gets referenced in the deal creation in step 3. The meeting link from step 5 could be sent back through FORMLOVA's auto-reply email.
Pattern 3: NPS Feedback Loop
FORMLOVA + HubSpot + Slack + Linear
An NPS survey response comes in. The LLM reads the score and branches:
Score 9-10 (Promoter):
1. Update HubSpot contact: latest_nps = 10
2. FORMLOVA sends "Thank you" email with review request link
Score 7-8 (Passive):
1. Update HubSpot contact: latest_nps = 7
2. No further action
Score 0-6 (Detractor):
1. Update HubSpot contact: latest_nps = 3
2. Slack #cs-alert: "Detractor alert: [Name], NPS 3, reason: [verbatim]"
3. Linear: create follow-up task assigned to CS team
4. FORMLOVA sends "We hear you" email
The branching logic lives in the LLM, not in FORMLOVA's workflow engine. This means the routing rules can be as nuanced as natural language allows. "If the score is below 4 AND the free-text mentions billing, route to #billing-issues instead of #cs-alert" -- that is a single-sentence instruction, not a condition-builder configuration.
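For contrast, here is what that routing looks like written out as code rather than as a sentence. This is a sketch of the decisions the LLM makes, not anything FORMLOVA ships: the action shape and field names are illustrative, and the billing rule mirrors the one-sentence instruction above.

```typescript
// Illustrative shape for the actions a detractor/passive/promoter branch produces.
interface NpsAction {
  hubspotField: { latest_nps: number };
  slackChannel?: string;
  createLinearTask?: boolean;
  email?: "thank_you" | "we_hear_you";
}

function routeNps(score: number, verbatim: string): NpsAction {
  // Promoter: thank-you email with review request.
  if (score >= 9) return { hubspotField: { latest_nps: score }, email: "thank_you" };
  // Passive: record the score, no further action.
  if (score >= 7) return { hubspotField: { latest_nps: score } };
  // Detractor: billing complaints below 4 go to a different channel.
  const channel = score < 4 && /billing/i.test(verbatim) ? "#billing-issues" : "#cs-alert";
  return {
    hubspotField: { latest_nps: score },
    slackChannel: channel,
    createLinearTask: true,
    email: "we_hear_you",
  };
}
```

The code version is rigid: every new rule is another branch to write and deploy. The natural-language version is the same logic expressed in one sentence to the LLM, which is the trade the post is describing.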
Pattern 4: Event Operations Pipeline
FORMLOVA + Google Calendar + Slack + Notion
An event registration form receives a submission:
1. `get_responses` -- pull the registration
2. `create_event` -- Google Calendar adds the event to the attendee's calendar
3. Notion MCP adds a row to the attendee database with name, email, dietary preferences
4. When capacity is reached, Slack gets notified: "Event X is full. 150/150 registered."
5. Three days before the event, FORMLOVA sends reminder emails (this part is FORMLOVA-native, no MCP cross-service needed)
6. After the event, FORMLOVA sends a follow-up survey form
7. Survey results get summarized and posted to a Notion retrospective page
Steps 2-4 require cross-service orchestration. Steps 5-7 mix FORMLOVA-native automation with cross-service calls. The user does not need to know the difference.
Pattern 5: Incident Response
FORMLOVA + Jira + Slack + Notion + GitHub
An incident report form captures: timestamp, severity, affected service, symptoms.
1. `get_responses` -- pull the incident report
2. Jira MCP creates a P1 ticket with all fields mapped
3. Slack #incidents gets an alert with the Jira link and severity
4. GitHub MCP searches recent commits for changes to the affected service
5. If a likely culprit commit is found, GitHub MCP creates a revert branch
6. After resolution, Notion MCP creates a postmortem page from a template with timeline, root cause, and action items pre-filled
Step 4 is where this gets interesting. The LLM can correlate "auth service is returning 500s" with "commits touching src/auth/ in the last 24 hours" and surface the likely cause. It cannot always fix it, but it can narrow the search space dramatically.
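The correlation in step 4 can be sketched as a simple filter: recent commits, restricted to those touching the affected service's path within a time window. The commit shape and the path convention here are assumptions for illustration; in practice the LLM does this filtering over whatever the GitHub MCP search tool returns.

```typescript
// Illustrative commit shape; a real GitHub MCP result carries more fields.
interface Commit {
  sha: string;
  files: string[];
  timestamp: number; // epoch ms
}

// Narrow the search space: commits inside the window that touched servicePath.
function likelyCulprits(commits: Commit[], servicePath: string, windowMs: number, now: number): Commit[] {
  return commits.filter(
    (c) => now - c.timestamp <= windowMs && c.files.some((f) => f.startsWith(servicePath))
  );
}
```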
What Cannot Be Automated (Yet)
I want to be clear about the boundary. These cross-service flows are chat-initiated, not event-driven. The user says "process this bug report" and the LLM executes the chain. The form response does not automatically trigger the chain without human involvement.
FORMLOVA's native workflow engine supports automatic triggers (response.created, capacity.reached, deadline.approaching), but those actions are limited to: send_email, update_field, and webhook. The engine cannot call Slack MCP or GitHub MCP directly, because the workflow engine is server-side code, not an MCP client.
For fully automatic cross-service flows, you still need either:
- A webhook from FORMLOVA to a middleware (Zapier, Make, n8n) that calls the other services' APIs
- A polling setup where the LLM periodically checks for new responses
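The polling option can be sketched as: remember the last response ID seen, and hand anything newer to the LLM-driven chain. `get_responses` matches the tool named earlier; the response shape and `processChain` are placeholders for illustration.

```typescript
// Minimal response shape for the sketch.
interface FormResponse {
  id: string;
  created_at: string;
}

// One polling tick: process responses newer than lastSeenId, return the new cursor.
// If lastSeenId is not found (or null), everything is treated as new.
function pollNewResponses(
  fetchResponses: () => FormResponse[],
  lastSeenId: string | null,
  processChain: (r: FormResponse) => void
): string | null {
  const all = fetchResponses();
  const startIdx = lastSeenId ? all.findIndex((r) => r.id === lastSeenId) + 1 : 0;
  all.slice(startIdx).forEach(processChain);
  return all.length ? all[all.length - 1].id : lastSeenId;
}
```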
The honest framing: MCP cross-service orchestration today is semi-automatic. The human triggers it from chat. But the execution -- reading responses, creating issues, posting messages, opening PRs -- is fully automated once triggered.
Why This Matters for MCP Server Builders
If you are building an MCP server, your tools do not exist in isolation. Users will connect your server alongside others and expect the LLM to chain them.
This has design implications:
1. Return structured data, not just messages.
If your tool returns a plain-text success message, the LLM has nothing to pass to the next tool. If it returns structured data with IDs, URLs, and key fields, the LLM can reference those in subsequent calls.
```javascript
// Bad: the LLM cannot extract the issue ID reliably
return { text: "Issue created successfully!" };

// Good: the LLM can pass issue_id to the next tool
return {
  text: "Issue created: PROJ-142",
  issue_id: "PROJ-142",
  url: "https://linear.app/team/PROJ-142"
};
```
2. Accept flexible identifiers.
Users will paste URLs, mention names, or use partial identifiers. If your tool only accepts exact IDs, the LLM has to ask the user for the ID, breaking the flow. Accept what humans naturally provide and resolve internally.
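A minimal sketch of what "resolve internally" can look like: accept a full URL, an exact key, or a key buried in free text, and normalize to one identifier. The `PROJ-142`-style key format follows the example above; the function name and regexes are illustrative assumptions, not any real server's API.

```typescript
// Normalize whatever the user pasted to an issue key, or null if none found.
function resolveIssueKey(input: string): string | null {
  // Full URL, e.g. https://linear.app/team/PROJ-142
  const urlMatch = input.match(/\/([A-Z]+-\d+)(?:[/?#]|$)/);
  if (urlMatch) return urlMatch[1];
  // Bare key, possibly surrounded by other words.
  const keyMatch = input.match(/\b([A-Z]+-\d+)\b/);
  return keyMatch ? keyMatch[1] : null;
}
```

With resolution like this inside the tool, the LLM can forward a pasted Slack message or URL directly instead of pausing the chain to ask the user for an ID.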
3. Make tools composable, not monolithic.
A single tool that "creates a contact and sends a welcome email and adds to a list" is useful in isolation but blocks cross-service composition. Separate tools for each action let the LLM interleave your tools with other services' tools.
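The composable version can be sketched as three small tools, each returning the IDs the next call needs. The registration shape and tool bodies here are illustrative stand-ins, not a real MCP SDK API; the point is the granularity.

```typescript
type Handler = (args: Record<string, any>) => Record<string, any>;

// Instead of one create_contact_and_welcome_and_subscribe tool,
// three tools the LLM can call in any order, alone or interleaved
// with other servers' tools (e.g. a Slack post between steps).
const composableTools: Record<string, Handler> = {
  create_contact: ({ email }) => ({ contact_id: `c_${String(email)}` }),
  send_welcome_email: ({ contact_id }) => ({ sent: true, contact_id }),
  add_to_list: ({ contact_id, list }) => ({ added: true, contact_id, list }),
};
```

Each handler returns structured IDs (per implication 1 above), so the LLM can thread `contact_id` through the chain without guessing.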
The Testing Reality
Validating cross-service flows is simpler than it appears. Each MCP call is independent. If FORMLOVA-to-Slack works and Slack-to-Linear works, then FORMLOVA-to-Slack-to-Linear works. The LLM is the glue, and it handles each call the same way regardless of what came before.
What you actually need to test:
- Does each 1:1 pair work? (Your tool output -> their tool input)
- Is your tool output structured enough for the LLM to extract what it needs?
- Does the LLM maintain context across the chain? (Usually yes, within a session)
What you do not need to test:
- Every possible N-service combination
- The LLM's orchestration logic (that is the client's job, not yours)
What This Means for Form Services
Every form response is a structured data event. It has typed fields, metadata, timestamps, and context. That makes it an ideal trigger for cross-service workflows.
The form service that exposes its response data well through MCP becomes a universal trigger layer. Not because it built integrations with every other service, but because it made its data accessible to an orchestrator that can talk to anything.
I did not build a single integration. I built 127 tools that return structured data. The integrations build themselves every time a user connects another MCP server to their client.
If you are building MCP servers and thinking about cross-service composition, I would be interested to hear your approach. The ecosystem is new enough that there are no established patterns yet.
- How we handle safety for 127 tools
- Get started free | Setup guide
- Route Post-Publish Responses by Intent
- Product Hunt launch: April 15, 2026