# Why We Built NeuroLink: Making AI Development Practically Free
How a fintech company processing millions of payments ended up building the universal AI SDK—and why we open-sourced it.
## The Problem We Couldn't Ignore
At Juspay, we process millions of payments daily across India and Southeast Asia. When you're moving that much money, you don't get to experiment with "nice-to-have" AI features. Every integration has to work, scale, and comply with strict financial regulations.
In 2023, we started integrating AI across our products:
- HyperSDK: AI-powered payment error detection and recovery suggestions
- Breeze: One-click checkout with intelligent fraud scoring
- Euler: AI-assisted merchant analytics and anomaly detection
- Lighthouse: Automated alert triaging and root cause analysis
Each product team started its AI integration differently. One team used the OpenAI SDK. Another tried Anthropic. A third experimented with Google's Gemini. By Q2 2024, we had seven different AI integration patterns across our codebase.
Here's what that looked like in practice:
```typescript
// Team A's OpenAI integration
import OpenAI from "openai";

// Team B's Anthropic integration
import Anthropic from "@anthropic-ai/sdk";

// Team C's Bedrock integration (for compliance)
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";

// Team D's Vertex integration (for PDF processing)
import { VertexAI } from "@google-cloud/vertexai";
```
Four teams. Four SDKs. Four different error handling patterns. Four different streaming implementations. Four different authentication flows.
And the kicker? They were all doing fundamentally the same thing: sending text to an LLM and getting text back.
## The Cost of Fragmentation
Our infrastructure team started seeing the pain first:
### Credential Sprawl
Every SDK needed its own API key management. Some used environment variables. Others needed credential files. Bedrock required IAM roles. Vertex needed service account JSON.
Our secrets management system wasn't designed for "one key per AI provider per service." We had API keys scattered across AWS Secrets Manager, HashiCorp Vault, and (we're not proud of this) a few hardcoded in environment configs that we had to rotate in a panic.
### Observability Nightmares
Want to know your total AI spend across all providers? Good luck. Each SDK had its own way of exposing token counts. Some didn't expose them at all. We ended up building a Frankenstein monitoring dashboard that queried four different APIs and tried to normalize the data.
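To make the normalization problem concrete, here is an illustrative sketch (not NeuroLink's actual internals) of the kind of adapter that dashboard needed: each provider reports token usage under different field names, and the monitoring layer has to map them into one shape. The per-provider field names below mirror the general shape of each SDK's usage object.

```typescript
// One normalized shape for token accounting across all providers.
type NormalizedUsage = { inputTokens: number; outputTokens: number };

// Map each provider's usage object into the normalized shape.
// OpenAI-style responses report { prompt_tokens, completion_tokens };
// Anthropic-style responses report { input_tokens, output_tokens }.
function normalizeUsage(
  provider: "openai" | "anthropic",
  usage: Record<string, number>,
): NormalizedUsage {
  switch (provider) {
    case "openai":
      return {
        inputTokens: usage.prompt_tokens,
        outputTokens: usage.completion_tokens,
      };
    case "anthropic":
      return {
        inputTokens: usage.input_tokens,
        outputTokens: usage.output_tokens,
      };
  }
}
```

Multiply this by streaming responses, embeddings, and providers that omit usage entirely, and the dashboard becomes a maintenance burden of its own.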
When Claude went down for 20 minutes in March 2024, we didn't even know which services were affected because our alerting was fragmented by SDK, not unified by function.
### The Onboarding Tax
New engineers joining AI-related projects needed to learn the quirks of whichever SDK that team had chosen. "Oh, you're working on Lighthouse? That's the Anthropic SDK. Here's the 12-page internal doc on how we handle streaming errors."
We were spending more time training people on SDK specifics than on AI concepts that actually mattered.
### Provider Lock-In Anxiety
Every architectural decision came with a haunting question: "What if we need to switch providers later?"
OpenAI had an outage. Anthropic changed their API. Gemini launched a feature we needed. Each time, teams hesitated because switching meant rewriting integration code, retesting error handling, and retraining the team.
We weren't choosing the best AI for the job. We were choosing the AI that would minimize migration work.
## The Internal Project That Changed Everything
In June 2024, a small team of three engineers got permission to build something experimental: a unified AI client that could route to any provider through a single, consistent API.
The requirements were simple:
- One import regardless of which provider you used
- Identical error handling across all providers
- Automatic failover when a provider went down
- Cost optimization without code changes
- Full TypeScript safety with IntelliSense support
We called it "NeuroLink"—the idea being that AI intelligence flows like signals through a nervous system, and we needed a unified layer to carry those signals wherever they needed to go.
## The Architecture Decisions That Mattered

### TypeScript-First (Not TypeScript-Compatible)
Most AI SDKs are written in Python first, with TypeScript bindings added later. The types are often loose. The streaming interfaces feel bolted on.
We built NeuroLink in TypeScript from day one:
```typescript
// Everything is fully typed
const result = await neurolink.generate({
  input: { text: "Hello" },
  provider: "anthropic",
  model: "claude-3-5-sonnet-20241022", // Autocomplete shows all available models
});

// result is fully typed - content, token counts, finish reason
console.log(result.content);
console.log(result.usage?.inputTokens);
```
No `any` types. No "check the documentation for the response shape." If it compiles, it works.
### Provider-Agnostic by Design
We didn't build an "OpenAI client with fallback." We built a unified protocol that normalizes every provider into a common interface:
```typescript
// The same code works with any provider
await neurolink.generate({
  input: { text: "Analyze this" },
  provider: "openai", // GPT-4o
});

await neurolink.generate({
  input: { text: "Analyze this" },
  provider: "anthropic", // Claude
});

await neurolink.generate({
  input: { text: "Analyze this" },
  provider: "vertex", // Gemini
});
```
The differences between providers (message format, function calling syntax, error structures) are handled internally. Your code stays clean.
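As one concrete example of what "handled internally" means, consider system prompts: OpenAI-style chat APIs take the system prompt inline in the `messages` array, while Anthropic-style APIs expect it as a separate top-level `system` field. A hypothetical sketch of that normalization, with function names of our own invention:

```typescript
// A unified message type, independent of any provider.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// OpenAI-style bodies keep system messages inline in `messages`.
function toOpenAIShape(messages: ChatMessage[]) {
  return { messages };
}

// Anthropic-style bodies hoist system messages into a top-level field.
function toAnthropicShape(messages: ChatMessage[]) {
  return {
    system: messages
      .filter((m) => m.role === "system")
      .map((m) => m.content)
      .join("\n"),
    messages: messages.filter((m) => m.role !== "system"),
  };
}
```

Multiply this pattern across function-calling schemas, streaming chunk formats, and error envelopes, and the value of doing it once in a shared layer becomes clear.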
### MCP Native from the Start
When we learned about the Model Context Protocol (MCP), we realized it was the missing piece. AI tools shouldn't be tied to a specific provider. They should be infrastructure that any AI can use.
We built MCP support directly into the core:
```typescript
// Add GitHub as a tool - works with ANY provider
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// Claude can use it
await neurolink.generate({
  input: { text: "Create a GitHub issue" },
  provider: "anthropic",
});

// So can GPT-4
await neurolink.generate({
  input: { text: "Create a GitHub issue" },
  provider: "openai",
});
```
Tools became portable. Teams could share MCP servers across projects without worrying about which LLM was being used.
### Intelligent Orchestration
We didn't want engineers to hardcode provider choices. We wanted the system to be smart:
```typescript
const neurolink = new NeuroLink({
  enableOrchestration: true,
});

// NeuroLink automatically selects the best provider
// based on cost, availability, and task complexity
const result = await neurolink.generate({
  input: { text: "Summarize this legal document" },
  // No provider specified - intelligent routing
});
```
The orchestration layer considers:
- Cost: Use cheaper models for simple tasks
- Capability: Route PDF processing to providers with native support
- Availability: Fail over automatically during outages
- Latency: Choose the fastest provider for real-time features
Engineers stopped thinking about "which provider" and started thinking about "what task."
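A toy sketch of what such routing can look like (not NeuroLink's actual algorithm; all names and numbers are illustrative): drop unavailable providers, then rank by latency for real-time tasks and by cost otherwise.

```typescript
// Illustrative snapshot of a provider's routing-relevant state.
interface ProviderStatus {
  name: string;
  costPer1kTokens: number; // USD, illustrative
  p50LatencyMs: number;
  available: boolean;
}

// Pick a provider: availability filter first, then a single-criterion rank.
function pickProvider(candidates: ProviderStatus[], realtime: boolean): string {
  const up = candidates.filter((c) => c.available);
  if (up.length === 0) throw new Error("no providers available");
  const score = (c: ProviderStatus) =>
    realtime ? c.p50LatencyMs : c.costPer1kTokens;
  return up.reduce((best, c) => (score(c) < score(best) ? c : best)).name;
}
```

A production orchestrator weighs more signals (task complexity, capability, rate limits), but the shape is the same: selection logic lives in one place instead of being hardcoded in every call site.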
## From Internal Tool to Open Source
By August 2024, NeuroLink was powering AI features across all Juspay products. New integrations that used to take 2-3 weeks were taking 2-3 hours. The math was undeniable.
But we kept thinking: "Every company building with AI is facing this same fragmentation problem."
The decision to open-source wasn't just about being good open-source citizens (though that mattered). It was about creating a standard. If we wanted to hire engineers who already knew NeuroLink, we needed to release it. If we wanted vendors to integrate with our tooling, we needed to be open.
In September 2024, we released NeuroLink on GitHub under the MIT license.
## The Impact: Before and After
Here's what changed at Juspay after NeuroLink became our standard:
| Metric | Before | After |
|---|---|---|
| New AI integration time | 2-3 weeks | 2-3 hours |
| Lines of integration code per feature | 500+ | ~50 |
| Provider switch cost | Full rewrite | 1 parameter change |
| Credential management | 7 different systems | 1 unified config |
| Onboarding time | 3 days (SDK training) | 30 minutes |
| Production incidents (AI-related) | 12/quarter | 2/quarter |
The incident reduction was the surprise benefit. When you have one error handling pattern instead of seven, you get really good at handling those errors.
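A minimal sketch of the pattern a unified error model makes cheap (names are our own, for illustration): because every provider fails the same way, failover is a single loop rather than seven bespoke try/catch idioms.

```typescript
// Stand-in for any provider-backed generate function.
type Generate = (prompt: string) => Promise<string>;

// Try each provider in priority order; fall through on failure.
async function withFailover(providers: Generate[], prompt: string): Promise<string> {
  let lastError: unknown = new Error("no providers configured");
  for (const generate of providers) {
    try {
      return await generate(prompt);
    } catch (err) {
      lastError = err; // one error shape means one retry/failover path
    }
  }
  throw lastError;
}
```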
## The Vision: AI Should Be Infrastructure, Not Integration
We're building toward a future where AI is as easy to use as any other infrastructure service.
Think about databases. You don't import pg-sdk, mysql-sdk, and mongo-sdk in the same project. You use an ORM or a query builder that abstracts the differences. You choose PostgreSQL or MySQL based on your needs, not based on which SDK you prefer.
AI should work the same way. The provider is an implementation detail. Your code should focus on the task, not the transport layer.
NeuroLink is our step toward that future:
- 13+ providers unified under one API
- 58+ MCP tools that work everywhere
- TypeScript-first design for developer confidence
- Production-ready features like Redis memory and HITL workflows
- Cost optimization that happens automatically
## Try What We Built
```bash
# Install and setup in under 5 minutes
npm install @juspay/neurolink
npx @juspay/neurolink setup

# Generate with automatic provider selection
npx @juspay/neurolink generate "Hello from NeuroLink"
```

Or use it in your TypeScript project:

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Your prompt here" },
});
```
From weeks of integration work to hours. From SDK complexity to clean abstraction. From provider lock-in to complete flexibility.
That's why we built NeuroLink. And that's why we think you'll want to use it.
NeuroLink — The Universal AI SDK for TypeScript
- GitHub: github.com/juspay/neurolink
- Install: `npm install @juspay/neurolink`
- Docs: docs.neurolink.ink
- Blog: blog.neurolink.ink — 150+ technical articles