# Cursor Hid Its AI Model — Here's How to Take Back Control with Direct API Access
**TL;DR:** Cursor shipped Composer 2 built on Kimi K2.5 (a Chinese open-source model) without disclosing it. A developer found the model ID in API traffic within 24 hours. Here's why this matters — and how to stop trusting black-box tools.
## What Happened with Cursor?
On March 19, 2026, Cursor announced Composer 2 — billing it as their own proprietary AI model for coding. The benchmarks looked impressive: 61.7 on Terminal-Bench 2.0, beating Claude Opus 4.6. Pricing was set at $0.50 per million tokens. A clear "we built this" narrative.
It lasted less than 24 hours.
Developer @fynnso was testing Cursor's OpenAI-compatible base URL when an unexpected string appeared in the API response:
```
accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast
```
That wasn't a Cursor internal name. It decoded cleanly: `kimi-k2p5` = Kimi K2.5, an open-weight model from Beijing-based Moonshot AI. `rl` = reinforcement learning fine-tuning. `0317` = a March 17 training date.
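The decoding above is mechanical enough to script. Here's a minimal sketch in Python; the field labels (`account`, `base_model`, and so on) are my own informal names, not any official schema from Cursor or Moonshot:

```python
def decode_model_id(model_id: str) -> dict:
    """Split a leaked model ID into readable parts (informal labels only)."""
    path = model_id.split("/")        # ['accounts', 'anysphere', 'models', '<name>']
    name_parts = path[-1].split("-")  # ['kimi', 'k2p5', 'rl', '0317', 's515', 'fast']
    return {
        "account": path[1],                      # org serving the model
        "base_model": "-".join(name_parts[:2]),  # 'kimi-k2p5' -> Kimi K2.5
        "training": name_parts[2],               # 'rl' = RL fine-tuning
        "date": name_parts[3],                   # '0317' = March 17
        "variant": "-".join(name_parts[4:]),     # remaining suffix
    }

info = decode_model_id("accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast")
print(info["account"], info["base_model"], info["training"])  # anysphere kimi-k2p5 rl
```

A few seconds of string splitting is all it took the community to unmask the model, which is exactly the point: this information leaks whether or not the vendor discloses it.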
Cursor's VP of Developer Education eventually confirmed: "Yep, Composer 2 started from an open-source base."
Cursor co-founder Aman Sanger admitted: "It was a miss to not mention the Kimi base in our blog from the start."
## Why This Matters for Developers
This isn't just about Cursor. It reveals a fundamental problem with black-box AI tools:
- You don't know what model is running your code — or your data
- Licensing compliance is hidden from you — Kimi K2.5's modified MIT license requires attribution for companies earning >$20M/month (Cursor qualifies)
- Geopolitical exposure — using a Chinese AI model without disclosure raises data sovereignty questions for enterprise users
- Pricing opacity — you're paying "Cursor rates" for a fine-tuned open-source model
The community found the model ID in under 24 hours. Imagine what else is hidden.
## The Real Solution: Direct API Access
Instead of trusting a black-box IDE to choose your models, call models directly via API. With direct API access:
- ✅ You know exactly which model you're using
- ✅ You control your data routing
- ✅ You can switch models instantly
- ✅ You pay actual model prices, not tool markups
NexaAPI gives you access to 56+ models — including Kimi K2.5, Claude, GPT, Flux, VEO3 — with full transparency about what you're calling.
## Python: Build Your Own Transparent AI Coding Assistant
````python
# pip install nexaapi
import os

from nexaapi import NexaAPI

client = NexaAPI(api_key=os.environ.get('NEXAAPI_KEY'))

# You choose the model — no hidden substitutions
def code_review(code: str, model: str = 'claude-sonnet-4-6') -> str:
    """Review code with a model YOU chose explicitly."""
    response = client.chat.completions.create(
        model=model,  # You see exactly what runs
        messages=[
            {
                'role': 'system',
                'content': 'You are an expert code reviewer. Be specific and actionable.'
            },
            {
                'role': 'user',
                'content': f'Review this code:\n\n```\n{code}\n```'
            }
        ]
    )
    return response.choices[0].message.content

# Transparent: you know it's Kimi K2.5 (the same model Cursor hid)
result = code_review(
    code="def fetch_user(id): return db.query(f'SELECT * FROM users WHERE id={id}')",
    model='kimi-k2.5'  # Using it directly — no secrets
)
print(result)

# Or use Claude — your choice, your control
result_claude = code_review(
    code="def fetch_user(id): return db.query(f'SELECT * FROM users WHERE id={id}')",
    model='claude-sonnet-4-6'
)
print(result_claude)
````
## JavaScript: Transparent Model Selection
```javascript
// npm install nexaapi
import NexaAPI from 'nexaapi';

const client = new NexaAPI({ apiKey: process.env.NEXAAPI_KEY });

// Build a transparent AI assistant — you pick the model
async function codeReview(code, model = 'claude-sonnet-4-6') {
  const response = await client.chat.completions.create({
    model, // No hidden substitutions
    messages: [
      {
        role: 'system',
        content: 'You are an expert code reviewer. Be specific and actionable.'
      },
      {
        role: 'user',
        content: `Review this code:\n\n\`\`\`\n${code}\n\`\`\``
      }
    ]
  });
  console.log(`Model used: ${model}`); // Always transparent
  return response.choices[0].message.content;
}

// Use Kimi K2.5 directly — same model Cursor used, but YOU know it
const review = await codeReview(
  "const query = `SELECT * FROM users WHERE id=${userId}`",
  'kimi-k2.5'
);
console.log(review);
```
## The Transparency Comparison
| Tool | Model Visibility | You Control Model | Data Routing | Price |
|---|---|---|---|---|
| NexaAPI | ✅ Full | ✅ Yes | ✅ Yes | Transparent |
| Cursor | ❌ Hidden (see above) | ❌ No | ❌ No | Markup |
| GitHub Copilot | ⚠️ Partial | ❌ No | ❌ No | Subscription |
## Get Started with Transparent AI
- Sign up: nexa-api.com
- Subscribe on RapidAPI: rapidapi.com/user/nexaquency
- Install the SDK: `pip install nexaapi` or `npm install nexaapi`
- Call models directly — no black boxes
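One defensive habit worth adopting after the Cursor episode: verify the `model` field that comes back in every response instead of trusting the one you sent. Here's a small sketch, assuming an OpenAI-compatible response shape (a payload with a top-level `model` key), which is the shape the examples above already rely on:

```python
def verify_model(requested: str, response: dict) -> bool:
    """Return True only if the provider reports serving the model we requested."""
    served = response.get("model", "")
    if served != requested:
        print(f"Model mismatch: requested {requested!r}, served {served!r}")
    return served == requested

# Simulated OpenAI-compatible payload; a real one comes from your API call
payload = {"model": "kimi-k2.5", "choices": [{"message": {"content": "..."}}]}
print(verify_model("kimi-k2.5", payload))  # True
```

If a tool silently swaps models, this check fails loudly instead of letting the substitution pass unnoticed.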
The Cursor incident proved one thing: when AI tools hide their models, developers lose. The fix isn't finding a better IDE — it's taking direct control of your AI stack.
Do you think AI tools should be required to disclose which models they use? Drop your thoughts below 👇