Docs and developers have always had a symbiotic relationship. Product documentation has existed to give developers what they need to use a platform: setup instructions, code snippets, sample apps, tutorials, and more. In return, developers have been empowered (or held back) by how clearly and completely that information has been delivered.
Today, development is no longer just about browsing docs and writing code by hand. With AI, it’s about interacting, prompting, and building, often directly inside the IDE. Yet the traditional relationship still holds: an LLM client is only as good as the context it’s been given.
That means you need product documentation, compatibility rules, and a way to index code examples into your IDE. Without them, even the best LLM client can miss important details. But with the right context, it doesn’t just write accurate code. It can also reason about your system.
Whether you’re just getting started, picking up a new tool, or leaning on AI to carry the load, the question is: what docs do you need to get the most from your AI?
Tip: If you’d like to follow along with the examples, you’ll need an IDE that supports MCP servers. This post uses Cursor. You can also sign up for a free Cloudinary account to try the same tools. For hands-on help, see Cloudinary MCP servers and LLM tools (Beta).
From Lookup to Execution
LLM clients haven’t just changed how we use product documentation. They’ve changed where we use it. Inside your IDE, documentation isn’t something you look up anymore; it becomes part of your workflow.
Today, developers use a variety of LLM-powered coding environments, such as Cursor, VS Code extensions, or Claude Code, but the principle is the same: the more context you give, the better the LLM’s suggestions.
That means the code snippets, design patterns, and architectural suggestions provided by the AI are pulled directly from authoritative docs, keeping completions consistent and reliable.
And with MCP servers, you can tap into agentic behavior, such as performing background or configuration tasks without breaking flow. For example, you can upload a custom font to Cloudinary directly from your IDE. When documentation is properly integrated into your IDE, execution has a higher chance of succeeding without glitches.
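In practice, that font upload can be as simple as a one-line prompt to the MCP-connected client (the file name here is hypothetical):
“Upload the font file brand-serif.woff2 from my project to my Cloudinary product environment”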
Practical Guidance: Using Docs as Context
To make an LLM client effective inside your IDE, combine the following documentation inputs. Each plays a distinct role in ensuring accuracy, efficiency, and performance:
- Reference the documentation site: Anchor your LLM client in the authoritative source of truth so its suggestions reflect not just valid syntax, but also best practices and consistent patterns drawn directly from the documentation.
- Index docs with Context7: Bring all code snippets and examples from the documentation directly into your IDE, so AI can autocomplete from them instead of guessing.
- Leverage rules: Supply the LLM client with structured, procedural rules (like Cloudinary’s transformation compatibility tables) so it produces valid, working code every time.
- Pair docs with MCP servers: When the LLM client executes admin or configuration tasks in your IDE (like uploading assets or applying presets), MCP servers ensure those actions follow the documented patterns and succeed.
Of course, you can only incorporate the product documentation that the provider makes available.
Reference the Documentation Site
Documentation has always been the source of truth for developers. That hasn’t changed. What’s new is that docs are now the source of truth for your LLM client as well. They define the parameters, requirements, and best practices needed to produce code that not only works but is also scalable and consistent.
Take Cloudinary as an example. Suppose you’re preparing a Father's Day campaign landing page. Here's the original image:
https://res.cloudinary.com/demo/image/upload/fathers_day_banner.jpg
You need this, along with all hero images, cropped consistently at 1600×900.
If your LLM isn’t grounded in the docs, it might recommend an inline transformation every time you resize or crop:
// Works, but inconsistent across your app
https://res.cloudinary.com/demo/image/upload/w_1600,h_900,c_fill,g_auto/f_auto/q_auto/fathers_day_banner.jpg
That technically works, but it creates inconsistencies across projects. If the same parameters show up in a different order from one URL to the next, each variation generates yet another derived asset.
Armed with context from the Retail and e-commerce guide, the LLM knows a better practice: use a named transformation. Named transformations bundle your parameters into a reusable recipe that can be applied consistently across projects and teams.
Because it has this context, the LLM can also apply the practice in the smartest way: delegate the one-time creation of the transformation to the MCP server, then always reference it in delivery URLs:
// One-time setup: MCP creates the named transformation
import { v2 as cloudinary } from "cloudinary";

// Assumes credentials are configured, e.g., via the CLOUDINARY_URL environment variable
await cloudinary.api.create_transformation("campaign_hero", {
  width: 1600,
  height: 900,
  crop: "fill",
  gravity: "auto",
  fetch_format: "auto",
  quality: "auto"
});
Then, when coding your delivery URLs, just reference the name:
// Best practice: consistent use of named transformation
https://res.cloudinary.com/demo/image/upload/t_campaign_hero/fathers_day_banner.jpg
Both URLs deliver a valid image, but giving your LLM client this context increases the chances that it codes a solution that scales and follows documented best practices.
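If you generate delivery URLs in application code rather than by hand, the Node.js SDK can reference the named transformation too. A minimal sketch, assuming the campaign_hero transformation created above:

// Sketch: build the delivery URL from the named transformation
import { v2 as cloudinary } from "cloudinary";

// A plain string passed as the transformation is treated as a named transformation (t_...)
const heroUrl = cloudinary.url("fathers_day_banner.jpg", {
  transformation: "campaign_hero"
});
// e.g., https://res.cloudinary.com/<your_cloud>/image/upload/t_campaign_hero/fathers_day_banner.jpg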
How to Add Docs in Cursor
Note: If you’re using a different IDE, check its MCP or docs integration options. Many of the same principles apply.
- Go to Cursor > Settings > Indexing & Docs
- In the Docs section, click + Add Doc
- Enter your doc site (e.g., https://cloudinary.com/documentation)
Tip: Some products make this even easier by providing Markdown versions of docs. With Cloudinary, every doc page includes:
- Open as Markdown — copy the URL and provide it as context.
- Download Markdown — save the file and upload it if your LLM doesn’t support URLs.
By anchoring your LLM to the doc site (or Markdown pages), you ensure its suggestions aren’t just valid code, but also reflect the documented best practices for your environment.
Index Docs with Context7
If the documentation site is the source of truth, then Context7 makes that truth usable inside your IDE. It indexes the code snippets and examples from the docs so your LLM client can autocomplete from them directly.
That means the LLM client isn’t guessing at parameter order or inventing method names; it’s pulling from the same snippets you’d normally copy-paste from the docs.
Why does this matter? Because without indexed examples, your LLM is more likely to generate code that looks fine but won’t actually run.
Example: Uploading Campaign Assets
Suppose you’re adding images for your Father's Day campaign. Without Context7, the AI might suggest:
// Looks plausible, but this method doesn’t exist
cloudinary.upload("fathers_day_banner.jpg")
With Cloudinary’s docs indexed by Context7, your IDE can surface the real snippet from the Node.js SDK:
// Pulled directly from the Cloudinary SDK docs
import { v2 as cloudinary } from "cloudinary";

await cloudinary.uploader.upload("fathers_day_banner.jpg", {
  use_filename: true,
  unique_filename: false,
  overwrite: false,
  upload_preset: "campaign_assets"
});
Now the code is syntactically correct, environment-aware, and aligned with documented best practices. Instead of breaking flow to check docs, you stay in the IDE and get it right the first time.
How to Add Context7 in Cursor
- Go to Cursor > Settings > Tools & Integrations
- From the MCP Tools section, click New MCP Server
- Add Context7 to your JSON list of MCP servers (a complete file sketch follows this list):
"context7": {
  "url": "https://mcp.context7.com/mcp"
}
- Once set up, your LLM client can autocomplete directly from the indexed docs.
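For reference, a complete mcp.json might look like this (in Cursor the file typically lives at .cursor/mcp.json; the mcpServers wrapper is the “list” you keep adding servers to):

{
  "mcpServers": {
    "context7": {
      "url": "https://mcp.context7.com/mcp"
    }
  }
}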
Tip: To enable doc ingestion with the Context7 MCP server for a different IDE, check out the Context7 GitHub repo README and choose your IDE.
Thousands of products already index their docs in Context7. If you’re using Cloudinary, you’re covered. If you’re using another product, make sure its documentation is included.
Leverage Rules
Examples and snippets are helpful, but they don’t cover the constraints that make code actually work. Documentation also defines rules: which parameters can be combined, how qualifiers interact, and what syntax is supported.
Without those rules as context, an LLM client might generate code that looks right but fails when you run it.
Example: Campaign Product Thumbnails
Suppose you’re creating thumbnails for product cards in your Father's Day campaign. You want them auto-cropped around the main subject. Your LLM knows about g_face (automatically crop around the largest detected face), but if it isn’t grounded in the docs, it might pair it with the wrong crop mode:
// Invalid: g_face doesn’t work with c_pad
https://res.cloudinary.com/demo/image/upload/c_pad,g_face,w_500,h_500/e_upscale/f_auto/q_auto/fathers_day_banner.jpg
The output URL looks fine, but the transformation will fail.
With rules indexed from the docs, the AI knows g_face only works with certain crop modes, such as c_auto:
// Valid: g_face with c_auto
https://res.cloudinary.com/demo/image/upload/c_auto,g_face,w_500,h_500/e_upscale/f_auto/q_auto/fathers_day_banner.jpg
This way, instead of generating “almost right” code that you'll later have to debug, the LLM produces correct transformations that will run reliably in production.
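The same constraint carries over if you build the URL with the Node.js SDK, where parameter names map one-to-one onto the URL syntax. A minimal sketch of the valid version:

// Sketch: the valid thumbnail transformation, expressed with the Node.js SDK
import { v2 as cloudinary } from "cloudinary";

const thumbUrl = cloudinary.url("fathers_day_banner.jpg", {
  transformation: [
    { crop: "auto", gravity: "face", width: 500, height: 500 }, // c_auto,g_face,w_500,h_500
    { effect: "upscale" },                                      // e_upscale
    { fetch_format: "auto", quality: "auto" }                   // f_auto,q_auto
  ]
});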
How to Add a Rules File in Cursor
Note: Cursor also has its own “rules” feature. In this case, you’re not adding a Cursor rule. You’re adding the product’s rules file as documentation context, so your LLM suggestions follow the product’s documented constraints.
- Get the name of the rules file that the product you’re working with provides (e.g., cloudinary_transformation_rules.md).
- Go to Cursor > Settings > Indexing & Docs
- In the Docs section, click + Add Doc
- Enter the name of the rules file
When writing a prompt about transformations, add this rules doc as context. In chat, use @ Add Context → Docs → + Add new doc, then paste the rules URL.
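For example (the wording is illustrative):
“Using the transformation rules doc as context, generate a 500×500 auto-cropped thumbnail URL for fathers_day_banner.jpg”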
With rules in place, your LLM isn’t just copying snippets. It’s following the same guardrails defined in the product documentation.
Pair Docs with MCP Servers
Docs tell your LLM client what to do; MCP servers let it do it. Together they make documentation agentic so that the LLM client isn’t limited to suggesting code, but can also carry out actions correctly in your environment.
Example: Upload Presets for Campaign Assets
You need to standardize uploads for your Father's Day campaign, ensuring assets land in the right folder with consistent transformations and policies. Cloudinary supports this with upload presets, but the exact fields and allowed values (unsigned, asset_folder, tags, transformation, etc.) are defined in the documentation.
The MCP server can handle this configuration in the background so you can use the preset directly in your code. No need to open another interface or set it up manually.
Suppose you ask your LLM client to perform this agentic action:
“Make an unsigned upload preset called campaign_assets that stores images in fathers_day_2025 with a 1600×900 crop”
If the LLM client guesses on the parameters instead of using the docs, the result is more likely to fail:
// Plausible, but invalid (wrong folder format, unsupported option)
await cloudinary.uploader.upload("fathers_day_banner.jpg", {
  folder: "/fathers_day_2025", // leading slash not allowed; parameter deprecated in dynamic folder mode
  crop_mode: "scale" // invalid field name
});
With docs + MCP, the LLM knows the correct fields and can act directly in your IDE to create and apply a preset:
// One-time agentic action: create an upload preset for campaign assets
await cloudinary.api.create_upload_preset({
  name: "campaign_assets",
  unsigned: true, // unsigned preset, as requested in the prompt
  asset_folder: "fathers_day_2025",
  transformation: [
    { width: 1600, height: 900, crop: "fill", gravity: "auto" }
  ],
  tags: ["campaign", "fathers_day_2025"]
});
Once that's done, you can add the preset to your upload code:
// Use the preset to upload consistently
await cloudinary.uploader.upload("fathers_day_banner.jpg", {
  upload_preset: "campaign_assets"
});
Here, the docs provide the valid fields and constraints, and MCP makes it actionable right inside your IDE. The result: assets uploaded consistently, without guesswork or manual dashboard setup.
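And since the preset is unsigned, uploads can also run from code that has no API secret at all, for example a lightweight build script. A minimal sketch using the SDK's unsigned upload helper (the cloud name is a placeholder):

// Sketch: unsigned upload needs only a cloud name and the preset, no API secret
import { v2 as cloudinary } from "cloudinary";

cloudinary.config({ cloud_name: "your_cloud" }); // placeholder cloud name

await cloudinary.uploader.unsigned_upload(
  "fathers_day_banner.jpg",
  "campaign_assets" // the unsigned preset created above
);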
How to Add MCP Servers in Cursor
- Go to Cursor > Settings > Tools & Integrations
- From the MCP Tools section, click New MCP Server
- Add the Cloudinary MCP servers to your JSON list:
"cloudinary-asset-mgmt-remote": {
"url": "https://asset-management.mcp.cloudinary.com/sse"
},
"cloudinary-env-config-remote": {
"url": "https://environment-config.mcp.cloudinary.com/sse"
},
"cloudinary-smd-remote": {
"url": "https://structured-metadata.mcp.cloudinary.com/sse"
}
- In Cursor Settings, click Needs Login for each MCP server and sign in with your Cloudinary email and password to connect to your product environment.
When documentation and MCP servers work together, your IDE doesn’t just generate suggestions. It also executes them correctly in your environment, turning “looks right” code into “works right” code.
Docs for Humans
And don’t worry: product documentation is still there for you, not just your LLM client. AI can speed up routine coding and handle a lot of boilerplate, but when you need to understand what’s happening, debug an issue, or step back for a high-level view of the platform, the docs are still your best tool.
Whether you’re learning a new capability, checking constraints, or exploring an unfamiliar API, reading the documentation yourself gives you the context and insight that no autocomplete can replace.
Because ultimately, you’re still the guide. Documentation now serves two audiences, developers and LLM clients, but you remain the one steering the system toward the results you want.
Adaptive and Agentic Docs
We’ve reached a point where documentation is no longer just static reference, but rather part of an adaptive, agentic workflow:
- Context-aware: available right where you code, without leaving your IDE.
- Agentic: paired with tools so your LLM client can not only explain concepts but also generate accurate code and perform actions.
- Human-first: still written to teach and guide developers, not just feed machines.
Looking ahead, documentation could become even more dynamic. Imagine a feedback loop where developer usage and AI interactions surface gaps, corrections, and improvements automatically. Instead of relying solely on manual updates, docs could evolve in near real time, reflecting how products are actually being used.
The future of product documentation is about giving LLM clients the best possible inputs, so they can reason and act safely while still giving developers the clarity they need to guide the process.
That’s how we’ll spend less time fixing and more time building. And as LLM clients grow more agentic, able to reason, plan, and execute, the need for accurate, structured, and adaptive documentation will only increase.
Let’s Build the Next Generation of Docs
We’re at an exciting point. AI can recommend, design, and even execute parts of our workflows—but only when given the right context. Documentation is no longer just for humans; it’s the fuel for intelligent, agentic tools.
Now it’s your turn:
- Share your thoughts in the comments:
  - What do you wish your docs did better?
  - When have docs and AI worked well together for you?
  - How do you want docs to support AI inside your IDE?
- Try it out:
  - Set up the Context7 MCP server in your IDE.
  - Pull in the Cloudinary MCP servers and doc tools, and upload an image from your local directory to Cloudinary.
Your feedback shapes the next generation of docs, for humans and AI. Let’s build smarter docs. And let’s keep them human.