April's release focuses on making AI Agent development easier to inspect, test, and migrate into production workflows.
If you are building agents, the hard part is often not the final answer. It is understanding everything that happened before the answer appeared:
- What did the agent infer from the user request?
- Which model calls were made?
- Which tool did the agent choose?
- What did the MCP tool or Skill return?
- Did the failure come from the prompt, model configuration, tool parameters, or business logic?
This release adds practical tooling for that workflow: AI Agent Debugger, A2A Debugger, Postman API import for larger migrations, a better Ask AI experience in published docs, and custom model providers.
New Updates
AI Agent Debugger: Inspect the Full Agent Run
Apidog already supported visual debugging for SSE endpoints, which is useful for streaming model responses, progress updates, real-time notifications, and other event-driven APIs.
Agent debugging needs more than a stream viewer.
A model response only shows the final output. In real projects, you need the execution trace that produced it. The new AI Agent Debugger helps you inspect the full run inside Apidog, including:
- Conversation turns
- Model calls
- MCP tool calls
- Custom Skill execution
- Tool results
- Final output
Use it when you need to answer questions like:
- Did the prompt include enough context?
- Did the agent choose the correct tool?
- Did the MCP tool return the expected data?
- Were the tool parameters correct?
- Did the failure happen in model reasoning or business logic?
A typical debugging flow looks like this:
- Run the agent task.
- Open the recorded execution path.
- Inspect each conversation round and model call.
- Check tool invocations and returned values.
- Compare the final output with the expected behavior.
- Update the prompt, model configuration, tool schema, or business logic based on the failed step.
This is especially useful once an agent workflow includes multiple tools or several intermediate decisions.
A2A Debugger: Test Agent-to-Agent Communication
Multi-agent systems need a reliable way to verify that agents can pass tasks, exchange messages, and return results correctly.
Apidog now supports debugging for Google's A2A (Agent2Agent) protocol.
With the A2A Debugger, you can:
- Send A2A requests directly
- Inspect request parameters
- Check responses
- Verify the result of an agent-to-agent interaction
Use this when you need to test whether one agent can communicate with another without switching between multiple tools or manually reading raw protocol details.
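For context on what those raw protocol details look like: A2A exchanges are JSON-RPC 2.0 messages, so a request you might inspect in the debugger can be sketched roughly like this. The `message/send` method and field names follow the public A2A specification, but treat the exact shape as an assumption and check the spec for your version:

```python
import json
import uuid


def build_a2a_send(text: str) -> dict:
    """Build a minimal A2A 'message/send' JSON-RPC request body."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }


req = build_a2a_send("Summarize yesterday's failed orders")
print(json.dumps(req, indent=2))
```

The A2A Debugger spares you from hand-assembling payloads like this, but knowing the envelope makes the inspected request parameters easier to read.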
The difference between the two new debugging tools is straightforward:
| Tool | Use it to debug |
|---|---|
| AI Agent Debugger | What happens inside a single agent run |
| A2A Debugger | Whether one agent can communicate with another agent |
Most teams building real agent systems will eventually need both.
Import Postman Data Through the Postman API
Postman migration now has a better path for larger teams.
Apidog already supported importing local Postman files. Now you can also import the following through the Postman API:
- Workspaces
- Collections
- Environments
This is designed for bulk migration when creating new projects. In practice, it is closer to moving an entire Postman Workspace into Apidog.
If your Postman account contains multiple Workspaces, Apidog will create corresponding projects after import.
A practical migration flow now looks like this:
- Choose the Postman API import option.
- Connect the relevant Postman account or API source.
- Select the Workspaces, Collections, and Environments to import.
- Let Apidog create the matching project structure.
- Review imported endpoints, environments, and documentation.
For small imports, local Postman files still work. For larger workspace migrations, importing through the Postman API reduces repetitive export, upload, and cleanup steps.
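For a sense of what "importing through the Postman API" involves under the hood: Postman's public API authenticates with an `X-Api-Key` header and exposes a documented `/workspaces` endpoint. A sketch of building that request and picking Workspaces to migrate (the request is constructed but not sent, and the sample data is hypothetical):

```python
def build_postman_workspaces_request(api_key: str) -> tuple[str, dict]:
    """Return the URL and headers for listing Workspaces via the Postman API."""
    url = "https://api.postman.com/workspaces"
    headers = {"X-Api-Key": api_key}
    return url, headers


def select_workspaces(workspaces: list[dict], names: set[str]) -> list[dict]:
    """Pick the Workspaces to migrate by name from an API listing."""
    return [w for w in workspaces if w.get("name") in names]


url, headers = build_postman_workspaces_request("PMAK-example-key")

# Hypothetical listing, shaped like the API's response entries.
sample = [{"id": "w1", "name": "Payments"}, {"id": "w2", "name": "Sandbox"}]
chosen = select_workspaces(sample, {"Payments"})
print(url, [w["name"] for w in chosen])
```

Apidog's import flow performs these steps for you; the sketch only shows why a single API key can stand in for repeated manual exports.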
Ask AI in Published Docs Now Opens in the Sidebar
Ask AI in published documentation now opens in a sidebar.
This lets readers keep the current API document open while asking questions about it.
A typical docs workflow becomes simpler:
- Open a published API document.
- Ask a question in the sidebar.
- Read the answer without losing your place.
- Continue browsing the same page.
- Ask follow-up questions when needed.
This is useful for longer API docs where the answer may exist on the page but takes time to locate manually.
Custom AI Model Providers
Teams can now connect custom AI model providers with a custom Base URL.
This is useful if your company already uses:
- A self-hosted model service
- An internal model gateway
- A custom model provider endpoint
Instead of switching tools to debug AI-related workflows, you can bring that setup into Apidog and keep the debugging process closer to your API and agent development workflow.
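Many internal gateways expose an OpenAI-compatible surface, so connecting one usually comes down to pointing the client at a custom Base URL. A small sketch of the path-joining detail that commonly trips people up (the gateway hostname is hypothetical; confirm the expected paths in your provider's docs):

```python
from urllib.parse import urljoin


def chat_completions_url(base_url: str) -> str:
    """Join a custom Base URL with the OpenAI-style chat completions path."""
    # Ensure a trailing slash so urljoin appends to the path
    # instead of replacing its last segment.
    if not base_url.endswith("/"):
        base_url += "/"
    return urljoin(base_url, "chat/completions")


print(chat_completions_url("https://llm-gateway.internal/v1"))
```

Whether the trailing `/v1` belongs in the Base URL or the request path varies by provider, which is exactly the kind of mismatch that is easier to spot when the whole setup lives next to your other API debugging.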
Bug Fixes and Smaller Improvements
This release also includes fixes and quality-of-life improvements:
- Fixed an issue where OpenAPI smart merge did not keep endpoint response examples.
- Fixed an issue where merging from a child branch into a protected main branch could include endpoints that were not selected.
- Fixed incorrect dropdown display when creating endpoint versions from branches.
- Fixed an issue where TestData and TestCases did not work when running tests through the CLI.
- Fixed an issue where OpenAPI export included response components from unrelated modules.
- Fixed Markdown export formatting for JSON with comments.
- Fixed a Word export error caused by `crypto is not defined`.
- Fixed an issue where importing Knife4j with Basic Auth enabled did not show username and password fields.
- Fixed an endpoint filtering error when tags were numbers.
- Fixed an issue where `apidog endpoint list --branch` did not return data for the specified branch.
- Fixed several MCP tool parameter, filtering, and error message issues.
- Fixed an issue where generated code was missing the `typescriptThreePlus` configuration option.
What This Means
April's release is practical for teams building AI Agent products.
Use the new tools this way:
- Use AI Agent Debugger to inspect a single agent run step by step.
- Use A2A Debugger to test communication between agents.
- Use Postman API import to reduce migration work for larger Postman Workspaces.
- Use the Ask AI sidebar to make published API docs easier to navigate.
- Use custom model providers to connect internal or self-hosted AI infrastructure.
These updates are aimed at the point where agent development moves beyond demos and into real implementation work.
Join the Conversation
Connect with fellow API engineers and the Apidog team:
- Join our Discord community for real-time discussions and support.
- Participate in our Slack community for technical conversations.
- Follow us on X (Twitter) for the latest updates.
P.S. For the full details on all updates, check the Apidog Changelog!
Best Regards,
The Apidog Team

