Let’s build an app with AI. That was the request from leadership. And so the product and engineering team, knowing that anything AI-related is really API-powered, set off on its API journey. They defined the problem space, brainstormed solutions, determined costs, documented functionality, gathered technical requirements, mapped out the API, and so forth.
Soon after the journey begins, the team confronts a challenge: not all APIs can integrate directly with AI APIs. There are several reasons for this: protocol limitations, architectural requirements, data/payload design, and so on. GraphQL, XML, and non-HTTP APIs, for example, may require additional abstraction layers, wrappers, or cloud services to meet the requirements.
While there are an increasing number of tools and solutions that help facilitate integration with AI or large language model (LLM) APIs, HTTP (or REST) APIs described with the OpenAPI Specification are the dominant API format supported by AI models. This includes OpenAI, GitHub Copilot, Azure OpenAI, Google Cloud Vision API, and DALL-E. In fact, several of these are built with or on top of OpenAI. For a deeper understanding, feel free to explore our guide on crafting the perfect API description that aligns with these standards.
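If you haven’t worked with one before, here is a minimal sketch of what an OpenAPI 3.0 description looks like in YAML. The title, server URL, and endpoint below are placeholders for illustration only:

```yaml
# Minimal OpenAPI 3.0 skeleton (illustrative; names and URLs are placeholders)
openapi: 3.0.3
info:
  title: Example Orders API
  version: 1.0.0
  description: A small example API used for illustration in this post.
servers:
  - url: https://api.example.com/v1
paths:
  /orders:
    get:
      summary: List orders
      responses:
        '200':
          description: A list of orders.
```

Everything that follows, from the API Insights checks to the AI-Ready criteria, operates on a document like this one.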
And just like any API, there are several things developers can do to ensure that their API’s design is optimized for its intended purpose. This is why we now have the AI-Ready badge on our API Insights tool.
But first, let’s recap what API Insights does:
- analyzes JSON and YAML OpenAPI Specifications against 25+ standards and best practices
- grades the API across three main categories:
  1. Design, or the architecture of the API
  2. Performance, or the responsiveness and efficiency of the API
  3. Security, or how well fortified the API is against breaches and vulnerabilities
- makes an actual request to the API to validate things like CDN usage, load time, and response size
- allows users to compare results against other APIs in the same industry
API Insights gives users visibility into potential issues with their API and is meant to provide governance and feedback on how API quality can improve. In addition to our web and Mac apps, we also provide API Insights directly in your code editor (via a VSCode extension) and in the Treblle CLI for automation.
If you want to know more about why we built API Insights, read our announcement blog post.
But back to our story. The team has made some important decisions about their API and an early version has been built out. They have an OpenAPI specification and are using Treblle’s API Insights to get quick feedback on the API’s quality. But there are still questions around how the API will interact with AI/LLM APIs.
This is where API Insights can help again!
We have added a series of tests that will help give you confidence that your API will integrate seamlessly with AI. So not only do you get your standard API Insights score, but we will also automatically show you an AI-Ready badge if your API adheres to a set of criteria that typically gives the best user experience when integrating with AI/LLMs.
Now that the team has run its API through API Insights, they can make the final adjustments necessary to meet a high level of quality, performance, security, and AI readiness. With this challenge overcome, the team can move to the next step in their journey as they get closer to a successful launch.
This story illustrates our goal with Treblle and all of our free dev tools – to help API teams succeed quickly! Naturally we encourage you to give API Insights a try, and we always look forward to feedback and learning how we can improve.
If you’re interested in diving in a bit more, here are the additional checks that we run to test for AI Readiness. These checks do not affect your primary API Insights score at all. Also, not getting the badge doesn’t necessarily mean that your API won’t integrate with AI APIs; most AI publishers have detailed integration guides and API documentation that you can consult for more detailed information.
To be AI-Ready, an API must have the following:
Endpoint Descriptions
Each endpoint must have a detailed and user-friendly description.
Operation IDs
Each endpoint must have a distinct operationId.
Parameter Descriptions
Each parameter should have a clear description, including how to use it, the expected data type, and the format of the input.
Response Descriptions
All responses must have an HTTP status code and a response description.
If Schema Exists, Must Have Type and Description
If a schema model exists, the schema type should be defined and the model should contain a schema description. If no schema exists, this test is skipped.
As you can see from the required fields, clear descriptions, and especially defined data types and schemas, are important to ensure that AI/LLMs can understand what your API returns and how that payload should be processed. Doing this work up front will ensure that your API is off to a smooth start when integrating with AI.
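To make the criteria concrete, here is a sketch of a single endpoint that would pass each of the checks above. The path, parameter names, and schema fields are illustrative placeholders rather than anything API Insights specifically requires:

```yaml
# Illustrative endpoint that satisfies the AI-Ready checks:
# endpoint description, distinct operationId, parameter descriptions,
# response descriptions with status codes, and schema type/description.
paths:
  /orders/{orderId}:
    get:
      operationId: getOrderById
      description: Returns a single order, including its current status.
      parameters:
        - name: orderId
          in: path
          required: true
          description: Unique identifier of the order, supplied as a UUID string.
          schema:
            type: string
            format: uuid
      responses:
        '200':
          description: The order was found and is returned in the response body.
          content:
            application/json:
              schema:
                type: object
                description: A single customer order.
                properties:
                  id:
                    type: string
                    format: uuid
                    description: Unique identifier of the order.
                  status:
                    type: string
                    description: Current order status, such as pending, shipped, or delivered.
        '404':
          description: No order exists with the given orderId.
```

Descriptions like these are what give an LLM enough context to choose the right endpoint, build a valid request, and interpret the response.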