How to Build an AI-Powered API Gateway with Kong 3.5 and Claude 3.5 for Request Validation
API gateways act as the single entry point for all client requests to your backend services, handling cross-cutting concerns like authentication, rate limiting, and request validation. Traditional request validation relies on static schema checks (e.g., JSON Schema), which struggle with complex, context-dependent validation rules. By integrating Anthropic’s Claude 3.5 large language model (LLM) with Kong 3.5, you can build an AI-powered API gateway that performs intelligent, dynamic request validation, reducing manual overhead and improving security.
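To make the limitation concrete, consider the rule used throughout this tutorial ("non-empty user_id and positive amount"). A schema check can confirm field presence and types, but the business condition still needs hand-written logic for every new rule. A minimal Python illustration (the field names mirror this tutorial's example payload; the functions are illustrative, not part of any library):

```python
def static_type_check(payload: dict) -> bool:
    """What a JSON-Schema-style check can express: field presence and types."""
    return isinstance(payload.get("user_id"), str) and \
        isinstance(payload.get("amount"), (int, float))

def contextual_check(payload: dict) -> bool:
    """The business rule the gateway actually cares about."""
    return static_type_check(payload) and payload["user_id"] != "" and payload["amount"] > 0

print(static_type_check({"user_id": "", "amount": -10}))  # True: types pass...
print(contextual_check({"user_id": "", "amount": -10}))   # False: ...but the rule fails
```

Every new rule of this kind means new validator code; delegating the judgment to an LLM lets the rule live in configuration instead.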
Prerequisites
- Kong 3.5 installed (self-hosted or Konnect managed)
- Anthropic Claude API key (sign up at console.anthropic.com)
- Docker (for local Kong deployment)
- Basic familiarity with API gateway concepts and Lua (for custom plugin development)
- curl or Postman for testing
Step 1: Deploy Kong 3.5 in DB-Less Mode
DB-less mode stores all Kong configuration in a single declarative configuration file, avoiding the need for a PostgreSQL database. First, create a kong.yml declarative config file:
# kong.yml
_format_version: "3.0"

services:
  - name: mock-upstream
    url: https://mockbin.org/bin/request
    routes:
      - name: mock-route
        paths:
          - /api/mock
Run Kong 3.5 with Docker, mounting the kong.yml file:
docker run -d \
  --name kong-ai-gateway \
  -p 8000:8000 \
  -p 8001:8001 \
  -v $(pwd)/kong.yml:/kong/declarative/kong.yml \
  -e KONG_DATABASE=off \
  -e KONG_DECLARATIVE_CONFIG=/kong/declarative/kong.yml \
  -e KONG_PROXY_ACCESS_LOG=/dev/stdout \
  -e KONG_ADMIN_ACCESS_LOG=/dev/stdout \
  -e KONG_PROXY_ERROR_LOG=/dev/stderr \
  -e KONG_ADMIN_ERROR_LOG=/dev/stderr \
  kong/kong:3.5
Verify Kong is running by accessing the admin API: curl http://localhost:8001/status – you should see a 200 OK response with Kong status details.
Step 2: Configure Claude 3.5 for Request Validation
Claude 3.5 Sonnet (or Haiku for lower latency) can parse request payloads, compare them against dynamic rules, and return validation results. First, test your Claude API key with a sample validation prompt:
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: YOUR_CLAUDE_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Validate the following JSON request payload against the rule: \"Payload must include a non-empty 'user_id' string and an 'amount' number greater than 0\". Payload: {\"user_id\": \"123\", \"amount\": 50}. Return only valid JSON: {\"is_valid\": true/false, \"reason\": \"...\"}"
      }
    ]
  }'
Claude will return a JSON response with is_valid: true for this valid payload. For an invalid payload (e.g., amount: -10), it will return is_valid: false with a reason.
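Before wiring this into Kong, it helps to see the parsing step the gateway will have to perform: extract the text block from the Messages API response, then JSON-decode the verdict the model embedded in that text. A Python sketch against a canned response shaped like the API's output (abbreviated; a real response carries additional fields such as "usage" and "stop_reason"):

```python
import json

# Canned response shaped like the Anthropic Messages API output (abbreviated)
sample_response = {
    "content": [
        {"type": "text", "text": '{"is_valid": false, "reason": "amount must be greater than 0"}'}
    ]
}

def extract_verdict(api_response: dict) -> dict:
    """Pull the model's text block out and decode the JSON verdict inside it."""
    text = api_response["content"][0]["text"]
    return json.loads(text)

verdict = extract_verdict(sample_response)
print(verdict["is_valid"], "-", verdict["reason"])
```

This double decode (HTTP body, then the JSON the model wrote) is exactly what the Lua plugin in the next step does, which is why the prompt insists on "Return only valid JSON".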
Step 3: Build a Custom Kong Plugin for AI Validation
Kong’s Plugin Development Kit (PDK) lets you extend gateway functionality with custom Lua plugins. Create a new plugin named ai-request-validation:
First, create the plugin directory structure:
mkdir -p /usr/local/kong/plugins/ai-request-validation
cd /usr/local/kong/plugins/ai-request-validation
Create the plugin handler file handler.lua:
-- /usr/local/kong/plugins/ai-request-validation/handler.lua
-- Kong 3.x plugin handlers are plain tables; the BasePlugin class used in
-- older tutorials was removed in Kong 3.0, so do not extend it here.
local http = require "resty.http"
local cjson = require "cjson.safe"

local AiRequestValidation = {
  PRIORITY = 1000,
  VERSION = "1.0.0",
}

function AiRequestValidation:access(config)
  -- Get the parsed request body (requires a JSON content type)
  local req_body, err = kong.request.get_body()
  if not req_body then
    kong.log.err("No request body found: ", err)
    return kong.response.exit(400, { message = "JSON request body required" })
  end

  -- Prepare the Claude Messages API request
  local httpc = http.new()
  local claude_url = "https://api.anthropic.com/v1/messages"
  local claude_payload = {
    model = config.model_name,
    max_tokens = 1024,
    messages = {
      {
        role = "user",
        content = string.format(
          "Validate the following JSON request payload against this rule: %s. Payload: %s. Return only valid JSON: {\"is_valid\": boolean, \"reason\": string}",
          config.validation_rule,
          cjson.encode(req_body)
        )
      }
    }
  }

  -- Send the request to Claude
  local res, req_err = httpc:request_uri(claude_url, {
    method = "POST",
    body = cjson.encode(claude_payload),
    headers = {
      ["x-api-key"] = config.claude_api_key,
      ["anthropic-version"] = "2023-06-01",
      ["content-type"] = "application/json"
    },
    keepalive_timeout = 60,
    keepalive_pool = 10
  })
  if not res then
    kong.log.err("Failed to call Claude API: ", req_err)
    return kong.response.exit(500, { message = "Validation service unavailable" })
  end
  if res.status ~= 200 then
    kong.log.err("Claude API returned status ", res.status, ": ", res.body)
    return kong.response.exit(500, { message = "Validation service unavailable" })
  end

  -- Parse Claude's response; guard against malformed or non-JSON output
  local claude_response = cjson.decode(res.body)
  local validation_result = claude_response
    and claude_response.content
    and claude_response.content[1]
    and cjson.decode(claude_response.content[1].text)
  if not validation_result then
    kong.log.err("Could not parse validation result from Claude response")
    return kong.response.exit(500, { message = "Validation service returned an unexpected response" })
  end

  if not validation_result.is_valid then
    kong.log.info("Invalid request: ", validation_result.reason)
    return kong.response.exit(403, { message = "Request validation failed", reason = validation_result.reason })
  end

  kong.log.info("Request validated successfully")
end

return AiRequestValidation
Create the plugin schema file schema.lua to define configuration parameters:
-- /usr/local/kong/plugins/ai-request-validation/schema.lua
local typedefs = require "kong.db.schema.typedefs"

return {
  name = "ai-request-validation",
  fields = {
    { consumer = typedefs.no_consumer },
    { protocols = typedefs.protocols_http },
    { config = {
        type = "record",
        fields = {
          -- "encrypted" takes effect only with Kong Enterprise keyring
          -- encryption; Kong OSS stores the value as-is
          { claude_api_key = { type = "string", required = true, encrypted = true } },
          { model_name = { type = "string", default = "claude-3-5-haiku-20241022" } },
          { validation_rule = { type = "string", required = true } },
        },
      },
    },
  },
}
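This schema is what Kong evaluates at config load time: claude_api_key and validation_rule are mandatory, and model_name falls back to the Haiku default. For intuition, the same check sketched in Python (Kong of course does this in Lua, with richer type handling):

```python
DEFAULTS = {"model_name": "claude-3-5-haiku-20241022"}
REQUIRED = {"claude_api_key", "validation_rule"}

def validate_config(config: dict) -> dict:
    """Mimic Kong's schema check: required fields present, defaults applied."""
    missing = REQUIRED - config.keys()
    if missing:
        raise ValueError(f"missing required config fields: {sorted(missing)}")
    return {**DEFAULTS, **config}

cfg = validate_config({"claude_api_key": "sk-...", "validation_rule": "amount > 0"})
print(cfg["model_name"])  # falls back to the default Haiku model
```

Omitting either required field makes Kong reject the plugin configuration at startup (DB-less) or at the Admin API call (database-backed), rather than at request time.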
Step 4: Deploy the Plugin to Kong
First, add the plugin to Kong’s configuration. If using DB-less mode, update your kong.yml to include the plugin:
# kong.yml
_format_version: "3.0"

services:
  - name: mock-upstream
    url: https://mockbin.org/bin/request
    routes:
      - name: mock-route
        paths:
          - /api/mock
        plugins:
          - name: ai-request-validation
            config:
              claude_api_key: "YOUR_CLAUDE_API_KEY"
              model_name: "claude-3-5-haiku-20241022"
              validation_rule: "Payload must include non-empty user_id string and amount number > 0"

Note that the plugin is attached once, at the route level, with its full config. Declaring it a second time at the top level (or attaching it to the route without config) would fail schema validation, since claude_api_key and validation_rule are required fields.
If using a database-backed Kong instance, enable the plugin via the Admin API:
curl -X POST http://localhost:8001/services/mock-upstream/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-request-validation",
    "config": {
      "claude_api_key": "YOUR_CLAUDE_API_KEY",
      "model_name": "claude-3-5-haiku-20241022",
      "validation_rule": "Payload must include non-empty user_id string and amount number > 0"
    }
  }'

Posting to the /services/mock-upstream/plugins endpoint scopes the plugin to that service; the top-level /plugins endpoint would instead require a service object (e.g. {"name": "mock-upstream"}), not a bare string.
Note that restarting alone will not load a brand-new custom plugin: the plugin code must be on Kong's Lua path and the plugin must be enabled by name. Remove the container (docker rm -f kong-ai-gateway) and rerun the docker run command from Step 1 with three additional flags, where $(pwd)/plugins is the host directory containing ai-request-validation:

  -v $(pwd)/plugins:/usr/local/kong/plugins \
  -e "KONG_LUA_PACKAGE_PATH=/usr/local/?.lua;;" \
  -e KONG_PLUGINS=bundled,ai-request-validation \

Once the container is up, curl http://localhost:8001/plugins/enabled should list ai-request-validation among the enabled plugins.
Step 5: Test Request Validation
Send a valid request to the Kong route:
curl -X POST http://localhost:8000/api/mock \
-H "Content-Type: application/json" \
-d '{"user_id": "123", "amount": 50}'
This should return the mock upstream response (200 OK). Now send an invalid request with a negative amount:
curl -X POST http://localhost:8000/api/mock \
-H "Content-Type: application/json" \
-d '{"user_id": "123", "amount": -10}'
Kong will block this request with a 403 Forbidden response, including the validation failure reason from Claude.
Advanced Configuration
- Caching: Add Redis caching for Claude responses to reduce API calls and latency for repeated validation rules/payloads.
- Error Handling: Implement retries for transient Claude API failures, and fallback to static schema validation if the AI service is unavailable.
- Content Type Support: Extend the plugin to handle XML or form-data payloads by converting them to JSON before sending to Claude.
- Rate Limiting: Apply Kong's rate-limiting plugin on the route to cap how many requests reach the Claude API, avoiding unexpected costs.
Conclusion
Integrating Claude 3.5 with Kong 3.5 creates a powerful AI-powered API gateway that handles dynamic, context-aware request validation far beyond static schema checks. This setup reduces manual validation logic, improves security by catching malformed or malicious requests, and adapts to changing validation rules without code changes. As LLMs continue to improve, this pattern can be extended to other gateway functions like response validation, anomaly detection, and personalized rate limiting.