Adding AI features to NestJS usually takes much longer than it should.
With RapidKit, you can scaffold a production-style AI assistant module in minutes.
What You Build
- AI assistant endpoints in NestJS
- Provider abstraction (echo by default)
- Optional OpenAI provider via environment
- Completion + stream APIs
- Support ticket endpoint with urgency classification
- Health + Swagger visibility
Source code: https://github.com/getrapidkit/rapidkit-examples/tree/main/my-ai-workspace/ai-agent-nest
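The "provider abstraction" above can be pictured as a small interface with an echo implementation behind it. Here is a minimal sketch in TypeScript; the interface and function names are illustrative assumptions, not the generated module's actual code:

```typescript
// Hypothetical sketch of a provider abstraction (names are illustrative,
// not the generated module's real API).
interface CompletionProvider {
  readonly name: string;
  complete(prompt: string): Promise<string>;
}

// The default "echo" provider reflects the prompt back,
// matching the "[echo] ..." responses shown later in this post.
class EchoProvider implements CompletionProvider {
  readonly name = "echo";
  async complete(prompt: string): Promise<string> {
    return `[echo] ${prompt}`;
  }
}

// A registry keyed by provider name, so new providers (e.g. OpenAI)
// can be swapped in without touching callers.
const providers = new Map<string, CompletionProvider>();
providers.set("echo", new EchoProvider());

async function runCompletion(provider: string, prompt: string): Promise<string> {
  const p = providers.get(provider);
  if (!p) throw new Error(`Unknown provider: ${provider}`);
  return p.complete(prompt);
}
```

Keeping providers behind one interface is what makes "swap provider strategy later" cheap: callers only ever see `complete(prompt)`.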
Prerequisites
- Node.js 20+
- npm 10+
- Python 3.10+
1) Create workspace and project
npx rapidkit my-ai-workspace
cd my-ai-workspace
rapidkit create project nestjs.standard ai-agent-nest
cd ai-agent-nest
rapidkit init
2) Add AI Assistant module
npx rapidkit add module ai_assistant
This generates:
src/modules/free/ai/ai_assistant/ai_assistant.module.ts
src/modules/free/ai/ai_assistant/ai_assistant.controller.ts
src/modules/free/ai/ai_assistant/ai_assistant.service.ts
src/modules/free/ai/ai_assistant/ai_assistant.validation.ts
3) Register module in src/modules/index.ts
For recent ai_assistant module versions, this registration is injected automatically.
If you are working from an older generated project and the routes are missing, add this entry to optionalModules:
registerOptionalModule(() => require('./free/ai/ai_assistant').AiAssistantModule as ModuleRef),
4) Run app
PORT=8013 npm run start:dev
or via RapidKit wrapper:
rapidkit dev -p 8013
To auto-register OpenAI provider:
OPENAI_API_KEY=your_key PORT=8013 npm run start:dev
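The auto-registration presumably checks for OPENAI_API_KEY at startup and only then adds the OpenAI provider. A hedged sketch of that pattern (the function name and logic are assumptions, not the module's internals):

```typescript
// Illustrative sketch of env-based provider registration
// (names are assumptions, not the module's real internals).
function availableProviders(env: Record<string, string | undefined>): string[] {
  const names = ["echo"]; // the echo provider is always registered
  if (env.OPENAI_API_KEY) {
    names.push("openai"); // OpenAI joins only when a key is present
  }
  return names;
}
```

This is why the providers endpoint below returns only `["echo"]` when no key is exported.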
5) Verify endpoints
curl http://127.0.0.1:8013/health
curl http://127.0.0.1:8013/ai/assistant/providers
curl http://127.0.0.1:8013/ai/assistant/health
Providers response:
["echo"]
Completion test:
curl -X POST http://127.0.0.1:8013/ai/assistant/completions \
-H "Content-Type: application/json" \
-d '{"prompt":"What is RapidKit?","provider":"echo"}'
Example response:
{
"provider": "echo",
"content": "[echo] What is RapidKit?",
"latencyMs": 0,
"cached": false
}
Stream test:
curl -X POST http://127.0.0.1:8013/ai/assistant/stream \
-H "Content-Type: application/json" \
-d '{"prompt":"Give me a deployment checklist","provider":"echo"}'
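The stream endpoint presumably emits the completion in chunks rather than one body. A minimal sketch of that idea with an async generator (purely illustrative; the real endpoint's wire format, e.g. SSE or chunked JSON, is not shown here):

```typescript
// Illustrative sketch: stream a completion word by word.
// The real transport (SSE, NDJSON, ...) is an implementation detail.
async function* streamCompletion(prompt: string): AsyncGenerator<string> {
  const full = `[echo] ${prompt}`;
  for (const word of full.split(" ")) {
    yield word + " ";
  }
}

// Collect chunks the way a client consuming the stream would.
async function collect(prompt: string): Promise<string> {
  let out = "";
  for await (const chunk of streamCompletion(prompt)) {
    out += chunk;
  }
  return out.trimEnd();
}
```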
Cache clear:
curl -X DELETE http://127.0.0.1:8013/ai/assistant/cache
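The `cached` field in the completion response plus this DELETE endpoint suggest responses are memoized per prompt. A minimal sketch of such a cache (assumed behavior inferred from the flag, not the generated service's code):

```typescript
// Illustrative in-memory response cache keyed by provider + prompt.
// Behavior is inferred from the `cached` flag, not the real service code.
interface CachedCompletion {
  content: string;
  cached: boolean;
}

const cache = new Map<string, string>();

function completeWithCache(provider: string, prompt: string): CachedCompletion {
  const key = `${provider}:${prompt}`;
  const hit = cache.get(key);
  if (hit !== undefined) {
    return { content: hit, cached: true }; // repeat prompt: served from cache
  }
  const content = `[${provider}] ${prompt}`; // stand-in for a real provider call
  cache.set(key, content);
  return { content, cached: false }; // first call: cache miss
}

function clearCache(): void {
  cache.clear(); // what DELETE /ai/assistant/cache would do
}
```

Under this model, repeating the same completion request should flip `cached` from false to true until the cache is cleared.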
Support ticket parity endpoint:
curl -X POST http://127.0.0.1:8013/support/ticket \
-H "Content-Type: application/json" \
-d '{"message":"Customer cannot complete payment"}'
Example response:
{
"ticket_id": "TKT-12345",
"urgency": "high",
"ai_response": "[echo] Customer cannot complete payment",
"latency_ms": 0,
"next_action": "escalate",
"provider": "echo"
}
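The urgency classification behind /support/ticket could be as simple as keyword matching on the message. A hedged sketch of that heuristic (keywords and thresholds are assumptions, not the module's actual classifier):

```typescript
// Illustrative urgency heuristic for support tickets.
// Keywords here are assumptions, not the real classifier's rules.
type Urgency = "low" | "medium" | "high";

const HIGH_SIGNALS = ["payment", "outage", "down", "security", "cannot"];
const MEDIUM_SIGNALS = ["slow", "error", "broken"];

function classifyUrgency(message: string): Urgency {
  const text = message.toLowerCase();
  if (HIGH_SIGNALS.some((w) => text.includes(w))) return "high";
  if (MEDIUM_SIGNALS.some((w) => text.includes(w))) return "medium";
  return "low";
}

// Map urgency to a routing decision like the `next_action` field above.
function nextAction(urgency: Urgency): string {
  return urgency === "high" ? "escalate" : "queue";
}
```

With this sketch, "Customer cannot complete payment" classifies as high and escalates, matching the example response.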
6) Open API docs
http://127.0.0.1:8013/docs
Why this setup is useful
- Fast baseline for real backend teams
- Clear module boundaries for scaling
- Easy to swap provider strategy later
The FastAPI version of this example now has parity for the support-ticket flow and the optional OpenAI path.
A natural next step is adding the OpenAI provider plus retry/fallback logic to this same NestJS codebase.