LLMs are ultimately "garbage in, garbage out" (GIGO) systems. If a user submits input that is too long, contains malicious characters, or is missing a required field, the model can return unpredictable or broken responses.
## Why Zod?
One of the biggest reasons is runtime safety: unlike TypeScript interfaces, which are erased at compile time, Zod validates data at runtime, before it ever reaches the OpenAI API.
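To see the difference, here is a minimal sketch in plain TypeScript (no Zod yet): a type assertion happily accepts malformed JSON because types are erased at runtime, while an explicit runtime check — which is exactly what Zod automates — catches the mismatch.

```typescript
interface ChatRequest {
  prompt: string;
}

// TypeScript is satisfied by the cast, but the data is wrong at runtime:
const raw: unknown = JSON.parse('{"prompt": 123}');
const typed = raw as ChatRequest; // compiles fine; prompt is actually a number

// Only a runtime check catches the mismatch:
function isChatRequest(value: unknown): value is ChatRequest {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { prompt?: unknown }).prompt === 'string'
  );
}

console.log(isChatRequest(typed)); // false — the cast lied
```

Writing guards like `isChatRequest` by hand for every endpoint is tedious and error-prone; Zod generates that checking from a declarative schema.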
```typescript
import { z } from 'zod';

const chatRequestSchema = z.object({
  prompt: z.string()
    .trim()
    .min(1, 'Prompt is required')
    .max(1000, 'Prompt is too long'),
  conversationId: z.string().uuid(),
});
```
```typescript
app.post('/api/chat', async (req: Request, res: Response) => {
  const parseResult = chatRequestSchema.safeParse(req.body);
  if (!parseResult.success) {
    res.status(400).json({ error: parseResult.error.format() });
    return;
  }

  // Use the validated (and trimmed) data, not the raw request body.
  const { prompt, conversationId } = parseResult.data;

  try {
    const response: ChatResponse = await chatService.sendRequest(prompt, conversationId);
    console.log('Response from OpenAI:', response);
    res.json({ message: response.message });
  } catch (error) {
    res.status(500).json({ error: 'An error occurred while processing your request.' });
  }
});
```
A more advanced use of Zod is `z.infer`, which derives a TypeScript type directly from a schema so you never declare the same shape twice. To be continued.