Maggie Ma
TIL on Zod...

LLMs are really just "garbage in, garbage out" (GIGO) systems. If a user provides input that is too long, contains malicious characters, or is missing a required field, the model's output can go completely off the rails.

Why Zod

The biggest reason: runtime safety. Unlike TypeScript interfaces, which are erased at compile time, Zod validates data at runtime, before it ever reaches the OpenAI API.

const chatRequestSchema = z.object({
    prompt: z.string()
        .trim()
        .min(1, 'Prompt is required')
        .max(1000, 'Prompt is too long'),
    conversationId: z.string().uuid(),
});

app.post('/api/chat', async (req: Request, res: Response) => {
    const parseResult = chatRequestSchema.safeParse(req.body);
    if (!parseResult.success) {
        res.status(400).json({ error: parseResult.error.format() });
        return;
    }

    // Use the validated (and trimmed) data, not the raw req.body
    const { prompt, conversationId } = parseResult.data;

    try {
        const response: ChatResponse = await chatService.sendRequest(prompt, conversationId);
        console.log('Response from OpenAI:', response);
        res.json({ message: response.message });
    } catch (error) {
        res.status(500).json({ error: 'An error occurred while processing your request.' });
    }
})

A more advanced way of using Zod is z.infer, which derives a TypeScript type directly from a schema. To be continued.
