Abdullah Jan
Carific.ai: From AI Slop to Actionable Feedback - Structured Output with Zod and the AI SDK

This is my third dev.to post. If you missed the previous ones: Building the Auth System and Building the AI Resume Analyzer.

"Spearheaded cross-functional initiatives to leverage synergies..."

That's what my AI resume analyzer suggested. I stared at the screen, realizing I'd built exactly what I was trying to help users avoid: generic, buzzword-filled nonsense.

The resume analyzer worked. It streamed markdown. It looked impressive. But when I asked myself "Would I actually use this feedback?" - the answer was no.

This is the story of how I rebuilt the entire output system to be genuinely useful.


The Stack

| Package      | Version | Purpose                              |
|--------------|---------|--------------------------------------|
| Next.js      | 16.0.7  | App framework                        |
| AI SDK       | 5.0.108 | generateObject for structured output |
| Zod          | 4.1.13  | Schema validation                    |
| Lucide React | 0.555.0 | Icons                                |
| Sonner       | 2.0.7   | Toast notifications                  |

Chapter 1: The Problem with Streaming Markdown

The first version of the resume analyzer used streamText from the AI SDK. It worked like this:

// ❌ The old approach
const result = await streamText({
  model: MODEL,
  system: SYSTEM_PROMPT,
  prompt: `Analyze this resume...`,
});

return result.toTextStreamResponse();

The frontend would receive chunks of markdown and render them progressively. Cool demo. Terrible UX.

Problems:

  1. No structure - The AI could return anything. Sometimes bullet points, sometimes paragraphs, sometimes a mix.
  2. No copy functionality - Users couldn't easily copy suggested improvements.
  3. No persistence - Can't save unstructured markdown to a database meaningfully.
  4. Generic advice - "Add more metrics to your bullet points" tells users nothing actionable.

The breaking point was when I tested it with my own resume. The AI suggested "Ready-to-Use Bullet Points" that had nothing to do with my actual experience. Where was I supposed to put them? The AI didn't say.


Chapter 2: The Switch to Structured Output

The AI SDK has a generateObject function that returns typed JSON validated against a Zod schema. This was the fix.

// ✅ The new approach
// lib/ai/resume-analyzer.ts

import { generateObject } from "ai";
import { ResumeAnalysisOutputSchema } from "@/lib/validations/resume-analysis";

export async function analyzeResume({
  resumeText,
  jobDescription,
}: {
  resumeText: string;
  jobDescription: string;
}) {
  const { object } = await generateObject({
    model: "google/gemini-2.5-flash-lite",
    schema: ResumeAnalysisOutputSchema,
    system: RESUME_ANALYSIS_SYSTEM_PROMPT,
    prompt: `Analyze this resume against the job description...`,
  });

  return object;
}

The schema defines exactly what the AI must return:

// lib/validations/resume-analysis.ts

import { z } from "zod";

export const ResumeAnalysisOutputSchema = z.object({
  score: z.number().min(0).max(100),
  scoreLabel: z.enum(["Poor", "Fair", "Good", "Strong", "Excellent"]),
  scoreSummary: z.string(),

  missingKeywords: z.array(MissingKeywordSchema).min(1),
  bulletFixes: z.array(BulletFixSchema),
  priorityActions: z.array(z.string()).min(1).max(3),
  sectionFeedback: z.array(SectionFeedbackSchema).length(5),
  lengthAssessment: z.object({
    currentLength: z.enum(["Too Short", "Appropriate", "Too Long"]),
    recommendation: z.string(),
  }),
});

Now the AI can't return random markdown. It must fill every field, and Zod validates the response before it reaches the frontend.
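A bonus of schema-first output: the schema doubles as the frontend's TypeScript type, so there's a single source of truth. A minimal sketch - the ResumeAnalysis alias is my naming, not necessarily what's in the repo:

import { z } from "zod";
import { ResumeAnalysisOutputSchema } from "@/lib/validations/resume-analysis";

// Infer the result type directly from the schema - no hand-written interface to drift
type ResumeAnalysis = z.infer<typeof ResumeAnalysisOutputSchema>;

And if the model's response doesn't match the schema, generateObject throws rather than quietly passing garbage downstream.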


Chapter 3: Making Feedback Actually Actionable

Having structured output was step one. But the content still needed work.

The "Before/After" Bullet Fix

The old approach: "Here are some sample bullet points you could use."

The problem: Users don't know where to put them or how they relate to their actual resume.

The fix: Find weak bullets in the user's resume and show exactly how to improve them.

export const BulletFixSchema = z.object({
  location: z
    .string()
    .describe(
      "Where this bullet is, e.g. 'Experience → Acme Corp → 2nd bullet'"
    ),
  original: z.string().describe("The exact text from the user's resume"),
  improved: z.string().describe("The suggested replacement"),
  reason: z.string().describe("Why this helps - reference job requirements"),
  impact: z.enum(["High", "Medium"]),
});

The prompt enforces this:

### Bullet Fixes
- original: Exact text from the resume (must match verbatim)
- improved: Rewritten with action verb, metrics, and relevance to job

Rules:
- original must be text that exists in the resume
- Target vague phrases: "Responsible for", "Worked on", "Helped with"

Now users see their actual bullet point, the improved version, and a copy button. No guessing.
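That copy button is where Sonner from the stack earns its keep. A minimal sketch of the interaction - the CopyButton component and its props are illustrative, not lifted from the repo:

"use client";

import { Copy } from "lucide-react";
import { toast } from "sonner";

// Hypothetical copy button for one improved bullet
function CopyButton({ text }: { text: string }) {
  return (
    <button
      aria-label="Copy improved bullet"
      onClick={async () => {
        await navigator.clipboard.writeText(text);
        toast.success("Copied to clipboard");
      }}
    >
      <Copy size={16} />
    </button>
  );
}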

Killing AI Slop

The first improved bullet started with "Spearheaded." Classic AI resume speak.

I added explicit rules to the prompt:

- DO NOT use these overused words: Spearheaded, Leveraged, Synergy,
  Utilize, Facilitated, Orchestrated, Pioneered, Revolutionized,
  Streamlined, Championed
- Use plain, professional verbs: Led, Built, Created, Reduced,
  Increased, Managed, Designed, Developed, Improved, Launched

And for the overall tone:

## Writing Style
- Direct and concise
- No filler phrases ("I'd recommend", "You might consider")
- No exclamation marks
- State facts, not opinions

The output now reads like feedback from a senior colleague, not a chatbot.
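Prompt rules are strong hints, not guarantees. Since every response already passes through Zod, the ban could also be enforced in the schema itself - a sketch of the idea, not code from the repo:

// Assumption: mirrors the prompt's banned list
const BANNED =
  /\b(Spearheaded|Leveraged|Synerg\w*|Utiliz\w*|Facilitated|Orchestrated|Pioneered|Revolutionized|Streamlined|Championed)\b/i;

// A string type that rejects resume-speak outright
const CleanString = z.string().refine((s) => !BANNED.test(s), {
  message: "Contains a banned buzzword",
});

Swap CleanString in for z.string() on the improved field, and a buzzword-laden response fails validation before it ever reaches the UI.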


Chapter 4: Skill Gap Categorization

Missing keywords were originally a flat list. "Docker, Leadership, HIPAA, Python" - all treated the same.

But these require completely different actions:

  • Docker - I can learn this in a weekend
  • Leadership - I need to reframe existing experience
  • HIPAA - Either I have compliance experience or I don't

So I added categorization:

export const MissingKeywordSchema = z.object({
  keyword: z.string(),
  category: z.enum(["Hard Skill", "Soft Skill", "Domain"]),
  importance: z.enum(["Critical", "Important", "Nice to Have"]),
  whereToAdd: z.string(),
});

The UI now groups keywords by category with actionable descriptions:

import { BookOpen, Users, Wrench } from "lucide-react";

const CATEGORY_CONFIG = {
  "Hard Skill": {
    icon: Wrench,
    label: "Hard Skills",
    description: "Learnable, measurable skills you can add",
  },
  "Soft Skill": {
    icon: Users,
    label: "Soft Skills",
    description: "Reframe existing experience to highlight these",
  },
  Domain: {
    icon: BookOpen,
    label: "Domain Knowledge",
    description: "Industry-specific expertise",
  },
};

Users immediately understand: "I'm missing 3 hard skills I can learn, 1 soft skill I need to reframe, and 2 domain areas where I might not be a fit."
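Under the hood, the grouping is a few lines. A sketch - the groupByCategory helper is my naming, not necessarily the repo's:

type MissingKeyword = z.infer<typeof MissingKeywordSchema>;

// Bucket keywords by category so the UI renders one card per group
function groupByCategory(keywords: MissingKeyword[]) {
  const groups: Record<string, MissingKeyword[]> = {};
  for (const kw of keywords) {
    (groups[kw.category] ??= []).push(kw);
  }
  return groups;
}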


Chapter 5: Section Completeness & Length Assessment

Two more features that add real value:

Section Feedback

Check if standard resume sections exist and are complete:

export const SectionFeedbackSchema = z.object({
  section: z.enum(["Contact", "Summary", "Experience", "Education", "Skills"]),
  status: z.enum(["Present", "Missing", "Incomplete"]),
  feedback: z.string(),
});

The UI only shows sections with issues - if everything is present, this card doesn't appear.
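The early return is the whole trick. A sketch, with an illustrative component name:

type SectionFeedback = z.infer<typeof SectionFeedbackSchema>;

function SectionFeedbackCard({ items }: { items: SectionFeedback[] }) {
  // Only surface sections that need attention
  const issues = items.filter((s) => s.status !== "Present");
  if (issues.length === 0) return null; // everything present, no card

  return (
    <ul>
      {issues.map((s) => (
        <li key={s.section}>
          {s.section}: {s.status} - {s.feedback}
        </li>
      ))}
    </ul>
  );
}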

Length Assessment

lengthAssessment: z.object({
  currentLength: z.enum(["Too Short", "Appropriate", "Too Long"]),
  recommendation: z.string(),
}),

The prompt includes context:

- Entry-level (0-2 years): 1 page ideal
- Mid-level (3-7 years): 1-2 pages
- Senior (8+ years): 2 pages acceptable

Again, only shown if there's an issue. No noise.


The Final Architecture

┌─────────────────────────────────────────────────────────────┐
│                    Resume Analyzer Flow                      │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  1. User uploads PDF + pastes job description                │
│                    ↓                                         │
│  2. Zod validates input (ResumeAnalysisSchema)               │
│                    ↓                                         │
│  3. generateObject() calls AI with structured schema         │
│                    ↓                                         │
│  4. AI returns typed JSON (ResumeAnalysisOutputSchema)       │
│                    ↓                                         │
│  5. Frontend renders typed components                        │
│     - PriorityActions (what to do first)                     │
│     - BulletFixes (before/after with copy)                   │
│     - MissingKeywords (grouped by category)                  │
│     - SectionFeedback (only if issues)                       │
│     - LengthAssessment (only if issues)                      │
│     - ScoreCard (de-emphasized at bottom)                    │
│                                                              │
└─────────────────────────────────────────────────────────────┘
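In code, steps 2 through 4 condense into a short route handler. A sketch of the shape - the path and names approximate the repo's, they're not verbatim:

// app/api/analyze/route.ts (sketch)

import { NextResponse } from "next/server";
import { analyzeResume } from "@/lib/ai/resume-analyzer";
import { ResumeAnalysisSchema } from "@/lib/validations/resume-analysis";

export async function POST(req: Request) {
  // Step 2: validate the input before spending tokens
  const parsed = ResumeAnalysisSchema.safeParse(await req.json());
  if (!parsed.success) {
    return NextResponse.json({ error: "Invalid input" }, { status: 400 });
  }

  // Steps 3-4: generateObject returns schema-validated JSON or throws
  const analysis = await analyzeResume(parsed.data);

  // Step 5: the frontend receives fully typed data
  return NextResponse.json(analysis);
}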

TL;DR

| Problem                       | Solution                                      |
|-------------------------------|-----------------------------------------------|
| Unstructured markdown output  | generateObject with Zod schema                |
| Generic "sample bullets"      | Before/after fixes on actual resume content   |
| AI buzzwords ("Spearheaded")  | Explicit banned word list in prompt           |
| Flat keyword list             | Categorized: Hard Skill / Soft Skill / Domain |
| Information overload          | Only show sections with issues                |
| Vague advice                  | Specific locations: "Add to Skills section"   |

Key Lessons

1. Structured output changes everything.

generateObject + Zod means the AI can't return garbage. Every field is validated. The frontend knows exactly what to expect.

2. "Actionable" means specific.

"Add more metrics" is useless. "Change your 2nd bullet at Acme Corp from X to Y because the job requires Z" is actionable.

3. Ban the slop explicitly.

LLMs default to resume-speak. You have to explicitly tell them not to use "Spearheaded" and friends.

4. Show less, not more.

If sections are complete, don't show the section feedback card. If length is appropriate, don't mention it. Every piece of UI should require action.

5. Categorization adds context.

A flat list of missing keywords is overwhelming. Grouping by Hard Skill / Soft Skill / Domain tells users what kind of gap they're dealing with.


What's Next

  • ATS compatibility check - Analyze the PDF structure, not just text
  • Save analysis history - Now that output is structured, we can persist it

Why Open Source?

Every iteration, every refactor - it's all in the repo. Not because I'm proud of the first "Spearheaded" output, but because someone else is probably fighting the same battle with AI slop right now.

If this post saves you from shipping generic AI advice to your users, it was worth writing.

The repo: github.com/ImAbdullahJan/carific.ai

If you find this useful, consider starring the repo - it helps others discover the project!

Key files from this post:

  • lib/ai/resume-analyzer.ts - AI integration with structured output
  • lib/validations/resume-analysis.ts - Zod schemas
  • components/analysis/ - All the UI components

Your Turn

I'd love feedback:

  • On the code: See something that could be better? Open an issue or PR.
  • On the post: Too long? Missing something? Tell me.
  • On AI features: How do you handle structured output in your projects?

Building in public only works if there's a public to build with.


If this post helped you, drop a ❤️. It means more than you know.


Third post of many. See you in the next one.
