Chishan

Parsing Immigration Law with TypeScript: Building an AI Assessment Engine

The Challenge: Making Immigration Law Machine-Readable

Immigration law is one of the most complex areas of US legal code. The EB-1A extraordinary ability visa, for example, requires applicants to meet at least 3 of 10 criteria defined in INA Section 203(b)(1)(A). Each criterion involves nuanced interpretation that has been shaped by decades of case law, most notably Kazarian v. USCIS (2010).

As developers, we naturally think about how to model these kinds of structured evaluation frameworks programmatically. In this post, I will walk through how we approached building an AI-powered assessment engine in TypeScript that helps professionals understand where they stand on these criteria.

Disclaimer: This tool does not provide legal advice. It is an educational self-assessment tool. Always consult a qualified immigration attorney for legal guidance.

Data Modeling: EB-1A Criteria as TypeScript Types

The first step was defining a clean type system for the 10 USCIS criteria:

interface EB1ACriterion {
  id: CriterionId; // the union type below, not a bare string
  name: string;
  description: string;
  evidenceTypes: EvidenceType[];   // defined elsewhere in the codebase
  caseReferences: CaseReference[]; // defined elsewhere in the codebase
}

type CriterionId = 
  | "awards"
  | "membership"
  | "press"
  | "judging"
  | "original_contribution"
  | "scholarly_articles"
  | "exhibitions"
  | "leading_role"
  | "high_remuneration"
  | "commercial_success";

interface AssessmentResult {
  criterionId: CriterionId;
  score: number; // 0-100
  confidence: number;
  evidence: string[];
  suggestions: string[];
}

This gave us strong typing for the entire evaluation pipeline.

The Assessment Pipeline

The core challenge was converting freeform user input (describing their professional achievements) into structured evaluations against each criterion. We used a multi-stage pipeline:

  1. Input Parsing: Extract structured facts from natural language descriptions
  2. Criterion Matching: Map achievements to relevant EB-1A criteria
  3. Evidence Evaluation: Score the strength of evidence for each criterion
  4. Result Synthesis: Generate actionable feedback
async function assessEligibility(
  input: UserProfile
): Promise<AssessmentResult[]> {
  const facts = await extractFacts(input);
  const matches = mapToCriteria(facts);
  const evaluations = await evaluateEvidence(matches);
  return synthesizeResults(evaluations);
}
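For illustration, stage 2 (`mapToCriteria`) can be sketched as a simple keyword lookup. The keyword table, the `Fact` shape, and the three-criterion subset below are simplified placeholders for this post, not the production matching logic:

```typescript
type CriterionId = "awards" | "judging" | "press"; // subset for brevity

interface Fact {
  text: string;
}

// Hypothetical keyword table; the real pipeline uses AI-driven matching.
const KEYWORDS: Record<CriterionId, string[]> = {
  awards: ["award", "prize", "medal"],
  judging: ["reviewer", "judge", "panel"],
  press: ["featured", "interview", "profile piece"],
};

function mapToCriteria(facts: Fact[]): Map<CriterionId, Fact[]> {
  const matches = new Map<CriterionId, Fact[]>();
  for (const fact of facts) {
    const lower = fact.text.toLowerCase();
    for (const [id, words] of Object.entries(KEYWORDS) as [CriterionId, string[]][]) {
      if (words.some((w) => lower.includes(w))) {
        matches.set(id, [...(matches.get(id) ?? []), fact]);
      }
    }
  }
  return matches;
}
```

The real matcher is fuzzier than this, but the shape is the same: many facts in, a criterion-keyed map of supporting facts out.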

Integrating AI for Nuanced Evaluation

The tricky part is that immigration law is not black and white. The same achievement might be strong evidence for one criterion and irrelevant for another. We leveraged the Vercel AI SDK with structured outputs to ensure consistency:

import { generateObject } from "ai";
import { z } from "zod";

const evaluationSchema = z.object({
  criterionScores: z.array(z.object({
    criterion: z.string(),
    score: z.number().min(0).max(100),
    reasoning: z.string(),
    strengthOfEvidence: z.enum(["strong", "moderate", "weak", "none"]),
  })),
  overallAssessment: z.string(),
  recommendedNextSteps: z.array(z.string()),
});

Using Zod schemas with the AI SDK gave us type-safe outputs that we could reliably render in the UI.

YMYL Compliance: The Non-Negotiable

Since immigration decisions affect people's lives and finances (a YMYL topic in SEO terms), we had to be extremely careful:

  • Every AI output includes a clear disclaimer
  • The tool never says "you qualify" — it says "your profile shows strength in these areas"
  • We link to authoritative sources (USCIS.gov, case law references) for every criterion
  • The assessment is framed as educational, not advisory
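One way to enforce the second point in code is a phrasing helper that only ever emits descriptive, non-advisory language. This is a hypothetical illustration (the thresholds and wording are made up for this post), but it shows how the rule can live in the type-checked rendering path rather than in a style guide:

```typescript
// Hypothetical helper: maps a 0-100 score to descriptive phrasing.
// By construction, no branch can ever say "you qualify".
function describeStrength(score: number): string {
  if (score >= 70) return "your profile shows strength in this area";
  if (score >= 40) return "your profile shows partial evidence in this area";
  return "your profile shows limited evidence in this area";
}
```

Because every score passes through this function before display, the "never say you qualify" rule is enforced mechanically instead of by review.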

You can see this approach in action on VisaCanvas, where users get a structured breakdown of their profile against all 10 criteria.

Performance Considerations

Processing a full 10-criterion assessment can be computationally intensive. We used several optimization strategies:

  • Streaming responses: Users see results progressively rather than waiting for the full evaluation
  • Edge Runtime: Deployed on Vercel Edge for lower latency
  • Incremental evaluation: Criteria are evaluated in parallel where possible
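A minimal sketch of the parallel-evaluation idea, with `scoreCriterion` as a stand-in for the real per-criterion evaluator: `Promise.allSettled` lets one failing criterion degrade gracefully instead of sinking the whole assessment.

```typescript
type CriterionId = string;

// Stand-in for the real evaluator (which calls the model per criterion).
async function scoreCriterion(id: CriterionId): Promise<number> {
  if (id === "press") throw new Error("upstream timeout"); // simulated failure
  return 72;
}

async function evaluateAll(ids: CriterionId[]): Promise<Map<CriterionId, number | null>> {
  // allSettled runs all evaluations concurrently and never short-circuits,
  // so a single rejection doesn't discard the other criteria's results.
  const settled = await Promise.allSettled(ids.map((id) => scoreCriterion(id)));
  const out = new Map<CriterionId, number | null>();
  ids.forEach((id, i) => {
    const r = settled[i];
    out.set(id, r.status === "fulfilled" ? r.value : null); // null = retry later
  });
  return out;
}
```

Pairing this with streaming means a user sees nine solid scores and one "still evaluating" slot, rather than a spinner for the whole page.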

Lessons Learned

  1. Legal domain knowledge is essential: You cannot build a YMYL tool without deeply understanding the domain. We spent weeks reading USCIS policy manuals and case law before writing a single line of code.

  2. Structured AI outputs are a game changer: The Vercel AI SDK's structured-output feature eliminated an entire class of parsing bugs.

  3. Type safety matters more in sensitive domains: In a legal tech context, a type error could mislead someone about their immigration eligibility. TypeScript was the right choice.

  4. User trust requires transparency: Showing the reasoning behind each score, along with references to specific regulations, is what makes users trust the tool.

For a deeper dive into EB-1A requirements and what each criterion actually means, check out this comprehensive guide.

Conclusion

Building AI tools for legal domains is both technically challenging and socially important. The key is combining strong engineering practices (type safety, structured outputs, streaming) with domain expertise and ethical responsibility.

If you are interested in the intersection of AI and immigration law, I would love to hear your thoughts in the comments.


This article describes the technical approach behind VisaCanvas, a free AI-powered EB-1A eligibility assessment tool. The tool is for educational purposes only and does not constitute legal advice.
