Building AI tools for government-regulated domains is a different beast than your typical SaaS. Here are five hard-won lessons from building an AI-powered immigration eligibility assessment platform.
## 1. Domain Knowledge Cannot Be Shortcut
Before writing a single line of code, we spent weeks reading the USCIS Policy Manual, AAO decision archives, and key case law. The 2010 Kazarian v. USCIS decision alone changed how EB-1A applications are evaluated — if your tool does not understand the two-step framework, it is essentially broken.
Takeaway: In regulated domains, developer time reading primary sources is never wasted. Budget for it.
## 2. YMYL Content Requires a Different Engineering Mindset
Google classifies content that could impact someone's health, finances, or legal status as YMYL (Your Money or Your Life). For us, this meant:
- Every output needs a disclaimer. Not buried in footer text — prominently displayed alongside the assessment results
- Never use definitive language. "Your profile shows strength in..." instead of "You qualify for..."
- Cite everything. Link to the specific INA section, CFR regulation, or case law that supports each point
```javascript
// Bad: definitive language, no context
const result = "You meet the EB-1A requirements";

// Good: hedged message, prominent disclaimer, and citations
const result = {
  message: "Your profile shows indicators of strength in 4 of 10 criteria",
  disclaimer: "This is an educational assessment, not legal advice.",
  references: [
    { source: "INA § 203(b)(1)(A)", url: "https://www.uscis.gov/..." },
    { source: "8 CFR § 204.5(h)(3)", url: "https://www.ecfr.gov/..." },
  ],
};
```
## 3. Structured AI Outputs Eliminate Entire Bug Categories
Early prototypes used free-text AI responses that we then parsed. This was a nightmare — the AI would change its formatting subtly between calls, breaking our parser.
Switching to the Vercel AI SDK's structured output feature with Zod schemas was a game-changer:
```typescript
import { generateObject } from "ai";
import { z } from "zod";

const schema = z.object({
  criteriaScores: z.array(
    z.object({
      name: z.string(),
      score: z.number().min(0).max(100),
      evidence: z.array(z.string()),
    })
  ),
});

const { object } = await generateObject({
  model, // your configured model, e.g. from @ai-sdk/openai
  schema,
  prompt: buildAssessmentPrompt(profile), // your own prompt-construction code
});
// object.criteriaScores is fully typed and validated against the schema
```
The SDK validates the model's output against your schema, so downstream code always receives data in exactly the shape it expects. No parsing. No regex. No surprises.
## 4. Conservative Scoring Builds Trust
Our first version was too optimistic — it was telling almost everyone they had a chance. Users who then consulted attorneys found out their cases were much weaker than the tool suggested.
We recalibrated to be deliberately conservative. It is better to pleasantly surprise someone at a lawyer's office than to give them false hope about a life-changing immigration decision.
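A minimal sketch of one way to bake in that conservatism. The shrink factor and thresholds here are illustrative inventions, not VisaCanvas's actual calibration:

```typescript
type Verdict = "strong" | "possible" | "weak";

// Shrink each raw model score toward zero, then require a margin
// above the minimum before reporting a stronger verdict.
function conservativeVerdict(rawScores: number[]): Verdict {
  const SHRINK = 0.8;          // discount optimistic model outputs
  const CRITERIA_NEEDED = 3;   // EB-1A requires meeting 3 of 10 criteria
  const PASS_THRESHOLD = 70;   // a criterion "counts" only above this

  const met = rawScores.filter((s) => s * SHRINK >= PASS_THRESHOLD).length;

  if (met >= CRITERIA_NEEDED + 1) return "strong"; // demand a margin
  if (met >= CRITERIA_NEEDED) return "possible";
  return "weak";
}
```

Requiring a margin above the bare minimum means borderline profiles get the cautious verdict by default, which is exactly the failure mode you want in a YMYL tool.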
You can try the calibrated assessment tool at VisaCanvas.
## 5. The 10-Criteria Framework Maps Beautifully to Software
The EB-1A evaluation framework is one of the best-structured sets of legal criteria I've encountered:
| Criterion | Evidence Type | Evaluation Complexity |
|---|---|---|
| Awards | Verifiable, discrete | Low |
| Membership | Verifiable, discrete | Low |
| Press coverage | Semi-structured | Medium |
| Judging | Verifiable | Low |
| Original contribution | Highly subjective | High |
| Scholarly articles | Quantifiable | Low |
| Exhibitions | Verifiable | Low |
| Leading role | Context-dependent | Medium |
| High salary | Quantifiable | Low |
| Commercial success | Quantifiable | Medium |
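The table translates almost directly into a data model. Here is a sketch in TypeScript; the type names and the routing rule are my own illustration, not how any particular tool is built:

```typescript
type Complexity = "low" | "medium" | "high";

interface Criterion {
  name: string;
  complexity: Complexity;
}

// The ten EB-1A criteria with the evaluation complexity from the table
const EB1A_CRITERIA: Criterion[] = [
  { name: "Awards", complexity: "low" },
  { name: "Membership", complexity: "low" },
  { name: "Press coverage", complexity: "medium" },
  { name: "Judging", complexity: "low" },
  { name: "Original contribution", complexity: "high" },
  { name: "Scholarly articles", complexity: "low" },
  { name: "Exhibitions", complexity: "low" },
  { name: "Leading role", complexity: "medium" },
  { name: "High salary", complexity: "low" },
  { name: "Commercial success", complexity: "medium" },
];

// Route cheap, verifiable criteria to rule-based checks and reserve
// expensive LLM evaluation for the subjective ones
function needsLLM(c: Criterion): boolean {
  return c.complexity !== "low";
}
```

Splitting evaluation this way keeps the deterministic criteria fast and testable, and concentrates your prompt-engineering effort on the handful of genuinely subjective ones.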
For a complete breakdown of all 10 criteria with legal references, see the EB-1A guide on VisaCanvas.
## Bonus: What I Would Do Differently
- Build the disclaimer system first rather than retrofitting it later. It should be a first-class feature, not an afterthought
- Build a feedback loop earlier. Users who consult attorneys after using the tool are invaluable sources of calibration data
- Invest in edge cases from day one. Immigration law has a lot of corner cases that only surface with real users
This reflects my experience building VisaCanvas, a free AI-powered EB-1A/NIW eligibility assessment tool. It is not legal advice — please consult a qualified immigration attorney for guidance specific to your situation.