
Chishan

Posted on • Originally published at visacanvas.com

How We Approach USCIS Criteria Parsing in an AI Assessment Tool

Building an AI tool that assesses immigration eligibility involves parsing complex legal criteria into structured evaluation frameworks. Here is how we approached this challenge and what we learned along the way.

The Problem

USCIS defines eligibility criteria for employment-based immigration categories like EB-1A (Extraordinary Ability) and NIW (National Interest Waiver) in terms that are intentionally broad. For EB-1A, there are 10 criteria covering everything from awards to publications to high salary. An applicant needs to meet at least 3 of these 10.

The challenge is translating these legal definitions into something an AI system can evaluate consistently.

Our Approach: Structured Criterion Decomposition

Rather than trying to have an AI directly interpret legal language, we decomposed each USCIS criterion into sub-components:

```typescript
interface CriterionDefinition {
  id: string;
  title: string;
  uscisDescription: string;
  subComponents: SubComponent[];
  evidenceTypes: EvidenceType[];
  scoringRubric: ScoringRubric;
}

interface SubComponent {
  name: string;
  weight: number;
  evaluationPrompt: string;
  positiveIndicators: string[];
  negativeIndicators: string[];
}
```

Each of the 10 EB-1A criteria gets broken down into 3-5 sub-components, each with its own evaluation logic.

Criterion Example: Original Contributions

Criterion 5 (Original Contributions of Major Significance) is one of the most commonly claimed criteria. We decompose it as follows:

  1. Originality Assessment: Is the contribution genuinely novel?
  2. Impact Measurement: What is the scope of impact (team, organization, industry, field)?
  3. Recognition Evidence: Has the contribution been recognized by others?
  4. Documentation Quality: How well can the contribution be documented?
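The four sub-components above can be expressed as a `CriterionDefinition`-style object. This is an illustrative sketch: the weights, prompts, and indicator strings are hypothetical examples, not our production values.

```typescript
// Hypothetical decomposition of Criterion 5. Field names follow the
// SubComponent interface; all values here are made-up examples.
const originalContributions = {
  id: "eb1a-criterion-5",
  title: "Original Contributions of Major Significance",
  subComponents: [
    {
      name: "Originality Assessment",
      weight: 0.3,
      evaluationPrompt: "Is the contribution genuinely novel relative to prior work?",
      positiveIndicators: ["first-of-its-kind method", "patents citing the work"],
      negativeIndicators: ["routine application of known techniques"],
    },
    {
      name: "Impact Measurement",
      weight: 0.3,
      evaluationPrompt: "What is the scope of impact: team, organization, industry, or field?",
      positiveIndicators: ["adoption across the field", "independent replications"],
      negativeIndicators: ["impact limited to the applicant's employer"],
    },
    {
      name: "Recognition Evidence",
      weight: 0.25,
      evaluationPrompt: "Have independent experts recognized the contribution?",
      positiveIndicators: ["expert letters", "citations", "media coverage"],
      negativeIndicators: ["only self-reported claims"],
    },
    {
      name: "Documentation Quality",
      weight: 0.15,
      evaluationPrompt: "How well can the contribution be documented with primary sources?",
      positiveIndicators: ["published papers", "signed attestations"],
      negativeIndicators: ["no verifiable records"],
    },
  ],
};

// Sub-component weights sum to 1, so the weighted average stays on the 0-10 scale.
const totalWeight = originalContributions.subComponents
  .reduce((sum, c) => sum + c.weight, 0);
```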

Scoring Methodology

We use a weighted scoring system in which each sub-component receives a score from 0 to 10:

| Score Range | Interpretation |
| --- | --- |
| 8-10 | Strong evidence, likely meets USCIS standard |
| 5-7 | Moderate evidence, may need strengthening |
| 3-4 | Weak evidence, significant gaps |
| 0-2 | Insufficient evidence |

The overall criterion score is a weighted average of sub-component scores.
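A minimal sketch of that weighted average, with an `interpret` helper that mirrors the score bands in the table. The function names and example weights are illustrative, not our internal API.

```typescript
// Each evaluated sub-component carries a relative weight and a 0-10 score.
interface ScoredComponent {
  weight: number; // relative importance within the criterion
  score: number;  // 0-10 from the evaluation pass
}

// Weighted average, normalized by total weight so uneven weights
// still produce a score on the 0-10 scale.
function criterionScore(components: ScoredComponent[]): number {
  const totalWeight = components.reduce((s, c) => s + c.weight, 0);
  const weighted = components.reduce((s, c) => s + c.weight * c.score, 0);
  return weighted / totalWeight;
}

// Map a numeric score to the interpretation bands from the table above.
function interpret(score: number): string {
  if (score >= 8) return "Strong evidence, likely meets USCIS standard";
  if (score >= 5) return "Moderate evidence, may need strengthening";
  if (score >= 3) return "Weak evidence, significant gaps";
  return "Insufficient evidence";
}

// Example: three sub-components with uneven weights.
// (0.5 * 8 + 0.3 * 6 + 0.2 * 4) / 1.0 = 6.6 → moderate evidence.
const example = criterionScore([
  { weight: 0.5, score: 8 },
  { weight: 0.3, score: 6 },
  { weight: 0.2, score: 4 },
]);
```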

Technical Challenges

1. Subjectivity Handling

Legal criteria are inherently subjective. We address this by:

  • Using multiple evaluation passes with different prompting strategies
  • Providing confidence intervals rather than single scores
  • Clearly indicating where professional legal review is needed
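The multi-pass idea can be sketched as follows. Here `runPass` is a hypothetical stand-in for a real LLM call (stubbed with static scores so the example runs); the spread-based review flag is one simple way to surface disagreement, not necessarily the exact heuristic we ship.

```typescript
// Each strategy prompts the model differently; in this sketch the
// model call is stubbed with fixed scores.
type PromptStrategy = "strict" | "lenient" | "evidence-first";

function runPass(strategy: PromptStrategy): number {
  // Placeholder for an LLM evaluation returning a 0-10 score.
  const stubScores: Record<PromptStrategy, number> = {
    strict: 5.5,
    lenient: 7.0,
    "evidence-first": 6.0,
  };
  return stubScores[strategy];
}

// Report a mean plus a min-max range rather than a single number.
function scoreWithInterval(strategies: PromptStrategy[]) {
  const scores = strategies.map(runPass);
  const mean = scores.reduce((s, x) => s + x, 0) / scores.length;
  const low = Math.min(...scores);
  const high = Math.max(...scores);
  return {
    mean,
    low,
    high,
    // A wide spread means the strategies disagree -- exactly the kind of
    // case that should be flagged for professional legal review.
    needsReview: high - low > 2,
  };
}
```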

2. False Confidence Prevention

The biggest risk is an AI tool giving users false confidence about their eligibility. We mitigate this by:

  • Setting conservative score thresholds
  • Always recommending attorney consultation
  • Clearly labeling the tool as informational, not legal advice

3. Criterion Interdependence

Some criteria overlap in evidence they accept. We handle this by tracking evidence reuse and noting when the same achievement supports multiple criteria.
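One way to sketch that tracking: each piece of evidence records the criteria it is cited under, and a filter surfaces items used more than once. The type and field names here are illustrative.

```typescript
// Each evidence item lists the criterion ids it supports.
interface EvidenceItem {
  id: string;
  description: string;
  supportsCriteria: string[];
}

// Evidence cited under more than one criterion is flagged for review,
// since the same achievement is doing double duty.
function findReusedEvidence(items: EvidenceItem[]): EvidenceItem[] {
  return items.filter((item) => item.supportsCriteria.length > 1);
}

const evidence: EvidenceItem[] = [
  {
    id: "e1",
    description: "Best Paper award at a major conference",
    supportsCriteria: ["awards", "original-contributions"],
  },
  {
    id: "e2",
    description: "Salary survey showing top-percentile compensation",
    supportsCriteria: ["high-salary"],
  },
];
// findReusedEvidence(evidence) flags e1, which supports two criteria.
```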

Results and Limitations

The tool provides a structured self-assessment that helps users organize their thinking before consulting an attorney. Key metrics:

  • Assessment completion time: ~15 minutes
  • User-reported usefulness: 4.2/5
  • Attorney consultation rate after assessment: 68%

The primary limitation is that no AI tool can replace professional legal judgment. USCIS adjudicators consider the totality of evidence in ways that resist algorithmic modeling.

Try It

The EB-1A assessment tool is free to use. For detailed explanations of each criterion, see the EB-1A guide.


Disclaimer: This article and the referenced tool provide general information only and do not constitute legal advice. Immigration law is complex, and individual circumstances vary significantly. Always consult a qualified immigration attorney for advice specific to your situation.
