Chishan
How AI Is Changing Immigration Self-Assessment: Building an EB1A Eligibility Checker

Immigration law sits at an unusual intersection: highly structured criteria, deeply personal circumstances, and a sea of misinformation. The EB1A visa category for "extraordinary ability" is a prime example. USCIS defines 10 specific criteria, yet most applicants spend thousands of dollars on lawyers just to learn whether they qualify.

This got me thinking: could AI parse these structured legal criteria and produce a meaningful preliminary assessment? Here's what I learned while building one approach to the problem.

The EB1A Criteria Problem

The EB1A (Employment-Based First Preference, Category A) green card is reserved for individuals who demonstrate extraordinary ability in sciences, arts, education, business, or athletics. Applicants must satisfy at least 3 of 10 USCIS criteria:

  1. Awards or prizes for excellence
  2. Membership in associations requiring outstanding achievement
  3. Published material about the applicant in major media
  4. Judging the work of others in the field
  5. Original contributions of major significance
  6. Authorship of scholarly articles
  7. Artistic exhibitions or showcases
  8. Leading or critical role at distinguished organizations
  9. High salary relative to others in the field
  10. Commercial success in the performing arts

Each criterion has its own nuances. "Original contributions of major significance" is far more subjective than "high salary." A patent might count, but so might an open-source library with widespread adoption. The interpretation depends on the field, the evidence, and (frankly) the adjudicator.
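The structured nature of these criteria can be captured directly in code. Here's a minimal sketch of such a mapping — the labels paraphrase the regulatory criteria at 8 CFR 204.5(h)(3), while the `id` values and the `subjective` flag are implementation choices of mine, not anything USCIS defines:

```typescript
// Hypothetical mapping of the 10 EB1A criteria to identifiers.
// The `subjective` flag marks criteria whose evidence tends to
// require qualitative judgment rather than simple counting.
interface Criterion {
  id: number;
  label: string;
  subjective: boolean;
}

const EB1A_CRITERIA: Criterion[] = [
  { id: 1, label: "Awards or prizes for excellence", subjective: false },
  { id: 2, label: "Membership in associations requiring outstanding achievement", subjective: true },
  { id: 3, label: "Published material about the applicant in major media", subjective: false },
  { id: 4, label: "Judging the work of others in the field", subjective: false },
  { id: 5, label: "Original contributions of major significance", subjective: true },
  { id: 6, label: "Authorship of scholarly articles", subjective: false },
  { id: 7, label: "Artistic exhibitions or showcases", subjective: false },
  { id: 8, label: "Leading or critical role at distinguished organizations", subjective: true },
  { id: 9, label: "High salary relative to others in the field", subjective: false },
  { id: 10, label: "Commercial success in the performing arts", subjective: false },
];
```

Having the criteria as data rather than prose makes every later step — question routing, evidence classification, scoring — a matter of mapping onto these ids.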

Why This Is a Good AI Problem

Several characteristics make EB1A assessment suitable for AI:

Structured input space. The 10 criteria are well-defined. Unlike open-ended legal questions ("Is this contract enforceable?"), EB1A assessment maps to a finite set of categories with known evidence types.

Pattern matching opportunity. Immigration lawyers assess EB1A eligibility by matching a person's achievements against known patterns. Published 10+ papers in peer-reviewed journals? That maps to criterion 6. Served on an NIH review panel? That's criterion 4. These patterns are learnable.

Precedent-rich domain. USCIS publishes policy manuals and case decisions. The AAO (Administrative Appeals Office) has issued hundreds of decisions clarifying what evidence meets each criterion. This creates a structured knowledge base.

The Technical Architecture

Here's how an AI-powered EB1A checker can work:

Step 1: Structured Data Collection

Rather than asking users to upload their entire CV, the system needs targeted questions mapped to specific criteria:

// Example: Criterion mapping structure
interface CriterionQuestion {
  criterionId: number;
  question: string;
  evidenceType: 'quantitative' | 'qualitative' | 'binary';
  followUp?: (answer: string) => string | undefined;
}

const questions: CriterionQuestion[] = [
  {
    criterionId: 6,
    question: "How many peer-reviewed publications do you have?",
    evidenceType: 'quantitative',
    followUp: (count) =>
      parseInt(count, 10) > 0
        ? "What is the total citation count across your publications?"
        : undefined
  },
  {
    criterionId: 5,
    question: "Describe your most significant professional contribution",
    evidenceType: 'qualitative'
  }
];
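Driving this question flow is straightforward: record each answer under its criterion and surface a follow-up prompt when the callback produces one. This is an illustrative sketch (the `CriterionQuestion` shape is restated so the snippet stands alone, and the helper names are mine):

```typescript
// Restated from the mapping structure above so this snippet is self-contained.
interface CriterionQuestion {
  criterionId: number;
  question: string;
  evidenceType: 'quantitative' | 'qualitative' | 'binary';
  followUp?: (answer: string) => string | undefined;
}

// criterionId -> answers collected so far
type AnswerLog = Map<number, string[]>;

// Store the answer under its criterion; return the next prompt, if any.
function recordAnswer(
  log: AnswerLog,
  q: CriterionQuestion,
  answer: string
): string | undefined {
  const answers = log.get(q.criterionId) ?? [];
  answers.push(answer);
  log.set(q.criterionId, answers);
  return q.followUp?.(answer);
}
```

Keeping the log keyed by criterion means the downstream classifier receives evidence already grouped the way USCIS evaluates it.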

Step 2: NLP-Powered Evidence Parsing

The key technical challenge is interpreting free-text responses against legal standards. A user might say "I reviewed papers for IEEE Transactions" — the system needs to understand this maps to criterion 4 (judging).

// Simplified evidence classification
async function classifyEvidence(
  response: string,
  context: { field: string; role: string }
): Promise<CriterionMatch[]> {
  const prompt = `
    Given this professional achievement: "${response}"
    In the field of: ${context.field}

    Evaluate against USCIS EB1A criteria.
    For each potentially matching criterion, assess:
    - Relevance (0-1): How closely does this map?
    - Strength (0-1): How compelling is this evidence?
    - Gaps: What additional evidence would strengthen this?
  `;

  return await aiModel.evaluate(prompt);
}
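Whatever the model returns needs deterministic post-processing before it reaches a user. Here's one hedged sketch of the `CriterionMatch` shape the classifier is assumed to produce, plus a filter so a single achievement doesn't weakly "count" toward half the criteria (the threshold and sort order are my choices, not anything from USCIS):

```typescript
// Assumed output shape of the AI classifier above.
interface CriterionMatch {
  criterionId: number;
  relevance: number;  // 0-1: how closely this maps to the criterion
  strength: number;   // 0-1: how compelling the evidence is
  gaps: string[];     // what would strengthen this evidence
}

// Keep only matches above a relevance floor, strongest first.
function keepPlausibleMatches(
  matches: CriterionMatch[],
  minRelevance = 0.5
): CriterionMatch[] {
  return matches
    .filter(m => m.relevance >= minRelevance)
    .sort((a, b) => b.strength - a.strength);
}
```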

Step 3: Cross-Criterion Analysis

Individual criterion scoring isn't enough. The system needs to assess the overall profile holistically — just like an immigration officer would:

function assessProfile(criteriaScores: CriterionScore[]): Assessment {
  const strongCriteria = criteriaScores.filter(c => c.strength > 0.7);
  const moderateCriteria = criteriaScores.filter(
    c => c.strength > 0.4 && c.strength <= 0.7
  );

  // USCIS requires meeting at least 3 criteria
  const meetsThreshold = strongCriteria.length >= 3;

  // But also considers the "final merits" determination
  const overallStrength = calculateMeritScore(criteriaScores);

  return {
    criteriaMet: strongCriteria.length,
    overallAssessment: meetsThreshold && overallStrength > 0.6
      ? 'strong_candidate'
      : meetsThreshold
        ? 'potential_candidate'
        : 'needs_strengthening',
    recommendations: generateRecommendations(criteriaScores)
  };
}
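The `calculateMeritScore` helper referenced above is left undefined in the snippet. One simple possibility, sketched here, is a weighted mean in which strong criteria count double — echoing the idea that the final merits determination weighs the quality of the evidence, not just the count. The weights and threshold are illustrative, not USCIS policy:

```typescript
// Assumed shape for per-criterion scores.
interface CriterionScore {
  criterionId: number;
  strength: number; // 0-1
}

// Weighted mean: strong evidence (strength > 0.7) weighs double.
function calculateMeritScore(scores: CriterionScore[]): number {
  if (scores.length === 0) return 0;
  let weighted = 0;
  let totalWeight = 0;
  for (const s of scores) {
    const w = s.strength > 0.7 ? 2 : 1;
    weighted += w * s.strength;
    totalWeight += w;
  }
  return weighted / totalWeight;
}
```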

Key Technical Decisions

Choosing the Right AI Model

For legal text interpretation, the model needs:

  • Domain knowledge: Understanding of immigration terminology and USCIS standards
  • Calibrated confidence: Overly optimistic assessments could mislead users
  • Explainability: Users need to understand why they scored a certain way

I found that combining a large language model with structured prompting (including relevant USCIS policy manual excerpts) produces more reliable results than fine-tuning on limited immigration data.
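Concretely, structured prompting here means templating the relevant policy excerpt directly into the prompt so the model grounds its answer in the actual standard rather than its priors. A minimal sketch — the function name is mine, and the excerpt passed in would come from the USCIS Policy Manual rather than being hard-coded:

```typescript
// Build a prompt that grounds the model in a policy excerpt.
// The excerpt is supplied by the caller, e.g. retrieved per criterion.
function buildGroundedPrompt(
  achievement: string,
  criterionLabel: string,
  policyExcerpt: string
): string {
  return [
    `Criterion: ${criterionLabel}`,
    `Relevant policy guidance: ${policyExcerpt}`,
    `Achievement to evaluate: "${achievement}"`,
    `Assess relevance (0-1), strength (0-1), and evidence gaps.`,
  ].join('\n');
}
```

Because the guidance travels with every request, the model's output stays anchored to the criterion's actual standard even for unusual fields.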

Handling Subjectivity

Some criteria are inherently subjective. "Original contributions of major significance" means different things in different fields. A contribution in machine learning might be measured by citations and adoption. In business, it might be measured by revenue impact.

The approach: field-normalized scoring with explicit uncertainty:

interface FieldNormalizedScore {
  rawScore: number;
  fieldContext: string;
  confidenceInterval: [number, number];
  comparativeNote: string; // e.g., "Above average for software engineers"
}
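To make the normalization concrete, here's a sketch of computing a field-normalized citation score. The per-field baselines are invented placeholders (in practice they would come from bibliometric data), and the logistic squash is just one reasonable choice:

```typescript
// Illustrative per-field citation baselines; real values would be
// derived from bibliometric data, not hard-coded.
interface FieldBaseline {
  median: number;
  spread: number;
}

const CITATION_BASELINES: Record<string, FieldBaseline> = {
  'machine-learning': { median: 120, spread: 80 },
  'sociology': { median: 25, spread: 15 },
};

// Squash the field-relative z-score into (0, 1) with a logistic curve.
function normalizeCitations(field: string, citations: number): number {
  const base = CITATION_BASELINES[field];
  if (!base) return 0.5; // unknown field: report maximal uncertainty
  const z = (citations - base.median) / base.spread;
  return 1 / (1 + Math.exp(-z));
}
```

With these (invented) baselines, 50 citations scores well above average in sociology but below average in machine learning — exactly the field effect the prose describes.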

Avoiding the Unauthorized Practice of Law

This is the elephant in the room. Any AI tool in this space must be crystal clear: it provides information, not legal advice. The distinction matters legally and ethically.

Design decisions that reinforce this:

  • Every output includes a disclaimer
  • Language uses "may qualify" rather than "qualifies"
  • The tool suggests consulting an attorney for final evaluation
  • No tool replaces a lawyer's assessment of the "final merits" determination
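These design decisions can be enforced in code rather than left to prompt behavior. A minimal sketch — the disclaimer wording and the hedging rewrite are illustrative, and a production system would hedge far more phrasing than this single substitution:

```typescript
const DISCLAIMER =
  'This is general information, not legal advice. ' +
  'Consult a licensed immigration attorney for a final evaluation.';

// Rewrite assertive language to hedged language and append the disclaimer.
function hedgeAssessment(text: string): string {
  const hedged = text.replace(/\bqualifies\b/g, 'may qualify');
  return `${hedged}\n\n${DISCLAIMER}`;
}
```

Putting the guardrail in a post-processing function means no model output reaches the user without it, regardless of how the prompt behaved.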

What I Learned

Building in this space revealed several insights:

Structured legal criteria are surprisingly AI-friendly. The EB1A's 10-criterion framework provides natural guardrails for AI analysis. Compared to open-ended legal questions, this is a well-bounded problem.

Calibration matters more than accuracy. An overconfident system that tells unqualified applicants they're strong candidates does real harm. Conservative scoring with clear uncertainty communication is essential.

Context is everything. The same achievement (say, 50 citations) means vastly different things in computer science vs. sociology. Field normalization isn't optional — it's fundamental.

Users want actionable guidance, not just scores. The most valuable output isn't "you score 7/10" but "your publications evidence is strong, but you should strengthen your leadership criterion by documenting your role as lead architect at [Company]."
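One way to produce that kind of guidance is to template recommendations from the per-criterion scores, focusing on borderline criteria where additional documentation would actually move the needle. The thresholds and wording here are illustrative:

```typescript
// Assumed shape for per-criterion scores.
interface CriterionScore {
  criterionId: number;
  strength: number; // 0-1
}

// Flag borderline criteria (some evidence, but not yet compelling)
// as the highest-leverage targets for strengthening.
function generateRecommendations(scores: CriterionScore[]): string[] {
  return scores
    .filter(s => s.strength > 0.3 && s.strength <= 0.7)
    .map(s =>
      `Criterion ${s.criterionId} is borderline (strength ${s.strength.toFixed(2)}): ` +
      `gather stronger documentation before filing.`
    );
}
```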

A Working Example

If you want to see these concepts in action, VisaCanvas implements a version of this approach. It walks users through targeted questions, evaluates responses against all 10 USCIS criteria, and generates a personalized assessment report. Their EB1A guide is also worth reading for context on how each criterion is interpreted.

The broader pattern — AI parsing structured professional criteria — extends beyond immigration. Similar approaches could work for professional certifications, academic tenure review, or grant eligibility assessment. The key insight is identifying domains where criteria are well-defined but interpretation requires domain expertise.

Wrapping Up

AI won't replace immigration lawyers any time soon. The final merits determination, strategy decisions about which criteria to pursue, and the nuances of evidence presentation still require human judgment.

But AI can democratize the first step: understanding whether you might qualify. For the researcher in Mumbai wondering if their publications count, or the engineer in Lagos unsure if their patents matter — accessible preliminary assessment removes a significant barrier.

The technical challenge is interesting. The human impact is what makes it worthwhile.


Have you worked on AI applications in regulated domains? I'd be curious to hear about the challenges you faced with calibration and compliance. Drop a comment below.
