DEV Community

Context First AI

We stopped teaching AI and started shipping client projects with volunteer teams. Here's the architecture.

The AI skills gap isn't knowledge — it's context. Mesh is a community programme where small volunteer teams (3-6 people) work on real client AI projects with live data, multi-jurisdictional compliance requirements, and production deliverables. We built the legal and operational scaffolding first, then started onboarding practitioners. This post covers what we built, what broke, and what we'd do differently.

Most AI communities are group chats with better branding. A Slack channel, a handful of webinars, maybe a shared resource folder that nobody updates after month two. Mesh isn't that. Mesh is what happens when you give practitioners real client projects, real data, and real consequences — then build the scaffolding so they don't fall off.

The Conference Badge Problem

I spent the better part of last year watching something quietly frustrating unfold. Smart, capable people — data analysts, ops leads, product managers — kept showing up to AI meetups and communities, notebooks open, ready to contribute. They'd participate in a workshop, build a demo, maybe even prototype something interesting over a weekend. And then Monday would arrive. The demo would sit in a GitHub repo. The workshop notes would gather dust. The gap between "I learned a thing" and "I shipped a thing that mattered" stayed exactly where it was.

It's the conference badge problem. You attend, you network, you collect the badge. But wearing a badge doesn't mean you've done the work.

I kept coming back to the same question: what if a community didn't just teach you about AI implementation — what if it was AI implementation?

The Gap Nobody Talks About

There's an uncomfortable truth in the AI skills space right now. Courses are everywhere. Certifications are multiplying. LinkedIn is full of people who've completed their fifth prompt engineering bootcamp. And yet, when you talk to the hiring managers, the team leads, the people actually building AI into their operations — they'll tell you the same thing. Finding someone who's worked with live data, navigated real constraints, and delivered under ambiguity? That's a different talent pool entirely.

The gap isn't knowledge. It's context.

Knowing how a retrieval-augmented generation pipeline works is table stakes. Knowing how it breaks when your client's data is messy, multilingual, and spread across three legacy systems — that's the part nobody demos. And it's the part that matters when you're sitting across from a stakeholder who needs results by Thursday.

Here's a simplified version of what that looks like in code. The tutorial version:

```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

# The clean version everyone demos
vectorstore = Chroma.from_documents(documents, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
result = retriever.get_relevant_documents("What is our refund policy?")
```

And what you actually deal with on a Mesh project:

```python
import re
from langchain.text_splitter import RecursiveCharacterTextSplitter

def preprocess_client_data(raw_docs: list[dict]) -> list[str]:
    """
    In practice: client data arrives as a mix of English, Hindi,
    and legacy system export formats nobody documented.
    Three weeks of a six-week project went here.
    """
    cleaned = []
    for doc in raw_docs:
        text = doc.get("content", "")

        # Edge case that bites you: legacy exports with
        # Windows-1252 encoding buried inside UTF-8 files.
        # Re-encoding as cp1252 and decoding as UTF-8 repairs the
        # mojibake; if either step fails, the text was already clean.
        try:
            text = text.encode("cp1252").decode("utf-8")
        except (UnicodeEncodeError, UnicodeDecodeError):
            pass  # already valid text, leave it alone

        # Client's CRM exports include HTML entities in plain text fields
        text = re.sub(r"&[a-zA-Z]+;", " ", text)

        # Data minimisation: strip PII before it reaches the pipeline
        # This isn't optional — it's a GDPR/DPDP Act requirement
        text = redact_pii(text)  # your implementation here

        if len(text.strip()) > 50:  # threshold from trial and error
            cleaned.append(text)

    return cleaned

# The splitter config that survived contact with production
splitter = RecursiveCharacterTextSplitter(
    chunk_size=800,       # not 1000 — client docs are shorter than expected
    chunk_overlap=120,    # tuned after retrieval quality dropped at 200
    separators=["\n\n", "\n", ". ", " "]
)
```
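The snippet above leaves `redact_pii` to the reader. Here is a minimal regex-based sketch of the idea; the patterns, placeholder labels, and the function itself are illustrative, not our production redaction, which needs a proper PII detection pass and per-jurisdiction review:

```python
import re

# Illustrative patterns only — a production pipeline would use a
# dedicated PII detection library reviewed per jurisdiction.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "UK_NINO": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with typed placeholders so downstream
    chunks stay readable without carrying personal data."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

So `redact_pii("Reach me at jane@example.com today")` yields `"Reach me at [EMAIL] today"`, and the typed placeholder keeps the chunk semantically useful for retrieval while satisfying the strip-before-processing rule in the brief.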

The first version is what you learn. The second is what you ship. The distance between them is what Mesh exists to close.

I've watched talented volunteers struggle not because they lacked technical skill, but because they'd never had to scope a deliverable against a client's actual business problem. They'd never had to write a data processing agreement that satisfies both UK GDPR and India's DPDP Act 2023 in the same document. They'd never had to tell a project lead, "this approach won't work with the data we've been given," and then figure out what will.

That's what was missing. Not another curriculum. An environment.

So We Built Mesh

Mesh is the community pillar of Context First AI. In practice, it works like this: we partner with organisations that have genuine AI implementation challenges — not toy problems, not hackathon prompts — and we assemble small volunteer teams to work on them. Real briefs. Real customer data. Real deliverables that go into production.

(I should note here that "real customer data" is a phrase that makes legal teams nervous, and rightly so. We spent a disproportionate amount of time building out multi-jurisdictional confidentiality and IP frameworks before we onboarded a single volunteer. The legal scaffolding came first. That's not the exciting part of the story, but it's the part that makes everything else possible.)

Each project runs with structured requirements documentation — branded briefs that function as both technical specifications and onboarding materials. Volunteers aren't thrown into the deep end with a vague Notion page and a "figure it out" attitude. They get context. They get scope. They get guardrails.

The project structure, simplified:

```yaml
# mesh-project-brief.yaml
# Each project gets one of these before any code is written

project:
  name: "QR Educational Verification System"
  client_sector: "Education"
  duration_weeks: 8
  status: "active"

team:
  size: 4
  roles:
    - data_engineer
    - domain_expert     # education sector
    - ux_designer
    - backend_dev

compliance:
  jurisdictions:
    - UK_GDPR
    - DPDP_ACT_2023    # India
  data_classification: "personal"
  pii_handling: "redact_before_processing"
  retention_policy: "project_duration_plus_30_days"
  ip_assignment: "client"

deliverables:
  - type: "working_prototype"
    deadline: "week_6"
  - type: "documentation"
    deadline: "week_7"
  - type: "handover_and_iteration"
    deadline: "week_8"

# The part nobody puts in their project brief:
known_unknowns:
  - "Integration constraints with client's existing auth system"
  - "Actual data quality vs. what client described in scoping"
  - "Whether the UX patterns we've chosen work for the end users"
```

The teams are deliberately small. Three to six people, typically cross-functional. A data person, someone with domain expertise, maybe a designer or a comms lead depending on the project. The constraint is intentional — it forces collaboration patterns that mirror how AI work actually gets done inside organisations, not how it gets taught in classrooms.
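A lightweight pre-kickoff check makes briefs like this enforceable rather than aspirational. The sketch below is illustrative, not actual Mesh tooling: the function name is ours, the field names mirror the YAML brief above, and the idea is simply to refuse to start a project whose brief is missing its compliance declarations.

```python
# Hypothetical pre-kickoff gate: a brief must declare its compliance
# posture and a legal team size before anyone touches client data.
REQUIRED_COMPLIANCE_FIELDS = {
    "jurisdictions", "data_classification",
    "pii_handling", "retention_policy", "ip_assignment",
}

def validate_brief(brief: dict) -> list[str]:
    """Return a list of problems; an empty list means the brief passes."""
    problems = []
    compliance = brief.get("compliance", {})
    for field in sorted(REQUIRED_COMPLIANCE_FIELDS - compliance.keys()):
        problems.append(f"compliance.{field} is missing")
    if not 3 <= brief.get("team", {}).get("size", 0) <= 6:
        problems.append("team.size must be between 3 and 6")
    return problems
```

Running this against a parsed brief at kickoff turns "we forgot to agree a retention policy" from a week-6 crisis into a day-1 checklist item.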

What This Looks Like in Practice

One of our current projects involves building a QR-based educational verification system. On paper, that sounds like a weekend hackathon project. In practice, it's a tangle of authentication flows, data minimisation requirements, user experience decisions that affect adoption rates, and integration constraints we didn't anticipate until we were three weeks in.

The verification flow alone required more decisions than expected:

```python
# Simplified verification pipeline
# The part that looks clean in architecture diagrams

class VerificationPipeline:
    """
    What we learned: the happy path was ~20% of the work.
    Edge cases in credential validation took the rest.
    """

    def verify(self, qr_payload: dict) -> VerificationResult:
        # Step 1: Decode and validate QR payload structure
        credential = self.decode_qr(qr_payload)

        # Step 2: Check against issuer registry
        # Edge case: some issuers use inconsistent identifier formats
        # We burned a week on this before adding normalisation
        issuer = self.resolve_issuer(
            credential.issuer_id,
            normalise=True  # added after week 3
        )

        # Step 3: Cryptographic verification
        if not self.verify_signature(credential, issuer.public_key):
            return VerificationResult(
                status="INVALID",
                reason="signature_mismatch",
                # The part nobody thinks about: user-facing error messages
                # that are accurate without leaking implementation details
                display_message="This credential could not be verified. "
                                "Please contact the issuing institution."
            )

        # Step 4: Check revocation status
        # This API call adds 200-400ms latency that affects UX
        # Team debated caching vs. freshness for two standups
        revocation_status = self.check_revocation(
            credential.id,
            cache_ttl=300  # compromise: 5 min cache
        )

        return VerificationResult(
            status="VALID" if not revocation_status.revoked else "REVOKED",
            credential_summary=self.minimise_output(credential),
            # Data minimisation: only return what the verifier needs to see
            # Not what's convenient to return
        )
```
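The five-minute cache compromise behind `check_revocation` boils down to a small TTL cache. A minimal sketch, assuming the revocation API is a plain callable; this illustrates the trade-off rather than reproducing the production code:

```python
import time

class TTLCache:
    """Minimal TTL cache for revocation lookups: serve a cached
    status while it is younger than the TTL, otherwise refetch."""

    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self._fetch = fetch          # e.g. the revocation API call
        self._ttl = ttl_seconds
        self._entries = {}           # credential_id -> (timestamp, status)

    def get(self, credential_id: str):
        now = time.monotonic()
        hit = self._entries.get(credential_id)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]            # fresh enough: skip the slow call
        status = self._fetch(credential_id)
        self._entries[credential_id] = (now, status)
        return status
```

The cost of this design is visible in the code: a credential revoked inside the TTL window still verifies until the entry expires. That freshness-for-latency trade is exactly what the team spent two standups debating.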

I'm not entirely sure we've got the team composition formula right yet. We're still learning what the ideal mix of experience levels looks like, and whether pairing junior volunteers with senior ones creates mentorship dynamics or just bottlenecks. Early signs suggest it depends almost entirely on the project type, which isn't the clean answer I'd prefer.

The Part That's Deliberately Uncomfortable

Here's my slightly bold take on this: most AI communities are too comfortable. They're optimised for engagement metrics — posts, reactions, event attendance — not for growth. Growth is uncomfortable. Growth is submitting a deliverable and getting honest feedback. Growth is realising your technical approach doesn't survive contact with the client's actual infrastructure.

Mesh projects have deadlines. They have stakeholders. They have moments where the work isn't going well and the team has to have a direct conversation about what to change. About 37% of the volunteers we've spoken to during onboarding say they've never had that experience in a learning context before. They've had it at work, sure. But never in a space explicitly designed for development.

That discomfort is the product.

We're not building a community where everyone feels good about AI. We're building one where everyone gets better at it. Those are different things, and the distinction matters more than most community builders want to admit.

What Volunteers Actually Walk Away With

The tangible outputs are straightforward. Portfolio-ready project work. Experience with live data across jurisdictions. Exposure to the full lifecycle of an AI implementation — scoping, data handling, development, delivery, and the inevitable post-delivery "wait, we need to change this" iteration.

But the less obvious stuff is what I keep hearing about in feedback conversations. Volunteers talk about finally understanding why data governance exists, not as an abstract compliance checkbox but as the thing that determines whether a project can actually ship. They talk about learning to communicate technical constraints to non-technical stakeholders — a skill that's worth more than any certification I've seen.

Here's the kind of thing that actually comes up — the conversation that no course prepares you for:

Stakeholder Update Email (Week 4 of 8)

```
Subject: Verification project — scope adjustment needed

Hi [Client],

After working with the actual credential data this week, we've
identified two issues that affect our original timeline:

1. Issuer identifier formats are inconsistent across your partner
   institutions. We assumed ISO-standard identifiers based on the
   spec doc. In practice, ~30% of records use legacy internal IDs.
   We need to build a normalisation layer — adds ~1 week.

2. Revocation check latency. The third-party revocation API
   averages 350ms per call. At the expected verification volume,
   this creates a noticeable UX delay. We're proposing a 5-minute
   cache as a compromise between freshness and responsiveness.

Recommendation: extend the prototype deadline by one week and
deliver the documentation and handover in parallel rather than
sequentially.

Happy to walk through the trade-offs on a call this week.
```

Writing that email — scoping the problem, proposing a solution, being direct about what changed and why — is a skill that matters more than your model architecture choices. And it's one that only develops under real project conditions.

Key Takeaways

Context beats curriculum. Working on real problems with real constraints teaches things that structured courses can't replicate. If you're choosing between another certification and a live project, pick the project.

Legal infrastructure isn't overhead — it's foundation. The ability to work with real client data is what separates Mesh from most community programmes. That ability only exists because the compliance frameworks were built first. Boring work. Essential work.

Small teams, real stakes. The combination of tight team sizes and genuine deliverables creates collaboration patterns that transfer directly to professional AI work. The social dynamics of a six-person team under a deadline are a better teacher than most training programmes.

Discomfort is a feature. If a learning environment never makes you uncomfortable, it's probably not teaching you the things that matter most.

Cross-functional contribution is the norm, not the exception. In practice, the most valuable contributions on Mesh projects frequently come from people working outside their stated expertise. Build teams accordingly.

Where Context First AI Fits

Mesh is one of four pillars at Context First AI, alongside Vectors (structured learning), Stack (SaaS products), and Orchestration (enterprise transformation consulting). The pillars aren't independent — they feed each other.

```
┌──────────────────────────────────────────────────┐
│                 Context First AI                 │
│      "AI tools are everywhere.                   │
│       Understanding how to use them isn't."      │
├────────────┬───────────┬───────────┬─────────────┤
│  Vectors   │   Stack   │   Mesh    │Orchestration│
│ (Learning) │  (SaaS)   │(Community │(Enterprise) │
│            │           │ Projects) │             │
├────────────┴───────────┴───────────┴─────────────┤
│                                                  │
│  Mesh patterns ──► Vectors case material         │
│  Mesh governance frameworks ──► Orchestration    │
│  Vectors theory ──► Mesh practitioner readiness  │
│  Orchestration clients ──► Mesh project pipeline │
│                                                  │
└──────────────────────────────────────────────────┘
```

Patterns we observe in Mesh projects directly inform how we design Vectors content. The implementation challenges volunteers encounter become case material. The governance frameworks we build for Mesh become templates for Orchestration clients.

Context First AI exists because we kept seeing the same disconnect: organisations adopting AI tools without developing AI fluency across their teams. AI tools are everywhere. Understanding how to use them isn't. That's the tagline, and it's also the operational reality we see in every client engagement.

Mesh specifically addresses the practice gap — the space between knowing and doing. If Vectors teaches you the theory and Orchestration brings transformation to your organisation, Mesh is where you build the muscle memory. It's where you learn what breaks, what holds, and what you're actually capable of when the project is real and the data doesn't come pre-cleaned.

We're pre-revenue and building in the open, which means the community isn't just participating in projects — they're shaping the programme itself. That's either exciting or terrifying depending on your tolerance for ambiguity. For the people Mesh is designed for, it's usually both. And that's rather the point.

Where This Goes

Mesh is early. The legal frameworks are solid, the project pipeline is growing, and the volunteer community is starting to develop its own momentum. We're not at scale yet, and I'm wary of pretending otherwise. What we have is a model that works at the size it currently is, and a set of hypotheses about how it scales that we haven't fully tested.

What I do know is this: the people who've gone through Mesh projects carry themselves differently afterward. Not with the false confidence of another completed course, but with the quiet assurance of someone who's actually done the work. Shipped something. Navigated the mess. Come out the other side knowing more about what they don't know — which, in this field, might be the most valuable thing of all.

If that sounds like something worth being part of, we're building it right now.

Resources

This article was created with AI assistance and reviewed by a human author. For more AI-assisted content, visit [Context First AI].
