The Hook—Personal Testimony as Diagnostic
I operate a cybersecurity consultancy. I hold CompTIA A+ through CySA+, AWS certifications, and I'm actively pursuing AI/ML credentials. I've published six books on security education. I build threat models for agentic AI systems.
And yet, when DEV.to's application asked me to list my "areas of expertise," I froze.
Not because I lack credentials. Not because I haven't put in the hours. But because every day I learn something new—and that fact makes me question whether I'm actually an expert at anything.
This isn't a beginner's lament about gatekeeping. I've earned the right to perform expertise. I can walk into a client's office and architect defensible controls. I can translate technical risk into language that families and small businesses understand. I can publish frameworks that withstand scrutiny.
But choosing not to claim "expert" is deliberate. It's a refusal to perform certainty in a field where certainty is dangerous. It's an acknowledgment that the more I know, the more I see the edges of what I don't. Vulnerability, in this context, isn't weakness—it's diagnostic. It reveals which practitioners can tolerate uncertainty—and which ones need the label to survive professionally.
The Problem—Forensic Analysis of Credential Inflation
We live in an expertise performance economy. Credentials have become currency, and the treadmill never stops.
Junior developer takes a weekend bootcamp → claims "React expert" on LinkedIn.
Mid-level engineer gives three conference talks → adds "thought leader" to their bio.
Senior engineer, already seasoned, must chase increasingly exotic certifications just to differentiate from the noise.
The trap is psychological. Claiming expertise requires performing certainty. Performing certainty requires suppressing learning edges. Suppressing learning edges calcifies competence.
The structural incentives reinforce this. Hiring algorithms scan for keywords like "expert." They can't evaluate demonstrated competence. So everyone inflates terminology to survive filters.
I've seen "AI Expert" in bios of people who took a single Coursera course. I've seen "Cybersecurity Expert" from folks who've never run an incident response. The label has been devalued to meaninglessness.
Software engineering hiring has partially addressed this through practical assessments—live coding, system design, portfolio reviews. You can claim "React expert" on your resume, but you'll still need to demonstrate competence in the interview. The credential gets you in the door; the demonstration closes the deal.
But this model breaks down in domains where competence unfolds over time rather than in controlled environments: security architecture, incident response, technical education, solutions consulting. You can't simulate a zero-day breach in a 45-minute interview. You can't test someone's ability to educate non-technical stakeholders with a whiteboard problem. In these domains, credential performance still dominates—because we haven't built better evaluation mechanisms.
This isn't just annoying—it's dangerous. When everyone's an "expert," clients can't distinguish competence from performance until systems fail in production.
This isn't personal anxiety—it's systemic dysfunction. The credential treadmill rewards inflation, punishes humility, and erodes the very thing expertise is supposed to represent: competence under pressure, proven in practice.
The Reframe—What Demonstrated Competence Actually Looks Like
So instead of asking, "Am I an expert?" I ask:
What can I prove I've done repeatedly, successfully, at a level others would pay me to replicate?
That reframing yields a practical audit framework:
Repetition: Have I done this enough times to recognize patterns and edge cases?
Success: Do my implementations actually work in production or real-world contexts?
Economic validation: Would someone pay me to do this again based on prior results?
Teaching capacity: Could I write a 1,500-word guide practitioners would trust?
Applied to my own work:
✅ Demonstrated competence: Cybersecurity for resource-constrained SMBs. Evidence: Client work, published frameworks, operational implementations. Test: I can build defensible security controls for organizations without enterprise budgets.
✅ Demonstrated competence: Technical education for non-technical audiences. Evidence: Six books, CybersecurityWitwear platform, trauma-informed materials. Test: I can translate complex security concepts into guidance families and small businesses can actually use.
🔄 Working knowledge: Cloud security architecture. Evidence: AWS certifications, active AI/ML training, emerging practice. Test: I'm building implementation skills—not claiming mastery yet.
❌ Not demonstrated: Kubernetes at scale. Reality: I'd need significant study before I could architect production K8s for an enterprise client.
This framework is brutally honest. It strips away performance and forces clarity. It's not about whether I can wear the badge of "expert." It's about whether I can defend competence in specific, auditable domains.
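For the developers in the room, the audit is small enough to run as a script. Here's a minimal sketch: the class, the example values, and the blunt verdict rule (all four checks, some checks, or none) are my own illustration, not a scoring system anyone should treat as authoritative.

```python
# A toy self-audit, not a scoring system. The field names mirror the four
# questions above; the verdict thresholds are deliberately blunt and illustrative.
from dataclasses import dataclass


@dataclass
class CompetenceClaim:
    domain: str
    repetition: bool            # done often enough to know the edge cases?
    success: bool               # works in production / real-world contexts?
    economic_validation: bool   # would someone pay for a repeat based on results?
    teaching_capacity: bool     # could I write a guide practitioners would trust?

    def verdict(self) -> str:
        checks = (self.repetition, self.success,
                  self.economic_validation, self.teaching_capacity)
        if all(checks):
            return "demonstrated competence"
        if any(checks):
            return "working knowledge"
        return "not demonstrated"


if __name__ == "__main__":
    # Example values are illustrative, based on the self-assessment above.
    claims = [
        CompetenceClaim("Cybersecurity for resource-constrained SMBs", True, True, True, True),
        CompetenceClaim("Cloud security architecture", True, False, False, True),
        CompetenceClaim("Kubernetes at scale", False, False, False, False),
    ]
    for claim in claims:
        print(f"{claim.domain}: {claim.verdict()}")
```

The code isn't the point. The point is that the verdict gets computed from evidence you can defend, not asserted because a bio needs a keyword.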
The Paradox—Why "Learning Every Day" Is Evidence FOR Competence
Here's the paradox: the more I learn, the less comfortable I am claiming expertise. And that discomfort is itself evidence of competence.
The Dunning-Kruger curve illustrates it:
Novices: "I took a course, I'm an expert."
Journeymen: "I've done this for years and realize how much I don't know."
Experts: "Every project reveals new complexity I hadn't anticipated."
Real expertise isn't omniscience. It's:
Recognizing domain boundaries—knowing where your knowledge ends.
Anticipating failure modes—because you've seen them before.
Continuous learning—because the field evolves whether you keep up or not.
If a doctor told you they stopped learning new techniques after medical school, you'd find another doctor. Why do we expect technical practitioners to perform omniscience? Intellectual humility isn't disqualifying—it's the marker of someone who's competent enough to know the stakes.
The Provocation—What This Means for Hiring, Judging, Teaching
Imagine if we replaced "expert" with "demonstrated competence."
Job descriptions: "Expert in React" → "Has shipped production React applications with documented results." Filterable, verifiable, honest.
Technical writing: "As an AI expert…" → "Based on implementing X systems with Y outcomes…" Claims become testable, not performative.
Conference submissions: Instead of "Jane Doe, AI Expert," try "Jane Doe, shipped ML models for healthcare compliance with 99.7% accuracy over 18 months." One is a costume. The other is a resume.
Peer review/judging: Don't ask, "Are they an expert?" Ask, "Can they demonstrate competence in evaluating this specific domain?"
This shift would change hiring pipelines, technical publishing, and peer evaluation. It would reward practitioners who can prove outcomes, not just inflate bios. It would normalize humility as a professional stance. And it would dismantle the credential treadmill by replacing performance with evidence.
The Closing—Timestamp and Invitation
I still don't know if I'm an expert. But I know what I can defend:
I've built security controls for SMBs that survived real-world attacks.
I've educated non-technical audiences who successfully implemented my frameworks.
I've compressed complex concepts into forms that retained precision under constraint.
That's demonstrable. That's timestamped. That's auditable.
If that makes me an expert, the label still feels uncomfortable. If it doesn't, I can live with "competent practitioner."
Either way, I'm writing this so the people who need to perform expertise can self-select out—and the people who value intellectual honesty can find each other.
You know which camp you're in.