Jonathan Murray

I’m Learning AI in Public, and I Think Developers Need to Chill a Bit

I’ve been going hard learning AI and tech. Building, breaking stuff, rebuilding it, reading docs at weird hours, trying to connect dots faster than my brain probably wants to.

And the deeper I get, the more I realize something that feels obvious once you see it:

Developers are standing insanely close to the bleeding edge right now.

So close that it messes with your perception of what “normal” is.

You start thinking everyone else is also tracking model releases, context windows, tool calling, evals, agents, RAG, and whatever brand new thing dropped this morning.

They’re not.

Not because they’re behind. Because they have lives. Jobs. Kids. Payroll. Customers. Stress. A million tabs open that have nothing to do with GPUs.

And if we want AI to actually create value in the world, we have to stop acting like the rest of the world is stupid for not keeping up with our group chat.

The moment this got real for me

I recently onboarded my dad to HelloNash.ai.

He’s 73.

And watching him use it was honestly one of the most fulfilling moments I’ve had with this whole AI journey so far.

He started researching our ancestry, going down rabbit holes, asking questions, connecting family dots. Then he used it to study for his drone pilot license. Like, properly studying. Making sense of things. Building confidence.

This is where I need developers to hear me clearly:

That is the point.

Not dunking on someone because they do not know what a context window is.
Not flexing that you “already knew” what agents were six months ago.
Not eye-rolling when someone asks a question that feels basic to you.

The real win is watching a regular person get more capable in their own life.

And if my 73-year-old dad can jump in and learn, then we have zero excuse to be gatekeepy about this stuff.

Developers forget how early we are

The trap is you learn fast, so you assume everyone else should too.

But you’re immersed. You’re living in it. You’re surrounded by people who talk like you. Your algorithm is feeding you the same memes and the same hot takes and the same “it’s over for everyone” threads.

Most people are not in that world. They’re not dumb. They’re not lazy. They’re just not in your niche.

And honestly, good for them.

So when a non-technical person says something like:

  • “Wait, so is ChatGPT the same as AI?”
  • “Can it remember me?”
  • “Is this safe for my business data?”
  • “Why did it answer confidently and still be wrong?”

That is not an invitation to act superior.

That is an invitation to lead.

Here’s the hard truth

If you want to provide value to people, you need other people.

If you want to build a company, you need customers, partners, teammates, champions inside organizations, and people who trust you enough to try the thing.

If you want to make money, you need adoption. Not developer applause.

Which means the goal is not to sound smart.
The goal is to make other people feel smart.

Because people do not adopt tools that make them feel dumb.

My new rule: assume the person is smart, and my explanation is the bottleneck

This is the biggest shift for me.

If someone does not get what I am saying, my first move is no longer “they are not technical.”

My first move is: “ok, I explained it like trash.”

Because if I actually understand something, I should be able to explain it without turning it into a TED Talk for machine learning people.

That does not mean watering it down. It means building a ramp.

I try to do three things:

1) Start with the problem, not the tech

Nobody wakes up excited to implement RAG. They wake up frustrated that the assistant forgot what they said yesterday or hallucinated a detail that matters.

2) Give one simple mental model

Context is short-term attention. Memory is notes you can look up later. That’s enough to get moving.

3) Show a real example

Not theory. Not vibes. An example that makes someone go “ohhhh ok.”
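To make the mental model from point 2 concrete, here's a toy sketch. Everything in it (the `Assistant` class, the note keys) is invented for illustration, not a real AI library: context is a short sliding window of recent messages, and memory is durable notes that get re-read every time.

```python
from collections import deque

class Assistant:
    """Toy model: context = short-term attention, memory = notes you look up later."""

    def __init__(self, context_size=4):
        # Context: only the last few messages fit in "short-term attention".
        self.context = deque(maxlen=context_size)
        # Memory: durable notes that survive no matter how long the chat gets.
        self.memory = {}

    def chat(self, message):
        # Old messages silently fall off the front once the window is full.
        self.context.append(message)

    def remember(self, key, note):
        self.memory[key] = note

    def build_prompt(self):
        # What the model actually "sees": saved notes plus recent context.
        notes = [f"note: {k} = {v}" for k, v in self.memory.items()]
        return "\n".join(notes + list(self.context))

a = Assistant(context_size=2)
a.remember("user_name", "Jon's dad")
a.chat("What drone classes need a license?")
a.chat("Quiz me on airspace rules.")
a.chat("What did I ask first?")  # this pushes the first question out of context

print(a.build_prompt())
```

Run it and the first question is gone from the prompt (it fell out of the two-message window), but the saved note is still there. That one "ohhhh ok" is worth more than any diagram of attention layers.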

The stadium test

I think about this a lot: could I explain what I’m building to a stadium?

Not a room full of engineers. A stadium.

If you can keep a stadium with you, you can keep a market with you.

And here is how you keep them with you, every 30 to 60 seconds:

  • You say the thing they’re already thinking but are scared to ask.
  • You give an example that feels like their life.
  • You tell a quick story beat, not a lecture.
  • You give them something they can do next.

That’s not “marketing.” That’s just respect for attention.

Developers can be accidentally intimidating

I do not think most developers are trying to be arrogant.

But the pace, the jargon, and the confidence can land as intimidating.

And the result is people stop asking questions. They nod. They pretend. Then they go back to their team and say “yeah I don’t think we’re ready for AI.”

Not because they are not ready.

Because we made them feel stupid.

That is a massive unforced error.

If you’re early, your job is education

Not education like “I’m smarter than you.”
Education like “let me bring you with me.”

Because we are in an information crisis right now. People are trying to figure out what’s real, what’s hype, what’s safe, and what’s going to break their workflow or their job.

Clarity is kindness.

And patience is not optional if you actually want this to spread.

What I’m trying to optimize for

I’m still learning. I’m still building. I still get impatient sometimes. I still catch myself about to over-explain or flex for no reason.

But I’m trying to optimize for one thing:

Make AI feel usable to normal people.

Watching my dad light up because he can research ancestry and pass a drone license exam at 73 was a reminder that this is not about being the smartest person in the room.

It’s about making more people capable.

So if you’re a developer reading this, here’s my ask:

Check the ego at the door.

Be the bridge.

Because the builders who win this era will not just be the ones who can ship.

They’ll be the ones who can translate.

If you’ve been learning AI too, what is the one concept you wish someone would explain like a human, not like a doc page?

Top comments (12)

Ryan Lay

I appreciate your rule: "assume the person is smart, and my explanation is the bottleneck". I think that's a really big perspective shift that can turn a potentially condescending exchange into an educational one. A lot of the time, especially when there's a language barrier or somebody is speaking in a non-native language, there's an assumption that their experience and knowledge are only as good as their ability to communicate.

Jess Lee

Great post. It really is easy to forget that we're early - especially because changes are happening so fast it can feel like we're behind. But we're not!

I had a similar experience teaching my totally non-technical partner how to vibe code. Keeping him encouraged was definitely a big component of figuring out how to translate it all.

Winter

Great post! I do not think the issue is only that developers are “ahead.” The developer population is large, and the field itself is vast, encompassing many subfields. Individuals working in emerging areas such as AI, blockchain, and quantum computing may appear to be at the cutting edge, but these represent only a subset of the broader technology landscape.

Many professionals continue to work on applications, infrastructure, security, and other foundational areas. These roles are critical, even if they are not perceived as being at the forefront of innovation.

In reality, I believe many developers feel "behind," like most of the general population: no individual can master all of technology, and everyone is looking to catch up. The issue you are discussing has many root causes in my opinion, but one fundamental problem is that impostor syndrome is prevalent within the technology community. The expectation to appear as an expert, even when one is not, is widespread. This pressure has only intensified with the rapid emergence of new technologies. As a result, individuals may feel compelled to project greater knowledge or competence than they possess, since not knowing isn't acceptable, and they often come off as arrogant because they lack the ability to explain, or resort to gatekeeping knowledge.

True mastery and confidence should arise from different sources than most believe. Mastery is not merely knowing a concept, it is the ability to explain it clearly and teach it effectively to others. While acquiring knowledge might be challenging, transferring that knowledge in a comprehensible manner is significantly more difficult. Confidence, similarly, should not stem from displaying expertise, but from a clear understanding of one’s values, strengths, and identity.

Do not allow external perceptions or internal doubts to prevent you from supporting others. Helping others creates a positive cycle and empowers your own growth. Reflect on your own early experiences: uncertainty was likely unavoidable, and guidance would have been valuable. By offering that support to others, you contribute to a culture of mutual growth, and ideally those you help will extend the same support forward, strengthening the community over time.

Life is short, but the impact you can have on others is substantial. That is life’s force multiplier.

Jonathan Murray

Substantial thoughts here! Thank you for your thoughtful addition :)

Elyse West

Such an insightful post!! Your points are not only highly relevant to AI, but to leadership, community, and inclusivity in general. The beauty of AI is that it's actually a great moment of technical reset. It is rapidly advancing, but you don't need a computer science degree to understand LLMs or even RAG. Hopefully that ultimately leads to fewer people feeling like imposters, and not another ivory tower being built in the tech landscape.

Jonathan Murray

Thanks Elyse!

Swift

I really love the point about how early we are in this journey. There are only 27 million software engineers in the whole world today. There are a BILLION people who are about to have the power to code and build agents with AI. Only 3% of the market that's coming exists today!

Jonathan Murray

Completely agree, we don’t know what we’re about to embark on and there are going to be folks who can’t code right now that build a generationally changing product with natural language. 🤯

L. Cordero

Great read, thank you. Feels really grounding and I appreciate your perspective.

Jonathan Murray

Thank you for your feedback!

Nova Elvaris

Your rule about "my explanation is the bottleneck" is something I wish more technical writers internalized. I've been writing about AI workflows for developers, and the hardest part isn't the technical content — it's resisting the urge to front-load jargon that makes me feel credible but makes the reader feel lost.

The dad-studying-for-a-drone-license story perfectly illustrates something I've been thinking about: the real measure of whether AI tooling is working isn't developer adoption metrics — it's whether a 73-year-old can pick it up and do something meaningful with it on day one. That's a much harder bar to clear than "developers love it."

One pattern I've found useful for the "stadium test" is what I call the "so what" chain. Every technical concept gets followed by "so what does that mean for you?" until you land on something concrete. Context window → your assistant forgets stuff after a while → so keep your important notes in a file it reads every time. Three steps from jargon to actionable. What's your process for finding that translation layer when you're explaining something new?

Mykola Kondratiuk

honestly the anxious dev reaction makes sense if you think about it from a career risk angle - people are pattern-matching to past automation waves where the bottom of the skill ladder got wiped first. the difference this time is it hits the middle first (boilerplate, glue code, ticket-to-PR stuff) which is where a lot of career progression used to live.

what i keep seeing on the PM side is the people who chill about it fastest are the ones who got curious early. not because AI is great, but because they stopped treating it as a threat to defend against and started poking at the edges. learning in public like you are doing is exactly that.