DEV Community

Dan Walsh

Posted on • Originally published at danwalsh.co

What Claude Taught Me About Using Claude

I've had 104 conversations with Claude over 10 months. Looking back at the first one versus the most recent, the gulf between my old and new interaction styles is obvious. Not because I learned amazing new prompting tricks — more that I learned to think more clearly about what I actually needed before I opened my mouth (metaphorically - moved my fingers, whatever).

The irony of getting better at talking to - or working with - AI is that it has very little to do with AI.

The Exercise

I ran a retro on my own Claude usage history. Pulled conversations from across the full timeline, scored them on things like context richness, specificity of the ask, and how well I course-corrected when the output wasn't right. I essentially used Claude to performance-review my usage of Claude. How delightfully meta, eh?

What emerged was a clear three-phase progression. Not in Claude's capabilities — those improved too, but that's not for this post. The progression was in mine. In how clearly I was thinking about what I needed before I asked for it, and how I communicated that.

Asking Questions (months 1 – 4)

My early conversations were broad and exploratory. "Do you know of any books that can help develop <insert almost-specific-but-still-very-broad software engineering topic> skills?" or "Generate some passive income strategies that can help my family escape the permanent underclass." The outputs were fine — comprehensive, well-structured — but generic. They read like they could have been written for anyone, because the prompts could have been written by anyone. Oops.

The pattern was pretty predictable: ask broad, receive broad, spend three or four follow-up messages narrowing down to what I actually wanted. That narrowing work could have been done upfront. I was essentially using Claude as a search engine with manners.

The handle brainstorming session is the example I keep coming back to. I asked Claude to brainstorm online handles derived from a specific root. The first batch came back as literal, on-the-nose word combinations — the kind of handles that sound like a default Wi-Fi network name. My correction was one sentence clarifying that I wanted abbreviations, derivations, and lateral references rather than obvious mashups. That single sentence of added specificity transformed the output. Suddenly the suggestions had character — shorthand, relevant code snippets, things that felt like they could actually be mine.

The lesson wasn't that Claude needed better instructions. It was that I hadn't figured out what I wanted until I saw what I didn't want. Almost like there's some kind of lesson there already.

Framing Problems (months 4 – 8)

Somewhere around month four, my prompts shifted from "give me information" to "help me achieve an outcome." The difference sounds subtle but the results aren't.

The biggest single upgrade was the numbered sub-task. Instead of "review my CV", I started writing things like: "First — update the experience sections based on source X. Second — review the current format in an ATS context. Third — review older entries and whether we should truncate them." Breaking the ask into explicit steps gave Claude a clear execution path rather than asking it to figure out priorities. It also meant I could approve step one, redirect step two, and skip step three without confusion.

The next upgrade was specifying the consumer. "Write a blog post" produces generic content. "Write an internal tech blog post accessible to non-engineers while primarily targeting engineers, with product owners and UX designers also able to learn from it" produces something you'd actually publish. The difference is defining who will read it, not just what it says.

The third — and this one took me longer to notice — was pre-loading my own thinking. For a meeting prep conversation, I included my own initial questions and asked Claude to build on them rather than generating from scratch. Something clicked. When Claude sees the calibre of what you've already produced, it calibrates to match. Show the standard you expect and the output rises to meet it.

File attachments appeared in this phase too. I stopped describing my CV and started attaching it. Stopped explaining a company's product and started pasting their info pack. Letting source materials speak for themselves instead of filtering them through my summary — another one of those things that seems obvious in hindsight.

Orchestrating Workflows (months 8 – 10)

This is where things got interesting. The shift from single conversations to multi-conversation pipelines.

The single most useful technique I developed — and one I hadn't encountered elsewhere — is what I think of as the Research Prompt Pipeline. The workflow:

  1. Conversation A: Define the research question. Discuss scope, constraints, and what good output looks like. Collaboratively design a research prompt — essentially a brief for a future conversation.
  2. Refine the prompt until it specifies deliverables, success criteria, and all necessary context.
  3. Conversation B: Start fresh. Attach the research prompt plus all relevant files — project code, previous analysis, reference documents. Execute.
  4. Conversation C: Take the research results back to a new conversation alongside the original project context. Synthesise.

Why does this work? Each conversation gets maximum context window devoted to its specific task. The research prompt acts as a contract — output is measurable against a defined standard. And starting fresh for execution means the context window isn't polluted with the exploratory discussion from the design phase.
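The post doesn't prescribe an implementation, but the three-conversation structure is easy to sketch in code. In this illustration, `ask` stands in for any chat call that starts a fresh conversation each time — in practice it might wrap an Anthropic SDK call; the function names and prompt wording here are my own, not from the post:

```python
def research_pipeline(ask, question, files):
    """Run the three-conversation Research Prompt Pipeline.

    `ask` is any function that sends one prompt to a *fresh*
    conversation and returns the model's reply as a string.
    """
    # Conversation A: collaboratively design the research brief.
    brief = ask(
        f"Help me design a research prompt for: {question}. "
        "Specify deliverables, success criteria, and all necessary context."
    )

    # Conversation B: fresh context — the brief plus source files, then execute.
    results = ask(brief + "\n\nAttached files:\n" + "\n".join(files))

    # Conversation C: fresh context again — synthesise results back into the project.
    return ask(
        "Synthesise these research results into the original project context:\n"
        + results
    )
```

The key property is that each `ask` call opens a clean conversation, so the brief-design chatter from step A never eats into the context window available for execution in step B.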

This pattern emerged from working on a creative coding project where I needed comprehensive research on balancing automation with manual control in creative tools. The research prompt alone was hundreds of words — specifying five investigation areas, eight targeted questions, specific deliverables, and success criteria. The resulting analysis synthesised insights across professional creative tools, academic HCI research, and JavaScript libraries — something I couldn't have produced alone. And the quality came from the pipeline design, not from any single clever prompt.

The next evolution was designing outputs not just for me but for AI agents. Implementation guides with atomic tasks, file paths, code examples, success criteria, and testing procedures — structured so an AI coding assistant could consume and execute them. The output of one AI conversation became the input for another.
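The post doesn't show what those implementation guides look like, but the listed ingredients — atomic tasks, file paths, success criteria, testing procedures — suggest a shape like the following. This is a hypothetical sketch of mine, not the author's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class AtomicTask:
    """One unit of an implementation guide, structured so an AI coding
    assistant can execute and verify it without extra context."""
    title: str
    file_path: str
    instructions: str
    success_criteria: list = field(default_factory=list)
    test_command: str = ""

    def to_markdown(self):
        # Render the task as a markdown section an agent can consume.
        lines = [
            f"## {self.title}",
            f"**File:** `{self.file_path}`",
            "",
            self.instructions,
            "",
            "**Success criteria:**",
        ]
        lines += [f"- {c}" for c in self.success_criteria]
        if self.test_command:
            lines += ["", f"**Verify:** `{self.test_command}`"]
        return "\n".join(lines)
```

Serialising each task independently is what makes them atomic: one AI conversation can emit a list of these, and another can pick them up one at a time with everything it needs in a single section.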

It all starts to feel a lot less like prompt engineering and more like workflow architecture.

The Mirror

Here's what surprised me on reflection (but really shouldn't have): getting better at communicating with Claude made me better at communicating with humans. That's right. All you people reading this.

When you start providing context before asking a question, defining what success looks like before requesting work, and specifying who will consume the output before creating it — you find yourself doing it everywhere. In design documents. In Slack messages to your team. In briefs to stakeholders.

The skills start to look a lot less like 'AI prompting skills' and a lot more like communication skills that AI made obvious because it gave me rapid, consequence-free feedback on how clearly I was expressing my needs.

Every vague prompt that produced a generic response was a signal — not that the AI was limited, but that my thinking was. You don't get that feedback loop with humans. People will politely try to work with a vague brief. Claude just gives you a vague answer.

What I Still Get Wrong

I developed the habit of asking "do you have any clarifying questions before we begin?" in complex conversations — but I still don't do it consistently. Sometimes the missing context only becomes obvious two turns in, which is two turns too late.

I'm good at specifying what I want but inconsistent at specifying how I want it structured. Then I'm surprised when I get a bullet-point list instead of narrative prose. That's on me, not the AI.

And I almost never ask Claude to self-assess. "What are the weakest assumptions in your recommendation?" or "rate your confidence in each section" — potentially powerful challenges that I know about yet rarely remember to use. Working on it. Ask me again in a few months.

Closing

The gap between your first AI conversation and your most recent one isn't a measure of prompting skill. It's a measure of thinking clarity. The AI is a mirror. And we all love reflecting on ourselves, right?

If you want to get better at using Claude, don't study prompt engineering guides. Look back at your own conversations. Find the moment where a vague ask produced a generic answer, then find the later moment where you framed the same type of question with context, constraints, and success criteria.

If you want a starting point — before your next complex prompt, ask yourself three things: have I explained my situation, have I defined what good looks like, and have I said who'll use this output? Those three alone would have saved me dozens of follow-up messages in the early months.

The difference between your early prompts and your later ones is your growth — not as a prompter, but as a thinker. That's right - as in Auguste Rodin's 1904 sculpture.

Top comments (1)

Ned C

the research prompt pipeline is the part that clicked for me. i've been doing something similar without naming it, where i'll use one conversation to figure out what i actually want, then start fresh with that as the prompt. the context window stays clean and the output is way better than trying to course-correct mid-conversation.

the "mirror" section is the part i keep thinking about though. i noticed the same thing happening with my Cursor rules. writing rules that the AI can follow forced me to articulate what i actually wanted from my code, and that clarity started showing up in how i wrote specs and tickets for humans too. turns out "be specific about what you want" is universal advice that nobody follows until a machine punishes them for it.