Right now, if you want to share a reusable public prompt, your best bet is checking it into GitHub. In fact, there are quite a few repos with reusable prompts, and all you need to do is search "github [toolname] prompts" to get a great starting list.
My issue is that the majority of prompts require their own assessment and tweaking to work for you. I don't believe most devs check these prompts, read them, or modify them, which means the prompts aren't as effective as they could be.
So I had this idea, what if instead of "prompts", we create a prompt to create prompts? And what if these directions were written for a human rather than AI?
That's when it hit me, there's a huge wealth of tutorials, guides, etc. that an AI agent could read to generate prompts. What struck me as the next obvious step would be to write a tutorial/guide that a human could follow and an AI could use to generate a prompt.
Why reusable prompts in the first place?
I think this is an easy question to answer, because the answer is the same as for why we isolate code into libraries, publish open source code, and so on.
Reusable prompts are powerful because they can:
- keep AI behavior more consistent across codebases/uses
- provide a better starting point for more focused prompts
- enable various common functionalities between codebases
I mean, why not? Being able to port skills, agent definitions, etc. across codebases is great. Being able to keep the same coding standards is the same thing. I'm very much in favor of it.
So why blog posts?
I wrote a blog post on best practices for AI-assisted coding as a test of how blog posts could work for prompt generation. The post is essentially a large prompt for an AI tool to use for prompt generation, with explanations and step-by-step instructions for the developers utilizing it.
There are essentially two reasons why you'd want this "pre-prompt" with a human explanation instead of static prompts on GitHub.
The Human Aspect
With AI/agentic coding gaining traction as a productivity tool, it's easy to forget that at some point a human has to be involved. It won't be Claude Code running its first /init; it'll be a dev (or an agent set up by a dev). The first prompts in a repository will most likely be auto-generated, but if you really want to get good value from a properly set-up CLAUDE.md or AGENTS.md, a human will need to be involved in the process.
With that said, that dev may very likely prompt Claude, OpenCode, or whatever other tool like so:

```
Set up appropriate prompts for this repository. Research prompting best practices. Utilize open source prompts such as the prompts from [some] company.
```
And be happy with the initial output. However, there will be very little understanding of why the prompts are set up the way they are, or why those best practices exist. If a developer then keeps up with best practices themselves, they'll still need to prompt their tool appropriately based on what they've learned.
As a hybrid approach, and a nod to literate programming, I created a blog post that a developer can read so that:
- they can understand why the final prompt looks the way it does
- they can learn best practices along the way
- they have a reference on how to fix possible issues
With a developer having this understanding, writing and fixing prompts should be a lot more straightforward. I'd draw a parallel to reading library documentation when you're hand-coding.
The Pre-Prompt
By utilizing the aforementioned blog post as a prompt for generating prompts, you end up with several advantages, including your AI understanding how to customize the prompt for your specific codebase and use case. It can produce a much better-suited prompt for your codebase than a raw /init would.
The AI will be able to digest hints to research which libraries are used, what's deprecated, and much more. A copied prompt can't do that much heavy lifting. Since the article is public, you can reuse this pre-prompt for any codebase, and even ask your favorite AI tool to adjust its current prompt according to the criteria specified within.
All in all, it's just a better approach to creating prompts.
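To make this concrete, a pre-prompt invocation might look something like the following sketch (the bracketed placeholders stand in for your own post and target file; adjust them to your tool):

```
Read the prompt post at [URL of your published post]. Follow its instructions
to generate a CLAUDE.md tailored to this repository, and note any places
where you deviated from the post's template and why.
```

The "note any deviations" line is optional, but it gives you a quick way to review how the pre-prompt was interpreted for your specific codebase.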
Note: This is the case right now. I've heard of quite a few users that don't even use CLAUDE.md, or use a very thin prompt, with a lot of success. That hasn't been my experience but, obviously, we're living in the wild west of AI prompting right now.
Prompts as Public Knowledge
If you're not sold on having public blog posts that function this way, then let me make my argument here.
Publishing blog posts publicly accomplishes several things:
- A blog post about "How to Prompt Claude Code for Svelte Components" is easily searchable and linkable, and it can be referenced in discussions and shared with colleagues (across companies/jobs)
- It also provides a super easy way for your AI to read it without requiring auth or access to a folder or what have you
- When you publish a prompt, you're inviting others to critique it and contribute their own expertise to improve it
- It creates documentation: a good blog post about a prompt explains why the prompt is structured that way and what problems it solves, serving as documentation for future use
- It's readily accessible for new projects
- It contributes to collective knowledge. The AI ecosystem is young; when you share what works, you're helping others avoid the trial-and-error phase you went through
Creating your own
I'm not against using GitHub at all. Publishing prompt blog posts (prompt posts?) on GitHub may even be a better idea than doing so on a blogging platform; however, I find that GitHub changes your perception of the prompt post and makes it seem like just another prompt, eliminating the human aspect.
Creating the prompt post itself isn't terribly difficult. I think the parallel to literate programming is key here: you're writing a post that explains a "template" for the final prompt in easily digestible parts. I generally like to structure these as:
## Description
Explain what the resulting prompt should do and why.
## Core Logic
Core logic of the prompt goes here along with explanations of each part.
### Standards
For example, this section would explain coding standards such as using or not using classes in your TypeScript code, usage of `any`, and similar.
```
## Standards
- DO use Enums for any options, variants, etc.
- DO NOT use `any` in TypeScript code
- DO NOT use classes
```
## Expected output
Always tell your AI what output you expect.
```
## Output
Create documentation for coding standards in this codebase
```
## Conclusion/More Resources
List any resources or tips on how to handle certain situations, how to extend the resulting prompt, or links to further helpful reading.
With this basic structure, you should be able to create a robust prompt post that you, your team, and your AI can fully utilize.
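For reference, assembling the template's own example sections gives a sense of what the generated prompt might look like in the end (a hypothetical result, not a prescription; your AI would fill in specifics from your codebase):

```
## Standards
- DO use Enums for any options, variants, etc.
- DO NOT use `any` in TypeScript code
- DO NOT use classes

## Output
Create documentation for coding standards in this codebase
```

Notice that the explanatory prose from the post doesn't carry over; it exists for the human reader and for the AI to make informed choices while generating.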
What about you?
What's worked well for you? I've used this system multiple times with unpublished pre-prompts. Unfortunately, I don't have access to those anymore (should've made them public!), but this approach has worked for me time and time again to create a solid starting prompt for different projects.
Tip: One extra level of inception here would be to reference this post to your AI so it could generate a starting point for your prompt post!