In the last article, I walked you through how to organize a growing prompt library. I went through that whole process myself: notes, spreadsheets, browser plugins. It worked, but it was tedious.
Then system prompts came onto the scene and let me set those instructions once for an entire conversation. I could not have been more relieved. It solved the problem I had been working around for months, and it is what eventually made most of my prompt library unnecessary.
What is a system prompt?
A system prompt is a set of instructions that runs behind the scenes in every conversation. It shapes how the AI responds before you even type your first message.
Think of it this way: your regular prompt is what you say to the AI. The system prompt is the briefing the AI received before you walked into the room. It is the difference between talking to a general-purpose assistant and talking to someone who already knows your preferences, your context, and exactly how you want them to communicate.
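If you are curious what "behind the scenes" means mechanically: in the chat APIs these products are built on, a system prompt is simply an instruction message that gets prepended to the conversation on every turn, so the model reads it before anything you type. Here is a minimal sketch of that idea (the function and variable names are my own, not any particular vendor's API):

```python
# A hypothetical sketch: a system prompt is one instruction message
# prepended to the conversation, so every user turn is read in its context.

SYSTEM_PROMPT = (
    "I am a software engineer. Respond in a direct, technical tone. "
    "When I ask for code, give the code first and explain after."
)

def build_messages(history, user_message):
    """Assemble the message list the model sees on each turn."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]  # briefing comes first
        + history                                       # earlier turns, if any
        + [{"role": "user", "content": user_message}]   # what you just typed
    )

messages = build_messages([], "Refactor this function for readability.")
print(messages[0]["role"])  # the system prompt is always the first message
```

The key point is that the briefing travels with every request automatically; you write it once, and the tool re-sends it for you on each turn.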
What changes with a good system prompt
The difference is noticeable immediately. Without a system prompt, every conversation starts from zero. The AI has no idea who you are, what you do, or how you like your responses formatted. So it defaults to generic: medium length, neutral tone, broad assumptions.
With a system prompt, you can set:
Tone and style. "Respond in a direct, conversational tone. Avoid corporate jargon. Do not use bullet points unless I ask for them." Now every response in the conversation follows those rules without you having to repeat them.
Expertise level. "I am a senior software engineer. Do not explain basic programming concepts. Assume I understand the fundamentals and focus on the nuanced details." This alone eliminates the filler that makes many AI responses feel like they are written for beginners.
Role and context. "You are helping me plan a product launch for a B2B SaaS company targeting mid-market HR teams. I am the product manager." Now every response is grounded in your actual situation instead of making generic assumptions.
Constraints. "Keep responses under 200 words unless I ask for more detail. Always suggest next steps at the end." These kinds of constraints shape the output in ways that save you time on every single exchange.
A real example
Here is a system prompt I actually use:
"I am a software engineer and AI practitioner. I prefer direct, technical responses without unnecessary preamble. When I ask for code, give me the code first and explain after. When I ask for advice, give me your honest opinion, not a list of options with no recommendation. Push back if you think I am approaching something the wrong way."
With this in place, every conversation I start already feels like talking to someone who knows how I work. I do not have to re-establish my preferences. I do not get the "Great question!" filler. The AI just gets to work.
Where to find it
There are actually a few different levels of customization available, and they work differently. Understanding the distinction helps you use the right one for the right situation.
User-level preferences
These apply to every conversation you have. They are your default settings.
ChatGPT: Settings → Personalization → Custom Instructions. There are two fields: one for information about you, and one for how you want responses formatted.
Claude: Settings → Profile → Custom Instructions. A single text field for your preferences. Claude also offers preset response styles (Concise, Explanatory, Formal) and lets you create custom styles.
Gemini: in Settings, under the preferences section. Google rearranges this periodically, so if it is not where you expect, search for "personalization" in the settings.
Copilot: Settings → Personalization. The customization here is basic compared to ChatGPT and Claude, but you can set tone and topic preferences.
Project-level instructions
This is where things get really useful. Projects let you group conversations around a topic and set instructions that apply to all conversations within that project. This is different from user-level preferences because you can have different instructions for different types of work.
ChatGPT: Projects. Group conversations with shared instructions and uploaded files that the AI can reference.
Claude: Projects. Same concept. Create a project, add custom instructions and knowledge files, and every conversation inside that project inherits those settings.
Gemini and Copilot do not have strong project-level equivalents in their consumer products yet, though this is an area that keeps evolving.
Custom AI personas
A step beyond projects: some tools let you build a standalone customized AI with specific instructions, behavior, and knowledge baked in.
ChatGPT: GPTs. You can build custom versions of ChatGPT designed for specific tasks, or use ones other people have built. Available through the GPT Store.
Gemini: Gems. Custom AI personas with specific instructions and behavior, available to Gemini Advanced subscribers.
The specifics of where to find these settings change as the products evolve. If the exact menu path is different when you look, search for "custom instructions," "projects," or "system prompt" in the tool's settings.
Once you start using system prompts, you realize that a lot of the "AI is not that useful" frustration comes from having to re-teach the AI who you are every time you start a conversation. A good system prompt eliminates that entirely. I have found that spending 15 minutes writing a solid set of custom instructions saves hours of repetitive context-setting across dozens of conversations.
Here is something you can do right now: open your AI tool's settings, find the custom instructions field, and write three sentences. Who you are, what you do, and how you want responses formatted. That is your starter system prompt. You can refine it over time, but even a basic one will make an immediate difference in every conversation you have from this point forward.
Next time: your AI keeps forgetting what you told it at the start of a conversation. Why that happens and what to do about it.
If there is anything I left out or could have explained better, tell me in the comments.