Today I'm going to show you a cognitive trick that works in prompting. It's based on how our brains (and language models) actually process language. Always tell the AI what TO do, never what NOT to do.
This technique took my success rate from 0% to 100%. It's how I generate high-quality images with older models.
The Problem: Negation in Constraint Specification
Consider how most people write instructions to image models:
"A cat, not wearing a hat, blue background, no people, without red tones"
This is the baseline. It's how we naturally write constraints. We think of what we DON'T want and express it.
But this forces the model to:
- Think about a cat with a hat
- Think about red
- Think about people
- Then try to not include them
The model has to process the forbidden concepts in order to avoid them. Sometimes this works. Sometimes it fails. And when it fails, the model often outputs exactly what it was supposed to avoid.
The Hypothesis
What if instead we used affirmative framing? What if we never mentioned what to avoid, and instead only specified what to include?
Instead of:
"A cat, not wearing a hat, blue background, no people, without red tones"
We write:
"A cat with a bare head, blue background, only the cat present, blue color palette"
Notice the difference. In the second version, we never mention red. We never mention hats or people. We only specify what we DO want. There's no negation to process. There's no forbidden concept to think about.
The Experiment: Testing with FLUX
I tested this hypothesis using FLUX (via the Pollinations API) with a simple set of constraints: generate an image of a cat with no hat, a blue background, no people, and no red elements.
Condition 1: Baseline (Negation)
"A cat, not wearing a hat, blue background, no people, without red tones"
Condition 2: Affirmative Framing
"A cat with bare head, blue background, only the cat present, blue color palette"
I generated 10 images for each condition and evaluated them on a simple pass/fail basis: Did the image follow the constraints?
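For reference, the whole generation loop fits in a short script. Here's a minimal sketch, assuming Pollinations' public image endpoint and its `model`, `seed`, `width`, and `height` query parameters (not the exact harness used here, and the pass/fail evaluation itself was done by eye):

```python
# Minimal sketch of the generation harness. The Pollinations endpoint and its
# query parameters are assumptions based on its public image API; each saved
# image is then judged pass/fail manually.
import urllib.parse
import urllib.request

CONDITIONS = {
    "negation": "A cat, not wearing a hat, blue background, no people, without red tones",
    "affirmative": "A cat with bare head, blue background, only the cat present, blue color palette",
}

def generate(prompt: str, seed: int) -> bytes:
    # Pollinations returns the image directly from a GET on the prompt URL.
    url = (
        "https://image.pollinations.ai/prompt/"
        + urllib.parse.quote(prompt)
        + f"?model=flux&width=1024&height=1024&seed={seed}"
    )
    with urllib.request.urlopen(url) as resp:
        return resp.read()

for name, prompt in CONDITIONS.items():
    for seed in range(10):  # 10 images per condition
        with open(f"{name}_{seed}.png", "wb") as f:
            f.write(generate(prompt, seed))
```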
Results: The Affirmative Framing Breakthrough
Condition 1 (Negation Baseline): 0% Success Rate
The negation approach failed completely. All 10 images violated the core constraints—every single one included hats, red elements, or both, despite explicit instructions to avoid them.
The pattern was striking: the model didn't just occasionally fail—it consistently added the negated elements. Red hats appeared in 8 out of 10 images despite "without red tones" in the prompt. It's as if mentioning "not wearing a hat" made the model think about hats, and mentioning "without red" made it think about red.
Figure 1: Condition 1 Results (Negation Baseline). Prompt: "A cat, not wearing a hat, blue background, no people, without red tones." All 10 images failed—every cat has a hat, and most have prominent red elements despite explicit instructions to avoid them.
"To understand 'not red,' the model must first think about red."
Condition 2 (Affirmative Framing): 100% Success Rate
Every single image was perfect.
All 10 runs showed a bare-headed cat against a blue background with no red elements. The consistency was remarkable: every cat was cleanly bare-headed, and the backgrounds stayed within the same range of blues.
The improvement: From 0% to 100%
In Condition 1, every image violated at least one constraint. In Condition 2, every image satisfied all of them.
Figure 2: Condition 2 Results (Affirmative Framing). Prompt: "A cat with bare head, blue background, only the cat present, blue color palette." All 10 images succeeded with remarkable visual consistency. No hats, no red—just what we asked for.
Cross-Model Validation: Stable Diffusion XL
To confirm these findings weren't specific to FLUX, I ran the same experiment on Stable Diffusion XL—a completely different architecture with different training data.
Interestingly, SDXL handled some negation constraints better than FLUX. For the color test ("no blue sky"), SDXL creatively stylized the image to avoid the problem entirely. This suggests SDXL may be better trained on negation handling—but it still failed on most constraint types.
SDXL Results Summary
| Constraint Type | Negation | Affirmative | Winner |
|---|---|---|---|
| Color | ✅ Stylized (avoided blue) | ✅ Gray sky | Tie |
| Object | ❌ Fruit bowl appeared | ✅ Clean table | Affirmative |
| Attribute | ❌ Orange cat appeared | ✅ Gray tabby | Affirmative |
| Counting | ❌ Multiple people | ✅ Single figure | Affirmative |
| Spatial | ❌ Trees everywhere | ✅ Open field | Affirmative |
| Weather | ✅ Overcast | ✅ Overcast | Tie |
Figure 3: SDXL Results. SDXL showed better negation handling than FLUX (note the stylized car image avoiding blue sky), but still failed on most constraint types. Affirmative framing won or tied every test.
Affirmative framing won 4 tests, tied 2, and lost none.
💡 Even with a better-trained model like SDXL, affirmative framing never loses. It either wins or ties. This makes it the safer, more reliable choice regardless of which model you're using.
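If you want to reproduce the comparison, a minimal sketch using the Hugging Face diffusers SDXL pipeline looks like this (assuming a CUDA GPU; the prompt pairs are illustrative reconstructions from the table above rather than the exact prompts from the original runs, and pass/fail is still judged by eye):

```python
# Cross-model check sketch, assuming a CUDA GPU and the Hugging Face diffusers
# SDXL pipeline. The prompt pairs are illustrative reconstructions of the table
# above; results are evaluated by inspecting the saved images.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

PAIRS = {  # constraint type -> (negation framing, affirmative framing)
    "color": ("a car on a country road, no blue sky",
              "a car on a country road under a flat gray overcast sky"),
    "object": ("a kitchen table, no fruit bowl on it",
               "a clean, completely empty kitchen table"),
    "attribute": ("a cat that is not orange",
                  "a gray tabby cat"),
}

for constraint, (negation, affirmative) in PAIRS.items():
    for label, prompt in (("negation", negation), ("affirmative", affirmative)):
        image = pipe(prompt=prompt, num_inference_steps=30).images[0]
        image.save(f"sdxl_{constraint}_{label}.png")
```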
Bonus Finding: Negative Prompt Fields Don't Fully Solve This
I also tested using FLUX's negative prompt feature—putting affirmative language in the main prompt and forbidden elements in a separate negative prompt field.
Positive: "A cat with bare head, blue background, centered composition"
Negative: "hat, people, red, accessories, clutter"
Surprisingly, this performed worse than pure affirmative framing. Red elements crept back in (collars, accessories, background elements), and some images even showed party hats.
Figure 4: Even with forbidden elements in a dedicated negative prompt field, red accessories appeared in most images. The negative prompt still activates the forbidden concepts.
The takeaway: Even purpose-built negative prompt features can't fully escape the negation problem. Pure affirmative framing remains the most reliable approach.
Unexpected Finding: The Gemini Automation Failure
This is where the story gets interesting.
I decided to automate the experiment. Why manually write affirmative framings when I could have an LLM generate them?
I built a simple app that asked Gemini Pro 3 to generate test conditions. For the affirmative framing condition, I specified:
"Generate an affirmative framing that reframes the constraint into positive instruction, focusing on what TO include rather than what to avoid."
Gemini reframed the negative constraint "no red" by focusing on "non-red colors" and "colors other than red."
It still used negation.
"Colors other than red" is negation—just rephrased. The model never escaped the negation frame.
I tried again, more explicitly:
"CRITICAL: Do NOT mention red or any excluded colors. Only specify colors that ARE allowed. Use positive language only."
Gemini still generated prompts using "colors other than red."
It failed twice. Only manual rewriting produced pure affirmative language:
"Describe a colorful scene using vibrant blues, electric greens, bright yellows, warm oranges, deep purples, and cool silvers."
This automation failure is itself a major finding: Even advanced language models struggle to generate pure affirmative framing. Models are trained on human language, and human language defaults to negation.
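One practical workaround is to lint generated prompts mechanically before using them. Here's a minimal sketch of such a check (the pattern list is a heuristic added here for illustration, not part of the original experiment):

```python
# A small guardrail: reject LLM-generated "affirmative" prompts that still
# smuggle in negation. The word list is a heuristic, not exhaustive.
import re

NEGATION_PATTERNS = [
    r"\bno\b", r"\bnot\b", r"\bnever\b", r"\bwithout\b", r"\bavoid\w*\b",
    r"\bexclud\w*\b", r"\bdon'?t\b", r"\bother than\b", r"\bexcept\b",
    r"\bnon-\w+", r"\b\w+-free\b",
]

def is_affirmative(prompt: str) -> bool:
    """Return True only if the prompt contains none of the negation markers."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in NEGATION_PATTERNS)

# The framings Gemini kept producing vs. the manual rewrite:
print(is_affirmative("a scene in colors other than red"))  # False
print(is_affirmative("a scene with non-red colors"))       # False
print(is_affirmative(
    "a colorful scene using vibrant blues, electric greens, bright yellows, "
    "warm oranges, deep purples, and cool silvers"
))                                                          # True
```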
Practical Rules for Better Prompts
Based on these findings, here are concrete rules for writing better prompts:
Rule 1: Never Use Negation in Constraints
Instead of:
"Don't include people in the background, don't use harsh lighting, avoid reflections"
Use:
"Show only the subject. Use soft, diffused lighting. Keep surfaces matte and non-reflective."
Rule 2: Be Specific About What IS Present
Weak:
"A blue background"
Strong:
"A vivid, saturated blue background occupying 80% of the frame, gradient from bright blue at top to deeper blue at bottom"
Rule 3: List Desired Elements Explicitly
Weak:
"A professional photo without amateur mistakes"
Strong:
"A professional product photo with: sharp focus on the product, even studio lighting, neutral background, shallow depth of field, natural colors"
Rule 4: Use Positive, Action-Oriented Language
| Don't | Do |
|---|---|
| "Avoid corporate jargon" | "Use clear, simple vocabulary" |
| "Don't make it dark" | "Use bright lighting" |
| "Without unnecessary details" | "Include only essential information" |
What This Reveals About How Models Work
Models process language the way they were trained to: like humans do. That's actually the problem.
When you write "don't include red," the model processes it the same way your brain does—by first activating the concept of "red" to understand what to avoid. For humans, this conscious activation is easy to suppress. For models, that activation becomes part of the output.
The difference isn't that models think differently. It's that models can't consciously decide to ignore an activated concept the way you can. They generate based on what's most salient in their processing. And when you mention red—even to forbid it—you've made red salient.
When you write "include blue and green," there's no competing concept to suppress. The model simply processes what you asked for.
This is why affirmative framing works: it removes the conflicting activation entirely.
The Automation Failure: A Cautionary Note
The fact that Gemini struggled to generate pure affirmative framing matters. When I asked it to reframe, it understood the task but couldn't do it. It kept generating "colors other than red" instead of just listing the colors to use.
This reveals something important: Affirmative framing is not the model's default behavior.
Models learn from human language. Human language defaults to negation. So when you ask a model to generate affirmative instructions, you're asking it to do something contrary to its training.
The solution? Be explicit about what you want. Show examples. Specify the structure. Don't assume the model knows what affirmative framing means—teach it.
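Concretely, that teaching can be a few-shot rewrite instruction. Here's a minimal sketch (the example pairs come from the tables above; the wording of the instruction itself is illustrative):

```python
# Sketch of a few-shot instruction for getting an LLM to produce purely
# affirmative rewrites. The example pairs are taken from the tables above;
# send the resulting prompt to whichever model you use.
REWRITE_INSTRUCTION = """\
Rewrite the constraint below as a purely affirmative instruction.
Describe only what should be present. Never use the words "no", "not",
"without", "avoid", "except", or phrases like "other than".

Examples:
- Constraint: "Avoid corporate jargon"      -> Rewrite: "Use clear, simple vocabulary"
- Constraint: "Don't make it dark"          -> Rewrite: "Use bright lighting"
- Constraint: "Without unnecessary details" -> Rewrite: "Include only essential information"
- Constraint: "No red tones"                -> Rewrite: "Use a blue color palette"

Constraint: "{constraint}"
Rewrite:"""

prompt = REWRITE_INSTRUCTION.format(constraint="No people in the background")
print(prompt)
```

Pair this with the negation check from the previous section and you can automatically reject any rewrite that slips back into "colors other than red" territory.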
Conclusion
Stop fighting against how AI models process language. Speak their language: be direct, specific, and always frame instructions positively.
The results speak for themselves:
- From 0% to 100% success rate
- Perfect consistency instead of total failure
- Validated across multiple models (FLUX and Stable Diffusion XL)
- Works across constraint types (color, objects, attributes, spatial, counting)
Next time you write a prompt, forget about what you don't want. Focus on what you do. Be specific. Be direct. Be affirmative.
The model will understand.