This is a submission for the Prompt Engineering Tool
What I Built with Google Gemini
The idea came from a very practical frustration. While working with AI models, I realized I was spending more time refining prompts than building features. A slightly vague instruction would produce an average response. A more structured instruction would produce something great. But figuring out that structure required trial, error, and wasted API calls.
PromptCraft acts as an intelligent middle layer between the user and Gemini.
Instead of sending raw prompts directly to the model, users pass them through PromptCraft first. The system:
- Analyzes clarity and intent
- Suggests improvements
- Rewrites the prompt in a more structured format
- Explains why the improved version is better
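That flow can be sketched as a thin wrapper around the Gemini API. The sketch below assumes the `google-generativeai` Python SDK; the system-instruction wording, model name, and function names are illustrative, not the actual PromptCraft code:

```python
# Illustrative sketch of the PromptCraft middle layer, not the repo's actual code.
# The system instruction asks Gemini to act as a mentor and to answer in three
# labelled sections so the reply can be parsed reliably downstream.
MENTOR_INSTRUCTION = """You are a prompt engineering mentor, not a general assistant.
Given a user's prompt, respond in exactly three sections:
ANALYSIS: assess the prompt's clarity and intent.
IMPROVED PROMPT: rewrite it in a more structured form.
EXPLANATION: explain why the rewrite is better."""

def refine_prompt(raw_prompt: str, temperature: float = 0.2) -> str:
    """Send the user's raw prompt through Gemini with the mentor instruction.

    The SDK is imported lazily so the sketch stays readable without it installed;
    it assumes GOOGLE_API_KEY is already configured in the environment.
    """
    import google.generativeai as genai

    model = genai.GenerativeModel(
        "gemini-1.5-flash",  # model name is an assumption
        system_instruction=MENTOR_INSTRUCTION,
    )
    response = model.generate_content(
        raw_prompt,
        generation_config=genai.GenerationConfig(temperature=temperature),
    )
    return response.text
```

The low default temperature and the fixed three-section contract are what make the rest of the pipeline tractable.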
_In simple terms, Gemini is not just generating answers — it is being used to improve how we talk to AI._
Architecture Overview
The system works in three stages:
- User Prompt Input
- Prompt Analysis & Refinement (Gemini API)
- Improved Prompt + Explanation Output

I used carefully designed system instructions to guide Gemini into behaving like a prompt engineering mentor instead of a general assistant.
Key technical considerations:
- Controlled temperature to reduce randomness
- Strict output formatting instructions
- Structured response parsing
- Clear role-based instruction design
- Error handling for incomplete responses
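Two of those considerations, structured response parsing and error handling for incomplete responses, can be sketched together. The section labels below are assumptions about the output format, not taken from the repo:

```python
def parse_sections(reply: str) -> dict:
    """Split a reply shaped as 'ANALYSIS: ... IMPROVED PROMPT: ... EXPLANATION: ...'
    into a dict, tolerating multi-line section bodies."""
    labels = ("ANALYSIS", "IMPROVED PROMPT", "EXPLANATION")
    sections = {label: "" for label in labels}
    current = None
    for line in reply.splitlines():
        stripped = line.strip()
        # A line starting with a known label opens a new section.
        header = next((l for l in labels if stripped.upper().startswith(l + ":")), None)
        if header:
            current = header
            sections[current] = stripped[len(header) + 1:].strip()
        elif current and stripped:
            # Continuation lines are appended to the open section.
            sections[current] += " " + stripped
    return sections

def is_complete(sections: dict) -> bool:
    """Basic guard against incomplete responses: every section must be non-empty."""
    return all(sections.values())
```

If `is_complete` fails, the request can be retried or the user shown a fallback, rather than rendering a half-formed refinement.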
One of the most important design decisions was separating:
- The analysis role (evaluate prompt quality)
- The refinement role (rewrite clearly)
- The explanation role (teach the user what changed)

That separation significantly improved consistency.
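That separation can be expressed as three distinct system instructions, one per role, so each Gemini call carries a single responsibility. The wording below is a hypothetical sketch, not the actual PromptCraft prompts:

```python
# Hypothetical per-role system instructions; each Gemini call uses exactly one.
ROLE_INSTRUCTIONS = {
    "analysis": "You are a prompt quality evaluator. List the prompt's strengths "
                "and weaknesses. Do not rewrite it.",
    "refinement": "You are a prompt rewriter. Return only the improved prompt, "
                  "with clear structure and constraints. No commentary.",
    "explanation": "You are a teacher. Given an original and an improved prompt, "
                   "explain what changed and why, in plain language.",
}

def instruction_for(role: str) -> str:
    """Look up the single-responsibility instruction for a pipeline stage."""
    try:
        return ROLE_INSTRUCTIONS[role]
    except KeyError:
        raise ValueError(f"Unknown role: {role!r}") from None
```

Keeping the roles in separate calls means a verbose analysis cannot bleed into the rewritten prompt, and each stage can be tuned independently.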
Demo
GitHub Repository: https://github.com/omprakash2929/PromptCraft
Example workflow:
Gemini-Refined Prompt via PromptCraft:
“Provide a beginner-friendly explanation of DevOps including its key principles, core practices (CI/CD, automation, monitoring), and why it is important in modern software development. Use simple language and a short real-world example.”
The difference in output quality is immediately noticeable.
This shows how structured prompting changes model performance without changing the model itself.
What I Learned
1. AI Is Easy to Use — Hard to Control
Calling the Gemini API was straightforward.
Controlling response quality was not.
The real complexity lies in instruction design. Small changes in phrasing caused large differences in output structure and usefulness.
I learned that prompt engineering is not about longer prompts — it’s about clearer constraints.
2. Deterministic Thinking Doesn’t Apply
As developers, we expect predictable outputs for identical inputs. AI systems don’t behave like that.
Even with the same prompt, slight variations occur. To manage this, I:
- Lowered temperature for refinement tasks
- Explicitly constrained format requirements
- Added structured output expectations
This reduced unpredictability significantly.
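Concretely, those three mitigations map onto the request configuration roughly as follows. The values are illustrative; `response_mime_type` is a real Gemini API generation-config option for forcing JSON output, but whether PromptCraft uses it is an assumption:

```python
# Illustrative generation settings for the refinement stage (not the repo's values).
REFINEMENT_CONFIG = {
    "temperature": 0.2,        # low randomness for near-repeatable rewrites
    "top_p": 0.9,
    "max_output_tokens": 1024,
    "response_mime_type": "application/json",  # constrain the output format itself
}

# The structural expectation is also restated in the prompt, so the model is
# constrained both by config and by instruction.
FORMAT_REMINDER = (
    "Return a JSON object with keys 'analysis', 'improved_prompt', "
    "and 'explanation'. Do not include any other text."
)
```

Stating the format twice, once in the config and once in the instruction, is redundant on purpose: either constraint alone was less reliable.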
3. System Prompts Matter More Than Business Logic
Most of the product quality improvement didn’t come from adding more code.
It came from refining:
- Role instructions
- Tone constraints
- Output format templates
- Evaluation criteria
That was a surprising but powerful lesson.
4. Soft Skills Improved Too
Working with Gemini improved:
- Clarity in writing
- Instruction precision
- Thinking from the model’s perspective
- Patience with iterative improvements
It also made me more intentional about language in documentation and architecture design.
Google Gemini Feedback
What Worked Well
- Strong contextual understanding
- High-quality natural rewriting
- Good balance between structure and creativity
- Fast response times
- Reliable intent interpretation
Gemini performed particularly well when tasked with structured rewriting and clarity improvement.
Where I Faced Friction
- Occasionally verbose responses even when conciseness was requested
- Sometimes superficial improvements instead of deeper structural refinement
- Required very explicit formatting instructions for consistent output

To address this:

- I reinforced system-level constraints.
- I added role clarity (e.g., “You are a prompt engineering expert…”).
- I constrained output format to bullet structures where needed.
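Combined, those fixes amount to a tighter system instruction. The wording below is a hypothetical sketch of what such a reinforced instruction could look like:

```python
# Hypothetical reinforced system instruction combining role clarity,
# a conciseness limit, and a fixed bullet-based output format.
REINFORCED_INSTRUCTION = "\n".join([
    "You are a prompt engineering expert.",   # role clarity up front
    "Keep every section under 80 words.",     # counters verbose responses
    "Go beyond surface edits: restructure the prompt's scope, audience, and format.",
    "Output exactly three bullet lists, titled Analysis, Improved Prompt, Explanation.",
])
```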
Once properly guided, Gemini’s consistency improved significantly.
Reflection
This project changed how I view AI integration. Initially, I thought I was building a tool that improves prompts. In reality, I was learning how to design better instructions for intelligent systems. PromptCraft taught me that AI isn’t magic: it responds directly to clarity, structure, and constraint. The better you communicate, the better it performs. And that realization will influence how I build future AI-powered products.
What’s Next
Planned improvements:
- Prompt scoring system
- Token usage analytics
- Prompt history tracking
- Backend proxy for secure API handling
- Deployment on Cloud Run
- Browser extension version
The long-term vision is to evolve PromptCraft into a developer-focused toolkit for serious prompt engineering workflows.