Why I Built It
As someone working with AI tools and content, I kept wondering: can image editing be fully natural-language based? Tools like Photoshop are powerful, but not always fast or intuitive.
I decided to build a simple prompt-based image editor, powered by the Kontext model (Flux.1 architecture).
How It Works
You upload an image, then type something like:
"Add a blue sky in the background"
"Make the person wear a black suit"
"Remove the text on the bottom right"
The tool then runs inference with the Kontext model (hosted behind an API) and returns the updated image.
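Under the hood it's a single HTTP round trip: the image and the instruction go up, the edited image comes back. Here's a minimal sketch of that flow; the endpoint, auth header, and field names are placeholders for whatever API the model is hosted behind, not the actual Kontext/PicsMan API.

```python
import requests

API_URL = "https://example.com/v1/edit-image"  # placeholder endpoint, not the real API
API_KEY = "YOUR_API_KEY"                       # placeholder credential

def edit_image(image_path: str, prompt: str, out_path: str = "edited.png") -> str:
    """Send an image plus a natural-language instruction, save the edited result."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            data={"prompt": prompt},
            timeout=120,  # model inference can take a while
        )
    response.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(response.content)  # assumes the API returns the edited image bytes
    return out_path

# Example: edit_image("portrait.jpg", "Make the person wear a black suit")
```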
What I'm Still Improving
- Result quality varies depending on prompt structure
- Processing time needs optimization (GPU warmups are inconsistent)
- No batching yet; the model runs once per request
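On the batching point, one common pattern is request-level micro-batching: incoming edits are queued for a short window and then run through the model together. The sketch below shows that general idea only; the names and the batched inference call are hypothetical, not how the tool currently works.

```python
import queue
import threading

BATCH_WINDOW_S = 0.05   # how long to wait for more requests to arrive
MAX_BATCH_SIZE = 4      # cap batch size to fit GPU memory

request_queue: "queue.Queue[tuple[dict, queue.Queue]]" = queue.Queue()

def worker(run_model_batch):
    """Collect requests for a short window, then run them as one batch."""
    while True:
        job, reply = request_queue.get()           # block until at least one request
        batch, replies = [job], [reply]
        try:
            while len(batch) < MAX_BATCH_SIZE:
                job, reply = request_queue.get(timeout=BATCH_WINDOW_S)
                batch.append(job)
                replies.append(reply)
        except queue.Empty:
            pass                                   # window closed, run what we have
        results = run_model_batch(batch)           # hypothetical batched inference call
        for result, reply_q in zip(results, replies):
            reply_q.put(result)

def submit(job: dict):
    """Called per HTTP request; blocks until the batched result is ready."""
    reply_q: queue.Queue = queue.Queue(maxsize=1)
    request_queue.put((job, reply_q))
    return reply_q.get()

# threading.Thread(target=worker, args=(my_batched_inference_fn,), daemon=True).start()
```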
Try the Tool
Kontext is demanding about prompt wording in a UI setting: it needs precise prompts to produce good results. That's why we split the image-editing capability into separate scenes and templates.
I'd love your input or bug reports:
https://www.picsman.ai/tools/prompt-image-editor
Looking for Feedback
I'm especially looking for:
- Prompt styles that work well vs fail
- UX feedback (too minimal? unclear UI?)
- Suggestions on caching or performance tweaks
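As a concrete starting point for the caching discussion: one simple approach is to key cached results on a hash of the input image bytes plus the prompt, so repeated identical edits skip inference entirely. This is an illustrative sketch with hypothetical names, not the tool's current behavior.

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path("edit_cache")  # hypothetical local cache directory
CACHE_DIR.mkdir(exist_ok=True)

def cache_key(image_bytes: bytes, prompt: str) -> str:
    """Identical image + prompt pairs map to the same key."""
    h = hashlib.sha256()
    h.update(image_bytes)
    h.update(prompt.strip().lower().encode("utf-8"))
    return h.hexdigest()

def edit_with_cache(image_bytes: bytes, prompt: str, run_edit) -> bytes:
    """Return a cached result if this exact edit has been seen before."""
    path = CACHE_DIR / f"{cache_key(image_bytes, prompt)}.png"
    if path.exists():
        return path.read_bytes()               # cache hit: skip model inference
    result = run_edit(image_bytes, prompt)     # run_edit stands in for the model call
    path.write_bytes(result)
    return result
```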
If you're experimenting with image models or prompt-based interfaces, I'd love to connect!