gagaisyou

Train an obedient AI! I made a free AI prompt optimizer!

Hello everyone, I'm Gaga.

Lately, I've noticed that AI has become quite unhelpful in my work. It often generates a lot of useless information or completely misunderstands what I mean.

Usually this happens because the problem isn't described clearly enough. It's like two strangers trying to communicate: if one speaks in riddles, the other struggles to understand. Yet we always hope the other person can read our minds and grasp our thoughts from as few words as possible.

That's why I created a prompt optimization tool.

It can analyze our needs and tell us what information the AI requires to produce better results.

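The core idea can be thought of as a "meta-prompt": you first ask the model to analyze the request and point out what's missing, then rewrite it in a structured form. Here is a minimal sketch of that idea; the wording below is my own illustration, not necessarily the exact prompt the tool uses.

// A minimal sketch of the optimizer idea -- this wording is an assumption, not the tool's real prompt.
const optimizerMetaPrompt = `
You are a prompt engineer. Analyze the user's request below.
1. List the variables that are missing (subject, style, constraints, output format, ...).
2. Rewrite the request as a structured prompt with a placeholder for each missing variable.

User request: {query}
`;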

Let me give you an example.
If I directly ask Gemini to "generate a picture of a beautiful woman," the result is this:
[Image: Gemini's result for "a picture of a beautiful woman"]

Holy shit!!
As you can see, Gemini doesn't know what kind of woman I prefer, so it can only guess. Now, let's try with an optimized prompt:

[Images: the optimized prompt produced by the tool, and the picture generated from it]

The image itself is only a slight improvement, but the optimizer also provides an analysis of the problem: it tells us which variables are needed to generate a picture of a beautiful woman. We can then modify this structured prompt to match our preferences, for example making her wear pink pants.
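
To make that concrete, you can think of the structured prompt as a set of named variables to fill in or tweak. The field names below are my own illustration, not the tool's exact output format.

// Illustrative only -- these variable names are assumptions, not the optimizer's actual output.
const structuredPrompt = {
  subject: "a beautiful woman",
  outfit: "pink pants and a white blouse",   // the tweak mentioned above
  style: "portrait photography, soft natural lighting",
  setting: "city street at dusk",
};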


Technical Implementation

LangChain
LangChain is a powerful development framework for integrating AI models. So far, I've connected two free models, Gemini and Cloudflare AI, through LangChain, and it has been quite convenient to use. You can build a pipeline like the one below to run a large language model, and optionally switch to streaming output.

try {
  // Pick the model and prompt template from the caller's options
  const { modelName = "gemini", prompt = defaultPrompt, query } = options;
  const promptTemplate = ChatPromptTemplate.fromTemplate(prompt);
  const model = modelName === "cloudflare" ? this.cloudflareModel : this.googleModel;

  console.log("Creating RunnableSequence chain");
  // Pipe the user's query through the prompt template, the model, and a string parser
  const chain = RunnableSequence.from([
    {
      query: new RunnablePassthrough(),
    },
    promptTemplate,
    model,
    new StringOutputParser(),
  ]);
  const result = await chain.invoke(query);
  return result;
} catch (error) {
  console.error("Failed to run the chain", error);
  throw error;
}
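
For the streaming output mentioned above, the same chain can be consumed chunk by chunk via LangChain JS's stream() method. A minimal sketch, assuming the chain built in the snippet above:

// Stream the parsed string output chunk by chunk instead of waiting for the full result.
const stream = await chain.stream(query);
for await (const chunk of stream) {
  process.stdout.write(chunk); // forward each chunk to the client as it arrives
}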

Top comments (2)

Michael Nielsen

This is indeed one of the biggest issues with AI at the moment. Quite interesting solution you have suggested. It's definitely something I will be taking a closer look at!

Dotallio

This is cool! Prompt clarity makes such a difference, especially with models that tend to hallucinate. Have you found any types of prompts or tasks where your optimizer works best?