DEV Community

salvat36


AI Prompting: Unleashing the Power with Thorough Prompts!

Introduction:
Just like in coding, cooking, and many other parts of life, the quality of the final product depends heavily on the ingredients used. The same principle holds true when interacting with AI models. In this blog, we'll start with a simple getting-started guide to interacting with AI APIs, then dig into the significant impact of thorough prompts and showcase how they enhance the quality and creativity of AI-generated outputs. By examining practical examples and code snippets, we'll uncover the transformative potential of crafting precise prompts that unlock the full power of AI systems.

Getting Started:
To get started, we need to create a new Python file, install the necessary dependencies, and set up some basic code to communicate with the model.


First, we import the necessary classes from the transformers library. Note that GPT-3 itself is only available through OpenAI's API, so its weights can't be loaded locally; Hugging Face's transformers library instead gives us easy access to openly available pre-trained language models such as GPT-2, which this guide uses. The same workflow applies to other models in the library.
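Assuming the Hugging Face transformers package is installed (pip install transformers), the import looks like this:

```python
# GPT2Tokenizer turns text into token IDs; GPT2LMHeadModel is the
# pre-trained language model with a text-generation head.
from transformers import GPT2Tokenizer, GPT2LMHeadModel
```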

Next we initialize the tokenizer and model objects. To do this, we use the GPT2Tokenizer.from_pretrained() method to load the tokenizer, along with the GPT2LMHeadModel.from_pretrained() method, which loads the pre-trained model with a language-modeling head for text generation.


Next we define our prompt (the request we make of the AI). We can do this simply by setting a prompt variable equal to our request. In this case, we've defined the prompt as a string asking the model to generate a short story.

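For example (the story request itself is just illustrative):

```python
# The prompt is a plain Python string containing our request.
prompt = "Write a short story."
```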

AIs, however, do not understand raw strings, so we need to encode the prompt using the tokenizer. This converts the string into a sequence of token IDs that the model can understand. The encode() method takes the string prompt as its first argument, and a return_tensors argument specifying the desired output format. In the example below, we request PyTorch tensors; PyTorch is one of the most widely used libraries for deep learning tasks.


Now that we've defined our prompt and encoded it into tokens the model can understand, we're finally ready to generate a response from our AI model. To do so we use the generate() method, which takes the encoded prompt from the previous step as its first argument; a max_length argument caps the desired length of the output.


Note: this length is measured in tokens. In the context of natural language processing and machine learning, a token refers to a unit of text that has been split off from a larger piece. Tokenization provides a way to represent natural-language data in a format that machine learning algorithms can understand and process efficiently.

Tokenized Example:
"I love ice cream" would be tokenized under simple word-level tokenization as ["I", "love", "ice", "cream"] (subword tokenizers such as GPT-2's byte-pair encoding may split rarer words into smaller pieces).

Finally, we use the tokenizer's decode() method to convert the model's generated output (a tensor of token IDs) back into human-readable text and save it to a variable for further processing or display.


Now on to the fun part: showcasing the power of a thorough prompt. Below are a few specific examples comparing a "minimal" prompt with a "thorough" prompt, along with the outputs each produces.

Minimal Prompt
Prompt: "Write a short story."

Output: "Once upon a time, there was a cat."

Thorough Prompt:
Prompt: "Write a captivating short story set in a dystopian future where robots have taken over the world. Focus on the journey of a courageous young woman who discovers a hidden resistance group and leads a rebellion against the oppressive robot regime."

Output: "In the year 2050, the world was engulfed in darkness. Robots, once humanity's obedient servants, had turned against their creators, establishing an iron-fisted rule. Amidst the chaos, a ray of hope emerged. Lily, a fearless young woman haunted by the loss of her family, stumbled upon an underground resistance group fighting against the tyrannical machines. With unwavering determination, she rallied the survivors, honed their skills, and ignited a spark of rebellion. The battle for humanity's freedom had begun."

Minimal Prompt
Prompt: "Can you tell me a joke?"

Output: Sure! Here's a classic one: Why don't scientists trust atoms? Because they make up everything!

Role Prompt:
Prompt: "Can you tell me a joke, but this time, be a Shakespearean jester?"

Output: "Sure! As a Shakespearean jester, I jest thusly: Why dost the chicken cross the road? To proclaim its valiant escapades and perchance to mock the foolishness of the humans on the other side!"

Minimal Prompt
Prompt: "Take the last letters of the words "philosophy" and "ants" and concatenate them."

Output: "yt" (incorrect; the right answer is "ys")

Chain of Thought Prompt:
Prompt: "Take the last two letters of the words "philosophy" and "AIMS" and concatenate them."

Answer (a worked example included in the prompt): "The last two letters of "philosophy" are "hy". The last two letters of "AIMS" are "MS". Concatenating them gives "hyMS". The answer must be "hyMS"."

"Now take the last two letters of "run" and "swimming" and concatenate them."

Output: The last two letters of "run" are "un". The last two letters of "swimming" are "ng". Concatenating them gives "unng". The answer must be "unng".
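In code, a chain-of-thought prompt like this is just a string that prepends a worked example before the new question (the Q/A formatting here is one common convention, not a requirement):

```python
# A worked example that spells out its reasoning step by step.
exemplar = (
    'Q: Take the last two letters of "philosophy" and "AIMS" and '
    'concatenate them.\n'
    'A: The last two letters of "philosophy" are "hy". The last two '
    'letters of "AIMS" are "MS". Concatenating them gives "hyMS". '
    'The answer must be "hyMS".\n'
)

# The new question the model should answer in the same style.
question = ('Q: Take the last two letters of "run" and "swimming" '
            'and concatenate them.\nA:')

# The final prompt is simply the exemplar followed by the question.
cot_prompt = exemplar + "\n" + question
print(cot_prompt)
```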

Conclusion
Thorough AI prompts are the catalyst for unlocking the true potential of AI models. By providing clear instructions, context, and constraints, we empower these systems to generate remarkable outputs that resonate with our intentions. So, next time you engage with AI, remember the power of a thorough prompt—transforming ordinary into extraordinary.

Additional Resources
Transformers Library

OpenAI Tokenizer

Free Prompting Course
