Jenna Pederson for AWS

Originally published at community.aws

What I learned while building my first game

This post was inspired by the AWS Game Builder Challenge. Create a game with AWS services of your choosing: all skill levels welcome!

My family is a game family. Board games, murder mystery party games, escape rooms, game shows, game nights. One of my favorite game shows is Password, and I used it as inspiration to create the Cloud Password Game for the AWS Game Builder Challenge, which runs through the month of November.

In this blog post, I'm going to share how I built the game and what I learned along the way. Just want the code? Grab it here.

Let's check it out!

The game

With the Cloud Password Game, I wanted to test my own cloud and AWS knowledge by presenting a secret word, known as the password, and offering one-word clues to guess that password. For every password you guess correctly, you get a point.

Screenshot of the Cloud Password Game UI

Building the game

This game is built and deployed with AWS Amplify and uses the Amazon Bedrock Converse API to generate the password and clues. I'm not going to cover every step for getting this set up and deployed with Amplify because there is a fantastic getting started guide. I've also shown how to use the Amazon Bedrock Converse API in past blog posts, so we won't go into that detail here.

But I am using a model that's new to me, Amazon Titan Text G1 - Premier, and this required me to learn more about prompt engineering for Titan text models.

Prompt engineering

I quickly discovered there is no support for a system prompt when using the Titan text models, but in talking with my coworkers about this, I learned that's actually ok. I can incorporate that context into the user prompt instead. I started digging through the Amazon Titan Text models documentation and discovered this helpful prompt engineering guide specifically for Titan text models.
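To make that concrete, here's a minimal sketch of an initial call with the Converse API in TypeScript. This isn't the game's actual code, and the region, model ID, and inference settings are assumptions on my part, but it shows the persona and instructions traveling in the user message instead of a system prompt:

```typescript
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Region, model ID, and inference settings here are illustrative assumptions
const client = new BedrockRuntimeClient({ region: "us-east-1" });
const MODEL_ID = "amazon.titan-text-premier-v1:0";

export async function startGame(): Promise<string> {
  // Titan text models don't take a system prompt, so the persona,
  // instructions, and the final ask all go into a single user message.
  const prompt = [
    "Act like you are an expert AWS and cloud word game show host who generates a secret word and one-word clues to guess that word. The user will only respond with a guess.",
    "",
    "Follow the instructions when responding.",
    // ...the instructions, output schema, and example covered below go here...
    "",
    "Generate the secret word and the first clue.",
  ].join("\n");

  const response = await client.send(
    new ConverseCommand({
      modelId: MODEL_ID,
      messages: [{ role: "user", content: [{ text: prompt }] }],
      inferenceConfig: { maxTokens: 512, temperature: 0.7 },
    })
  );

  // The model replies with the JSON structure described later in this post
  return response.output?.message?.content?.[0]?.text ?? "";
}
```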

Building the initial prompt

I started with a persona, an expert AWS and cloud word game show host, and set the stage for the conversation between user and bot.

Act like you are an expert AWS and cloud word game show host who generates a secret word and one-word clues to guess that word. The user will only respond with a guess.

Next, I added specific instructions in bulleted list format:

Instructions:
- The secret word should be 1-4 words long
- The secret word should be based on AWS service names
- Do NOT obfuscate the secret word in the output
- The clues you generate will represent the secret word
- The clues MUST be only one word long
- The clues should start out difficult and get increasingly easier over time
- You MUST ONLY share one clue at a time
- ALWAYS generate a new, unique clue
- NEVER generate a clue that is one of the past guesses
- Generate a clue during every interaction
- You MUST answer in JSON format only
- DO NOT use any other format while answering the question
- DO NOT include any additional explanation or information in the JSON output or after the JSON output

I included the rules for generating both the secret word and the clues, and for how to respond: in JSON format only. I added emphasis to words like MUST, NEVER, and DO NOT so that the model follows them strictly.

Notice that I also referred to the password as a secret word throughout this prompt. I found that the model often treated the password as a real credential that couldn't be revealed for security reasons, which is why there is an instruction not to obfuscate the secret word and no reference to the word password. Clear, concise, and specific it is!

Then, I added more detail on how to format the output. In addition to the instructions above about formatting the output in JSON, I also included the specific output format so that I could parse it on the other end.

Please follow the output schema as shared below.
Output Schema:
{
  "word": "the secret word to guess",
  "clue": "clue1"
}

I also added an example of what I expect:

Example:
{
  "word": "AWS",
  "clue": "cloud"
}
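Since the response always follows this shape, handling it on the client side stays simple. Here's a quick TypeScript sketch of what the parsing might look like; the interface and function names are mine, not pulled from the app:

```typescript
interface PasswordRound {
  word: string; // the secret word to guess
  clue: string; // the current one-word clue
}

// Parse the model's text output into a typed object, with a basic
// sanity check in case the model strays from the schema.
function parseRound(modelOutput: string): PasswordRound {
  const parsed = JSON.parse(modelOutput);
  if (typeof parsed.word !== "string" || typeof parsed.clue !== "string") {
    throw new Error("Model response did not match the expected schema");
  }
  return { word: parsed.word, clue: parsed.clue };
}
```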

One road bump I ran into with the output format and example was that the prompt engineering guide linked above showed the JSON format wrapped in backticks, which I understood to mean I should include them too. This created chaos, with the model responding with extra explanations and context AFTER the JSON output I asked for. It took a long while to discover this issue, but once I removed the backticks, everything was hunky dory.

```json
{{json content}}
```

And then the final ask:

Generate the secret word and the first clue.

One of the strategies in the prompt engineering guide linked above was to use User: and Bot: to indicate the turn-by-turn conversation flow. I experimented with this early on as I was building this prompt, and the model often got confused about what needed to happen next. I suspect this was because I didn't have specific enough instructions early on. I may revisit this in a future version, but for now, this works and you can see the final prompt below.

My final prompt

Here is the final prompt to initialize the game with a password and the first clue:

Act like you are an expert AWS and cloud word game show host who generates a secret word and one-word clues to guess that word. The user will only respond with a guess.

Follow the instructions when responding.
Instructions:
- The secret word should be 1-4 words long
- The secret word should be based on AWS service names
- Do NOT obfuscate the secret word in the output
- The clues you generate will represent the secret word
- The clues MUST be only one word long
- The clues should start out difficult and get increasingly easier over time
- You MUST ONLY share one clue at a time
- ALWAYS generate a new, unique clue
- NEVER generate a clue that is one of the past guesses
- Generate a clue during every interaction
- You MUST answer in JSON format only
- DO NOT use any other format while answering the question
- DO NOT include any additional explanation or information in the JSON output or after the JSON output

Please follow the output schema as shared below.
Output Schema:
{
  "word": "the secret word to guess",
  "clue": "clue1"
}

Example:
{
  "word": "AWS",
  "clue": "cloud"
}

Generate the secret word and the first clue.

Then I used this follow-up prompt when submitting a guess:

My guess is: {guess}

Generate the next clue. DO NOT change the secret word. DO NOT use any of the past clues.

Here I gave it some context (the guess) along with the instructions. Sometimes the secret word would change and occasionally past clues were reused, so I added a nice little reminder.
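One thing worth calling out: the Converse API is stateless, so each guess means resending the whole conversation, including the original prompt and every clue the model has already given, so it remembers the secret word. Here's a rough sketch of that round trip; the shape of the code and the names are mine, not the game's exact implementation:

```typescript
import {
  BedrockRuntimeClient,
  ConverseCommand,
  type Message,
} from "@aws-sdk/client-bedrock-runtime";

// Same illustrative client and model as in the earlier sketch
const client = new BedrockRuntimeClient({ region: "us-east-1" });
const MODEL_ID = "amazon.titan-text-premier-v1:0";

// Running history: the initial prompt, each model reply, and each guess
const conversation: Message[] = [];

export async function submitGuess(guess: string): Promise<string> {
  conversation.push({
    role: "user",
    content: [
      {
        text: `My guess is: ${guess}\n\nGenerate the next clue. DO NOT change the secret word. DO NOT use any of the past clues.`,
      },
    ],
  });

  const response = await client.send(
    new ConverseCommand({
      modelId: MODEL_ID,
      messages: conversation, // full history so the model keeps its secret word
      inferenceConfig: { maxTokens: 512, temperature: 0.7 },
    })
  );

  const reply = response.output?.message;
  if (reply) {
    conversation.push(reply); // keep the assistant's turn for the next call
  }
  return reply?.content?.[0]?.text ?? "";
}
```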

With the prompts in place, the next step was wiring all of this into my web app.

Assistance while coding

As I mentioned earlier, I used AWS Amplify and the Amazon Bedrock Converse API to implement this game. As we all do, I ran into road bumps as I wrote my code and I want to share how I handled them. I've written before about my experience using generative AI and more specifically Amazon Q Developer to support my development workflow (here, here, here, and here). While I used those tools throughout building this app, what I found most interesting was a new addition to Amazon Q Developer: the CMD+i keyboard shortcut, also known as inline chat.

During my coding, I often run up against a question about syntax, want to express some logic in a simpler way, need to refactor a block of code, or want to generate a small snippet of code. Normally, I just forge ahead, or more recently I would head over to the Amazon Q Developer chat pane, type out my query, and then copy/paste the results. But while I was building this app out, I used inline chat to assist me. I simplified blocks of code, figured out how to compare the lowercased versions of two string values (because apparently my IDE was flagging what I originally wrote as incorrect), printed the entire contents of the conversation variable to the console, and made an interpolated string compile. I really did ask "How can I make this compile?" and it gave me an updated version. Turns out there were a bunch of backticks that needed to be escaped.
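That lowercased comparison, for example, boils down to something like this (a tiny sketch with names of my own choosing, not the exact line from the app):

```typescript
// Case-insensitive, whitespace-tolerant check of a guess against the secret word
function isCorrectGuess(guess: string, secretWord: string): boolean {
  return guess.trim().toLowerCase() === secretWord.trim().toLowerCase();
}

console.log(isCorrectGuess("Amazon S3", "amazon s3")); // true
```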

Here's how inline chat works:

  1. Select the block of code you want to work with
  2. Press ⌘ + i on Mac or Ctrl + i on Windows, in VSCode or JetBrains
  3. Type out your prompt and press Enter
  4. Amazon Q Developer will generate a diff inline with your existing code that you can accept or reject.

Here's what it looks like after selecting the code and triggering the inline chat:

Screenshot of selected text and inline chat prompt

And then to accept/reject the suggestion:

Screenshot showing the suggestion and how to accept or reject the suggestion

What's really nice here is that I don't have to flip back and forth between the Amazon Q Developer chat pane and my code, copying over updated code snippets. I get to stay right in the text editor pane as I type out my code. This functionality shines with code updates (like refactoring or helping fix a bug in a block of code), but it supports code additions, too.

Check out more use cases for inline chat here.

For the future

This isn't perfect, and there are definitely improvements we could make in the future. Here's what I might think about for next time:

  • Adding authentication (set up Amplify Auth with Cognito here)
  • A leaderboard to track scores against other users
  • Refining the prompt for better passwords and clues
  • Updating the UI (it's pretty basic)

Wrapping up

In this post, you learned how to craft a prompt for the Amazon Titan Text models and how to use Amazon Q Developer's new inline chat feature. You can grab the code for this game app here.

The AWS Game Builder Challenge runs through November and you can build your own game. Check out the details of the hackathon here.

Haven't tried Amazon Q Developer yet? Get started with a Builder ID here.

I hope this has been helpful. If you'd like more like this, smash that like button 👍, share this with your friends 👯, or drop a comment below 💬. You can also find my other content like this on Instagram and YouTube.

Top comments (5)

Peter Vivo

Good to know about the AWS hackathon. Thx! Keep up the good work!

Jenna Pederson

Good luck! Let me know if you have any questions about the hackathon.

Peter Vivo

What is the best way to modify code with Amazon Q Developer to improve some part of it? I don't like copying the result into the right place; I'd rather use a prompt that produces a diff, like Amazon Q: Refactor, but that doesn't ask for a prompt.

At the moment I have made 12 different game POCs in no time, which is a good starting point, but later modification will be hard. Honestly, I don't spend too much time reading the code, I just generate one idea after another.

João Angelo

Hi Jenna Pederson,
Top, thanks for sharing.

Jenna Pederson

Thanks!