Comprehending JIRA Tickets with Amazon Bedrock

Back in August, I wanted to gain insights from JIRA ticket data to better understand what our customers needed from our team, and whether there were any trends I could identify to improve our self-service and FAQ documentation. I started that journey using Amazon Comprehend, which you can read about here: https://community.aws/posts/comprehend-jira-tickets

For those keeping track, Amazon Bedrock became generally available in September 2023. My team had access to a preview, so when Amazon Comprehend's entity analysis didn't lend itself well to my use case and I didn't feel like training a custom model, I started getting familiar with Bedrock. The following post is a follow-on to the Community article above and fleshes out a few details that will help those newer to Amazon Bedrock navigate the product.

First, getting into Amazon Bedrock from the AWS Console is pretty simple. Search for "Bedrock" in the console and click "Get started", which takes you to a great Overview page where you can explore several foundation models. These foundation models are pre-trained by their providers, so you don't have to pay to train a model of your own; they're ready to be applied to your use case.

[Screenshot: Amazon Bedrock Overview page in the AWS Console]

Getting access to a model so you can actually use it is a bit counter-intuitive. You need to click the "Model access" link to see which models your account has access to, if any.

[Screenshot: the "Model access" link in the Bedrock console]

As you can see in the image below, some models are "Available to request" while others show "Access granted". If this is your first time using Bedrock, you will most likely need to request access to the models you are interested in for your project.

[Screenshot: model access list showing "Available to request" and "Access granted" statuses]

To finalize the request for the large language model (LLM) of your choice, click "Manage model access".

[Screenshot: the "Manage model access" page with model checkboxes]

Once you've checked the box next to the model you're interested in, click "Save changes" and wait a few minutes for the model to become available in your account. AWS points out that you don't incur fees for these LLMs until you've requested access and actually started using them.
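
If you'd rather verify access from code than from the console, here's a minimal sketch using boto3 to list the foundation models in a region (assuming you have AWS credentials configured and are working in a region where Bedrock is available, such as us-east-1):

```python
import boto3

# The "bedrock" client handles control-plane operations like
# listing models; "bedrock-runtime" is used for actual inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# list_foundation_models returns every foundation model offered in
# the region; it does not filter by your access status, so cross-check
# the output against the console's "Access granted" list.
response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])
```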

Now that you have access to a specific LLM, you can start working with it.

For my Jira ticket use case, I used the Bedrock Text playground and selected Claude v2 as the LLM I wanted to test with a sample of Jira ticket data. In the playground, I could drop in a large block of text and then use prompt engineering to see what the Claude v2 model could pull out. I was happy to see that Bedrock's Anthropic option worked out of the box and appeared to support my use case in a way Amazon Comprehend could not.
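
The playground is great for experimenting, but the same workflow can be reproduced programmatically through the bedrock-runtime API. Here's a minimal sketch, with made-up ticket summaries standing in for a real Jira export:

```python
import json

import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder ticket summaries; substitute your own Jira export here.
ticket_text = """PROJ-101: Customer asks how to rotate AWS access keys
PROJ-102: Build pipeline failing on unit tests
PROJ-103: Request for S3 bucket policy review"""

# Claude v2 expects the Human/Assistant prompt format and the
# max_tokens_to_sample parameter in its request body.
body = json.dumps({
    "prompt": f"\n\nHuman: Here are some Jira ticket summaries:\n{ticket_text}\n\n"
              "Provide a list of tickets that reference \"AWS\".\n\nAssistant:",
    "max_tokens_to_sample": 500,
    "temperature": 0,
})

response = runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
result = json.loads(response["body"].read())
print(result["completion"])
```

Setting the temperature to 0 keeps the responses as deterministic as possible, which helps when you're comparing how different prompt phrasings behave.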

What continues to surprise me is prompt engineering. How you ask a question can drastically change the results returned by the LLM.

For example, if I ask Claude v2 the following question:
"Provide a list of tickets that contain text AWS"
Claude v2 replies with: "Here is the list of tickets that contain the text "AWS":"

BUT,

If I ask Claude v2 to "Provide a list of tickets that reference "AWS""
It responds with this: "Here are some of the tickets in the provided summary that reference AWS:"

The two ticket lists include different tickets. Using Claude v2 for this type of analysis seems to require defining a vocabulary for questions that reliably elicits the response my human brain is expecting, and it highlights the opportunity to dive deeper into prompt engineering.
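
One illustrative way to start building that vocabulary is to spell the matching rule out in the prompt itself, rather than leaning on ambiguous verbs like "contain" or "reference". The wording below is my own example, not a guaranteed fix:

```python
# An explicit matching rule removes some of the ambiguity between
# "contain" and "reference". This is an illustrative prompt of my own,
# not an authoritative recipe.
prompt = (
    "\n\nHuman: Here are some Jira ticket summaries:\n"
    "{tickets}\n\n"
    "List only the ticket IDs whose summary includes the literal substring "
    "\"AWS\". Exclude tickets that mention AWS services (for example S3) "
    "without the string \"AWS\" itself.\n\nAssistant:"
)

# Format with your ticket export, then pass the result to invoke_model
# exactly as in the earlier sketch.
print(prompt.format(tickets="PROJ-101: Rotate AWS access keys"))
```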

All this being said, it is eye-opening how crucial a strong command of the language is for judging whether the results you receive from an LLM are truly accurate. You need to fully understand the data you're working with, and you may need to work through many different outcomes before you achieve the result you're looking for.

Stay tuned for my next writeup, which will discuss how to work with Amazon Bedrock from Visual Studio Code.
