
Johnson Lin for Contenda

Originally published at blog.brainstory.ai

Customize code detail, overall length, and headings

Last week we announced the ability to override some of our default parameters through the API and released our prompts for inspiration.

Today we’re going to be following up on it and exploring some of the other parameters in more detail. We understand that content is subjective, so our base model is meant to give you a good starting point. But to help you get to a blog that satisfies your needs, I’m going to be going over how you can further customize things like code detail, overall length, and headings. We’ll be using the same video as last time (Nicholas Renotte’s video: Building a Neural Network with PyTorch in 15 Minutes).

Code Similarity Threshold

The parameter code_similarity_score_threshold is how we trim down on redundant code snippets. After we extract all the text from a video and pull together all the code snippets, we check them against each other since often we’re just building on top of the same code. If there’s a significant overlap between adjacent code snippets (defined by the cosine similarity score threshold here), we just take the longest one with the assumption being that the longest one is most complete.
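To make the idea concrete, here's a minimal sketch of that dedup pass. The function names are my own, and the whitespace tokenization is a stand-in — our actual pipeline isn't shown here, and a real implementation might compute cosine similarity over embeddings rather than raw token counts:

```python
import math
from collections import Counter


def cosine_similarity(a: str, b: str) -> float:
    """Token-level cosine similarity between two code snippets."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(
        sum(c * c for c in vb.values())
    )
    return dot / norm if norm else 0.0


def dedupe_snippets(snippets: list, threshold: float = 0.5) -> list:
    """Walk adjacent snippets; when two overlap past the threshold,
    keep only the longest, on the assumption it's the most complete."""
    kept = []
    for snippet in snippets:
        if kept and cosine_similarity(kept[-1], snippet) > threshold:
            if len(snippet) > len(kept[-1]):
                kept[-1] = snippet
        else:
            kept.append(snippet)
    return kept
```

With the default threshold of 0.5, two snippets that mostly repeat each other collapse into one; at 0.99, nearly everything survives.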

The default parameter here is 0.5. But let’s change it to something much higher and try to include every code snippet we can:

{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "code_similarity_score_threshold": 0.99
  }
}
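If you're calling the API from a script, the request above is just a JSON POST. Here's a sketch using only the standard library — note that the endpoint URL and auth header are placeholders of my own, so check the Contenda API docs for the real values:

```python
import json
import urllib.request

# Placeholder endpoint -- substitute the real one from the Contenda API docs
API_URL = "https://api.contenda.co/blog/generate"


def build_payload(overrides: dict) -> dict:
    """Assemble the request body with an overrides block."""
    return {
        "status_update_email": "johnson@contenda.co",
        "source_id": "youtube mozBidd58VQ",
        "type": "tutorial",
        "overrides": overrides,
    }


def request_blog(overrides: dict, api_key: str) -> dict:
    """POST the job and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(overrides)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```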

We are now using a cosine similarity score threshold of 0.99, which means that as long as two code snippets aren't practically identical, both will be used. So now we're passing nearly every code snippet we've extracted from the video to GPT when generating a segment. The expected result is that GPT includes more code snippets, some of which are redundant. Here's a segment from the default parameters:

Now, we need to create a training function to train our model. We're going to implement
the training process within the standard Python if __name__ == "__main__": block.
We'll train our model for 10 epochs, looping through the dataset:

if __name__ == "__main__":
    # Train for 10 epochs
    for epoch in range(10):
        # Loop through the dataset
        for batch in dataset:
            # Training code goes here

Inside this loop, we'll include the necessary code to perform the forward pass, calculate
the loss, perform backpropagation, and update the model weights.

Now, here’s the same segment with the higher threshold:

To set up the training flow, we will use a standard Python function. First, include the
typical if __name__ == "__main__" statement to ensure the following code runs only
when this file is executed directly:

if __name__ == "__main__":

Next, create a loop to iterate through the epochs. In this example, we'll train the model
for 10 epochs:

if __name__ == "__main__":
    num_epochs = 10
    for epoch in range(num_epochs):
        # Training code goes here

Inside the loop, we'll iterate through the batches in our dataset:

if __name__ == "__main__":
    num_epochs = 10
    for epoch in range(num_epochs):
        for batch in dataset:
            # Batch processing code goes here

Now we're ready to write the training code for processing each batch.

As expected, GPT is more likely to use all the code snippets, which means it explains continuously as it goes rather than giving us one explanation at the end. We found this result unnecessary, and it made editing more tedious. But if you'd prefer a step-by-step explanation, try bumping this threshold above the 0.5 default. Or, to go the other way and cut down on code snippets even more, try lowering it.

Heading Prompts

Another prompt you can play with is our heading prompts! This is the prompt template we use to generate our headings:

Write a subheader for these paragraphs:

{TEXT}

Subheader: 

And that’s it. We actually don’t use a system message for our headings; we found that GPT was pretty satisfactory at this task right out of the box. But if you want something more specific than what we have, you can easily add a system message:

{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "heading_segment": {
      "system_message_string": "[INSERT EDITED SYSTEM MESSAGE HERE]"
    }
  }
}
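Under the hood, adding a system message just prepends one more message to the chat request. A rough sketch of how that assembly might look — the function name and message shapes here are illustrative (OpenAI-style chat messages), not our actual code:

```python
from typing import Optional

# The user-prompt template quoted above
PROMPT_TEMPLATE = "Write a subheader for these paragraphs:\n\n{TEXT}\n\nSubheader: "


def build_heading_messages(text: str, system_message: Optional[str] = None) -> list:
    """Build the chat messages for a heading request; the system message is optional."""
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": PROMPT_TEMPLATE.format(TEXT=text)})
    return messages
```

With no override, the request is just the bare user prompt; an override adds one system message in front of it.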

Let’s say you prefer your headings to be much shorter. We can give GPT specific instructions in our system text like this:

When given text, you write a heading that contains 5 words or less that summarizes the 
main topics.

Now here are a couple of the headings from our original blog:

Taking on the 15-Minute PyTorch Deep Learning Challenge

...

Setting Up the Python Environment and Building a Neural Network with PyTorch

...

Monitoring Training Progress and Saving the Model

And here’s what GPT turned them into:

15-Minute PyTorch Challenge

...

TorchVision Dependencies and Dataset

...

Monitoring Training Progress, Saving Model

It’s very good at following directions. As awkward as that last heading might seem, at least GPT was consistent in writing headings in 5 words or less.

Another common pattern: if you want to give GPT instructions that are a bit more complex, it’s best to be very explicit and to give it an example or two! Let’s say you want GPT to write your headings in a very specific format:

When given text, you write a heading that will always follow this format: 
'(overarching theme): (main topic)'.

Example headings:
Breaking into tech: Cassidy's journey as an engineer
Aquatic beasts: Why stingrays are so cool
Most beloved desserts: Why you should be drinking boba

This is what we get:

Code That series: Building a PyTorch deep learning model in 15 minutes

...

Building a neural network: Creating an image classifier with PyTorch

...

Deep learning with PyTorch: Setting up the neural network, optimizer, and training process

It really follows it to a tee. Now, do you want every heading to look like Heading 2: Electric Boogaloo? That’s a separate question, but at least now we know we can.

Max Input Segment

The parameter max_input_segment controls how big our transcript segment chunks are allowed to be. We use a few different methods to find what we think are natural places to split up a transcript. We have to break up a transcript because most videos are long enough that the whole thing won’t fit in a single prompt while leaving enough tokens to ask GPT for a blog.
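As a rough illustration of enforcing a max segment size (this is not our actual segmentation, which uses models to pick natural split points), here's a greedy packing sketch. The whitespace token count is a stand-in for a real tokenizer:

```python
def split_segment(
    sentences: list,
    max_tokens: int,
    count_tokens=lambda s: len(s.split()),
) -> list:
    """Greedily pack sentences into segments that stay within max_tokens.

    The default whitespace token count is a placeholder; a real pipeline
    would count tokens with the model's tokenizer instead.
    """
    segments, current, current_tokens = [], [], 0
    for sentence in sentences:
        n = count_tokens(sentence)
        if current and current_tokens + n > max_tokens:
            # Adding this sentence would grow past the cap: start a new segment
            segments.append(current)
            current, current_tokens = [], 0
        current.append(sentence)
        current_tokens += n
    if current:
        segments.append(current)
    return segments
```

Lowering max_tokens produces more, shorter segments — which is exactly the effect we'll see on the blog below.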

Our default max_input_segment is 1500 tokens. If you increase it any further, you might not see any changes: the models we have in place determine where a transcript gets segmented, and this number merely caps what counts as too long a segment, not too short. But we will see some change if we decrease it, since we do have mechanisms in place to split our segments up even further. The API call would look like this:

{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "max_input_segment": 200
  }
}

Our original blog has 21 segments. But now, with a much smaller max size for segments, we get 30 segments. Even in the introduction we can see the change: the original blog’s 2 segments become 3 segments in the new blog.

Original:

In a recent attempt to challenge myself, I decided to code a PyTorch deep learning model in
just 15 minutes, despite having looked at the PyTorch documentation for the first time the
day before. This experiment was part of my "Code That" series, where I build projects
within a tight time frame.

For this task, I set some rules to make it even more challenging. The primary constraint
was the time limit of 15 minutes. Additionally, I wasn't allowed to look at any
documentation or pre-existing code, nor could I use CodePilot (though my subscription
had expired anyway). If I were to break any of these rules, a one-minute time penalty
would be deducted from my total time limit. The goal was to build a deep neural network
using PyTorch as quickly as possible under these conditions.

To raise the stakes for this challenge, I decided that if I failed to meet the 15-minute
time limit, I would give away a $50 Amazon gift card. In the last two episodes, I didn't
quite make the time limit, so I was eager to put in extra effort this time.

Before diving into the task, let's briefly discuss what PyTorch is. PyTorch is an
open-source deep learning framework mainly used with Python. Developed by the team at
Facebook, it accelerates the process of building deep learning models and is widely
used in cutting-edge models and deep learning research. Now that we have a basic
understanding of PyTorch, let's get started with the challenge!

New:

In a recent experiment, I dove into the PyTorch documentation for the first time and
attempted to code a deep learning model using PyTorch in just 15 minutes. This was
quite a challenge, considering the time constraint and the fact that it was my first
experience with PyTorch. The goal was to build a deep neural network as quickly as
possible, setting a strict time limit for the task. Stay tuned to find out how this
high-pressure coding session went and if the deep learning model was successfully
created within the allotted time.

As with previous challenges, the time limit for this task was set to 15 minutes. However,
there were some additional constraints to make the challenge more interesting. During this
coding session, I was not allowed to reference any documentation, pre-existing code, or use
CodePilot (though my subscription had already expired). If I were to break these rules and
use any existing resources, a one-minute penalty would be deducted from the total time
limit. With these constraints in place, the challenge was set to be a true test of my
coding skills and knowledge of PyTorch.

For this challenge, there were also stakes involved. If I failed to meet the 15-minute time
limit, I would be giving away a $50 Amazon gift card to one of you. Considering the results
from the last two episodes where the time limit wasn't reached, this attempt definitely
required extra effort.

Before diving into the coding session, let's briefly discuss what PyTorch is. PyTorch is
an open-source deep learning framework, primarily used with Python. Developed by the team
at Facebook, it significantly accelerates the process of building deep learning models.
PyTorch is widely used in cutting-edge models and deep learning research. With this
background information, we were ready to embark on the challenge.

Content-wise, it’s about the same. But the blog does get lengthier when you have more segments: our original blog comes out to around 3,200 words, while the new one comes to around 3,600. We prefer to keep our blogs a bit more concise, but definitely experiment with lowering max_input_segment if you want more content from GPT that you can edit down.

Completion Max Tokens

Lastly, we have completion_max_tokens. This refers to the number of tokens we’re asking for back from GPT. On its own, this parameter likely won’t change your blog much, but I wanted to highlight it to guide your experimentation with the other parameters. For the sake of the experiment, let’s set our completion_max_tokens really low:

{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "body_segment": {
      "completion_max_tokens": 100
    },
    "body_code_segment": {
      "completion_max_tokens": 100
    }
  }
}

We’ll get blog segments back that look like this:

The 15-Minute PyTorch Deep Learning Model Challenge

In a recent episode of Code That, a challenge was taken up to code a PyTorch deep learning
model in just 15 minutes. This was attempted just a day after looking at the PyTorch
documentation for the first time. The rules of the challenge were simple but strict:

  1. A time limit of 15 minutes.
  2. No access to documentation or pre-existing code.
  3. No use of CodePilot (though the subscription had expired anyway).

Breaking any of these rules would

You’ll notice that this blog segment ends mid-sentence. That’s because GPT wasn’t given enough tokens to finish saying what it wanted to. Limiting or increasing the number of tokens GPT can use in its response won’t change the content by itself. But if, for example, you start using a system template asking GPT to write lengthier responses and you start getting segments like these, then you know it’s time to increase completion_max_tokens so that GPT can finish writing what you asked for.
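If you're inspecting raw responses, OpenAI-style chat completion APIs flag this condition for you: a choice whose finish_reason is "length" ran out of tokens instead of stopping naturally. A small helper for spotting truncated segments (the response shape here assumes that schema):

```python
def is_truncated(response: dict) -> bool:
    """Return True if any choice hit the token limit instead of finishing naturally."""
    return any(
        choice.get("finish_reason") == "length"
        for choice in response.get("choices", [])
    )
```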

Share please 👉👈

And that’s a deeper dive into some of our parameters! A reminder that this is still an experimental feature and not something we can currently provide a lot of support for. But we’re excited to see what you do with it!

Make something that you’re really happy with? Discover anything cool or hilarious? We’d love to hear from you! You can tweet us at @ContendaCo. And let us know if there’s anything else you’d like to tweak about the transformation process.
