<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Johnson Lin</title>
    <description>The latest articles on DEV Community by Johnson Lin (@jlin99).</description>
    <link>https://dev.to/jlin99</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1068603%2F90ff3a73-2387-4bc9-b2a3-86236a309fb4.png</url>
      <title>DEV Community: Johnson Lin</title>
      <link>https://dev.to/jlin99</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jlin99"/>
    <language>en</language>
    <item>
      <title>A Tour of Contenda Studio</title>
      <dc:creator>Johnson Lin</dc:creator>
      <pubDate>Fri, 04 Aug 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/a-tour-of-contenda-studio-3jcg</link>
      <guid>https://dev.to/contenda/a-tour-of-contenda-studio-3jcg</guid>
      <description>&lt;p&gt;We’re really excited to launch &lt;strong&gt;Contenda Studio&lt;/strong&gt;, a new way to stretch your content further!&lt;/p&gt;

&lt;p&gt;Contenda Studio differs from our previous one-shot model, which generated one large blog from one video in its entirety. The goal here is to break down your content into building blocks you can use to rebuild something new. Since the process is a bit more involved now, I want to walk through an example to get you started on using Contenda Studio for your workflow.&lt;/p&gt;

&lt;h2&gt;1. Sources&lt;/h2&gt;

&lt;p&gt;The first step isn’t too different from our previous workflow: you’ll still need to upload a source. You can still submit the usual: YouTube, Twitch, or just plain downloadable URLs. Processing a video takes roughly 30 minutes, and we’ll send you an email when it’s ready to use in Contenda Studio.&lt;/p&gt;

&lt;p&gt;For the purpose of this tutorial, we’ll be using the podcast &lt;a href="https://podcasts.apple.com/us/podcast/the-dev-morning-show-at-night/id1640306295"&gt;The Dev Morning Show (At Night)&lt;/a&gt;, specifically the episode - &lt;a href="https://www.youtube.com/watch?v=L6chtpcJZ44"&gt;The Mixed Reality of a Dev Advocate with Cami Williams, Engineering Manager at Meta&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As an aside: you can now also submit text content, not just video! Feel free to copy and paste documents, articles, or landing pages of any size and we’ll process those too. Text sources are generally ready to use in roughly 10 minutes. For the scope of this tutorial, we’ll focus on getting something out of our video.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3E-0OUgB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/add_sources.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3E-0OUgB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/add_sources.png" alt="Demo image for adding sources" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;2. Templates&lt;/h2&gt;

&lt;p&gt;After we’ve submitted our sources and have gotten the email notification that they’re ready to use, we can jump into Contenda Studio!&lt;/p&gt;

&lt;p&gt;First, we need to decide what kind of output we want. We currently have 4 templates for you to choose from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;📖 Tutorial&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🤝 Handshake&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;💬 Q &amp;amp; A&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📃 Recap&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’re going to create a summary of some of the important things that were said in this video.&lt;/p&gt;

&lt;h2&gt;3. Topics&lt;/h2&gt;

&lt;p&gt;Next, we’ll be dropped into the actual interface of Contenda Studio. On the left, you’ll be able to select the source you want to work with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V_eNHJNK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/SelectSource.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V_eNHJNK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/SelectSource.png" alt="Demo image for selecting sources" width="306" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you load a source, you’ll see all of its topics here on the left.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--isqQq2Ya--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/topics.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--isqQq2Ya--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/topics.png" alt="Demo image for displaying topics" width="325" height="950"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are your topics! We’ve broken your content down into smaller chunks and analyzed the themes and key points of each. Clicking a tab opens the full card, where you can read the key points we’ve identified for that topic and get a fuller idea of what’s discussed in that chunk of your source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N0k2h8xO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/key_points.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N0k2h8xO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/key_points.png" alt="Demo image for displaying key points" width="290" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give each topic a good read (and all the key points, if you’d like). We’ll be using them in the next step, which actually generates the text you want.&lt;/p&gt;

&lt;h2&gt;4. Custom Voice&lt;/h2&gt;

&lt;p&gt;As a creator, I personally care a lot about how I present myself and what I produce. I often mull over my word choices, and many of the base AI models out there produce text that I would never write myself. That’s why Contenda Studio added the ability to provide sample text, so the output can be much closer to your own voice!&lt;/p&gt;

&lt;p&gt;Click on the “Customization” tab below the title and it’ll open up a little drawer with a text box for you to put a sample of your writing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nL2XDkWC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/custom_voice.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nL2XDkWC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/custom_voice.png" alt="Demo image for adding custom voice" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the sake of this tutorial, let’s do something extreme. I googled Olde English and took the first poem I found.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: I have no idea about the differences between Olde, Middle, and other Englishes.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Whan that aprill with his shoures soote&lt;/p&gt;

&lt;p&gt;The droghte of march hath perced to the roote,&lt;/p&gt;

&lt;p&gt;And bathed every veyne in swich licour&lt;/p&gt;

&lt;p&gt;…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It goes on for a few more lines, but honestly, I don’t even understand the first three lines here. Regardless, we should now expect Olde English-sounding text in our final output.&lt;/p&gt;

&lt;h2&gt;5. Generate Each Segment&lt;/h2&gt;

&lt;p&gt;Drag any topics you want and drop them into the relevant sections. Contenda Studio tries to keep things cohesive, so your output will look better if you generate each paragraph in order. In this example, I want to create a post about mixed reality. For the introduction, I’ll still use the topic about who’s speaking on this episode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cE8zBG6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/Intro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cE8zBG6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/Intro.png" alt="Demo image for choosing topics for the intro" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you’re ready to click “Generate” on each segment! It’ll produce the paragraph based on the topics you dragged in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X-3VJ5Uf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/IntroGenerated.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X-3VJ5Uf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/IntroGenerated.png" alt="Demo image for generating text for the intro" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tip: Not loving the output here? You can click “Generate” again to get another attempt at the paragraph.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, we’ll do the same with the body paragraphs. I’ll take all the topics that touch on mixed reality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--25G5JX1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/body.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--25G5JX1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/body.png" alt="Demo image for adding topics for the body" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to add another body paragraph, click this button below your last body paragraph; you can add as many as you like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jayfRGqF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/add_body.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jayfRGqF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/add_body.png" alt="Demo image for adding more body paragraphs" width="193" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;6. Edit&lt;/h2&gt;

&lt;p&gt;Once you feel like you have a good baseline of the content you want to work with, click the “Finish in Editor” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--deWnoG25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/finish_editor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--deWnoG25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/finish_editor.png" alt="Demo image for finishing the process" width="288" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’ll take you to the Contenda Studio editor, where you can really tweak the output until it’s in a state that you’re happy with! From there, you can export as a Markdown file or just copy it to share/publish wherever you’d like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UBOV6Csd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/studio_editor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UBOV6Csd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/studio_editor.png" alt="Demo image for editing generated text" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Personally, I’m loving this as is. They’re referring to augmented reality as another realm! The idea of a bunch of knights from that era putting on a VR headset and getting freaked out that they were “transported” to “another realm” really cracked me up for longer than I’d like to admit.&lt;/p&gt;

&lt;h2&gt;Share please 👉👈&lt;/h2&gt;

&lt;p&gt;And that’s a deeper dive into how you can use Contenda Studio! Made something you love? Got ideas for a template you’d like to use? We’d love to hear from you! You can tweet us at &lt;a href="https://twitter.com/contendaco"&gt;@ContendaCo&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>contentcreation</category>
      <category>writing</category>
    </item>
    <item>
      <title>Customize code detail, overall length, and headings</title>
      <dc:creator>Johnson Lin</dc:creator>
      <pubDate>Tue, 20 Jun 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/customize-code-detail-overall-length-and-headings-4lpl</link>
      <guid>https://dev.to/contenda/customize-code-detail-overall-length-and-headings-4lpl</guid>
      <description>&lt;p&gt;Last week we &lt;a href="https://dev.to/contenda/play-with-our-prompts-501n"&gt;announced the ability to override some of our default parameters through the API&lt;/a&gt; and released our prompts for inspiration.&lt;/p&gt;

&lt;p&gt;Today we’re following up on that and exploring some of the other parameters in more detail. We understand that content is subjective, so our base model is meant to give you a good starting point. But to help you get to a blog that satisfies your needs, I’ll go over how you can further customize things like code detail, overall length, and headings. We’ll be using the same video as last time (&lt;a href="https://www.youtube.com/@NicholasRenotte"&gt;Nicholas Renotte’s&lt;/a&gt; video: &lt;a href="https://www.youtube.com/watch?v=mozBidd58VQ"&gt;Building a Neural Network with PyTorch in 15 Minutes&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;Code Similarity Threshold&lt;/h2&gt;

&lt;p&gt;The parameter &lt;code&gt;code_similarity_score_threshold&lt;/code&gt; is how we trim down redundant code snippets. After we extract all the text from a video and pull together all the code snippets, we check adjacent snippets against each other, since a video is often just building on top of the same code. If two adjacent snippets overlap significantly (that is, their cosine similarity exceeds this threshold), we keep only the longest one, on the assumption that the longest is the most complete.&lt;/p&gt;
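&lt;p&gt;To make the idea concrete, here’s a rough sketch of that deduplication logic in Python. This is purely illustrative (our actual pipeline uses a proper embedding-based similarity, not whitespace token counts), but it captures the behavior: compare adjacent snippets, and when their similarity exceeds the threshold, keep only the longest one.&lt;/p&gt;

```python
# Illustrative sketch of the snippet-deduplication idea, NOT Contenda's
# actual implementation. Similarity here is a cosine over whitespace-token
# counts; the real pipeline is more sophisticated.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two snippets, using whitespace-token counts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0

def dedupe_snippets(snippets: list[str], threshold: float = 0.5) -> list[str]:
    """Collapse adjacent snippets whose similarity exceeds the threshold."""
    kept: list[str] = []
    for snippet in snippets:
        if kept and cosine_similarity(kept[-1], snippet) > threshold:
            # Overlapping pair: assume the longer snippet is the complete one.
            kept[-1] = max(kept[-1], snippet, key=len)
        else:
            kept.append(snippet)
    return kept
```

&lt;p&gt;With a high threshold like &lt;code&gt;0.99&lt;/code&gt;, almost nothing clears the bar, so nearly every snippet survives; with a low threshold, incremental variants of the same code collapse into one.&lt;/p&gt;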

&lt;p&gt;The default parameter here is &lt;code&gt;0.5&lt;/code&gt;. But let’s change it to something much higher and try to include every code snippet we can:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status_update_email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"johnson@contenda.co"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"source_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"youtube mozBidd58VQ"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"tutorial"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"overrides"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"code_similarity_score_threshold"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.99&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are now using a cosine similarity score threshold of &lt;code&gt;0.99&lt;/code&gt;, which means any snippet that isn’t practically identical to its neighbor will be kept. So now we’re passing nearly every code snippet we’ve extracted from the video to GPT when generating a segment. The expected result is that GPT includes more code snippets, some of which are redundant. Here’s a segment generated with the default parameters:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Now, we need to create a training function to train our model. We're going to implement &lt;br&gt;
the training process within the standard Python &lt;code&gt;if __name__ == " __main__":&lt;/code&gt; block. &lt;br&gt;
We'll train our model for 10 epochs, looping through the dataset:&lt;/p&gt;


&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; __main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Train for 10 epochs
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;epoch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Loop through the dataset
&lt;/span&gt;        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Training code goes here
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Inside this loop, we'll include the necessary code to perform the forward pass, calculate &lt;br&gt;
the loss, perform backpropagation, and update the model weights.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, here’s the same segment with the higher threshold:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To set up the training flow, we will use a standard Python function. First, include the &lt;br&gt;
typical &lt;code&gt;if __name__ == " __main__"&lt;/code&gt; statement to ensure the following code runs only &lt;br&gt;
when this file is executed directly:&lt;/p&gt;


&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; __main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Next, create a loop to iterate through the epochs. In this example, we'll train the model &lt;br&gt;
for 10 epochs:&lt;/p&gt;


&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; __main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;num_epochs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;epoch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_epochs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Training code goes here
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Inside the loop, we'll iterate through the batches in our dataset:&lt;/p&gt;


&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; __main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;num_epochs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;epoch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_epochs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Batch processing code goes here
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Now we're ready to write the training code for processing each batch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As expected, GPT is more likely to use all the code snippets, which means it explains continuously rather than giving us one explanation at the end. We found this result unnecessary, and it made editing more tedious. But if you’d prefer a step-by-step explanation, try bumping this threshold higher than &lt;code&gt;0.5&lt;/code&gt;. Or maybe you’d like to go the other way and cut down on code snippets even more, in which case try lowering it.&lt;/p&gt;

&lt;h2&gt;Heading Prompts&lt;/h2&gt;

&lt;p&gt;Another prompt you can play with is our heading prompt! This is the template we use to generate our headings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write a subheader for these paragraphs:

{TEXT}

Subheader: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it. We actually don’t use a system message for our headings; we found that GPT was honestly pretty good at this task right out of the box. But if you want something more specific than what we have, you can easily add a system message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "heading_segment": {
      "system_message_string": "[INSERT EDITED SYSTEM MESSAGE HERE]"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s say you prefer your headings to be much shorter. We can give GPT specific instructions in our system text like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;When given text, you write a heading that contains 5 word or less that summarizes the 
main topics.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
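&lt;p&gt;Putting it together, the full request body for this short-headings experiment would look something like this (the same skeleton as above, with our instruction dropped into &lt;code&gt;system_message_string&lt;/code&gt;):&lt;/p&gt;

```json
{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "heading_segment": {
      "system_message_string": "When given text, you write a heading that contains 5 words or less that summarizes the main topics."
    }
  }
}
```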



&lt;p&gt;Now here are a couple of the headings from our original blog:&lt;/p&gt;

&lt;blockquote&gt;
&lt;h2&gt;Taking on the 15-Minute PyTorch Deep Learning Challenge&lt;/h2&gt;

&lt;p&gt;...&lt;/p&gt;
&lt;h2&gt;Setting Up the Python Environment and Building a Neural Network with PyTorch&lt;/h2&gt;

&lt;p&gt;...&lt;/p&gt;
&lt;h2&gt;Monitoring Training Progress and Saving the Model&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;p&gt;And here’s what GPT turned them into:&lt;/p&gt;

&lt;blockquote&gt;
&lt;h2&gt;15-Minute PyTorch Challenge&lt;/h2&gt;

&lt;p&gt;...&lt;/p&gt;
&lt;h2&gt;TorchVision Dependencies and Dataset&lt;/h2&gt;

&lt;p&gt;...&lt;/p&gt;
&lt;h2&gt;Monitoring Training Progress, Saving Model&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s very good at following directions. As awkward as that last heading might seem, at least GPT was consistent in writing headings in 5 words or less.&lt;/p&gt;

&lt;p&gt;Another common pattern: if you want to give GPT instructions that are a bit more complex, it’s best to be very explicit and to give it an example or two! Let’s say you want GPT to write your headings in a very specific format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;When given text, you write a heading that will always follow this format: 
'(overarching theme): (main topic)'.

Example headings:
Breaking into tech: Cassidy's journey as an engineer
Aquatic beasts: Why stingrays are so cool
Most beloved desserts: Why you should be drinking boba
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what we get:&lt;/p&gt;

&lt;blockquote&gt;
&lt;h2&gt;Code That series: Building a PyTorch deep learning model in 15 minutes&lt;/h2&gt;

&lt;p&gt;...&lt;/p&gt;
&lt;h2&gt;Building a neural network: Creating an image classifier with PyTorch&lt;/h2&gt;

&lt;p&gt;...&lt;/p&gt;
&lt;h2&gt;Deep learning with PyTorch: Setting up the neural network, optimizer, and training process&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;p&gt;It really follows it to a tee. Now, do you want every heading to look like &lt;code&gt;Heading 2: Electric Boogaloo&lt;/code&gt;? That’s a separate question, but at least now we know we can.&lt;/p&gt;

&lt;h2&gt;Max Input Segment&lt;/h2&gt;

&lt;p&gt;The parameter &lt;code&gt;max_input_segment&lt;/code&gt; refers to how big our transcript segment chunks are allowed to be. We use several methods to find what we think are natural places to split up a transcript. We have to break it up because most videos are long enough that it’s impossible to fit the whole transcript in the prompt and still have enough tokens left to ask GPT for a blog.&lt;/p&gt;
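&lt;p&gt;As a toy illustration of what that cap does (this isn’t our actual segmentation code, and real token counts come from a tokenizer, not word counts): segments already chosen by the boundary models are left alone unless they exceed the budget, in which case they get split further.&lt;/p&gt;

```python
# Toy illustration of the max_input_segment idea, NOT the real pipeline.
# "Tokens" are approximated here by whitespace-separated words.
def enforce_max_segment(segments: list[str], max_tokens: int) -> list[str]:
    """Split any segment whose approximate token count exceeds max_tokens."""
    result: list[str] = []
    for seg in segments:
        words = seg.split()
        if len(words) > max_tokens:
            # Too long: chop into budget-sized pieces.
            for i in range(0, len(words), max_tokens):
                result.append(" ".join(words[i:i + max_tokens]))
        else:
            # Short segments are untouched; the cap never merges anything,
            # which is why raising it past the default changes little.
            result.append(seg)
    return result
```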

&lt;p&gt;Our default &lt;code&gt;max_input_segment&lt;/code&gt; is 1500 tokens. If you increase it any further, you might not see any changes: the models we have in place determine where the transcript should be segmented, and this number merely caps how long a segment can be, not how short. But we will see some change if we decrease it, since we do have mechanisms in place to split up our segments even further. The API call would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "max_input_segment": 200
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our original blog has 21 segments, but with a much smaller max segment size we get 30. The change is visible even in the introduction: the original blog’s 2 segments become 3 in the new blog.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Original:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In a recent attempt to challenge myself, I decided to code a PyTorch deep learning model in &lt;br&gt;
just 15 minutes, despite having looked at the PyTorch documentation for the first time the &lt;br&gt;
day before. This experiment was part of my "Code That" series, where I build projects &lt;br&gt;
within a tight time frame.&lt;/p&gt;

&lt;p&gt;For this task, I set some rules to make it even more challenging. The primary constraint &lt;br&gt;
was the time limit of 15 minutes. Additionally, I wasn't allowed to look at any &lt;br&gt;
documentation or pre-existing code, nor could I use CodePilot (though my subscription &lt;br&gt;
had expired anyway). If I were to break any of these rules, a one-minute time penalty &lt;br&gt;
would be deducted from my total time limit. The goal was to build a deep neural network &lt;/p&gt;
&lt;h2&gt;using PyTorch as quickly as possible under these conditions.&lt;/h2&gt;

&lt;p&gt;To raise the stakes for this challenge, I decided that if I failed to meet the 15-minute &lt;br&gt;
time limit, I would give away a $50 Amazon gift card. In the last two episodes, I didn't &lt;br&gt;
quite make the time limit, so I was eager to put in extra effort this time.&lt;/p&gt;

&lt;p&gt;Before diving into the task, let's briefly discuss what PyTorch is. PyTorch is an &lt;br&gt;
open-source deep learning framework mainly used with Python. Developed by the team at &lt;br&gt;
Facebook, it accelerates the process of building deep learning models and is widely &lt;br&gt;
used in cutting-edge models and deep learning research. Now that we have a basic &lt;br&gt;
understanding of PyTorch, let's get started with the challenge!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;New:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In a recent experiment, I dove into the PyTorch documentation for the first time and &lt;br&gt;
attempted to code a deep learning model using PyTorch in just 15 minutes. This was &lt;br&gt;
quite a challenge, considering the time constraint and the fact that it was my first &lt;br&gt;
experience with PyTorch. The goal was to build a deep neural network as quickly as &lt;br&gt;
possible, setting a strict time limit for the task. Stay tuned to find out how this &lt;br&gt;
high-pressure coding session went and if the deep learning model was successfully &lt;/p&gt;
&lt;h2&gt;created within the allotted time.&lt;/h2&gt;

&lt;p&gt;As with previous challenges, the time limit for this task was set to 15 minutes. However, &lt;br&gt;
there were some additional constraints to make the challenge more interesting. During this &lt;br&gt;
coding session, I was not allowed to reference any documentation, pre-existing code, or use &lt;br&gt;
CodePilot (though my subscription had already expired). If I were to break these rules and &lt;br&gt;
use any existing resources, a one-minute penalty would be deducted from the total time &lt;br&gt;
limit. With these constraints in place, the challenge was set to be a true test of my &lt;/p&gt;
&lt;h2&gt;coding skills and knowledge of PyTorch.&lt;/h2&gt;

&lt;p&gt;For this challenge, there were also stakes involved. If I failed to meet the 15-minute time &lt;br&gt;
limit, I would be giving away a $50 Amazon gift card to one of you. Considering the results &lt;br&gt;
from the last two episodes where the time limit wasn't reached, this attempt definitely &lt;br&gt;
required extra effort.&lt;/p&gt;

&lt;p&gt;Before diving into the coding session, let's briefly discuss what PyTorch is. PyTorch is &lt;br&gt;
an open-source deep learning framework, primarily used with Python. Developed by the team &lt;br&gt;
at Facebook, it significantly accelerates the process of building deep learning models. &lt;br&gt;
PyTorch is widely used in cutting-edge models and deep learning research. With this &lt;br&gt;
background information, we were ready to embark on the challenge.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Content-wise, it’s about the same, but the output does get lengthier with more segments: our original blog comes out to around 3,200 words, while the new one comes out to around 3,600. We prefer to keep our blogs a bit more concise, but definitely experiment with lowering &lt;code&gt;max_input_segment&lt;/code&gt; if you want more content from GPT that you can edit down.&lt;/p&gt;

&lt;h2&gt;Completion Max Tokens&lt;/h2&gt;

&lt;p&gt;Lastly, we have &lt;code&gt;completion_max_tokens&lt;/code&gt;. This is the maximum number of tokens we ask GPT to return. By itself it likely won’t change your blog much, but I wanted to highlight it to guide your experiments with our parameters. For the sake of the experiment, let’s set &lt;code&gt;completion_max_tokens&lt;/code&gt; really low:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "status_update_email": "johnson@contenda.co",
  "source_id": "youtube mozBidd58VQ",
  "type": "tutorial",
  "overrides": {
    "body_segment": {
      "completion_max_tokens": 100
    },
    "body_code_segment": {
      "completion_max_tokens": 100
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
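If you're scripting these requests rather than pasting JSON by hand, the same override body can be assembled programmatically. Here's a minimal sketch in Python; the field names come straight from the JSON above, but the `build_job_request` helper itself is hypothetical, not part of the Contenda API:

```python
def build_job_request(email, source_id, completion_max_tokens):
    """Build a job request body that caps completion tokens
    for both prose and code segments."""
    cap = {"completion_max_tokens": completion_max_tokens}
    return {
        "status_update_email": email,
        "source_id": source_id,
        "type": "tutorial",
        "overrides": {
            "body_segment": dict(cap),
            "body_code_segment": dict(cap),
        },
    }

request_body = build_job_request("johnson@contenda.co", "youtube mozBidd58VQ", 100)
```

Setting both segment types to the same cap keeps the experiment consistent, but you can of course override them independently.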



&lt;p&gt;We’ll get blog segments back that look like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  The 15-Minute PyTorch Deep Learning Model Challenge
&lt;/h2&gt;

&lt;p&gt;In a recent episode of Code That, a challenge was taken up to code a PyTorch deep learning model in just 15 minutes. This was attempted just a day after looking at the PyTorch documentation for the first time. The rules of the challenge were simple but strict:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A time limit of 15 minutes.&lt;/li&gt;
&lt;li&gt;No access to documentation or pre-existing code.&lt;/li&gt;
&lt;li&gt;No use of CodePilot (though the subscription had expired anyway).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Breaking any of these rules would&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You’ll notice that this blog segment ends mid-sentence. That’s because GPT wasn’t given enough tokens to finish saying what it wanted to. Limiting or increasing the number of tokens GPT can use for its response won’t change the content by itself. But, for example, if you start using a system template asking GPT to be lengthier in its responses and you start getting segments like these, then you know it’s time to increase your &lt;code&gt;completion_max_tokens&lt;/code&gt; so that GPT can finish writing what you asked it to.&lt;/p&gt;
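To build some intuition for why a low cap cuts text off mid-sentence, here's a rough sketch that treats each whitespace-separated word as one token. Real tokenizers (like the ones GPT uses) split text differently, so this is only an approximation:

```python
def truncate_completion(text, max_tokens):
    """Crudely simulate a completion token cap: keep only the
    first max_tokens words, dropping everything after."""
    words = text.split()
    return " ".join(words[:max_tokens])

sentence = "Breaking any of these rules would incur a one-minute penalty"
print(truncate_completion(sentence, 5))  # → "Breaking any of these rules"
```

The cutoff happens wherever the budget runs out, which is exactly why the segment above stops at "Breaking any of these rules would".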

&lt;h2&gt;
  
  
  Share please 👉👈
&lt;/h2&gt;

&lt;p&gt;And that’s a deeper dive into some of our parameters! As a reminder, this is still an experimental feature and not something we can currently provide a lot of support for. But we’re excited to see what you do with it!&lt;/p&gt;

&lt;p&gt;Make something that you’re really happy with? Discover anything cool or hilarious? We’d love to hear from you! You can tweet us at @ContendaCo. And let us know if there’s anything else you’d like to tweak about the transformation process.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>10x Code Snippets</title>
      <dc:creator>Johnson Lin</dc:creator>
      <pubDate>Tue, 30 May 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/10x-code-snippets-p6e</link>
      <guid>https://dev.to/contenda/10x-code-snippets-p6e</guid>
      <description>&lt;p&gt;If you’ve ever had to pause a video to copy a code snippet, then you know true pain. We love video tutorials as much as the next person, but sometimes we just want to copy/paste our way to the next step. That’s why we made an automatic code snippet detection model:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UZdVpxJM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/codesnippets.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UZdVpxJM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/codesnippets.png" alt="Image of code snippet comparison" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we’re excited to announce that we’ve improved our original Alpha model and are releasing a new Beta model that has 10x’ed our previous model in more ways than one.&lt;/p&gt;

&lt;h2&gt;
  
  
  So how does it work?
&lt;/h2&gt;

&lt;p&gt;When you submit a video to Contenda, one of the many fun things that happens is an OCR pipeline. In the simplest terms, imagine if each frame in a video went through this process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What does the text on the screen say? &lt;em&gt;Lemme write that down real quick&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Was that thing code? &lt;em&gt;If yes, I bookmark this&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Comparing it to the blog, which pieces of code are the most relevant for understanding?&lt;/li&gt;
&lt;li&gt;Insert those bad boys into the final blog output.&lt;/li&gt;
&lt;/ol&gt;
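The per-frame process above can be sketched roughly like this. Assume `ocr_frame` wraps a real OCR engine (e.g. Tesseract); the `looks_like_code` heuristic here is a toy stand-in, not Contenda's actual classifier:

```python
def looks_like_code(text):
    """Toy heuristic: flag text containing common code punctuation."""
    markers = ("{", "}", ";", "def ", "=>", "()")
    return any(m in text for m in markers)

def bookmark_code_frames(frames, ocr_frame):
    """Run OCR on each frame and bookmark the ones that look like code."""
    bookmarks = []
    for i, frame in enumerate(frames):
        text = ocr_frame(frame)          # 1. what does the text on screen say?
        if looks_like_code(text):        # 2. was that thing code?
            bookmarks.append((i, text))  # bookmark it for relevance matching later
    return bookmarks

# Usage with a fake OCR function standing in for a real engine:
frames = ["Welcome to the stream!", "def add(a, b):", "return a + b;"]
print(bookmark_code_frames(frames, lambda frame: frame))
```

Steps 3 and 4 (matching bookmarked snippets against the blog text and inserting the most relevant ones) are where the model comparison below comes in.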

&lt;h2&gt;
  
  
  Comparing the Alpha and Beta results
&lt;/h2&gt;

&lt;p&gt;We love when you try stuff and break it because that’s how we learn and improve! A common problem that was brought to our attention with Alpha was that it would sometimes talk around the code snippet without actually providing the code immediately afterwards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alpha:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this section, we will add the &lt;code&gt;opacity&lt;/code&gt; property to our animation and make some adjustments to the &lt;code&gt;h3&lt;/code&gt; element. Let’s begin by adding the opacity property:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Duplicate the existing keyframes and change the property to &lt;code&gt;opacity&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Set the opacity value at 0% to 0 (no unit).&lt;/li&gt;
&lt;li&gt;At 50%, set the opacity value to 0.2.&lt;/li&gt;
&lt;li&gt;Finally, at 100%, set the opacity value to 1.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, let’s work on the &lt;code&gt;h3&lt;/code&gt; element…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With our new model, we’re able to much more consistently get the code snippet inserted after the explanation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Beta:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this tutorial, we’ll add opacity to our keyframe animation. First, we need to duplicate the keyframe for the “heading” animation and modify the opacity values. At the 0% mark, set the opacity to 0, which means the element will be completely transparent. Then, at the 50% mark, set the opacity to 0.2, making the element slightly visible. Finally, at the 100% mark, set the opacity to 1, making the element fully visible. Here’s the updated CSS code for the “heading” animation with the opacity changes:&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@keyframes heading {
0% {
opacity: 0;
}
50% {
opacity: 0.2;
}
100% {
opacity: 1;
}
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Our Beta model has more code snippets (though if I’m being honest, it’s not exactly 10x as many) while also having less redundant code! It also proved effective at reducing hallucinations. And we will be continuously improving the code snippet experience, so keep your eyes peeled for future updates and maybe, a potential 100x code snippet release.&lt;/p&gt;

&lt;h2&gt;
  
  
  That’s it!
&lt;/h2&gt;

&lt;p&gt;This feature is LIVE both through the API and the frontend editor. If you’d like to learn more, please feel free &lt;a href="https://discord.gg/bYda4pQz2v"&gt;to reach out to our team on Discord&lt;/a&gt; or &lt;a href="https://contenda.ck.page/96e4e60222"&gt;sign up for our email list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can’t wait to see what you make!&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Contenda Partners with NVIDIA</title>
      <dc:creator>Johnson Lin</dc:creator>
      <pubDate>Tue, 23 May 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/contenda-partners-with-nvidia-46pm</link>
      <guid>https://dev.to/contenda/contenda-partners-with-nvidia-46pm</guid>
      <description>&lt;h2&gt;
  
  
  Contenda Joins NVIDIA Inception
&lt;/h2&gt;

&lt;p&gt;We’re excited to announce that Contenda has joined NVIDIA Inception, a program that nurtures startups revolutionizing industries with technology advancements.&lt;/p&gt;

&lt;p&gt;Contenda is a content catalyst for developer advocates to repurpose and extend existing videos into effective written content. Our API-first platform automates content creation workflows so teams can ship content faster. With built-in content quality checks, our generative AI models deliver finished drafts with high accuracy and tone matching.&lt;/p&gt;

&lt;p&gt;NVIDIA Inception will allow Contenda to tackle the problem of hallucinations in generative AI and develop guardrails to create content users can trust. The program will also offer Contenda the opportunity to collaborate with industry-leading experts and other AI-driven organizations.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Contenda focuses on delivering high quality, educational content to people who need it. I am excited for Contenda to help solve real-world problems in collaboration with NVIDIA.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;-Lilly Chen, CEO and Founder at Contenda&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k95rHp9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/nvidia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k95rHp9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://contenda.co/blogimages/nvidia.png" alt="Graphic of Contenda and NVIDIA" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’d like to learn more, please feel free &lt;a href="https://discord.gg/bYda4pQz2v"&gt;to reach out to our team on Discord&lt;/a&gt; or &lt;a href="https://contenda.ck.page/96e4e60222"&gt;sign up for our email list&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
  </channel>
</rss>
