<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kaylee Lubick</title>
    <description>The latest articles on DEV Community by Kaylee Lubick (@kjlubick).</description>
    <link>https://dev.to/kjlubick</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F6623%2F89922b91-596a-4a9c-aaf3-0569bf7e8b5b.png</url>
      <title>DEV Community: Kaylee Lubick</title>
      <link>https://dev.to/kjlubick</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kjlubick"/>
    <language>en</language>
    <item>
      <title>Animated Python: For Each Loops and Palindromes</title>
      <dc:creator>Kaylee Lubick</dc:creator>
      <pubDate>Tue, 18 Oct 2022 11:24:28 +0000</pubDate>
      <link>https://dev.to/kjlubick/animated-python-for-each-loops-and-palindromes-4blk</link>
      <guid>https://dev.to/kjlubick/animated-python-for-each-loops-and-palindromes-4blk</guid>
      <description>&lt;p&gt;Loops are one of the trickiest concepts for beginning programmers.&lt;/p&gt;

&lt;p&gt;To help, the team at Concepts Illuminated started a series of animations that walk through how Python executes for-each loops (as well as conditionals, variables, and functions).&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.youtube.com/watch?v=BpsEvli1OfI" rel="noopener noreferrer"&gt;first animation features code that detects palindromes&lt;/a&gt;, a sample of which is available below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqyn6wzgb23auscnqnlu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqyn6wzgb23auscnqnlu.gif" alt="Preview of Animated Python Example" width="600" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.youtube.com/watch?v=BpsEvli1OfI" rel="noopener noreferrer"&gt;full animation, along with some commentary explaining the various steps&lt;/a&gt;, is available for free on YouTube [8 minute watch].&lt;/p&gt;

&lt;p&gt;The animation was created using &lt;a href="https://manim.community" rel="noopener noreferrer"&gt;Manim&lt;/a&gt;, which allowed us to write Python code to explain Python code. I find that amusing (and also very helpful).&lt;/p&gt;

</description>
      <category>python</category>
      <category>beginners</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building an ML Transformer in a Spreadsheet</title>
      <dc:creator>Kaylee Lubick</dc:creator>
      <pubDate>Sun, 13 Mar 2022 14:47:24 +0000</pubDate>
      <link>https://dev.to/kjlubick/building-an-ml-transformer-in-a-spreadsheet-36g</link>
      <guid>https://dev.to/kjlubick/building-an-ml-transformer-in-a-spreadsheet-36g</guid>
      <description>&lt;p&gt;&lt;a href="https://arxiv.org/pdf/1706.03762.pdf" rel="noopener noreferrer"&gt;Attention Is All You Need&lt;/a&gt; introduced Transformer models, which have been wildly effective in solving various Machine Learning problems. However, the 10 page paper is incredibly dense. There are so many details, it is difficult to gain high-level insights about how they work and why they are effective.&lt;/p&gt;

&lt;p&gt;After several person-months of reading other blog posts about them, the team at Concepts Illuminated understood Transformers well enough to &lt;a href="https://www.youtube.com/watch?v=S9eKuRVigjY" rel="noopener noreferrer"&gt;create one in a spreadsheet and make a video walking through it&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscd93u4f0yg36j52su06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscd93u4f0yg36j52su06.png" alt="Diagram showing how Transformers have alternating " value="" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a high level, Transformers are effective because they transform the data in a way that makes patterns easier to find. They build on ideas from Convolutional Neural Networks (focus) and Recurrent Neural Networks (memory), combining them in a mechanism called self-attention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=S9eKuRVigjY" rel="noopener noreferrer"&gt;The video&lt;/a&gt; covers these ideas in more details and &lt;a href="https://bit.ly/TransformerSheet2" rel="noopener noreferrer"&gt;this is the link to the spreadsheet with the implemented Transformer&lt;/a&gt;. Skip to the "Appendix" sheet if you want to see a layer with all the bells and whistles, including multi-headed attention and residual connections.&lt;/p&gt;

&lt;p&gt;Implementing the Transformer really helped me understand all the components. I'm especially proud of our metaphor of "scoring points" for explaining self-attention. &lt;/p&gt;
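&lt;p&gt;For readers who prefer code to cells, the "scoring points" step corresponds to standard scaled dot-product self-attention. Here is a minimal numpy sketch; the sizes and random weights below are made up for illustration and are not taken from the spreadsheet:&lt;/p&gt;

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of row vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[1])     # each token "scores points" against the others
    scores = scores - scores.max(axis=1, keepdims=True)     # for numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V                         # blend the values by attention weight

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # 4 tokens, 8 features each
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (4, 8)
```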

&lt;p&gt;Other resources I found useful when researching transformers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://nlp.seas.harvard.edu/2018/04/03/attention.html" rel="noopener noreferrer"&gt;https://nlp.seas.harvard.edu/2018/04/03/attention.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jalammar.github.io/illustrated-transformer/" rel="noopener noreferrer"&gt;https://jalammar.github.io/illustrated-transformer/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://towardsdatascience.com/illustrated-guide-to-transformers-step-by-step-explanation-f74876522bc0" rel="noopener noreferrer"&gt;https://towardsdatascience.com/illustrated-guide-to-transformers-step-by-step-explanation-f74876522bc0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/-QH8fRhqFHM" rel="noopener noreferrer"&gt;The Narrated Transformer Language Model&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/SysgYptB198" rel="noopener noreferrer"&gt;Attention Model Intuition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/3qdQAtfq1jU" rel="noopener noreferrer"&gt;Attention Mechanisms&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/S27pHKBEp30" rel="noopener noreferrer"&gt;LSTM is dead. Long Live Transformers!&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stats.stackexchange.com/a/424127" rel="noopener noreferrer"&gt;What exactly are keys, queries, and values?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/" rel="noopener noreferrer"&gt;https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/deeper-learning/glossary-of-deep-learning-word-embedding-f90c3cec34ca" rel="noopener noreferrer"&gt;https://medium.com/deeper-learning/glossary-of-deep-learning-word-embedding-f90c3cec34ca&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More of our work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building a traditional neural network in a spreadsheet to learn AND and XOR (&lt;a href="https://www.youtube.com/watch?v=fjfZZ6S1ad4" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=1zwnPt73pow" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Building a convolutional neural network in a spreadsheet to recognize letters (&lt;a href="https://www.youtube.com/watch?v=8QfFtUcuYCU" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=pH0s0e62GT8" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=a4JLw4Du8YM" rel="noopener noreferrer"&gt;Building a recurrent neural network in a spreadsheet to emulate autocomplete&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>neuralnetworks</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Recurrent Neural Network In a Spreadsheet</title>
      <dc:creator>Kaylee Lubick</dc:creator>
      <pubDate>Mon, 25 Oct 2021 11:27:49 +0000</pubDate>
      <link>https://dev.to/kjlubick/recurrent-neural-network-in-a-spreadsheet-45f3</link>
      <guid>https://dev.to/kjlubick/recurrent-neural-network-in-a-spreadsheet-45f3</guid>
      <description>&lt;p&gt;Truly understanding Recurrent Neural Networks was hard for me. Sure, I read &lt;a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="noopener noreferrer"&gt;Karpathy's oft-cited RNN article&lt;/a&gt; and looked at diagrams like:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jxk2rp5dk83b6k1qukl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jxk2rp5dk83b6k1qukl.png" alt="A typical basic RNN diagram, with only 3 arrows and very little explanation" width="270" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But that didn't resonate in my brain. How do numbers "remember"? What details are lurking in the simplicity of that diagram?&lt;/p&gt;

&lt;p&gt;To understand them better, the team at Concepts Illuminated built one in a spreadsheet. It was not as straightforward as our &lt;a href="https://dev.to/kjlubick/training-a-simple-neural-network-in-a-spreadsheet-2n05"&gt;previous&lt;/a&gt; &lt;a href="https://dev.to/kjlubick/building-an-image-recognizing-spreadsheet-30kh"&gt;attempts&lt;/a&gt; to build neural networks this way, mostly because we had to discover novel ways to visualize what was going on:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c6q6smqmyzgzy2d8jgu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c6q6smqmyzgzy2d8jgu.png" alt="A visualization of how the hidden layer of an RNN progresses through time to spell the word " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those visualizations really helped RNNs click for me, and we were then able to implement one and figure out the weights to make it work. Walk through that &lt;a href="https://www.youtube.com/watch?v=a4JLw4Du8YM" rel="noopener noreferrer"&gt;entire process with us in this YouTube video&lt;/a&gt;.&lt;/p&gt;
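&lt;p&gt;The "memory" that puzzled me boils down to one line of arithmetic: the new hidden state is a function of the current input and the previous hidden state. A toy scalar version, with weights picked arbitrarily for illustration rather than taken from the spreadsheet:&lt;/p&gt;

```python
import math

def rnn_step(x, h, w_xh, w_hh, b):
    """One recurrent step: mix the current input with the previous hidden state."""
    return math.tanh(w_xh * x + w_hh * h + b)

h = 0.0  # the hidden state is the network's "memory"
for x in [1.0, 0.0, 0.0, 1.0]:
    h = rnn_step(x, h, w_xh=2.0, w_hh=0.5, b=-1.0)
    print(round(h, 3))  # the state carries a trace of every earlier input
```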

&lt;p&gt;The &lt;a href="https://bit.ly/RNNSheet2" rel="noopener noreferrer"&gt;completed spreadsheet is here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;More of my work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building a traditional neural network in a spreadsheet to learn AND and XOR (&lt;a href="https://www.youtube.com/watch?v=fjfZZ6S1ad4" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=1zwnPt73pow" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Building a convolutional neural network in a spreadsheet to recognize letters (&lt;a href="https://www.youtube.com/watch?v=8QfFtUcuYCU" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=pH0s0e62GT8" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>neuralnetworks</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Building an Image Recognizing Spreadsheet</title>
      <dc:creator>Kaylee Lubick</dc:creator>
      <pubDate>Wed, 22 Sep 2021 01:42:10 +0000</pubDate>
      <link>https://dev.to/kjlubick/building-an-image-recognizing-spreadsheet-30kh</link>
      <guid>https://dev.to/kjlubick/building-an-image-recognizing-spreadsheet-30kh</guid>
      <description>&lt;p&gt;The team at Concepts Illuminated built an image-recognizing spreadsheet to better understand Convolutional Neural Networks. The sheet can recognize the letters X, T, and L. Creating it really illuminated the details, especially how a deep convolution layer can build on the previous layers.&lt;/p&gt;
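&lt;p&gt;The heart of each convolution layer is a small kernel slid across the image, summing element-wise products at every position. A bare-bones Python version of that sliding-window sum (the tiny image and kernel below are my own examples, not cells from the sheet):&lt;/p&gt;

```python
def convolve2d(image, kernel):
    """Valid cross-correlation: slide the kernel over the image, summing products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            total = 0
            for i in range(kh):
                for j in range(kw):
                    total += image[r + i][c + j] * kernel[i][j]
            row.append(total)
        out.append(row)
    return out

# A tiny 4x4 "image" of a vertical stroke, and a 2x2 vertical-edge kernel.
image = [[0, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 0, 0]]
kernel = [[1, -1],
          [1, -1]]
print(convolve2d(image, kernel))  # [[-2, 2, 0], [-2, 2, 0], [-2, 2, 0]]
```

&lt;p&gt;The strong positive and negative responses mark the two sides of the stroke, which is exactly the kind of feature map the sheet computes.&lt;/p&gt;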

&lt;p&gt;There is a two-part series that walks through building that spreadsheet.&lt;/p&gt;

&lt;p&gt;Part 1 - &lt;a href="https://www.youtube.com/watch?v=8QfFtUcuYCU" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=8QfFtUcuYCU&lt;/a&gt;&lt;br&gt;
Part 2 - &lt;a href="https://www.youtube.com/watch?v=pH0s0e62GT8" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=pH0s0e62GT8&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fully implemented sheet is at &lt;a href="https://bit.ly/CNNSheet3" rel="noopener noreferrer"&gt;https://bit.ly/CNNSheet3&lt;/a&gt; if you just want to jump to the end.&lt;/p&gt;

&lt;p&gt;If you are just getting into neural networks, maybe look at our previous series on building and training a traditional neural network in a spreadsheet:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/kjlubick/training-a-simple-neural-network-in-a-spreadsheet-2n05"&gt;Part 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/kjlubick/training-a-deep-neural-network-in-a-spreadsheet-3me3"&gt;Part 2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>neuralnetworks</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Training a deep neural network in a spreadsheet</title>
      <dc:creator>Kaylee Lubick</dc:creator>
      <pubDate>Sun, 22 Aug 2021 19:57:07 +0000</pubDate>
      <link>https://dev.to/kjlubick/training-a-deep-neural-network-in-a-spreadsheet-3me3</link>
      <guid>https://dev.to/kjlubick/training-a-deep-neural-network-in-a-spreadsheet-3me3</guid>
      <description>&lt;p&gt;The &lt;a href="https://www.youtube.com/watch?v=1zwnPt73pow" rel="noopener noreferrer"&gt;second and final part&lt;/a&gt; of the spreadsheet neural network video series is up. As promised in the &lt;a href="https://dev.to/kjlubick/training-a-simple-neural-network-in-a-spreadsheet-2n05"&gt;part 1&lt;/a&gt; writeup, this video demonstrates how to make and train a deep neural network: one with a single hidden layer.&lt;/p&gt;
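&lt;p&gt;Why does one hidden layer matter? XOR is the classic example: no single neuron can compute it, but two hidden neurons can. A sketch with hand-picked weights (my own illustration, not the weights trained in the video):&lt;/p&gt;

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(a, b):
    """One hidden layer of two neurons: h1 is roughly OR, h2 is roughly AND,
    and the output neuron computes h1 AND NOT h2, which is XOR."""
    h1 = sigmoid(20 * a + 20 * b - 10)   # roughly OR(a, b)
    h2 = sigmoid(20 * a + 20 * b - 30)   # roughly AND(a, b)
    return sigmoid(20 * h1 - 20 * h2 - 10)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(a, b)))  # prints the XOR truth table
```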

&lt;p&gt;Implementing the neural network algorithms in a spreadsheet genuinely helped the entire Concepts Illuminated team understand them. Hopefully it helps you too.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>beginners</category>
      <category>neuralnetworks</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Training a simple Neural Network in a Spreadsheet</title>
      <dc:creator>Kaylee Lubick</dc:creator>
      <pubDate>Sat, 21 Aug 2021 20:20:59 +0000</pubDate>
      <link>https://dev.to/kjlubick/training-a-simple-neural-network-in-a-spreadsheet-2n05</link>
      <guid>https://dev.to/kjlubick/training-a-simple-neural-network-in-a-spreadsheet-2n05</guid>
      <description>&lt;p&gt;For &lt;a href="https://www.3blue1brown.com/blog/some1" rel="noopener noreferrer"&gt;SOME1&lt;/a&gt;, the team at Concepts Illuminated made &lt;a href="https://www.youtube.com/watch?v=fjfZZ6S1ad4" rel="noopener noreferrer"&gt;a video series&lt;/a&gt; that walks the viewer through making a few different neural networks in a spreadsheet.&lt;/p&gt;

&lt;p&gt;Why a spreadsheet? I personally find it easier to reason about how all the steps and equations fit together when I can see them all at once, and a spreadsheet is a good mechanism for that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16ezdncyxtqddwxueou2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16ezdncyxtqddwxueou2.png" alt="A picture of a filled out spreadsheet neural network. The AND function is in the process of being trained to a single layer neural network." width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The functions learned aren't particularly complex: the series covers the AND function, the XOR function, and then a 1-bit half-adder. However, these functions are easy to visualize in 3D (and in bubble charts) and help explain why hidden layers are necessary.&lt;/p&gt;
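&lt;p&gt;The same training loop the spreadsheet performs cell by cell can be written as a few lines of Python. This sketch trains a single sigmoid neuron on AND by gradient descent on squared error; the learning rate and epoch count are arbitrary choices of mine, not the spreadsheet's:&lt;/p&gt;

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Truth table for AND: inputs and target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 1.0
for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of squared error through the sigmoid.
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

for (x1, x2), target in data:
    print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b)))  # recovers the AND table
```

&lt;p&gt;Each pass of the inner loop corresponds to one "row" of updates in the sheet; the spreadsheet just makes every intermediate value visible.&lt;/p&gt;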

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=fjfZZ6S1ad4" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt; is available now. Part 2, which covers deep neural networks, should be out soon.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>beginners</category>
      <category>neuralnetworks</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
