Truly understanding Recurrent Neural Networks was hard for me. Sure, I read Karpathy's oft-cited RNN article and looked at diagrams like this:
But that didn't resonate in my brain. How do numbers "remember"? What details are lurking in the simplicity of that diagram?
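For concreteness, here is a minimal sketch of the recurrence that lets numbers "remember" (plain Python with tiny, made-up sizes and random weights, not the ones from our spreadsheet or video): at each time step the new hidden state mixes the current input with the previous hidden state, so the hidden state carries a summary of everything seen so far.

```python
import numpy as np

# A toy RNN cell: hypothetical sizes and randomly initialized weights,
# just to show where the "memory" lives.
rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_size)

def step(x, h_prev):
    """One time step: the new hidden state depends on the current input
    AND the previous hidden state, which is how the network 'remembers'."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Feed a short sequence; h is overwritten each step, carrying
# information from earlier inputs forward to later ones.
h = np.zeros(hidden_size)
sequence = [rng.normal(size=input_size) for _ in range(5)]
for t, x in enumerate(sequence):
    h = step(x, h)
    print(f"t={t}, hidden state = {np.round(h, 3)}")
```

That loop is the whole trick the diagram compresses into one arrow: nothing mystical, just a vector that gets updated and passed forward.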
To understand them better, the team at Concepts Illuminated built one in a spreadsheet. It was not as straightforward as our previous attempts to build neural networks this way, mostly because we had to discover novel ways to visualize what was going on:
Those visualizations really helped RNNs click for me, and we were then able to implement one and figure out the weights that make it work. Walk through that entire process with us in this YouTube video.
The completed spreadsheet is here.