
Rijul Rajesh

Understanding RNNs – Part 3: Unrolling a Recurrent Neural Network

In the previous article, we passed some values to the RNN and obtained the output.

Now, to make the computation easier to follow as we feed in more inputs, we unroll the neural network over time.

When unrolled, the network is split into two sequential steps, one per input: the first step processes the first day's value, and its hidden state is passed along to the second step, which processes the second day's value.

Running the calculations through both steps gives 0 as the predicted value for tomorrow, which matches the value we expected, so the network predicted correctly.
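The two-step calculation can be sketched in code. The weights, biases, and input values below are hypothetical stand-ins (the actual numbers come from the previous article); here we assume both observed values are 0, which makes the prediction come out to 0 as described:

```python
import numpy as np

# Hypothetical weights and biases, assumed for illustration only;
# the real values come from the previous article in this series.
w_input, w_recurrent, b_hidden = 0.5, -0.3, 0.0
w_output, b_output = 1.2, 0.0

def rnn_unrolled(inputs):
    """Run one unrolled RNN step per input value."""
    hidden = 0.0  # initial hidden state, before any input is seen
    for x in inputs:
        # Each unrolled step combines the new input with the hidden
        # state carried over from the previous step.
        hidden = np.tanh(x * w_input + hidden * w_recurrent + b_hidden)
    # The final hidden state is turned into the prediction for tomorrow.
    return hidden * w_output + b_output

# Two days of observed values (both 0) -> predicted value 0 for tomorrow.
print(rnn_unrolled([0.0, 0.0]))  # → 0.0
```

With inputs of 0 and all-zero biases, every tanh evaluates to 0, so the final prediction is exactly 0.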

So far, we have done this with two days of data.

Next, let us try using three days of data to make predictions in the same way.

Regardless of how many times we unroll a recurrent neural network, the weights and biases are shared across every input.
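This weight sharing is easy to demonstrate in code. In the sketch below (again with hypothetical parameter values), unrolling for two or three days calls the same function with the same parameter dictionary, and the parameter count never grows with the number of steps:

```python
import math

# Hypothetical shared parameters; the real values come from earlier articles.
params = {"w_in": 0.5, "w_rec": -0.3, "b_h": 0.0, "w_out": 1.2, "b_out": 0.0}

def predict(inputs, p):
    """Unroll the RNN once per input; every step reads the same dict p."""
    h = 0.0
    for x in inputs:
        h = math.tanh(x * p["w_in"] + h * p["w_rec"] + p["b_h"])
    return h * p["w_out"] + p["b_out"]

# Two unrolled steps and three unrolled steps reuse identical parameters:
print(predict([0.0, 0.0], params))       # two days of data
print(predict([0.0, 0.0, 0.0], params))  # three days of data
print(len(params))                       # still only 5 parameters
```

Because the same five numbers are reused at every step, an RNN can handle sequences of any length without adding new weights.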

That covers basic recurrent neural networks. In the next article, we will see why they are rarely used in practice.

Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.

Just run:

ipm install repo-name

… and you’re done! 🚀


🔗 Explore Installerpedia here
