In the previous article, we visualized the word vectors on a graph and saw how easily they represent the similarity between words.
In this article, we will train the neural network, update the weights, and observe the new results.
The new weights on the connections from the inputs to the activation functions are the word embeddings.
When we plot the words using these new weights, which are now the embeddings, we can see that “The Incredibles” and “Despicable Me” are now relatively close to each other compared to the other words in the training data.
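To make "relatively close" concrete, we can measure closeness with cosine similarity on the embedding vectors. The sketch below uses a made-up vocabulary and made-up 2-D weights purely for illustration; the actual embedding values come from your own training run, and the `cosine` helper is not part of any particular library.

```python
import numpy as np

# Hypothetical vocabulary and 2-D input-to-hidden weights (the embeddings).
# These numbers are illustrative, not real trained values.
vocab = ["The Incredibles", "Despicable Me", "is", "great"]
embeddings = np.array([
    [ 1.0,  0.9],   # "The Incredibles"
    [ 0.9,  1.0],   # "Despicable Me"
    [-0.5,  0.3],   # "is"
    [ 0.2, -0.8],   # "great"
])

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, -1.0 means opposite."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two movie titles score much higher with each other than with other words.
for i in range(1, len(vocab)):
    print(f"{vocab[0]} vs {vocab[i]}: {cosine(embeddings[0], embeddings[i]):.2f}")
```

With weights like these, the similarity between the two movie titles is close to 1, while their similarity to "is" and "great" is low or negative, which is exactly the pattern the plot shows.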
Now that we have trained the neural network, we can see how well it predicts the next word.
If we plug “The Incredibles” into the input, the output corresponding to the next word comes out close to 1, which is exactly what we expect.
Now, if we plug “is” into the input, we are able to correctly predict the next word “great”.
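The prediction step can be sketched as a forward pass: a one-hot input selects a row of the trained input weights, the resulting hidden vector is multiplied by the output weights, and a softmax gives a probability for each word in the vocabulary. Everything below is a toy sketch: the vocabulary, both weight matrices, and the `predict_next` helper are illustrative assumptions, hand-picked so the predictions match the article's examples.

```python
import numpy as np

vocab = ["The Incredibles", "Despicable Me", "is", "great"]
word_to_idx = {w: i for i, w in enumerate(vocab)}

# Hypothetical trained weights (real values come from your own training run).
W_in = np.array([   # input-to-hidden weights: the word embeddings
    [ 1.0,  0.9],
    [ 0.9,  1.0],
    [-0.5,  0.3],
    [ 0.2, -0.8],
])
W_out = np.array([  # hidden-to-output weights, one column per word
    [0.0, 0.0,  1.0, -1.0],
    [0.0, 0.0,  1.0,  1.0],
])

def predict_next(word):
    """Return the most probable next word for a single input word."""
    one_hot = np.zeros(len(vocab))
    one_hot[word_to_idx[word]] = 1.0
    hidden = one_hot @ W_in            # look up the word's embedding
    scores = hidden @ W_out            # one score per vocabulary word
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax
    return vocab[int(np.argmax(probs))]

print(predict_next("The Incredibles"))  # → is
print(predict_next("is"))               # → great
```

Because the softmax is monotonic, taking the argmax of the raw scores would give the same prediction; the probabilities are only needed when you want to see how confident the network is.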
So far, we have seen how a neural network can be trained to predict the next word in each phrase. However, predicting only the next word does not fully capture the context of a particular word.
Word2Vec has two strategies that allow us to capture more context, continuous bag of words (CBOW) and skip-gram, and we will explore those in the next article.
Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.
Just run:
`ipm install repo-name`
… and you’re done! 🚀