DEV Community


Day 91 Of 100DaysOfCode: Word Tokenization with NLTK

Durga Pokharel (iamdurga) ・2 min read

This is my 91st day of my #100daysofcode and #python learning journey. As for today's progress, I wrote one blog post and pushed it to GitHub, and did some coding on a random topic.

As usual, I also kept learning from the Datacamp course Natural Language Processing, today on the topic Word Tokenization with NLTK.


# Import necessary modules
from nltk.tokenize import sent_tokenize, word_tokenize

# scene_one is a string provided by the Datacamp exercise

# Split scene_one into sentences: sentences
sentences = sent_tokenize(scene_one)

# Use word_tokenize to tokenize the fourth sentence: tokenized_sent
tokenized_sent = word_tokenize(sentences[3])

# Make a set of unique tokens in the entire scene: unique_tokens
unique_tokens = set(word_tokenize(scene_one))

# Print the unique tokens result
print(unique_tokens)

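Since `scene_one` comes from the Datacamp exercise and NLTK may not be installed everywhere, here is a self-contained sketch of the same three steps using a simple regex tokenizer as a stand-in. The regexes are only rough approximations of `sent_tokenize` and `word_tokenize`, and the sample text below is made up for illustration:

```python
import re

# Made-up sample text standing in for the exercise's scene_one
scene_one = (
    "SOLDIER #1: Halt! Who goes there? "
    "ARTHUR: It is I, Arthur, son of Uther Pendragon. "
    "SOLDIER #1: Pull the other one! "
    "ARTHUR: I am. And this is my trusty servant Patsy."
)

def naive_sent_tokenize(text):
    # Split on sentence-ending punctuation followed by whitespace
    # (a crude approximation of nltk.sent_tokenize)
    return re.split(r"(?<=[.!?])\s+", text.strip())

def naive_word_tokenize(text):
    # Match runs of word characters, or single punctuation marks
    # (a crude approximation of nltk.word_tokenize)
    return re.findall(r"\w+|[^\w\s]", text)

# Same three steps as the exercise
sentences = naive_sent_tokenize(scene_one)
tokenized_sent = naive_word_tokenize(sentences[3])
unique_tokens = set(naive_word_tokenize(scene_one))

print(sentences[3])
print(tokenized_sent)
print(len(unique_tokens), "unique tokens")
```

Note that the real `word_tokenize` handles cases this regex misses (contractions like "don't", abbreviations, and so on), which is why NLTK is the better choice in practice.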
