<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fatima Jannet</title>
    <description>The latest articles on DEV Community by Fatima Jannet (@fatimajannet).</description>
    <link>https://dev.to/fatimajannet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1328254%2Fc5aa278a-93fc-4d00-89ef-bb1baeb98b7f.png</url>
      <title>DEV Community: Fatima Jannet</title>
      <link>https://dev.to/fatimajannet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fatimajannet"/>
    <language>en</language>
    <item>
      <title>NLP - Semantics and Sentiment Analysis</title>
      <dc:creator>Fatima Jannet</dc:creator>
      <pubDate>Wed, 04 Jun 2025 05:28:49 +0000</pubDate>
      <link>https://dev.to/fatimajannet/nlp-semantics-and-sentiment-analysis-2n21</link>
      <guid>https://dev.to/fatimajannet/nlp-semantics-and-sentiment-analysis-2n21</guid>
      <description>&lt;h2&gt;
  
  
  Overview of Semantics and Word Vectors
&lt;/h2&gt;

&lt;p&gt;word2vec is a two-layer neural network that processes text: its input is a text corpus and its output is a set of vectors. Word2vec’s purpose is to group the vectors of similar words together. It detects similarities mathematically, giving each word a numerical representation that places similar words near one another.&lt;/p&gt;

&lt;p&gt;Word2vec is all about training words against other words. There are two ways of doing this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;continuous bag of words (CBOW)&lt;/li&gt;
&lt;li&gt;skip-gram&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;They are the opposite of each other. In CBOW, the model is given the context words and predicts the target word. In skip-gram, it tries to predict the context words from the target word.&lt;/p&gt;
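&lt;p&gt;To make the difference concrete, here is a tiny illustrative sketch in plain Python (not spaCy or gensim; the sentence and window size are made up) of the training pairs each method generates:&lt;/p&gt;

```python
# Toy sketch: how CBOW and skip-gram slice the same sentence
# into training pairs, with a context window of 1 word on each side.
sentence = "the quick brown fox".split()

def cbow_pairs(words, window=1):
    # CBOW: (context words -> target word)
    pairs = []
    for i, target in enumerate(words):
        context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
        pairs.append((tuple(context), target))
    return pairs

def skipgram_pairs(words, window=1):
    # Skip-gram: (target word -> each context word)
    pairs = []
    for i, target in enumerate(words):
        for c in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
            pairs.append((target, c))
    return pairs

print(cbow_pairs(sentence))      # pairs like (('the', 'brown'), 'quick')
print(skipgram_pairs(sentence))  # pairs like ('quick', 'the'), ('quick', 'brown')
```

&lt;p&gt;A real model then trains a small neural network on millions of such pairs; the learned weights become the word vectors.&lt;/p&gt;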

&lt;p&gt;Whichever method is used, each word ends up as a vector. In spaCy, each of these vectors has 300 dimensions. Training word2vec yourself on a large text corpus can take a long time, so in practice most people use pre-trained word vectors. However, if you do train your own model, you can choose fewer or more dimensions; typically they range from 100 to 1000. The more dimensions you have, the longer the training time will be. However, with more dimensions you can capture more context around each word, as there is more space to store information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Semantics and Word Vectors (spaCy)
&lt;/h2&gt;

&lt;p&gt;Important thing to notice: the larger spaCy English models (such as en_core_web_lg) include word vectors. The smallest model (en_core_web_sm) does not.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Download the large English model via spaCy (run once)
!python -m spacy download en_core_web_lg

# Load the model
import spacy
nlp = spacy.load("en_core_web_lg")

# Vector for a text (averaged over its tokens)
nlp('BRAC University').vector

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc and span objects also have vectors, which are created by averaging the vectors of their individual tokens. This means you can not only use word2vec but also a doc2vec-style approach, where the document's vector is the average of all its word vectors.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nlp('The quick brown fox jumps over the lazy dog.').vector.shape
nlp('dog').vector.shape

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Essentially, both have 300-dimensional vectors: the shape is (300,) in each case.&lt;br&gt;
&lt;/p&gt;
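&lt;p&gt;To see why a whole doc gets the same shape as a single word, here is a small NumPy sketch of the averaging (made-up 4-dimensional vectors standing in for spaCy's 300-dimensional ones):&lt;/p&gt;

```python
import numpy as np

# Made-up stand-ins for word vectors (spaCy's are 300-dimensional)
word_vectors = {
    "the": np.array([0.1, 0.2, 0.0, 0.4]),
    "lazy": np.array([0.3, 0.0, 0.1, 0.2]),
    "dog": np.array([0.5, 0.4, 0.3, 0.0]),
}

# A doc/span vector is the mean of its token vectors,
# so it keeps the same number of dimensions
doc_vector = np.mean([word_vectors[w] for w in "the lazy dog".split()], axis=0)
print(doc_vector.shape)  # (4,)
```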

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Identifying similar words 
tokens = nlp('lion cat pet')

for token1 in tokens: 
  for token2 in tokens: 
    print(token1.text, token2.text, token1.similarity(token2))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The words "lion lion," "cat cat," and "pet pet" have a 100% similarity with each other. The similarity values range between zero and one. It makes sense that a word is completely similar to itself. What's interesting is that the word vectors have enough information to show that "lion" and "cat" have some similarity, with a score of 0.52. "Cat" and "pet" tend to have a high similarity because most cats are pets, so it makes sense they have a high similarity score. It also makes sense that "lion" and "pet" have a similarity of less than 0.5, as having a lion as a pet isn't common, though some people do. We can see that relationships are established just from the word vectors. Essentially, this process checks the cosine similarity between token one and token two.&lt;/p&gt;

&lt;p&gt;Love and hate are very different words with very different meanings. However, they're often used in the same context. You either love a movie or hate a movie, love a book or hate a book. In this way, they are similar because they're frequently used in similar situations. So, words like these can often have very similar vectors.&lt;/p&gt;

&lt;p&gt;So, remember that if words are used in a similar context, they might be similar even if they have opposite meanings in regular English.&lt;/p&gt;
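&lt;p&gt;Under the hood, similarity() is just the cosine similarity between the two vectors. A minimal NumPy sketch (with made-up vectors):&lt;/p&gt;

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    # cos(theta) = dot(a, b) / (norm(a) * norm(b))
    return float(np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([3.0, 2.0, 1.0])

print(cosine_similarity(a, a))  # a vector is maximally similar to itself (1.0)
print(cosine_similarity(a, b))  # less similar, but still positive
```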

&lt;p&gt;Sometimes, it's useful to collapse the 300 dimensions into a single number: the Euclidean (L2) norm of the vector.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nlp.vocab.vectors.shape
#OUTPUT (342918, 300)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes you may encounter words that fall outside this 342,918-word vocabulary. spaCy provides attributes for checking this, such as token.is_oov (out of vocabulary).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Outside of vocab attribute checking 
tokens = nlp('dog cat nargle')

for token1 in tokens: 
  print(token1.text, token1.has_vector, token1.vector_norm, token1.is_oov)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;"Nargle" is a made-up word, so it doesn't have a vector: has_vector is False, the norm is 0.0, and is_oov is True. Keep in mind that common names might actually have vectors. For example, if we add "karen", it does have a vector associated with it, and so do even some uncommon names. If I were to include my middle name, it is actually in the vocabulary, which is quite interesting.&lt;/p&gt;

&lt;p&gt;Another thing is vector arithmetic. You can calculate a new vector by adding and subtracting related vectors. A famous example is: king - man + woman ≈ queen.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# NOTE: This code is correct, but the top results depend on the model's
# vectors, so 'queen' may not appear exactly where you expect.

# King - Man + Woman = Queen
# Importing spatial to compute cosine similarity
from scipy import spatial

cosine_similarity = lambda vec1, vec2: 1 - spatial.distance.cosine(vec1, vec2)

# Grabbing their vectors
king = nlp.vocab['king'].vector
man = nlp.vocab['man'].vector
woman = nlp.vocab['woman'].vector

# King - Man + Woman = Queen --&amp;gt; new_vector similar to queen, princess, highness
new_vector = king - man + woman
computed_similarities = []

# Scan the entire vocabulary, keeping lowercase, alphabetic, non-stop words
for word in nlp.vocab:
    if word.has_vector and word.is_lower and word.is_alpha and not word.is_stop:
        similarity = cosine_similarity(new_vector, word.vector)
        computed_similarities.append((word, similarity))

# Sort in descending order of similarity
# (without the minus, it sorts ascending, showing the least similar words)
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])

# Grab the first element of each tuple, ask for its text,
# and do that for the top 10 tuples ([:10])
print([t[0].text for t in computed_similarities[:10]])
# Expected output: ['queen', 'monarch', 'princess', 'royal', 'throne', ...]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s some sort of understanding of royalty associated with &lt;em&gt;king&lt;/em&gt;, and some sort of understanding of gender. The vectors encode that a &lt;em&gt;king&lt;/em&gt; is a man, so if we subtract the gender component from &lt;em&gt;king&lt;/em&gt; and add &lt;em&gt;woman&lt;/em&gt; to it, we get something royal as well.&lt;/p&gt;
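&lt;p&gt;You can see the same mechanism with hand-made toy vectors, where I deliberately make dimension 0 mean "royalty" and dimension 1 mean "gender" (a made-up sketch; real word2vec dimensions are not labeled like this):&lt;/p&gt;

```python
import numpy as np

# Hypothetical 2-d vectors: dim 0 ~ royalty, dim 1 ~ gender (+1 male, -1 female)
vocab = {
    "king":  np.array([0.9,  1.0]),
    "queen": np.array([0.9, -1.0]),
    "man":   np.array([0.1,  1.0]),
    "woman": np.array([0.1, -1.0]),
    "dog":   np.array([0.0,  0.0]),
}

new_vector = vocab["king"] - vocab["man"] + vocab["woman"]  # [0.9, -1.0]

# Nearest remaining word by Euclidean distance
candidates = {w: v for w, v in vocab.items() if w not in ("king", "man", "woman")}
closest = min(candidates, key=lambda w: np.linalg.norm(candidates[w] - new_vector))
print(closest)  # queen
```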

&lt;h2&gt;
  
  
  Sentiment Analysis
&lt;/h2&gt;

&lt;p&gt;We have a tool called VADER (Valence Aware Dictionary and sEntiment Reasoner). It processes raw text with an algorithm to determine sentiment; no pre-labeled training data is needed. VADER is a model for text sentiment analysis that is sensitive to polarity, detecting both positivity and negativity.&lt;/p&gt;

&lt;p&gt;VADER is available in NLTK, and we can use it directly on unlabeled text data. It relies on a dictionary that maps lexical features to emotion intensities, aka sentiment scores. The sentiment score of a text is obtained by summing the intensity of each word in it, so every single word contributes to the score. Words like love, like, enjoy, and happy all convey positive sentiment. VADER is smart enough to understand that "did not enjoy" is negative, since it takes every word into account, and it also registers intensity: "LOVE!!!" scores higher than "love". But VADER can't detect sarcasm; that is genuinely difficult. How could it tell that positive words are being used in a negative way?&lt;/p&gt;
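&lt;p&gt;To get a feel for the "sum the intensities" idea, here is a drastically simplified toy scorer. To be clear, this is NOT VADER's actual algorithm: the lexicon values, the negation words, and the 1.2 exclamation boost are all made up for illustration.&lt;/p&gt;

```python
# Toy lexicon mapping words to made-up emotion intensities
LEXICON = {"love": 3.0, "enjoy": 2.0, "happy": 2.0, "hate": -3.0, "worst": -3.0}

def toy_sentiment(text):
    score = 0.0
    negate = False
    for w in text.lower().replace("!", " !").split():
        if w in ("not", "didn't", "never"):
            negate = True          # flip the sign of the next sentiment word
        elif w == "!":
            score *= 1.2           # exclamation marks amplify the running score
        elif w in LEXICON:
            score += -LEXICON[w] if negate else LEXICON[w]
            negate = False
    return score

print(toy_sentiment("I love this anime"))     # 3.0
print(toy_sentiment("I did not enjoy this"))  # -2.0
```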

&lt;p&gt;Now, let’s see how to use VADER in NLTK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sentiment Analysis (NLTK)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import nltk

nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sid = SentimentIntensityAnalyzer()

review = "This is a good anime"
sid.polarity_scores(review) # Only one +ve word 

review = "BEST ANIME! This was the best, most AWESOME anime MADE EVER IN HISTORY!!!"
sid.polarity_scores(review) # It has many positive words, capitalized, even had '!!! marks'

review = "This was the WORST anime!! YUCK! waste of my time!"
sid.polarity_scores(review) # It has no positive word, rather bunch of negative words.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Let’s do it on Amazon reviews
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/fatimajannet/NLP-with-Fatima" rel="noopener noreferrer"&gt;Get your resources here &lt;/a&gt;&lt;br&gt;
The whole process is written in the Colab file. There's not much to explain here, so please check out the Colab file.&lt;/p&gt;

&lt;p&gt;That’s it! Please also run this model with moviereviews.tsv, which is provided in the GitHub repo.&lt;/p&gt;

</description>
      <category>nlp</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>What is Deep Learning?</title>
      <dc:creator>Fatima Jannet</dc:creator>
      <pubDate>Sun, 17 Nov 2024 17:39:08 +0000</pubDate>
      <link>https://dev.to/fatimajannet/what-is-deep-learning-1dk0</link>
      <guid>https://dev.to/fatimajannet/what-is-deep-learning-1dk0</guid>
      <description>&lt;p&gt;About 25-30 years ago, people didn't know what the internet was. Now, we can't imagine a day without it!&lt;/p&gt;

&lt;p&gt;However, I’ll be giving you a quick run on what is deep learning.&lt;/p&gt;

&lt;p&gt;Neural networks and deep learning were invented back in the 1960s, but they only got recognition in the 80s. People started talking about them and conducted a lot of research. Many thought they would change the whole world, but then the hype died down. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is that&lt;/strong&gt;? The technology to support neural networks simply wasn't there yet. Deep learning needs two things: an enormous amount of data and strong processing power, and neither was available at the time. &lt;/p&gt;

&lt;p&gt;Let's have a look at three years: &lt;strong&gt;1956, 1980, 2017&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Do you know what storage looked like back in 1956?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy57bvij8aeh7ae5mo9rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy57bvij8aeh7ae5mo9rq.png" alt="Image description" width="630" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, this is a hard drive, and that's 5 megabytes right there. It's on a forklift, about the size of a small room, being moved to another location by plane. In 1956, a company had to pay &lt;strong&gt;$2,500&lt;/strong&gt; of that era's money per month to rent that hard drive, not buy it, just rent it.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;1980&lt;/strong&gt;, the situation had improved a little, but storage was still very expensive for only 10 megabytes (about the size of one photo these days).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sbrujj2g0jpvo66medt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sbrujj2g0jpvo66medt.png" alt="Image description" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And in &lt;strong&gt;2017&lt;/strong&gt;, we've got a 256 gigabyte SD card for $150, which can fit on your finger.&lt;/p&gt;

&lt;p&gt;So from 1956 to 1980, storage &lt;strong&gt;capacity doubled&lt;/strong&gt;, and then by 2017 it increased about 25,600 times. The time spans are comparable, yet there was a huge leap in technology. This shows the growth isn't linear; it's exponential.&lt;/p&gt;

&lt;p&gt;Now, for the processing: here's a chart on a &lt;strong&gt;logarithmic scale&lt;/strong&gt;. If we plot hard drive cost per gigabyte, it quickly approaches zero. Nowadays, you can even get free storage on Dropbox and Google Drive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvbjs4owtd04jzdrkbak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvbjs4owtd04jzdrkbak.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, scientists are exploring the use of DNA for storage, although it's quite expensive. It costs $7,000 to synthesize 2MB of data and another $2,000 to read it.&lt;/p&gt;

&lt;p&gt;But have you noticed that this situation is quite similar to the early days of hard drives and planes? This is also going to improve rapidly due to the exponential growth curve. Ten or twenty years from now, everyone will be using DNA storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv38loe0xclusb8ztqpgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv38loe0xclusb8ztqpgp.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This information is from &lt;a href="https://www.nature.com/articles/537022a" rel="noopener noreferrer"&gt;Nature&lt;/a&gt;. As you can see, you can store all the world's data in just 1 kilogram of DNA storage. Alternatively, you can store about 1 billion terabytes of data in just 1 gram of DNA storage.&lt;/p&gt;

&lt;p&gt;This example shows how fast we're progressing, which is why deep learning is gaining momentum now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09bbh8oqbigcs60efns1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09bbh8oqbigcs60efns1.png" alt="Image description" width="300" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same growth applies to processing capacity, which is also increasing at an exponential rate. This is known as &lt;strong&gt;Moore's Law&lt;/strong&gt;, and you've probably heard of it. &lt;/p&gt;

&lt;p&gt;Right now, computers have surpassed the thinking ability of a rat and are approaching the capacity of a single human brain; by 2040 or 2045, they are projected to surpass the combined thinking power of all humans. So basically, we're entering an era of computers that are incredibly powerful and can process things far faster than we can imagine, and this is exactly what is facilitating deep learning. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14guan95gx6b3jnjkt26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14guan95gx6b3jnjkt26.png" alt="Image description" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now all of this makes us question: &lt;strong&gt;What exactly is deep learning? What is neural networking? what is it? What is going on here?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This gentleman, &lt;strong&gt;Geoffrey Hinton&lt;/strong&gt;, is known as the godfather of deep learning. He was already researching deep learning in the 1980s, has done a great deal of work in the field, and has published many research papers on it. He spent years working at Google, and he won the 2024 Nobel Prize in Physics. Much of what we will discuss traces back to him.&lt;/p&gt;

&lt;p&gt;[He has many great YouTube videos where he explains things clearly, so I highly recommend checking them out!]&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ay0p11urk2qlrymb1uqf.png" rel="noopener noreferrer"&gt;Image description&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The idea behind deep learning is to look at the human brain.&lt;/strong&gt; It tries to mimic the brain (neuroscience stuff). We don't know everything about the human brain, but with the little we do know, we try to mimic it. &lt;strong&gt;Why?&lt;/strong&gt; Because the human brain is one of the most powerful learning tools on this planet. The way the brain learns, adapts skills, and applies them is what we want our computers to copy. &lt;/p&gt;

&lt;p&gt;Here we have some neurons. These neurons are spread onto glass and observed under a microscope with some coloring. You can see what they look like. They have a body, branches, and tails. You can also see a nucleus in the middle. That's what a neuron looks like. In the human brain, there are approximately a hundred billion individual neurons in total and they are connected with each other. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqa986yniuiksjeue5dh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqa986yniuiksjeue5dh.png" alt="Image description" width="612" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So to give you a picture, &lt;strong&gt;this is what it looks like: an actual dissection of the human brain&lt;/strong&gt;.&lt;br&gt;
This is just to show how vast the network of neurons is. There are billions and billions of neurons all connected in your brain. We're not talking about five hundred, a thousand, or even a million, but billions of neurons that take care of memorizing, balancing, and more. And yes, that's what we're going to try to recreate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ig55xhl9psllh818cid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ig55xhl9psllh818cid.png" alt="Image description" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ezmmi641kyey9oqm0i3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ezmmi641kyey9oqm0i3.png" alt="Image description" width="612" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how do we recreate this in a computer? We create an artificial structure called an artificial neural network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hbk53lff98vov0q6kii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hbk53lff98vov0q6kii.png" alt="Image description" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
We have nodes or neurons, which are used for input values. These are the values you know about a certain situation. For example, if you're modeling something and want to make predictions, you'll need some input to begin with. This is called the input layer.&lt;/p&gt;

&lt;p&gt;Then you have the output, which is the value you want to predict. This is called the output layer.&lt;/p&gt;

&lt;p&gt;And in between, we have a hidden layer. In your brain, information comes through your senses like eyes and ears. It doesn't go straight to the result; it passes through many neurons first. This is why we use hidden layers in modeling the brain before reaching the output. This is the whole concept behind it: we are going to model the brain, so we need these hidden layers before the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neurons are connected to the hidden layer, and the hidden layer is connected to the output value.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ful2kpj5f6g39y5zwr4xi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ful2kpj5f6g39y5zwr4xi.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then we connect everything, just like in the human brain. Connect everything, interconnect everything. That's how the input values process through all these hidden layers, just like in the human brain, and then we get an output value.&lt;/p&gt;
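&lt;p&gt;At its core, that "input layer to hidden layer to output" flow is just a couple of matrix multiplications with a non-linearity in between. A minimal NumPy sketch (the weights are random and untrained, and the 3-4-1 layer sizes are arbitrary choices for illustration):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # squashes any number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.1, 0.9])   # input layer: 3 known values
W1 = rng.normal(size=(4, 3))    # weights: input -> 4 hidden neurons
W2 = rng.normal(size=(1, 4))    # weights: hidden -> 1 output neuron

hidden = sigmoid(W1 @ x)        # hidden layer activations
output = sigmoid(W2 @ hidden)   # predicted output value
print(output.shape)  # (1,)
```

&lt;p&gt;Training is then the process of adjusting W1 and W2 so the output matches known examples.&lt;/p&gt;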

&lt;p&gt;This is what deep learning is all about at a very abstract level.&lt;/p&gt;

&lt;p&gt;I will be uploading two blogs on two deep learning models. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Artificial Neural Networks for a Business Problem&lt;/li&gt;
&lt;li&gt;Convolutional Neural Networks for a Computer Vision task&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My blogs are pretty long, so I'll upload them on &lt;a href="https://mahia.hashnode.dev" rel="noopener noreferrer"&gt;Hashnode&lt;/a&gt; and then share another blog here with links to those two posts. Thanks for reading!&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>computerscience</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>ML Chapter 7: Natural Language Processing</title>
      <dc:creator>Fatima Jannet</dc:creator>
      <pubDate>Sun, 17 Nov 2024 04:18:20 +0000</pubDate>
      <link>https://dev.to/fatimajannet/ml-chapter-7-natural-language-processing-5b0</link>
      <guid>https://dev.to/fatimajannet/ml-chapter-7-natural-language-processing-5b0</guid>
      <description>&lt;p&gt;Natural Language Processing (NLP) involves using machine learning models to work with text and language. The goal of NLP is to teach machines to understand spoken and written words. For example, when you dictate something into your iPhone or Android device and it converts your speech to text, that's an NLP algorithm at work.&lt;/p&gt;

&lt;p&gt;You can also use NLP to analyze a text review and predict whether it's positive or negative. NLP can categorize articles or determine the genre of a book. It can even be used to create machine translators or speech recognition systems. In these cases, classification algorithms help identify the language. Most NLP algorithms are classification models, including Logistic Regression, Naive Bayes, CART (a decision tree model), Maximum Entropy (also related to decision trees), and Hidden Markov Models (based on Markov processes).&lt;/p&gt;

&lt;p&gt;Small insight before starting: On the left of the Venn diagram, we have green representing NLP. On the right, we have blue representing DL. In the intersection, we have DNLP. There's a subsection of DNLP called Seq2Seq. Sequence to sequence is currently the most cutting-edge and powerful model for NLP. However, we won't discuss seq2seq in this blog. We will be covering basically the bag-of-words classification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mmfsodi8uxawgqbpiup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mmfsodi8uxawgqbpiup.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this part, you will understand and learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean text to prepare it for machine learning models.&lt;/li&gt;
&lt;li&gt;Create a Bag of Words model.&lt;/li&gt;
&lt;li&gt;Apply machine learning models to this Bag of Words model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s what we will be focusing on. Note: we will not discuss Seq2Seq, chatbots, or deep NLP. The materials I have used are from NLP with DL, so we will exclude the DL part. &lt;/p&gt;
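&lt;p&gt;As a quick preview of the Bag of Words idea, here is a from-scratch sketch (the two example sentences are made up; in practice you would use a library such as scikit-learn's CountVectorizer):&lt;/p&gt;

```python
# Minimal bag-of-words: build a vocabulary, then turn each
# document into a vector of word counts.
corpus = ["the food was good", "the food was not good"]

vocab = sorted({w for doc in corpus for w in doc.split()})

def bag_of_words(doc):
    counts = {w: 0 for w in vocab}
    for w in doc.split():
        counts[w] += 1
    return [counts[w] for w in vocab]

print(vocab)                                 # ['food', 'good', 'not', 'the', 'was']
print([bag_of_words(doc) for doc in corpus])
```

&lt;p&gt;A classifier then learns from these count vectors instead of the raw text.&lt;/p&gt;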

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiydf6ftw1sphsawpafyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiydf6ftw1sphsawpafyk.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To read the full blog: &lt;a href="https://mahia.hashnode.dev/ml-chapter-7-natural-language-processing" rel="noopener noreferrer"&gt;ML Chapter 7: Natural Language Processing&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>nlp</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>Calculus for ML: Derivatives Part - 01</title>
      <dc:creator>Fatima Jannet</dc:creator>
      <pubDate>Thu, 16 May 2024 12:54:08 +0000</pubDate>
      <link>https://dev.to/fatimajannet/calculus-for-ml-derivatives-part-01-2adm</link>
      <guid>https://dev.to/fatimajannet/calculus-for-ml-derivatives-part-01-2adm</guid>
      <description>&lt;p&gt;Do you know calculus plays an important role in ML? I am going to show you how derivates performs a vital role to make the most accurate model for you. Let's start !! &lt;/p&gt;

&lt;p&gt;First let's have a look at some derivatives. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Calculating a slope:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd3ym8xi8pfaiqpns6e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd3ym8xi8pfaiqpns6e7.png" alt="Image description" width="520" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;slope = rise/run &lt;br&gt;
      = change in y axis/change in x axis&lt;br&gt;
      = ∆y/∆x &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw83lbef8mkdbgwiu6z4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw83lbef8mkdbgwiu6z4.png" alt="Image description" width="631" height="411"&gt;&lt;/a&gt;&lt;br&gt;
Here, the slope is going to be &lt;br&gt;
     slope = ∆x/∆t &lt;br&gt;
           = (x(15) - x(10)) / (15 - 10)&lt;br&gt;
           = (202 - 122) / (15 - 10)&lt;br&gt;
           = 16 m/s &lt;/p&gt;

&lt;p&gt;Now, let me ask you a question: at t = 12.5 s, what is the velocity? If not the exact velocity, find me a good estimate. &lt;br&gt;
Let's find out the slope, &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe759x1vixvkrjubqya2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe759x1vixvkrjubqya2v.png" alt="Image description" width="620" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            slope = ∆x/∆t
                  = (177-155)/(13-12) = 22 m/s 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But this is just a rough value; we still don't know the exact velocity at t = 12.5 s. To find it, we need finer and finer intervals. The finer the interval, the more accurate the answer, and this is where the derivative comes in. &lt;/p&gt;
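&lt;p&gt;We can sketch this shrinking-interval idea in a few lines of Python. The position function below is a made-up example (not the one from the graph above), chosen so the exact velocity at t = 12.5 is easy to check:&lt;/p&gt;

```python
# Estimate an instantaneous velocity by shrinking the interval ∆t.
# x(t) is a hypothetical position function, not the one from the graph.
def x(t):
    return 0.5 * t ** 2  # position (metres) at time t (seconds)

t0 = 12.5
for dt in [1.0, 0.1, 0.01, 0.001]:
    slope = (x(t0 + dt) - x(t0)) / dt  # average velocity over [t0, t0 + dt]
    print(dt, slope)
```

&lt;p&gt;The printed slopes settle toward 12.5 m/s, the exact derivative of 0.5t^2 at t = 12.5; this stabilising limit is exactly what the derivative formalises.&lt;/p&gt;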

&lt;p&gt;&lt;strong&gt;1. Derivatives and Tangents&lt;/strong&gt;&lt;br&gt;
A tangent is a line that touches a curve at exactly one point, and its slope tells you how steeply the curve rises there. The derivative of a function gives you the gradient of the tangent at a given point on the curve. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F883tz2epbteog9hmfkqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F883tz2epbteog9hmfkqg.png" alt="Image description" width="781" height="553"&gt;&lt;/a&gt;&lt;br&gt;
Look how the interval shrinks as we take smaller and smaller steps. Eventually the estimates get so close together that we can't tell them apart. This limiting value is the instantaneous rate of change, aka the derivative. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Derivative of a line&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbgnam602chjlp5w8oqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbgnam602chjlp5w8oqs.png" alt="Image description" width="742" height="490"&gt;&lt;/a&gt;&lt;br&gt;
slope = ∆y/∆x &lt;br&gt;
      = rise/run&lt;br&gt;
      = {a(x+∆x)+b - (ax+b)} / {(x+∆x) - x} &lt;br&gt;
      = a∆x/∆x = a&lt;br&gt;
So, the derivative of the line f(x) = ax + b is its slope a. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Derivative of Quadratic Function :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv1zzyfu7bbz5kiyhgs8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv1zzyfu7bbz5kiyhgs8.png" alt="Image description" width="793" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;slope = ∆f/∆x = {f(x+∆x) - f(x)} / ∆x = {(x+∆x)^2 - x^2} / ∆x &lt;br&gt;
Between (1,1) and (2,4) - &lt;br&gt;
∆x = 2-1 = 1 &lt;br&gt;
[∆f = 4-1 = 3 ; y2 - y1] &lt;br&gt;
∆f = (x+∆x)^2 - x^2 &lt;br&gt;
=&amp;gt; (1+1)^2 - 1^2 =&amp;gt; 4-1 = 3&lt;br&gt;
so, slope = 3/1 = 3 &lt;/p&gt;

&lt;p&gt;Between (1,1) and (1.5, 2.25) - &lt;br&gt;
∆x = 1.5 - 1 = 0.5&lt;br&gt;
∆f = (x+∆x)^2 - x^2 &lt;br&gt;
   = (1+0.5)^2 - 1^2 = 1.25 &lt;br&gt;
slope = 1.25/0.5 = 2.5 &lt;/p&gt;

&lt;p&gt;Even smaller: (1,1) and (1.25, 1.5625) - &lt;br&gt;
∆x = 1.25-1 = 0.25 &lt;br&gt;
∆f = (x+∆x)^2 - x^2 = (1+0.25)^2 - 1^2 = 0.5625&lt;br&gt;
slope = 0.5625 / 0.25 = 2.25&lt;/p&gt;

&lt;p&gt;Again, d/dx f = {(x+∆x)^2 - x^2} / ∆x &lt;br&gt;
= (2x∆x + ∆x^2)/ ∆x = 2x + ∆x, which tends to 2x as ∆x -&amp;gt; 0 &lt;/p&gt;

&lt;p&gt;So, f(x) = x^2 =&amp;gt; f'(x) = 2x &lt;/p&gt;
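&lt;p&gt;You can reproduce the 3, 2.5, 2.25 sequence above with a quick difference-quotient check in Python (a small sketch using the same f(x) = x^2 and starting point x = 1):&lt;/p&gt;

```python
# Difference quotient of f(x) = x^2 at x0 = 1 for shrinking ∆x.
def f(x):
    return x ** 2

x0 = 1.0
slopes = [(f(x0 + dx) - f(x0)) / dx for dx in [1.0, 0.5, 0.25, 1e-6]]
print(slopes)  # 3.0, 2.5, 2.25, then ≈ 2.0 — i.e. f'(1) = 2*1
```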

&lt;p&gt;&lt;strong&gt;4. Derivative of Cubic function&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvon4ptlvrmyu0m8jf3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvon4ptlvrmyu0m8jf3p.png" alt="Image description" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;slope = ∆f/∆x = {(x+∆x)^3 - x^3}/∆x&lt;br&gt;
If you expand the cube, this works out to 3x^2 + 3x∆x + ∆x^2. &lt;br&gt;
As ∆x -&amp;gt; 0, f'(x) = 3x^2 &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Derivative of 1/x&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F579d843b2d1z9i1hwt9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F579d843b2d1z9i1hwt9w.png" alt="Image description" width="743" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, I am going to keep it short - &lt;br&gt;
slope = ∆f/∆x = {(x+∆x)^(-1) - x^(-1)}/∆x &lt;/p&gt;

&lt;p&gt;f(x) = x^(-1)&lt;br&gt;
=&amp;gt; f'(x) = -x^(-2) &lt;/p&gt;

&lt;p&gt;I hope you noticed that I basically followed the power rule: d/dx x^n = &lt;strong&gt;nx^(n-1)&lt;/strong&gt;&lt;/p&gt;
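&lt;p&gt;A small numeric spot-check of the power rule (a sketch; the central-difference helper &lt;code&gt;numeric_derivative&lt;/code&gt; is just for illustration):&lt;/p&gt;

```python
# Spot-check the power rule d/dx x^n = n*x^(n-1) with a central difference.
def numeric_derivative(g, x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2 * h)

x0 = 1.7
for n in [1, 2, 3, -1]:
    approx = numeric_derivative(lambda x: x ** n, x0)
    exact = n * x0 ** (n - 1)
    print(n, approx, exact)  # the two columns agree to several decimals
```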

</description>
      <category>beginners</category>
      <category>machinelearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>Calculus for ML: Derivatives. Part - 2</title>
      <dc:creator>Fatima Jannet</dc:creator>
      <pubDate>Thu, 16 May 2024 12:52:09 +0000</pubDate>
      <link>https://dev.to/fatimajannet/calculus-for-ml-derivatives-part-2-2igb</link>
      <guid>https://dev.to/fatimajannet/calculus-for-ml-derivatives-part-2-2igb</guid>
      <description>&lt;p&gt;Today I'll be talking about inverse functions, derivatives of the trigonometric functions, exponentials, and logarithms. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inverse function&lt;/strong&gt;&lt;br&gt;
The inverse function's job is basically to undo another function. The general definition: if you get y out of f(x), then putting y into the inverse of f gives you back x. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjfushc9yfwd0fkuxua7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjfushc9yfwd0fkuxua7.png" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;br&gt;
If you put x through the function f and get x^2, then putting x^2 through the function g gives you back x. That's why the inverse function is also known as the anti-function.&lt;/p&gt;

&lt;p&gt;Now, let's have a look at their graphs. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20jfu8z3ypqhwnl1drk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20jfu8z3ypqhwnl1drk1.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we saw, they are reflections of each other across the line y = x (check the axes if you have any confusion). So the slope of one is the reciprocal of the other's: &lt;br&gt;
Slope: ∆f/∆x = ∆y/∆g (since ∆f = ∆y and ∆g = ∆x), and g'(y) = 1/f'(x) &lt;/p&gt;
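&lt;p&gt;Here is a quick numeric check of g'(y) = 1/f'(x), using f(x) = x^2 (for positive x) and its inverse g(y) = √y (a sketch; the helper &lt;code&gt;deriv&lt;/code&gt; is just for illustration):&lt;/p&gt;

```python
import math

# Verify g'(y) = 1/f'(x) for f(x) = x^2 (positive x) and its inverse g(y) = sqrt(y).
def deriv(g, t, h=1e-6):
    return (g(t + h) - g(t - h)) / (2 * h)  # central difference

x0 = 3.0
y0 = x0 ** 2                      # y = f(x) = 9
fp = deriv(lambda x: x ** 2, x0)  # f'(3) = 6
gp = deriv(math.sqrt, y0)         # g'(9) = 1/(2*sqrt(9)) = 1/6
print(fp, gp, 1 / fp)             # gp and 1/fp match
```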

&lt;p&gt;&lt;strong&gt;Derivative of trigonometric functions&lt;/strong&gt;&lt;br&gt;
The derivatives of all the trigonometric functions can be built from those of sin(x) and cos(x); for example, the tangent is tan(x) = sin(x)/cos(x).&lt;/p&gt;

&lt;p&gt;f(x) = sin(x) &lt;br&gt;
=&amp;gt; f'(x) = cos(x)&lt;/p&gt;
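&lt;p&gt;A short sketch confirming this claim numerically with a difference quotient:&lt;/p&gt;

```python
import math

# The difference quotient of sin(x) approaches cos(x) as ∆x shrinks.
x0 = 0.8
dx = 1e-6
approx = (math.sin(x0 + dx) - math.sin(x0)) / dx
print(approx, math.cos(x0))  # the two values agree to about 6 decimal places
```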

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9guztxhhc4n6tzym5s5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9guztxhhc4n6tzym5s5l.png" alt="Image description" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcemnts2xwn7bqms15dxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcemnts2xwn7bqms15dxg.png" alt="Image description" width="642" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exponentials&lt;/strong&gt;&lt;br&gt;
Do you know Euler's number? To understand exponential function we need to first understand euler's number. Euler's Number 'e' is a numerical constant used in mathematical calculations. The value of e is 2.718281828459045…so on. As it's an irrational number, e can't be expressed through ration between two integers. So, we will define it is by using the expression (1 + 1/n)^n. Let's look at this expression for several values of n.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz3fh2bxw6zscbv0110r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz3fh2bxw6zscbv0110r.png" alt="Image description" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look here: the more we increase n, the more the expression approaches a fixed value, and if we keep going it gets closer and closer to 2.7182818284...&lt;/p&gt;
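&lt;p&gt;You can watch this convergence yourself (a tiny sketch):&lt;/p&gt;

```python
# (1 + 1/n)^n creeps toward Euler's number e ≈ 2.718281828... as n grows.
for n in [1, 10, 100, 10_000, 1_000_000]:
    print(n, (1 + 1 / n) ** n)
```

&lt;p&gt;The printed values climb from 2.0 through 2.59..., 2.70..., 2.7181..., approaching e.&lt;/p&gt;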

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5a5xwfwljxuf4cmk6qeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5a5xwfwljxuf4cmk6qeg.png" alt="Image description" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The function e^x and its derivative are one and the same. Why are they the same? I'll explain that in another blog!&lt;/p&gt;

&lt;p&gt;The exponential function shows up in many real-life situations, such as any kind of growth: investments, bank deposits, loans, profit, et cetera. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logarithms&lt;/strong&gt;&lt;br&gt;
The logarithmic function is the inverse function of exponentiation. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgc5gp03vjx31bb32g54t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgc5gp03vjx31bb32g54t.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The right graph was calculated using the result for derivatives of inverses.&lt;br&gt;
Therefore, the derivative of the (natural) logarithm ln(y) is 1/y.&lt;/p&gt;
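&lt;p&gt;And a final numeric check that the derivative of ln(y) really is 1/y (a sketch using the natural log):&lt;/p&gt;

```python
import math

# Numerically check that d/dy ln(y) = 1/y.
y0 = 2.5
h = 1e-6
approx = (math.log(y0 + h) - math.log(y0 - h)) / (2 * h)  # central difference
print(approx, 1 / y0)  # both ≈ 0.4
```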

</description>
    </item>
  </channel>
</rss>
