<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Obieda Ananbeh</title>
    <description>The latest articles on DEV Community by Obieda Ananbeh (@oananbeh).</description>
    <link>https://dev.to/oananbeh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F543471%2Fcbd8599b-dc3b-4f3d-9776-0241d49c457c.jpg</url>
      <title>DEV Community: Obieda Ananbeh</title>
      <link>https://dev.to/oananbeh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oananbeh"/>
    <language>en</language>
    <item>
      <title>BERT (Bidirectional Encoder Representation from Transformer)</title>
      <dc:creator>Obieda Ananbeh</dc:creator>
      <pubDate>Fri, 31 Dec 2021 17:49:15 +0000</pubDate>
      <link>https://dev.to/oananbeh/bertbidirectional-encoder-representation-from-transformer-57b3</link>
      <guid>https://dev.to/oananbeh/bertbidirectional-encoder-representation-from-transformer-57b3</guid>
      <description>&lt;p&gt;*&lt;em&gt;What is BERT? *&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;BERT, which stands for Bidirectional Encoder Representations from Transformers, was initially released by Google in late 2018.&lt;/p&gt;

&lt;p&gt;It marked a significant breakthrough in natural language processing, achieving better results on a variety of NLP tasks, including question answering, text generation, sentence classification, and many more.&lt;/p&gt;

&lt;p&gt;Part of BERT's success comes from it being a context-based embedding model, in contrast to prominent context-free embedding models such as word2vec.&lt;/p&gt;
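
&lt;p&gt;A minimal sketch of this difference, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither is prescribed here): it extracts BERT's vector for the word "bank" in two different sentences. A context-free model like word2vec would give both occurrences the same vector; BERT's two vectors differ with the context.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch
from transformers import AutoTokenizer, AutoModel

# Checkpoint name is an illustrative assumption.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def contextual_vector(sentence, word):
    # Return BERT's embedding for `word` as it appears in `sentence`.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

a = contextual_vector("He deposited cash at the bank.", "bank")
b = contextual_vector("She sat on the bank of the river.", "bank")
print(torch.cosine_similarity(a, b, dim=0).item())  # well below 1.0
&lt;/code&gt;&lt;/pre&gt;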

&lt;p&gt;&lt;strong&gt;Why BERT?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;BERT quickly became a hot topic in NLP. This is because: &lt;br&gt;
It displays a high degree of linguistic sophistication, reaching human-level performance on some tasks. &lt;br&gt;
It can be applied to a wide range of tasks. &lt;br&gt;
It combines pre-training with fine-tuning: Google has already pre-trained BERT on a huge text corpus, and you can exploit its language understanding by fine-tuning the pre-trained model on your own task (classification, entity recognition, question answering, etc.). With minimal engineering effort, you can obtain very accurate results on your task; a sketch of this workflow follows below.&lt;/p&gt;
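
&lt;p&gt;A hedged sketch of that fine-tuning workflow, again assuming the Hugging Face transformers library; the checkpoint name and the two-label sentiment setup are illustrative, not part of the description above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The pre-trained encoder is reused as-is; a small, freshly initialized
# classification head is added on top for the downstream task.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

batch = tokenizer(["great movie", "terrible plot"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # gradients update both the head and the encoder
&lt;/code&gt;&lt;/pre&gt;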

&lt;p&gt;&lt;strong&gt;How Does BERT Work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;BERT makes use of the Transformer, an attention mechanism that learns contextual relationships between words (or sub-words) in a text. In its basic form, the Transformer comprises two separate mechanisms: an encoder that reads the text input and a decoder that produces a prediction for the task. Since BERT's goal is to build a language model, only the encoder mechanism is needed. &lt;br&gt;
Unlike directional models, which read the text input sequentially (left-to-right or right-to-left), the Transformer encoder reads the entire sequence of words at once. It is therefore described as bidirectional, though it would be more accurate to call it non-directional. This property allows the model to learn the context of a word from its surroundings (both to the left and to the right of the word).&lt;/p&gt;
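
&lt;p&gt;This non-directional reading is easiest to see in masked-token prediction, one of the tasks BERT was pre-trained on. A small demonstration, assuming the transformers fill-mask pipeline (an API choice not made above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Both sides of [MASK] are visible at once, so the right-hand context
# ("of France") shapes the prediction as much as the left-hand one.
for pred in fill("Paris is the [MASK] of France."):
    print(pred["token_str"], round(pred["score"], 3))
&lt;/code&gt;&lt;/pre&gt;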


</description>
      <category>nlp</category>
      <category>bert</category>
      <category>python</category>
    </item>
  </channel>
</rss>
