<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Huynh-Chinh</title>
    <description>The latest articles on DEV Community by Huynh-Chinh (@chinhh).</description>
    <link>https://dev.to/chinhh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F821467%2F93540d2f-88ea-455b-a3ca-479b8b671b93.jpeg</url>
      <title>DEV Community: Huynh-Chinh</title>
      <link>https://dev.to/chinhh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chinhh"/>
    <language>en</language>
    <item>
      <title>Everything you need to start working with Qdrant</title>
      <dc:creator>Huynh-Chinh</dc:creator>
      <pubDate>Thu, 30 Mar 2023 04:03:38 +0000</pubDate>
      <link>https://dev.to/chinhh/everything-you-need-to-start-working-with-qdrant-3fhp</link>
      <guid>https://dev.to/chinhh/everything-you-need-to-start-working-with-qdrant-3fhp</guid>
<description>&lt;p&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Qdrant&lt;/strong&gt; (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points (vectors) with an additional payload.&lt;br&gt;
&lt;strong&gt;Qdrant&lt;/strong&gt; is tailored to support extended filtering, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.&lt;/p&gt;
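&lt;p&gt;To make "vector similarity search with payload filtering" concrete, here is a minimal, self-contained Python sketch of what such an engine does conceptually. This is a toy brute-force illustration, not Qdrant's actual implementation (which uses approximate indexes such as HNSW):&lt;/p&gt;

```python
import math

# Toy in-memory "collection": each point has an id, a vector and a payload.
points = [
    {"id": 1, "vector": [0.9, 0.1, 0.1], "payload": {"city": "Berlin"}},
    {"id": 2, "vector": [0.1, 0.9, 0.1], "payload": {"city": "London"}},
    {"id": 3, "vector": [0.8, 0.2, 0.0], "payload": {"city": "Berlin"}},
]

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query, top_k=2, payload_filter=None):
    """Filter points by payload, then rank the rest by similarity to the query."""
    candidates = [
        p for p in points
        if payload_filter is None
        or all(p["payload"].get(k) == v for k, v in payload_filter.items())
    ]
    candidates.sort(key=lambda p: cosine(query, p["vector"]), reverse=True)
    return [p["id"] for p in candidates[:top_k]]

print(search([1.0, 0.0, 0.0], payload_filter={"city": "Berlin"}))  # [1, 3]
```

&lt;p&gt;In a real engine the filtering and ranking are pushed into the index rather than done as a linear scan, but the inputs and outputs look just like this.&lt;/p&gt;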

&lt;p&gt;&lt;strong&gt;2. Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The easiest way to use Qdrant is to run a pre-built image with Docker. Download the image from &lt;a href="https://hub.docker.com/r/qdrant/qdrant"&gt;DockerHub&lt;/a&gt;: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker pull qdrant/qdrant&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And run the service inside Docker: &lt;br&gt;
&lt;code&gt;docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this case, Qdrant will use the default configuration and store all data in the &lt;code&gt;./qdrant_storage&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Qdrant is now accessible at &lt;code&gt;localhost:6333&lt;/code&gt;.&lt;/p&gt;
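&lt;p&gt;Once the service is up, you talk to it over its REST API. As a quick sketch, creating a collection boils down to a PUT with a small JSON body; the endpoint and body shape below are illustrative assumptions, so verify them against the Qdrant docs for your version:&lt;/p&gt;

```python
import json

# Illustrative request for creating a collection over Qdrant's REST API.
# Endpoint and body shape are assumptions; check the docs for your version.
base_url = "http://localhost:6333"
collection = "test_collection"

create_body = {
    # store 4-dimensional vectors compared with cosine distance
    "vectors": {"size": 4, "distance": "Cosine"}
}
url = base_url + "/collections/" + collection

print(url)
print(json.dumps(create_body))
# With the server running you would send it, e.g.:
#   import requests; requests.put(url, json=create_body)
```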

&lt;p&gt;&lt;strong&gt;3. Collections&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Payload&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Points&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Search&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Filtering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Optimizer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Indexing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Distributed deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. Snapshots&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;13. Quantization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;14. Integrations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;15. Telemetry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;16. Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://qdrant.tech/documentation/"&gt;Ref.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Thank you very much for taking the time to read this. I would really appreciate any comments in the comment section.&lt;br&gt;
Enjoy🎉&lt;/p&gt;

</description>
      <category>qdrant</category>
      <category>blogsvietnamese</category>
      <category>quadrant</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Text Processing Techniques in NLP</title>
      <dc:creator>Huynh-Chinh</dc:creator>
      <pubDate>Mon, 28 Feb 2022 10:23:57 +0000</pubDate>
      <link>https://dev.to/chinhh/text-processing-techniques-in-nlp-1034</link>
      <guid>https://dev.to/chinhh/text-processing-techniques-in-nlp-1034</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;br&gt;
Text processing is one of the most common tasks used in machine learning applications such as language translation, sentiment analysis, spam filtering and many others.&lt;/p&gt;

&lt;p&gt;Text processing refers only to the analysis, manipulation, and generation of text, while natural language processing refers to the ability of a computer to understand human language in a valuable way. Basically, natural language processing is the next step after text processing.&lt;/p&gt;

&lt;p&gt;For example, a simple sentiment analysis might require a machine learning model to look for instances of positive or negative sentiment words, which could be provided to the model beforehand. This would be text processing, since the model isn't understanding the words; it's just looking for words that it was programmed to look for.&lt;/p&gt;

&lt;p&gt;A natural language processing model, by contrast, might translate full sentences into another language. Since syntax varies from one language to another, the computer has to understand the meaning of the sentences in order to translate them accurately. But while NLP is more advanced than text processing, it always has text processing involved as a step in the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Sentence cleaning&lt;/strong&gt;&lt;br&gt;
Sentence cleaning, or noise removal, is one of the first things you should look into when it comes to text mining and NLP. There are various ways to remove noise, including punctuation removal, special character removal, number removal, HTML formatting removal, domain-specific keyword removal (e.g. 'RT' for retweet), source code removal, header removal and more. It all depends on which domain you are working in and what constitutes noise for your task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; """Basic cleaning of texts."""

 # remove html markup
 text=re.sub("(&amp;lt;.*?&amp;gt;)","",text)

 #remove non-ascii and digits
 text=re.sub("(\\W|\\d)"," ",text)

 #remove whitespace
 text=text.strip()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above are some examples of ways to clean text. There will be different ways depending on the complexity of the data and the type of language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Stop Words&lt;/strong&gt;&lt;br&gt;
Stop words are a set of commonly used words in a language. Examples of stop words in English are &lt;em&gt;"a"&lt;/em&gt;, &lt;em&gt;"the"&lt;/em&gt;, &lt;em&gt;"is"&lt;/em&gt; and &lt;em&gt;"are"&lt;/em&gt;. The intuition behind using stop words is that, by removing low-information words from text, we can focus on the important words instead.&lt;/p&gt;

&lt;p&gt;For example, in the context of a search system, if your query is &lt;em&gt;"what is text preprocessing?"&lt;/em&gt;, you want the search system to focus on surfacing documents that talk about text preprocessing over documents that talk about &lt;em&gt;what is&lt;/em&gt;. This can be done by preventing all words from your stop word list from being analyzed. Stop words are commonly applied in search systems, text classification applications, topic modeling, topic extraction and others.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stopwords=['this','that','and','a','we','it','to','is','of','up','need']
text="this is a text full of content and we need to clean it up"

words=text.split(" ")
shortlisted_words=[]

#remove stop words
for w in words:
    if w not in stopwords:
        shortlisted_words.append(w)
    else:
        shortlisted_words.append("W")

print("original sentence: ",text)    
print("sentence with stop words removed: ",' '.join(shortlisted_words))    

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;original sentence: this is a text full of content and we need to clean it up
sentence with stop words removed:  W W W text full W content W W W W clean W W
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Regular Expression&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://dev.to/chinhh/python-regular-expressions-4m0k"&gt;Read more...&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Tokenization&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://dev.to/chinhh/what-is-tokenization-in-nlp-1cd2"&gt;Read more...&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. N-grams (Unigram, Bigram, Trigram)&lt;/strong&gt;&lt;br&gt;
Text n-grams are commonly utilized in natural language processing and text mining. An n-gram is essentially a contiguous sequence of items (characters or words) that appear together in the same window.&lt;/p&gt;

&lt;p&gt;N-grams are a technique to tokenize a string into substrings, by dividing the string into overlapping substrings of length N.&lt;/p&gt;

&lt;p&gt;N is usually between 1 and 3, with the corresponding names unigram (N=1), bigram (N=2) and trigram (N=3).&lt;br&gt;
As a simple example, here is the string "good morning" parsed into character bigrams:&lt;br&gt;
&lt;code&gt;"good morning" =&amp;gt; {"go", "oo", "od", "d ", " m", "mo", "or", "rn", "ni", "in", "ng"}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;From the above example, you can easily imagine how n-grams work. Implementing them takes just a few lines of code, as in this Python example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def split_ngram(statement, ngram):
    result = []
    if(len(statement)&amp;gt;=ngram):
        for i in xrange(len(statement) - ngram + 1):
            result.append(statement[i:i+ngram])
    return result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consider the sentence "I like dancing in the rain" and see the word-level uni-gram, bi-gram and tri-gram cases below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Uni-Gram: 'I', 'like', 'dancing', 'in', 'the', 'rain'
Bi-Gram: 'I like', 'like dancing', 'dancing in', 'in the', 'the rain'
Tri-Gram: 'I like dancing', 'like dancing in', 'dancing in the', 'in the rain'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
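&lt;p&gt;The same sliding-window idea works at the word level; here is a small sketch that reproduces the lists above without any external library:&lt;/p&gt;

```python
def word_ngrams(sentence, n):
    """Return all word-level n-grams of a sentence as strings."""
    words = sentence.split()
    return [" ".join(words[i:i+n]) for i in range(len(words) - n + 1)]

sentence = "I like dancing in the rain"
print(word_ngrams(sentence, 1))  # uni-grams
print(word_ngrams(sentence, 2))  # bi-grams
print(word_ngrams(sentence, 3))  # tri-grams
```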



&lt;p&gt;Implementing n-grams using Python's NLTK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from nltk import ngrams

sentence = 'I like dancing in the rain'

ngram = ngrams(sentence.split(' '), n=2)

for x in ngram:
    print(x)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;('I', 'like')
('like', 'dancing')
('dancing', 'in')
('in', 'the')
('the', 'rain')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;7. Text Normalization&lt;/strong&gt;&lt;br&gt;
A highly overlooked preprocessing step is text normalization. Text normalization is the process of transforming a text into a canonical (standard) form. For example, the words "gooood" and "gud" can be transformed to "good", their canonical form. Another example is the mapping of near-identical words such as &lt;em&gt;"stopwords"&lt;/em&gt;, &lt;em&gt;"stop-words"&lt;/em&gt; and &lt;em&gt;"stop words"&lt;/em&gt; to just &lt;em&gt;"stopwords"&lt;/em&gt;.&lt;/p&gt;
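&lt;p&gt;A crude but common way to normalize such variants is a lookup table of known spellings. The mapping below is a hypothetical illustration; production systems curate or learn these mappings:&lt;/p&gt;

```python
# hypothetical normalization table for illustration only
canonical = {
    "gooood": "good",
    "gud": "good",
    "stop-words": "stopwords",
    "stop words": "stopwords",
}

def normalize(token):
    """Lowercase a token and map it to its canonical form if one is known."""
    token = token.lower()
    return canonical.get(token, token)

print(normalize("Gooood"))      # good
print(normalize("stop-words"))  # stopwords
print(normalize("rain"))        # rain (no entry, left as-is)
```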

&lt;p&gt;Text normalization is important for noisy texts such as social media comments, text messages and comments to blog posts where abbreviations, misspellings and use of out-of-vocabulary words (oov) are prevalent.&lt;/p&gt;

&lt;p&gt;Here's an example of words before and after normalization:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FBkrGVa6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ul9iwfm8agwrziufzzx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FBkrGVa6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ul9iwfm8agwrziufzzx2.png" alt="Image description" width="589" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Stemming&lt;/strong&gt;&lt;br&gt;
Stemming is the process of reducing inflection in words (e.g. troubled, troubles) to their root form (e.g. trouble). The “root” in this case may not be a real root word, but just a canonical form of the original word.&lt;/p&gt;

&lt;p&gt;Stemming uses a crude heuristic process that chops off the ends of words in the hope of correctly transforming words into their root form. So the words “trouble”, “troubled” and “troubles” might actually be converted to “troubl” instead of “trouble”, because the ends were just chopped off (ughh, how crude!).&lt;/p&gt;

&lt;p&gt;There are different algorithms for stemming. The most common algorithm, which is also known to be empirically effective for English, is Porter's algorithm. Here is an example of stemming in action with the Porter stemmer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import nltk
import pandas as pd
from nltk.stem import PorterStemmer

# init stemmer
porter_stemmer=PorterStemmer()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# stem connect variations
words=["connect","connected","connection","connections","connects"]
stemmed_words=[porter_stemmer.stem(word=word) for word in words]
stemdf= pd.DataFrame({'original_word': words,'stemmed_word': stemmed_words})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# stem trouble variations
words=["trouble","troubled","troubles","troublesome"]
stemmed_words=[porter_stemmer.stem(word=word) for word in words]
stemdf= pd.DataFrame({'original_word': words,'stemmed_word': stemmed_words})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5cVrZi5U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4kuh6wkgbwnbeen1oytb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5cVrZi5U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4kuh6wkgbwnbeen1oytb.png" alt="Image description" width="355" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Lemmatization&lt;/strong&gt;&lt;br&gt;
Lemmatization on the surface is very similar to stemming, where the goal is to remove inflections and map a word to its root form. The only difference is that lemmatization tries to do it the proper way. It doesn't just chop things off; it actually transforms words to their actual root. For example, the word "better" would map to "good". It may use a dictionary such as &lt;a href="https://www.nltk.org/_modules/nltk/stem/wordnet.html"&gt;WordNet for mappings&lt;/a&gt; or some special &lt;a href="https://www.semanticscholar.org/paper/A-Rule-based-Approach-to-Word-Lemmatization-Plisson-Lavrac/5319539616e81b02637b1bf90fb667ca2066cf14"&gt;rule-based approaches&lt;/a&gt;. Here is an example of lemmatization in action using a WordNet-based approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')
# init lemmatizer
lemmatizer = WordNetLemmatizer()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#lemmatize trouble variations
words=["trouble","troubling","troubled","troubles",]
lemmatized_words=[lemmatizer.lemmatize(word=word,pos='v') for word in words]
lemmatizeddf= pd.DataFrame({'original_word': words,'lemmatized_word': lemmatized_words})
lemmatizeddf=lemmatizeddf[['original_word','lemmatized_word']]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#lemmatize goose variations
words=["goose","geese"]
lemmatized_words=[lemmatizer.lemmatize(word=word,pos='n') for word in words]
lemmatizeddf= pd.DataFrame({'original_word': words,'lemmatized_word': lemmatized_words})
lemmatizeddf=lemmatizeddf[['original_word','lemmatized_word']]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z9x6kMqK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pyfx2z6p2i92x6ycoy7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z9x6kMqK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pyfx2z6p2i92x6ycoy7c.png" alt="Image description" width="376" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Ref:&lt;/em&gt;&lt;br&gt;
&lt;a href="https://algorithmia.com/blog/text-processing-what-why-and-how"&gt;[1]&lt;/a&gt; Text processing: what, why, and how &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kdnuggets.com/2019/04/text-preprocessing-nlp-machine-learning.html"&gt;[2]&lt;/a&gt; All you need to know about text preprocessing for NLP and Machine Learning&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Thank you very much for taking the time to read this. I would really appreciate any comments in the comment section.&lt;br&gt;
Enjoy🎉&lt;/p&gt;

</description>
      <category>nlp</category>
      <category>machinelearning</category>
      <category>artificaialintelligence</category>
      <category>python</category>
    </item>
    <item>
      <title>Server Monitoring with Prometheus and Grafana setup in Docker and Portainer</title>
      <dc:creator>Huynh-Chinh</dc:creator>
      <pubDate>Fri, 25 Feb 2022 08:15:15 +0000</pubDate>
      <link>https://dev.to/chinhh/server-monitoring-with-prometheus-and-grafana-266o</link>
      <guid>https://dev.to/chinhh/server-monitoring-with-prometheus-and-grafana-266o</guid>
      <description>&lt;p&gt;&lt;strong&gt;I-Introduction&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Portainer&lt;/strong&gt; is a free Docker container management tool that is compact, has an intuitive management interface, and is simple to deploy and use, allowing users to easily manage a Docker host or Swarm cluster. The tool itself runs as a container deployed on the Docker Engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt; is a leading open-source platform for time-series visualization and monitoring. It allows you to query, visualize, set alerts on, and understand metrics no matter where they are stored. You can create amazing dashboards in Grafana to visualize and monitor the metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus&lt;/strong&gt; is an open-source time-series monitoring system for machine-centric and highly dynamic service-oriented architectures. It can literally monitor everything. It integrates with Grafana very smoothly as Grafana also offers Prometheus as one of its data sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus Architecture&lt;/strong&gt;: At its core, Prometheus has a main component called Prometheus Server, responsible for the actual monitoring work.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ixgz35no17vaq7ui4bh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ixgz35no17vaq7ui4bh.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;br&gt;
  The Prometheus server consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a Time Series Database that stores all the metric data, like current CPU usage, memory usage, etc.&lt;/li&gt;
&lt;li&gt;a Data Retrieval Worker, responsible for pulling data from applications, services, servers, etc. and pushing it into the database&lt;/li&gt;
&lt;li&gt;an HTTP Server API that accepts queries for the stored data; it is used to display the data in a dashboard or a Web UI&lt;/li&gt;
&lt;/ul&gt;
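&lt;p&gt;The HTTP Server API above is exactly what dashboards such as Grafana query. As a sketch, a PromQL query against a running server is just a GET on the &lt;code&gt;/api/v1/query&lt;/code&gt; endpoint; only the URL is constructed here, since actually sending it requires a live Prometheus:&lt;/p&gt;

```python
from urllib.parse import urlencode

# Build a query URL for Prometheus's HTTP API (endpoint /api/v1/query).
base = "http://localhost:9090/api/v1/query"
params = {"query": "up"}  # PromQL expression asking which targets are up

url = base + "?" + urlencode(params)
print(url)
# With a running server you would fetch it, e.g.:
#   import urllib.request; print(urllib.request.urlopen(url).read())
```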

&lt;p&gt;&lt;strong&gt;II-Install Portainer with Docker Swarm on Linux&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;1. Introduction&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Portainer can be deployed on top of any K8s, Docker or Swarm environment. It works seamlessly in the cloud, on-prem and at the edge to give you a consolidated view of all your containers.&lt;br&gt;
Portainer consists of two elements, the Portainer Server and the Portainer Agent. Both elements run as lightweight Docker containers on a Docker engine. This document will help you deploy the Portainer Server and Agent containers on your Linux environment. To add a new Linux Swarm environment to an existing Portainer Server installation, please refer to the &lt;a href="https://docs.portainer.io/v/ce-2.11/start/install/agent/swarm/linux" rel="noopener noreferrer"&gt;Portainer Agent installation instructions&lt;/a&gt;.&lt;br&gt;
To get started, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The latest version of Docker installed and working&lt;/li&gt;
&lt;li&gt;Swarm mode enabled and working, including the overlay network for the swarm service communication&lt;/li&gt;
&lt;li&gt;sudo access on the manager node of your swarm cluster&lt;/li&gt;
&lt;li&gt;By default, Portainer will expose the UI over port 9443 and expose a TCP tunnel server over port 8000.&lt;/li&gt;
&lt;li&gt;The manager and worker nodes must be able to communicate with each other over port 9001.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;2. Deployment&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Portainer can be directly deployed as a service in your Docker cluster. Note that this method will automatically deploy a single instance of the Portainer Server, and deploy the Portainer Agent as a global service on every node in your cluster.&lt;br&gt;
First, retrieve the stack YML manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L https://downloads.portainer.io/portainer-agent-stack.yml \
    -o portainer-agent-stack.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then use the downloaded YML manifest to deploy your stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stack deploy -c portainer-agent-stack.yml portainer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Portainer Server and the Agents have now been installed. You can check to see whether the Portainer Server and Agent containers have started by running &lt;code&gt;docker ps&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@manager01:~# docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED              STATUS              PORTS                NAMES
59ee466f6b15   portainer/agent:2.11.1          "./agent"                About a minute ago   Up About a minute                        portainer_agent.xbb8k6r7j1tk9gozjku7e43wr.5sa6b3e8cl6hyu0snlt387sgv
2db7dd4bfba0   portainer/portainer-ce:2.11.1   "/portainer -H tcp:/…"   About a minute ago   Up About a minute   8000/tcp, 9443/tcp   portainer_portainer.1.gpuvu3pqmt1m19zxfo44v7izx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;3. Logging In&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Now that the installation is complete, you can log into your Portainer Server instance by opening a web browser and going to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://localhost:9443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you access the Portainer dashboard for the first time after a successful setup, you will see the information for the successfully connected endpoint: the number of stacks, containers, volumes and images, along with Docker and host information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;III-Server Monitoring with Prometheus and Grafana setup in Docker and Portainer&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://youtu.be/9TJx7QTrTyo" rel="noopener noreferrer"&gt;Ref&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;1. Deploy Prometheus and Grafana&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Create a new stack and define or paste the content of your docker-compose file into the web editor box.&lt;br&gt;
We need to deploy Prometheus and Grafana, so the full content of the docker-compose file will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'

volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - /etc/prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;2. Configure Prometheus&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Configuring Prometheus to monitor itself&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$sudo mkdir /etc/prometheus

sudo vim /etc/prometheus/prometheus.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  # external_labels:
  #  monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=&amp;lt;job_name&amp;gt;` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;3. Third-Party Exporters&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Node_exporter&lt;/strong&gt; is one of the exporters that help Prometheus collect metrics from the machine being monitored; it is installed on the target machine.&lt;br&gt;
&lt;strong&gt;cAdvisor&lt;/strong&gt; analyzes usage, performance and many other metrics from containerized applications, providing users with an overview of all running containers.&lt;/p&gt;

&lt;p&gt;In the monitoring stack, we need to add the configuration for cadvisor and node_exporter to the docker-compose file and then update the stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'

  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    # ports:
    #   - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    devices:
      - /dev/kmsg
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the Prometheus config file we need to add node-exporter and cadvisor to the scrape configuration as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; # Example job for node_exporter
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']

  # Example job for cadvisor
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;4. Visualize data with Grafana&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Open the Grafana instance we deployed above at &lt;code&gt;localhost:3000&lt;/code&gt;.&lt;br&gt;
Here we log in with admin as both the username and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf1gnxbk0ix7wj47fn30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf1gnxbk0ix7wj47fn30.png" alt="Image description" width="472" height="361"&gt;&lt;/a&gt;&lt;br&gt;
On the home page, we need to add a data source. Here we will add Prometheus as the data source, because what we want is a Grafana dashboard visualizing the data that Prometheus pours in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l9sreoctji37pndv1s0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l9sreoctji37pndv1s0.png" alt="Image description" width="614" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuxwe5rs035lwr3w6u1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuxwe5rs035lwr3w6u1m.png" alt="Image description" width="800" height="236"&gt;&lt;/a&gt;&lt;br&gt;
In the data source settings, set the URL in the HTTP section to &lt;code&gt;http://prometheus:9090&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjz3z94v1lolh6iidf79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjz3z94v1lolh6iidf79.png" alt="Image description" width="599" height="378"&gt;&lt;/a&gt;&lt;br&gt;
After connecting your data source, at Home we create a new dashboard =&amp;gt; Add an empty panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsoo7nrfv7xusye7j6sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsoo7nrfv7xusye7j6sw.png" alt="Image description" width="674" height="332"&gt;&lt;/a&gt;&lt;br&gt;
The Metrics browser lets us build the queries whose results we want to visualize.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6txj8d9sx3h8btzw6v9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6txj8d9sx3h8btzw6v9e.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;5. Import Grafana Dashboards&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
On the Grafana website (&lt;code&gt;https://grafana.com&lt;/code&gt;), the Dashboards tab lists many ready-made dashboards. In this article we will use the Node Exporter and cAdvisor dashboards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xcsa3kvf41ii06tdgnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xcsa3kvf41ii06tdgnj.png" alt="Image description" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Import the Node Exporter Full dashboard by copying its ID and pasting it into the Import dashboard screen of the Grafana server we built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwefoxcm9h202igwpvni2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwefoxcm9h202igwpvni2.png" alt="Image description" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgttisbwvxlxlx2x667o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgttisbwvxlxlx2x667o.png" alt="Image description" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqswxdmwv19qa5nmydlsr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqswxdmwv19qa5nmydlsr.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;br&gt;
Import a cAdvisor dashboard the same way: copy its ID and paste it into the Import dashboard screen of the Grafana server we built.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegxdg18lpwdgpvs4lj7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegxdg18lpwdgpvs4lj7h.png" alt="Image description" width="800" height="609"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcpgdzs0asjfypypshwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcpgdzs0asjfypypshwm.png" alt="Image description" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7wmm25vfwooc1nmgkll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7wmm25vfwooc1nmgkll.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IV-Alertmanager with Slack&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.&lt;br&gt;
&lt;a href="https://dev.to/chinhh/prometheus-alert-manager-with-slack-pagerduty-and-gmail-2ng"&gt;&lt;em&gt;Readmore...&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Thank you very much for taking the time to read this. I would really appreciate any comments in the comments section.&lt;br&gt;
Enjoy🎉&lt;/p&gt;

</description>
      <category>docker</category>
      <category>servermonitoring</category>
      <category>prometheus</category>
      <category>grafana</category>
    </item>
    <item>
      <title>GETTING STARTED WITH NATURAL LANGUAGE PROCESSING</title>
      <dc:creator>Huynh-Chinh</dc:creator>
      <pubDate>Fri, 25 Feb 2022 02:22:37 +0000</pubDate>
      <link>https://dev.to/chinhh/getting-started-with-natural-language-processing-pc</link>
      <guid>https://dev.to/chinhh/getting-started-with-natural-language-processing-pc</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Natural language processing (NLP) is concerned with enabling computers to interpret, analyze, and approximate the generation of human speech. Typically, this would refer to tasks such as generating responses to questions, translating languages, identifying languages, summarizing documents, understanding the sentiment of text, spell checking, speech recognition, and many other tasks. The field is at the intersection of linguistics, AI, and computer science.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roadmap of NLP for Machine Learning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Pre-processing&lt;/strong&gt;   &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sentence cleaning&lt;/li&gt;
&lt;li&gt;Stop Words&lt;/li&gt;
&lt;li&gt;Regular Expression&lt;/li&gt;
&lt;li&gt;Tokenization&lt;/li&gt;
&lt;li&gt;N-grams (Unigram, Bigram, Trigram)&lt;/li&gt;
&lt;li&gt;Text Normalization&lt;/li&gt;
&lt;li&gt;Stemming&lt;/li&gt;
&lt;li&gt;Lemmatization&lt;/li&gt;
&lt;/ul&gt;
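&lt;p&gt;Several of the preprocessing steps above can be sketched with Python's standard library alone (the stop-word list and the suffix-stripping "stemmer" below are toy placeholders; in practice a library such as NLTK or spaCy does this properly):&lt;/p&gt;

```python
import re

STOP_WORDS = {"the", "is", "a", "an", "are", "of", "and", "to"}  # toy list

def tokenize(text):
    """Text normalization + tokenization: lowercase, keep word characters only."""
    return re.findall(r"[a-z0-9']+", text.lower())

def remove_stop_words(tokens):
    """Drop very common words that carry little meaning on their own."""
    return [t for t in tokens if t not in STOP_WORDS]

def ngrams(tokens, n):
    """Sliding window of n consecutive tokens (n=1 unigram, n=2 bigram, n=3 trigram)."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def crude_stem(token):
    """Very naive stemming: chop common suffixes. A real stemmer (e.g. Porter) is far more careful."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

tokens = remove_stop_words(tokenize("The cats are chasing the mice!"))
print(tokens)                        # tokens with stop words removed
print(ngrams(tokens, 2))             # bigrams
print([crude_stem(t) for t in tokens])
```

Each helper maps to one bullet in the list; a real pipeline chains them in roughly this order before any model sees the text.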

&lt;p&gt;&lt;a href="https://dev.to/chinhh/text-processing-techniques-in-nlp-1034"&gt;&lt;em&gt;read more...&lt;/em&gt; &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Linguistics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part-of-Speech Tags&lt;/li&gt;
&lt;li&gt;Constituency Parsing&lt;/li&gt;
&lt;li&gt;Dependency Parsing&lt;/li&gt;
&lt;li&gt;Syntactic Parsing&lt;/li&gt;
&lt;li&gt;Semantic Analysis&lt;/li&gt;
&lt;li&gt;Lexical Semantics&lt;/li&gt;
&lt;li&gt;Coreference Resolution&lt;/li&gt;
&lt;li&gt;Chunking&lt;/li&gt;
&lt;li&gt;Entity Extraction/ Named Entity Recognition(NER)&lt;/li&gt;
&lt;li&gt;Named Entity Disambiguation/ Entity Linking&lt;/li&gt;
&lt;li&gt;Knowledge Graphs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Word Embeddings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Frequency-based Word Embedding&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One Hot Encoding&lt;/li&gt;
&lt;li&gt;Bag of Words or CountVectorizer()&lt;/li&gt;
&lt;li&gt;TF-IDF or TfidfVectorizer()&lt;/li&gt;
&lt;li&gt;Co-occurrence Matrix, Co-occurrence Vector&lt;/li&gt;
&lt;li&gt;Hashing Vectorizer&lt;/li&gt;
&lt;/ul&gt;
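&lt;p&gt;The core of Bag of Words and TF-IDF fits in a few lines of plain Python (scikit-learn's CountVectorizer and TfidfVectorizer add smoothing and normalization options on top of this basic idea):&lt;/p&gt;

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]

# Bag of Words: each document becomes a term -> count mapping
# (CountVectorizer produces the same counts as a sparse matrix).
vocab = sorted({w for d in docs for w in d})
bow = [Counter(d) for d in docs]

def tf_idf(term, doc, docs):
    """TF-IDF = term frequency in this doc * inverse document frequency over the corpus."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)      # number of docs containing the term
    idf = math.log(len(docs) / df)              # rarer terms get a larger weight
    return tf * idf

# "the" appears in two of the three docs, so it is down-weighted relative to "cat".
print(tf_idf("cat", docs[0], docs))
print(tf_idf("the", docs[0], docs))
```

This shows why TF-IDF is preferred over raw counts: frequent-everywhere words like "the" score low even though their raw count is high.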

&lt;p&gt;&lt;strong&gt;b. Pretrained Word Embedding&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Word2Vec (by Google): CBOW, Skip-Gram&lt;/li&gt;
&lt;li&gt;GloVe (by Stanford)&lt;/li&gt;
&lt;li&gt;fastText (by Facebook)&lt;/li&gt;
&lt;/ul&gt;
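&lt;p&gt;Whichever pretrained model produces them, word vectors are usually compared with cosine similarity. A sketch on made-up 3-dimensional vectors (real Word2Vec, GloVe, or fastText vectors have hundreds of dimensions, and the values below are purely illustrative):&lt;/p&gt;

```python
import math

def cosine_similarity(u, v):
    """cos(theta) = (u . v) / (|u| * |v|): 1.0 means same direction, 0 means orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" (hypothetical values, for illustration only).
vectors = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}
print(cosine_similarity(vectors["king"], vectors["queen"]))  # semantically close
print(cosine_similarity(vectors["king"], vectors["apple"]))  # semantically distant
```

The same function underlies most "nearest word" and semantic search demos built on these embeddings.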

&lt;p&gt;&lt;strong&gt;4. Topic Modeling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Latent Semantic Analysis (LSA)&lt;/li&gt;
&lt;li&gt;Probabilistic Latent Semantic Analysis (pLSA)&lt;/li&gt;
&lt;li&gt;Latent Dirichlet Allocation (LDA)&lt;/li&gt;
&lt;li&gt;lda2Vec&lt;/li&gt;
&lt;li&gt;Non-Negative Matrix Factorization (NMF)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. NLP with Deep Learning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machine Learning (Logistic Regression, SVM, Naïve Bayes)&lt;/li&gt;
&lt;li&gt;Embedding Layer&lt;/li&gt;
&lt;li&gt;Artificial Neural Network&lt;/li&gt;
&lt;li&gt;Deep Neural Network&lt;/li&gt;
&lt;li&gt;Convolutional Neural Network&lt;/li&gt;
&lt;li&gt;RNN/LSTM/GRU&lt;/li&gt;
&lt;li&gt;Bi-RNN/Bi-LSTM/Bi-GRU&lt;/li&gt;
&lt;li&gt;Pretrained Language Models: ELMo, ULMFiT&lt;/li&gt;
&lt;li&gt;Sequence-to-Sequence/Encoder-Decoder&lt;/li&gt;
&lt;li&gt;Transformers (attention mechanism)&lt;/li&gt;
&lt;li&gt;Encoder-only Transformers: BERT&lt;/li&gt;
&lt;li&gt;Decoder-only Transformers: GPT&lt;/li&gt;
&lt;li&gt;Transfer Learning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Example Use cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sentiment Analysis&lt;/li&gt;
&lt;li&gt;Question Answering&lt;/li&gt;
&lt;li&gt;Language Translation&lt;/li&gt;
&lt;li&gt;Text/Intent Classification&lt;/li&gt;
&lt;li&gt;Text Summarization&lt;/li&gt;
&lt;li&gt;Text Similarity&lt;/li&gt;
&lt;li&gt;Text Clustering&lt;/li&gt;
&lt;li&gt;Text Generation&lt;/li&gt;
&lt;li&gt;Chatbots (DialogFlow, RASA, Self-made Bots)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Libraries&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NLTK&lt;/li&gt;
&lt;li&gt;Spacy&lt;/li&gt;
&lt;li&gt;Gensim&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Thank you very much for taking the time to read this. I would really appreciate any comments in the comments section.&lt;br&gt;
Enjoy🎉&lt;/p&gt;

</description>
      <category>python</category>
      <category>nlp</category>
      <category>machinelearning</category>
      <category>artificaialintelligence</category>
    </item>
  </channel>
</rss>
