<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: code2k13</title>
    <description>The latest articles on DEV Community by code2k13 (@code2k13).</description>
    <link>https://dev.to/code2k13</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F661835%2F02ee518b-741c-4ebb-ae8b-a7d23f30bce3.png</url>
      <title>DEV Community: code2k13</title>
      <link>https://dev.to/code2k13</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/code2k13"/>
    <language>en</language>
    <item>
      <title>Create real time language visualization of tweets in minutes</title>
      <dc:creator>code2k13</dc:creator>
      <pubDate>Sun, 01 Aug 2021 14:48:52 +0000</pubDate>
      <link>https://dev.to/code2k13/create-real-time-language-visualization-of-tweets-in-minutes-44ek</link>
      <guid>https://dev.to/code2k13/create-real-time-language-visualization-of-tweets-in-minutes-44ek</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8rCBcXhJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/code2k13/nlphose/raw/main/docs/images/netflix_twitter.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8rCBcXhJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/code2k13/nlphose/raw/main/docs/images/netflix_twitter.gif" alt="image of visualization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article I will show you how to perform language identification on tweets, stream the results to a webpage, and display a real-time visualization. We will use my open source project &lt;a href="https://github.com/code2k13/nlphose"&gt;nlphose&lt;/a&gt; and &lt;a href="https://c3js.org/"&gt;C3.js&lt;/a&gt; to create this visualization in minutes, without writing any Python code!&lt;/p&gt;

&lt;p&gt;To run this example you need the following software:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ngrok.com/docs"&gt;ngrok&lt;/a&gt; (optional, not required if your OS has GUI)&lt;/li&gt;
&lt;li&gt;Internet Browser&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Starting the nlphose docker container
&lt;/h2&gt;

&lt;p&gt;Run the command below at a shell/command prompt. It pulls the latest &lt;a href="https://hub.docker.com/repository/docker/code2k13/nlphose"&gt;nlphose docker image&lt;/a&gt; from Docker Hub. After the command runs, it should start &lt;em&gt;'bash'&lt;/em&gt; inside the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 code2k13/nlphose:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running the nlphose pipeline inside the container
&lt;/h2&gt;

&lt;p&gt;Copy and paste the command below at the container's shell prompt. It will start the nlphose pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;twint &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"netflix"&lt;/span&gt; |&lt;span class="se"&gt;\&lt;/span&gt;
./twint2json.py |&lt;span class="se"&gt;\&lt;/span&gt;
./lang.py |&lt;span class="se"&gt;\&lt;/span&gt;
jq &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'[.id,.lang]'&lt;/span&gt; |&lt;span class="se"&gt;\&lt;/span&gt;
./ws.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command collects tweets from Twitter containing the term &lt;em&gt;"netflix"&lt;/em&gt; using the 'twint' command and performs language identification on them. It then streams the output using a &lt;a href="https://socket.io/"&gt;socket.io&lt;/a&gt; server. For more details, please refer to the &lt;a href="https://github.com/code2k13/nlphose/wiki"&gt;wiki of my project&lt;/a&gt;. You can also create these commands graphically, as shown below, using the &lt;a href="https://ashishware.com/static/nlphose.html"&gt;NlpHose Pipeline Builder tool&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w7qyi_rR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prc5klc6r72c1wcssa7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w7qyi_rR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prc5klc6r72c1wcssa7i.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Exposing local port 3000 on the internet (optional)
&lt;/h2&gt;

&lt;p&gt;If you are running this pipeline on a headless server (no browser), you can expose port 3000 of your host machine over the internet using ngrok.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./ngrok http 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running the demo
&lt;/h2&gt;

&lt;p&gt;Download &lt;a href="https://raw.githubusercontent.com/code2k13/nlphose/main/demos/netflix_languages_demo.html"&gt;this HTML file&lt;/a&gt; from my GitHub repo. Edit the file and update the following line with your ngrok URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;endpointUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://your_ngrok_url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are not using ngrok and have a browser installed on the system running the Docker container, simply change the line to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;endpointUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need to serve this file from a local webserver &lt;em&gt;(for example, &lt;a href="https://www.npmjs.com/package/node-http-server"&gt;http-server&lt;/a&gt; or python -m http.server 8080)&lt;/em&gt;. Once you run it, you should see a webpage like the one shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8rCBcXhJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/code2k13/nlphose/raw/main/docs/images/netflix_twitter.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8rCBcXhJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/code2k13/nlphose/raw/main/docs/images/netflix_twitter.gif" alt="image of visualization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! Hope you enjoyed this article!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>nlp</category>
      <category>charting</category>
      <category>lowcode</category>
    </item>
    <item>
      <title>NlphoseBuilder : A tool to create NLP pipelines via drag and drop</title>
      <dc:creator>code2k13</dc:creator>
      <pubDate>Sat, 17 Jul 2021 19:05:23 +0000</pubDate>
      <link>https://dev.to/code2k13/nlphosebuilder-a-tool-to-create-nlp-pipelines-via-drag-and-drop-4cga</link>
      <guid>https://dev.to/code2k13/nlphosebuilder-a-tool-to-create-nlp-pipelines-via-drag-and-drop-4cga</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;✨Checkout the live &lt;a href="https://ashishware.com/static/nlphose.html"&gt;demo here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Recently I completed work on a tool called &lt;a href="https://github.com/code2k13/nlphoseGUI"&gt;nlphoseGUIBuilder&lt;/a&gt; that allows you to create complex NLP pipelines visually, without writing a single line of code! It uses &lt;a href="https://developers.google.com/blockly/"&gt;Blockly&lt;/a&gt; to enable creation of NLP pipelines using drag and drop.&lt;/p&gt;

&lt;p&gt;Currently, the following operations are supported:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sentiment Analysis (AFINN)&lt;/li&gt;
&lt;li&gt;NER (Spacy)&lt;/li&gt;
&lt;li&gt;Language Identification (FastText)&lt;/li&gt;
&lt;li&gt;Chunking (NLTK)&lt;/li&gt;
&lt;li&gt;Sentiment Analysis (Transformers)&lt;/li&gt;
&lt;li&gt;Question Answering (Transformers)&lt;/li&gt;
&lt;li&gt;Zero shot Classification (Transformers)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tool generates an nlphose command that can be executed in a Docker container to run the pipeline. These pipelines can process streaming text like tweets or static data like files, and they can be executed just like a normal shell command using &lt;a href="https://github.com/code2k13/nlphose"&gt;nlphose&lt;/a&gt;. Let me show you what I mean!&lt;/p&gt;

&lt;p&gt;Below is a pipeline that searches Twitter for tweets containing 'netflix' and performs named entity recognition on them.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VOvmDp2d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qiru18lsctkolq63i2j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VOvmDp2d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qiru18lsctkolq63i2j0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It generates an nlphose command which looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;twint &lt;span class="nt"&gt;-s&lt;/span&gt; netflix |&lt;span class="se"&gt;\ &lt;/span&gt;
./twint2json.py |&lt;span class="se"&gt;\ &lt;/span&gt;
./entity  |&lt;span class="se"&gt;\ &lt;/span&gt;
./senti 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the above pipeline is run using &lt;a href="https://github.com/code2k13/nlphose"&gt;nlphose&lt;/a&gt;, you can expect to see a stream of JSON output similar to the one shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;....
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"6a5fe972-e2e6-11eb-9efa-42b45ace4426"&lt;/span&gt;,
  &lt;span class="s2"&gt;"text"&lt;/span&gt;: &lt;span class="s2"&gt;"Wickham were returned, and to lament over his absence from the Netherfield ball. He joined them on their entering the town, and attended them to their aunt’s where his regret and vexation, and the concern of everybody, was well talked over. To Elizabeth, however, he voluntarily acknowledged that the necessity of his absence _had_ been self-imposed."&lt;/span&gt;,
  &lt;span class="s2"&gt;"afinn_score"&lt;/span&gt;: &lt;span class="nt"&gt;-1&lt;/span&gt;.0,
  &lt;span class="s2"&gt;"entities"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"label"&lt;/span&gt;: &lt;span class="s2"&gt;"PERSON"&lt;/span&gt;,
      &lt;span class="s2"&gt;"entity"&lt;/span&gt;: &lt;span class="s2"&gt;"Wickham"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"label"&lt;/span&gt;: &lt;span class="s2"&gt;"ORG"&lt;/span&gt;,
      &lt;span class="s2"&gt;"entity"&lt;/span&gt;: &lt;span class="s2"&gt;"Netherfield"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"label"&lt;/span&gt;: &lt;span class="s2"&gt;"PERSON"&lt;/span&gt;,
      &lt;span class="s2"&gt;"entity"&lt;/span&gt;: &lt;span class="s2"&gt;"Elizabeth"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's try something more: the pipeline below searches for tweets containing the word 'rainfall' and then finds the location where it rained using 'extractive question answering'. It also filters out answers with lower scores.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gn0hIVGD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4hdnfxqeodrk7ax2x92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gn0hIVGD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4hdnfxqeodrk7ax2x92.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the nlphose command it generates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;twint &lt;span class="nt"&gt;-s&lt;/span&gt; rainfall |&lt;span class="se"&gt;\ &lt;/span&gt;
./twint2json.py |&lt;span class="se"&gt;\ &lt;/span&gt;
./xformer.py &lt;span class="nt"&gt;--pipeline&lt;/span&gt; question-answering &lt;span class="nt"&gt;--param&lt;/span&gt; &lt;span class="s1"&gt;'where did it rain'&lt;/span&gt; |&lt;span class="se"&gt;\ &lt;/span&gt;
jq &lt;span class="s1"&gt;'if (.xfrmr_question_answering.score) &amp;gt; 0.80 then . else empty end'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is also possible to create a pipeline that processes multiple files from a folder:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dfnnqJip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lgj0aoiqlfnqj704xgu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dfnnqJip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lgj0aoiqlfnqj704xgu3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above pipeline generates this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./files2json.py &lt;span class="nt"&gt;-n&lt;/span&gt; 3  data/&lt;span class="k"&gt;*&lt;/span&gt;.txt |&lt;span class="se"&gt;\ &lt;/span&gt;
./xformer.py &lt;span class="nt"&gt;--pipeline&lt;/span&gt; question-answering &lt;span class="nt"&gt;--param&lt;/span&gt; &lt;span class="s1"&gt;'who gave the speech ?'&lt;/span&gt; |&lt;span class="se"&gt;\ &lt;/span&gt;
jq &lt;span class="s1"&gt;'if (.xfrmr_question_answering.score) &amp;gt; 0.80 then . else empty end'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Play with the tool here: &lt;a href="https://ashishware.com/static/nlphose.html"&gt;https://ashishware.com/static/nlphose.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the link to the project's Git repository: &lt;a href="https://github.com/code2k13/nlphoseGUI"&gt;https://github.com/code2k13/nlphoseGUI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a YouTube link of the tool in action:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/X-BmStLY-DY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Don't forget to check out the repository of the companion project nlphose: &lt;a href="https://github.com/code2k13/nlphose"&gt;https://github.com/code2k13/nlphose&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nlp</category>
      <category>machinelearning</category>
      <category>javascript</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Create NLP pipelines with drag and drop</title>
      <dc:creator>code2k13</dc:creator>
      <pubDate>Tue, 06 Jul 2021 20:00:12 +0000</pubDate>
      <link>https://dev.to/code2k13/create-nlp-pipelines-with-drag-and-drop-bjg</link>
      <guid>https://dev.to/code2k13/create-nlp-pipelines-with-drag-and-drop-bjg</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BlcfGm17--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9wcfnrjlfzizwlj1yegp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BlcfGm17--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9wcfnrjlfzizwlj1yegp.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently I started work on a query builder GUI for my open source project &lt;a href="https://github.com/code2k13/nlphose"&gt;nlphose&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I am using &lt;a href="https://en.m.wikipedia.org/wiki/Blockly"&gt;Blockly&lt;/a&gt; to implement this feature.&lt;/p&gt;

&lt;p&gt;It is still in the initial phase, but I have uploaded a video on YouTube which you can check out!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/X-BmStLY-DY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Motion detection in microscope images using Python and OpenCV</title>
      <dc:creator>code2k13</dc:creator>
      <pubDate>Mon, 05 Jul 2021 11:54:10 +0000</pubDate>
      <link>https://dev.to/code2k13/motion-detection-in-microscope-images-using-python-and-opencv-1flg</link>
      <guid>https://dev.to/code2k13/motion-detection-in-microscope-images-using-python-and-opencv-1flg</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/SDsHBp8nBEQ"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;em&gt;There are three types of organisms one can see in the above video: the round one, the long one, and a couple of amoebae&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Last year, I was introduced to a wonderful scientific instrument called the ‘Foldscope’. I have spent hours observing things with it. One of my favorite pastimes is to observe ciliates using the Foldscope. Ciliates are very simple single-celled organisms which are easy to find and come in numerous shapes and sizes. Most ciliates move very fast, and you need some skill with a microscope to follow them on the slide. This inspired me to write some code that could detect moving objects in a video and draw rectangles around them. Amazingly, I was able to do a decent job with under 60 lines of Python code&lt;br&gt;
(&lt;a href="https://github.com/code2k13/motiondetection"&gt;https://github.com/code2k13/motiondetection&lt;/a&gt; )&lt;/p&gt;

&lt;p&gt;In this post I will discuss the concepts I used for detecting moving objects and how they work together to produce the end result.&lt;/p&gt;
&lt;h3&gt;
  
  
  Reading video with Python and OpenCV
&lt;/h3&gt;

&lt;p&gt;The first thing we need to do is load frames one by one from a video. OpenCV makes this task very easy. OpenCV has a very convenient function called ‘cv2.VideoCapture’ which returns an object that can be used to find out information about the video (like width, height, and frame rate). The same object allows us to read a single frame from the video by calling the ‘read()’ method on it. The ‘read()’ method returns two values: a boolean indicating success of the operation, and the frame as an image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;VideoCapture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"video_input.mp4"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;isOpened&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; 
    &lt;span class="n"&gt;width&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# float
&lt;/span&gt;    &lt;span class="n"&gt;height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# float
&lt;/span&gt;    &lt;span class="n"&gt;fps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CAP_PROP_FPS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full video can be read frame by frame using the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;success&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;success&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;im&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;break&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Writing videos using OpenCV
&lt;/h3&gt;

&lt;p&gt;Writing videos with OpenCV is also very easy. Similar to the ‘VideoCapture’ function, the ‘VideoWriter’ function can be used to write a video, frame by frame. This function expects the output file path, codec information, frames per second, and the width and height of the output frames as parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;fourcc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;VideoWriter_fourcc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'m'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'p'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'4'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'v'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;VideoWriter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'video_output.mp4'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;fourcc&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fps&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;height&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Writing a frame to the video is as easy as calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_obj&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Finding frame difference
&lt;/h3&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/PdOXJt3mMD0"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;em&gt;The above video was generated out of frame diffs from the original video&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Images are represented as matrices in memory. OpenCV has a function called ‘cv2.absdiff()’ which can be used to calculate the absolute difference of two images. This is the basis of our motion detection. We are relying on the fact that when something in the video moves, its absdiff will be non-zero for those pixels. However, if something is stationary and has not moved between two consecutive frames, the absdiff will be zero. So, as we read the video frame by frame, we compare the current frame with the previous frame and calculate the absdiff matrix. The dimensions of this matrix are the same as those of the images being compared.&lt;/p&gt;
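&lt;p&gt;For intuition, 'cv2.absdiff()' is just an element-wise absolute difference of two equally sized frames. A minimal pure-Python sketch of the idea (the pixel values here are made up):&lt;/p&gt;

```python
def absdiff(frame_a, frame_b):
    """Element-wise absolute difference of two equally sized frames,
    mimicking what cv2.absdiff computes for grayscale images."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

previous_frame = [[10, 200], [30, 40]]
current_frame = [[10, 180], [30, 95]]
diff = absdiff(previous_frame, current_frame)
# Stationary pixels diff to 0; moving pixels give non-zero values.
print(diff)
```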

&lt;p&gt;Sounds easy, right? But there are some problems with this approach. Firstly, cameras and software produce artifacts when they capture and encode videos. Such artifacts give us a non-zero diff even when the object is stationary. Uneven lighting and focusing can also cause non-zero diffs for stationary portions of images.&lt;/p&gt;

&lt;p&gt;After experimenting with some approaches, I found that thresholding the diff image using its mean value works very well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;frame_diff&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;frame_diff&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;frame_diff&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nonzero&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame_diff&lt;/span&gt;&lt;span class="p"&gt;)].&lt;/span&gt;&lt;span class="n"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="n"&gt;frame_diff&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;frame_diff&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;frame_diff&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nonzero&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame_diff&lt;/span&gt;&lt;span class="p"&gt;)].&lt;/span&gt;&lt;span class="n"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using edge detection to improve accuracy
&lt;/h3&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/gkm9Ch0u-XU"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;em&gt;Edge detection using 'Sobel' filter performed on the frame diff video&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As microorganisms move, they push matter around them, which gives positive pixels after diffing. But we want to differentiate microorganisms from other things. Focusing also plays an important part here: generally, a lot of out-of-focus moving objects will also give positive frame differences. Mostly these are blurred objects which we simply want to ignore. This is where edge detection comes into play, since in-focus objects show clear edges and borders in the image while blurred ones do not. Edge detection can be easily achieved by using the 'sobel' filter from the scikit-image package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sobel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame_diff&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using contour detection to detect objects
&lt;/h3&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/WvaO9ieQTyM"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;em&gt;Video generated after performing contour detection on the Sobel filter output&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most protozoans like ciliates will not always show a clear border (because they are mostly transparent). So when we use edge detection to detect shapes/outlines of moving objects, we get broken edges. In my experience, contour detection works very well to group such broken edges and generate a more continuous border.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;contours&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hierarchy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;findContours&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thresh&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RETR_TREE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CHAIN_APPROX_SIMPLE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OpenCV has built-in functions to find contours. The best part is, the function is able to find nested contour structures and return a hierarchy. The 'cv2.findContours' function returns a hierarchy of contours. We only consider top-level contours (those which don't have a parent). For a contour 'idx', if 'hierarchy[0][idx][3]' is -1, it means that it is a top-level contour and does not have any parent. Everything else we ignore.&lt;/p&gt;
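&lt;p&gt;As a sketch of that filtering, assuming the layout cv2.findContours uses for the hierarchy (each row is [next, previous, first_child, parent], wrapped in an outer array; the values below are made up for illustration):&lt;/p&gt;

```python
# Each row describes one contour as [next, previous, first_child, parent];
# parent == -1 marks a top-level contour.
hierarchy = [[[1, -1, -1, -1],    # contour 0: top level
              [2, 0, 3, -1],      # contour 1: top level, has child 3
              [-1, 1, -1, -1],    # contour 2: top level
              [-1, -1, -1, 1]]]   # contour 3: nested inside contour 1

top_level = [idx for idx, h in enumerate(hierarchy[0]) if h[3] == -1]
# Contours 0, 1 and 2 are kept; contour 3 is ignored.
print(top_level)
```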

&lt;h3&gt;
  
  
  Creating bounding boxes
&lt;/h3&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Vcz49aZtVZQ"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;em&gt;Video showing bounding boxes drawn over contours&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Creating boxes around contours can require a bit of math. Luckily OpenCV has a convenient function, 'cv2.boundingRect', which returns the top-left coordinates, width and height of the bounding rectangle around a given contour. Once we have that, drawing a rectangle on our frame can simply be done using the cv2.rectangle function, to which we can pass the color and border width.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;hierarchy&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;boundingRect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;contour&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;continue&lt;/span&gt;
    &lt;span class="n"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rectangle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;),(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The concept of ‘q’
&lt;/h3&gt;

&lt;p&gt;Like I explained earlier, videos taken using a microscope can be messy. There can be a lot going on. We may only be interested in detecting objects of a certain size. This is where I introduced a parameter called 'q'. This parameter was used for altering settings for the various filters I experimented with. Currently it is only used to filter out bounding rects which are smaller than q^2 in area. You should experiment with different values of 'q', depending on the resolution of your video and the size of the objects you are interested in.&lt;/p&gt;
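&lt;p&gt;A tiny sketch of that size filter (the box dimensions here are made up):&lt;/p&gt;

```python
def keep_box(w, h, q):
    """Keep a bounding box only if its area is larger than q**2 pixels,
    mirroring the area check done before drawing rectangles."""
    return w * h > q ** 2

q = 10
boxes = [(5, 5), (20, 30), (3, 100)]  # (width, height) pairs
kept = [box for box in boxes if keep_box(box[0], box[1], q)]
# (5, 5) is dropped because its area of 25 does not exceed 100.
print(kept)
```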

&lt;h3&gt;
  
  
  Things to improve
&lt;/h3&gt;

&lt;p&gt;I want to make this approach fast enough that it can run in real time. It would also be nice if I could get this ported to Android or a mobile phone. I also plan to experiment with ML-based segmentation techniques for better detection.&lt;/p&gt;

&lt;h3&gt;
  
  
  The full code
&lt;/h3&gt;

&lt;p&gt;The full code, along with a sample video, is available on GitHub:&lt;br&gt;
&lt;a href="https://github.com/code2k13/motiondetection"&gt;https://github.com/code2k13/motiondetection&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>imageprocessing</category>
      <category>opencv</category>
      <category>microscopy</category>
    </item>
  </channel>
</rss>
