<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vadim</title>
    <description>The latest articles on DEV Community by Vadim (@opencv).</description>
    <link>https://dev.to/opencv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1154877%2F513490f6-6377-478c-b8cb-88cddd375d9e.png</url>
      <title>DEV Community: Vadim</title>
      <link>https://dev.to/opencv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/opencv"/>
    <language>en</language>
    <item>
      <title>OpenCV For Android Distribution</title>
      <dc:creator>Vadim</dc:creator>
      <pubDate>Thu, 11 Apr 2024 16:10:25 +0000</pubDate>
      <link>https://dev.to/opencv/opencv-for-android-distribution-4n3n</link>
      <guid>https://dev.to/opencv/opencv-for-android-distribution-4n3n</guid>
      <description>&lt;p&gt;The OpenCV.ai team, creators of the essential OpenCV library for computer vision, has launched version 4.9.0 in partnership with ARM Holdings. This update is a big step for Android developers, simplifying how OpenCV is used in Android apps and boosting performance on ARM devices.&lt;/p&gt;

&lt;p&gt;The full description of the updates is &lt;a href="https://www.opencv.ai/blog/opencv-for-android-distribution"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Animal Behavior Recognition Using Machine Learning</title>
      <dc:creator>Vadim</dc:creator>
      <pubDate>Thu, 22 Feb 2024 17:12:59 +0000</pubDate>
      <link>https://dev.to/opencv/animal-behavior-recognition-using-machine-learning-1ik</link>
      <guid>https://dev.to/opencv/animal-behavior-recognition-using-machine-learning-1ik</guid>
<description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1e77i0ym9xypz4bpywnn.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1e77i0ym9xypz4bpywnn.jpeg" alt="Image description" width="800" height="566"&gt;&lt;/a&gt;I hope this post finds you well. The &lt;a href="https://www.opencv.ai/blog/animal-behavior-recognition-using-machine-learning"&gt;article&lt;/a&gt; from OpenCV.ai reviews key AI methods in animal behavior recognition and animal pose detection, showing their application in fields from neurobiology to veterinary medicine. It also highlights the significance of recent scientific advancements in understanding and managing animal behavior.&lt;/p&gt;

&lt;p&gt;In this article you will learn about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Understanding Behavior Recognition in Deep Learning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deep Learning Animal Pose Recognition Methods&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Animal Behavior Recognition Techniques&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Examples of Deep Learning-based Applications&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full article is &lt;a href="https://www.opencv.ai/blog/animal-behavior-recognition-using-machine-learning"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Look into MediaPipe solutions with Python</title>
      <dc:creator>Vadim</dc:creator>
      <pubDate>Tue, 16 Jan 2024 13:20:04 +0000</pubDate>
      <link>https://dev.to/opencv/look-into-mediapipe-solutions-with-python-2bjp</link>
      <guid>https://dev.to/opencv/look-into-mediapipe-solutions-with-python-2bjp</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bkJRIvH0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y23iu25ki34zkl0bj91h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bkJRIvH0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y23iu25ki34zkl0bj91h.jpg" alt="Image description" width="800" height="567"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;What is MediaPipe&lt;/strong&gt;&lt;br&gt;
MediaPipe is an open-source cross-platform framework for building machine learning pipelines for processing sequential data like video and audio and deploying it on a wide range of target devices.&lt;/p&gt;

&lt;p&gt;MediaPipe empowers your application with state-of-the-art machine learning algorithms, running with real-time speed on edge devices with low-code APIs or via no-code studio builder.&lt;/p&gt;

&lt;p&gt;You are free to use any pre-built solution as a "black box" or to customize it to your needs. You can even fully reimplement the algorithm. This article will help you do that, showing how to dive as deep into a solution as you want.&lt;/p&gt;

&lt;p&gt;Let's figure out how to access any intermediate result inside the solution graph from the Python API using the official code example from MediaPipe itself: &lt;code&gt;mediapipe.solutions.pose.Pose&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The very first step is to build the MediaPipe Python package from its source code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building the MediaPipe Python package from source&lt;/strong&gt;&lt;br&gt;
To achieve our goal - building the MediaPipe python package from its sources - we follow the official &lt;a href="https://github.com/google/mediapipe/blob/759e9fd56e3d43d4b152ba85ae7dd59f9cf32535/docs/getting_started/python.md#building-mediapipe-python-package"&gt;build&lt;/a&gt; instructions.&lt;/p&gt;

&lt;p&gt;It should be as easy as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;clone the source code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;install all the dependencies and build tools&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;build the Python wheel/install the package into the virtual environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, sometimes you get the following error while building the MediaPipe Python package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mediapipe/tasks/cc/components/processors/proto/detection_postprocessing_graph_options.proto:39:12:
Explicit 'optional' labels are disallowed in the Proto3 syntax. To define 'optional' fields in Proto3,
simply remove the 'optional' label, as fields are 'optional' by default.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To fix it, you should change the file &lt;code&gt;mediapipe/tasks/cc/components/processors/proto/detection_postprocessing_graph_options.proto&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;find the line containing &lt;code&gt;syntax = "proto3";&lt;/code&gt; (for me it was line 16)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;change it to &lt;code&gt;syntax = "proto2";&lt;/code&gt; and build again.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
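The two-step edit above can also be scripted. The sketch below is a safe-to-run demonstration: it patches a scratch temporary file rather than a real MediaPipe checkout (the real path from the error message is shown in a comment), and it assumes GNU sed, so on macOS you would use `sed -i ''` instead.

```shell
# Real target in a MediaPipe checkout:
#   mediapipe/tasks/cc/components/processors/proto/detection_postprocessing_graph_options.proto
# Here we demonstrate the edit on a scratch file.
proto="$(mktemp)"
printf 'syntax = "proto3";\n' > "$proto"

# Downgrade the syntax declaration so the explicit 'optional' labels compile.
sed -i 's/syntax = "proto3";/syntax = "proto2";/' "$proto"

grep 'syntax' "$proto"   # -> syntax = "proto2";
rm -f "$proto"
```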

&lt;p&gt;Note: If you missed some steps and jumped directly into building the package, you may get confusing error messages even after going back and following all the instructions. One possible solution is to run &lt;code&gt;bazel clean --expunge&lt;/code&gt; and rebuild the package. Sometimes even this is not enough, and your best option is to delete your MediaPipe copy and start over from a fresh checkout.&lt;/p&gt;

&lt;p&gt;Note: you may have to manually fix the &lt;code&gt;__init__.py&lt;/code&gt; file of the built MediaPipe package after each rebuild (delete the duplicated code).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessing intermediate results&lt;/strong&gt;&lt;br&gt;
There are several possible situations you may get into while trying to extract some intermediate processing results from the MediaPipe solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;intermediate results are exposed from the graph but not passed to the Python code;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;you want to expose graph node inputs/outputs as new outputs of the graph and the Python code;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;you want to print some information from inside the C++ code of the graph nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's take a look at all these situations one by one.&lt;/p&gt;

&lt;p&gt;The article continues on the &lt;a href="https://www.opencv.ai/blog/look-into-mediapipe-solutions-with-python?utm_source=dev.to&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=mediapipe"&gt;OpenCV.ai blog&lt;/a&gt;...&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting the Hang of OpenCV’s Inner Workings with ChatGPT</title>
      <dc:creator>Vadim</dc:creator>
      <pubDate>Wed, 06 Sep 2023 12:21:13 +0000</pubDate>
      <link>https://dev.to/opencv/getting-the-hang-of-opencvs-inner-workings-with-chatgpt-onb</link>
      <guid>https://dev.to/opencv/getting-the-hang-of-opencvs-inner-workings-with-chatgpt-onb</guid>
<description>&lt;p&gt;A very interesting blog post from the OpenCV.ai team about how ChatGPT can be used to help with code development and debugging.&lt;br&gt;
Introduction from the article:&lt;br&gt;
As programmers, we often work with familiar development environments, but occasionally we encounter new tools that can be time-consuming and challenging to learn. In such situations, having virtual assistance can be extremely beneficial.&lt;br&gt;
In this article, I will share my experience of contributing to OpenCV, a renowned open-source library, despite having limited knowledge of C++ and understanding its architecture. I achieved this with the assistance of ChatGPT, a Large Language Model (LLM).&lt;br&gt;
I hope you find it interesting. More details are &lt;a href="https://forum.unity.com/threads/getting-the-hang-of-opencvs-inner-workings-with-chatgpt.1488105/"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
