<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: NASEEM A P</title>
    <description>The latest articles on DEV Community by NASEEM A P (@naseemap47).</description>
    <link>https://dev.to/naseemap47</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F923388%2F3175ebf0-5556-4f6f-8ed0-11267606d203.jpeg</url>
      <title>DEV Community: NASEEM A P</title>
      <link>https://dev.to/naseemap47</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/naseemap47"/>
    <language>en</language>
    <item>
      <title>Custom Human Pose Classification using Mediapipe</title>
      <dc:creator>NASEEM A P</dc:creator>
      <pubDate>Mon, 26 Sep 2022 06:14:14 +0000</pubDate>
      <link>https://dev.to/naseemap47/custom-human-pose-classification-using-mediapipe-6fc</link>
      <guid>https://dev.to/naseemap47/custom-human-pose-classification-using-mediapipe-6fc</guid>
      <description>&lt;p&gt;Creating a Custom pose classification using Mediapipe with help of OpenCV&lt;br&gt;
GitHub: &lt;a href="https://github.com/naseemap47/CustomPose-Classification-Mediapipe.git"&gt;CustomPose-Classification-Mediapipe&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Sample Output&lt;/h2&gt;
&lt;h3&gt;Video&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ixhKArnm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahjjlf48xxq1vbgrxw9q.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ixhKArnm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahjjlf48xxq1vbgrxw9q.gif" alt="Video Output" width="600" height="336"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Image&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MnwcRdjT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jibdk5vmdx7qyk3gbbry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MnwcRdjT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jibdk5vmdx7qyk3gbbry.png" alt="Image Output" width="720" height="720"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;I am going to create a custom human pose classifier using a Yoga Pose dataset.&lt;/p&gt;
&lt;h3&gt;1. Clone the Repository:&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/naseemap47/CustomPose-Classification-Mediapipe.git
cd CustomPose-Classification-Mediapipe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;Install Dependencies&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;Download Dataset:&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;About the Dataset:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;5 Classes: Chair, Cobra, Dog, Tree and Warrior&lt;/li&gt;
&lt;li&gt;Contains Train and Test data&lt;/li&gt;
&lt;li&gt;Combine both Train and Test data into a single directory, as shown in the tree below (a merge sketch follows it)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Dataset
│   ├── Chair
│   │   ├── 1.jpg
│   │   ├── 2.jpg
│   │   ├── ...
│   ├── Cobra
│   │   ├── 1.jpg
│   │   ├── 2.jpg
│   │   ├── ...
.   .
.   .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
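&lt;p&gt;A minimal merge sketch (assuming the zip extracts to &lt;strong&gt;train&lt;/strong&gt; and &lt;strong&gt;test&lt;/strong&gt; folders that each contain one sub-folder per class; the paths are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical merge sketch: assumes the zip extracts to
# yoga_poses/train/ and yoga_poses/test/ class folders.
import shutil
from pathlib import Path

src_root = Path('yoga_poses')   # extracted dataset (assumed layout)
dst_root = Path('Dataset')      # combined output directory

for split in ('train', 'test'):
    for class_dir in (src_root / split).iterdir():
        if not class_dir.is_dir():
            continue
        dst = dst_root / class_dir.name
        dst.mkdir(parents=True, exist_ok=True)
        for img in class_dir.iterdir():
            # prefix the split name to avoid file-name collisions
            shutil.copy(img, dst / f'{split}_{img.name}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;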

&lt;h3&gt;2. Create a Landmark Dataset for each Class&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; , &lt;code&gt;--dataset&lt;/code&gt; __ Path to Dataset&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-o&lt;/code&gt; , &lt;code&gt;--save&lt;/code&gt; __ Path to save &lt;strong&gt;CSV&lt;/strong&gt; file
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 poseLandmark_csv.py -i data/ -o data.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;CSV&lt;/strong&gt; file will be saved in &amp;lt;&lt;strong&gt;path_to_save_csv&lt;/strong&gt;&amp;gt;&lt;/p&gt;
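&lt;p&gt;For intuition, here is a minimal sketch of the general idea behind this step (not the repo’s exact code): Mediapipe Pose returns 33 landmarks per image, each with x, y, z and visibility, which can be flattened into one CSV row plus the class label:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal landmark-extraction sketch (not the repo's exact code).
import csv
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def landmarks_row(image_path, class_name):
    image = cv2.imread(image_path)
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None  # no person detected in this image
    row = []
    for lm in results.pose_landmarks.landmark:
        row += [lm.x, lm.y, lm.z, lm.visibility]
    return row + [class_name]  # 33 landmarks * 4 values + label

row = landmarks_row('data/Chair/1.jpg', 'Chair')
if row is not None:
    with open('data.csv', 'a', newline='') as f:
        csv.writer(f).writerow(row)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;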
&lt;h3&gt;3. Create a Deep Learning Model to Predict Human Pose&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; , &lt;code&gt;--dataset&lt;/code&gt; __ Path to &lt;strong&gt;CSV&lt;/strong&gt; Data&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-o&lt;/code&gt; , &lt;code&gt;--save&lt;/code&gt; __ Path to save &lt;strong&gt;model.h5&lt;/strong&gt; file
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 poseModel.py -i data.csv -o model.h5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The model will be saved to &amp;lt;&lt;strong&gt;path_to_save_model&lt;/strong&gt;&amp;gt; and the model metrics will be saved to &lt;strong&gt;metrics.png&lt;/strong&gt;&lt;/p&gt;
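&lt;p&gt;The architecture below is an illustrative sketch of a small landmark classifier, not necessarily the repo’s exact model; the layer sizes and training settings are assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative landmark classifier (layer sizes are assumptions).
# 132 inputs = 33 landmarks * (x, y, z, visibility); header-less CSV assumed.
import pandas as pd
import tensorflow as tf

df = pd.read_csv('data.csv', header=None)
x = df.iloc[:, :-1].values                     # landmark features
y, class_names = pd.factorize(df.iloc[:, -1])  # label strings to integers

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(x.shape[1],)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(len(class_names), activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, epochs=50, validation_split=0.2)
model.save('model.h5')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;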
&lt;h3&gt;4. Inference&lt;/h3&gt;

&lt;p&gt;Shows the predicted pose class on a test image, video or web-cam feed&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-m&lt;/code&gt; , &lt;code&gt;--model&lt;/code&gt; __ Path to saved &lt;strong&gt;model.h5&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-c&lt;/code&gt; , &lt;code&gt;--conf&lt;/code&gt; __ Min prediction &lt;strong&gt;conf&lt;/strong&gt; to detect pose class (&lt;strong&gt;0&amp;lt;conf&amp;lt;1&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; , &lt;code&gt;--source&lt;/code&gt; __ Path to Image or Video; for web-cam use zero (0)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--save&lt;/code&gt; __ Saves images (in the &lt;strong&gt;ImageOutput&lt;/strong&gt; directory) or videos (“&lt;strong&gt;output.avi&lt;/strong&gt;”)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Image
python3 inference.py --model model.h5 --conf 0.75 --source data/test/image.jpg
# Video
python3 inference.py --model model.h5 --conf 0.75 --source data/test/video.mp4
# web-cam
python3 inference.py --model model.h5 --conf 0.75 --source 0
###### to save ######
# Image
python3 inference.py --model model.h5 --conf 0.75 --source data/test/image.jpg --save
# Video
python3 inference.py --model model.h5 --conf 0.75 --source data/test/video.mp4 --save
# web-cam
python3 inference.py --model model.h5 --conf 0.75 --source 0 --save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;To Exit Window — Press Q-key&lt;/strong&gt;&lt;/p&gt;
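&lt;p&gt;For reference, the heart of such an inference script is a loop like the following (a minimal sketch, not the repo’s exact code; the 0.75 threshold stands in for &lt;code&gt;--conf&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal inference-loop sketch (not the repo's exact code).
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

class_names = ['Chair', 'Cobra', 'Dog', 'Tree', 'Warrior']  # demo classes
model = tf.keras.models.load_model('model.h5')
cap = cv2.VideoCapture(0)  # web-cam; use a file path for video

with mp.solutions.pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            row = [v for lm in results.pose_landmarks.landmark
                   for v in (lm.x, lm.y, lm.z, lm.visibility)]
            probs = model.predict(np.array([row]), verbose=0)[0]
            if probs.max() &amp;gt; 0.75:  # --conf threshold
                label = class_names[int(probs.argmax())]
                cv2.putText(frame, label, (30, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow('Pose', frame)
        if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):  # press Q to exit
            break
cap.release()
cv2.destroyAllWindows()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;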
&lt;h2&gt;Custom Pose Classification&lt;/h2&gt;
&lt;h3&gt;Clone this Repository:&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/naseemap47/CustomPose-Classification-Mediapipe.git
cd CustomPose-Classification-Mediapipe
git checkout custom
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;1. Take your Custom Pose Dataset&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Dataset
│   ├── Pose_1
│   │   ├── 1.jpg
│   │   ├── 2.jpg
│   │   ├── ...
│   ├── Pose_2
│   │   ├── 1.jpg
│   │   ├── 2.jpg
│   │   ├── ...
.   .
.   .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;2. Create a Landmark Dataset for each Class&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; , &lt;code&gt;--dataset&lt;/code&gt; __ Path to Dataset&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-o&lt;/code&gt; , &lt;code&gt;--save&lt;/code&gt; __ Path to save &lt;strong&gt;CSV&lt;/strong&gt; file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 poseLandmark_csv.py -i &amp;lt;path_to_data_dir&amp;gt; -o &amp;lt;path_to_save_csv&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CSV file will be saved in &amp;lt;&lt;strong&gt;path_to_save_csv&lt;/strong&gt;&amp;gt;&lt;/p&gt;

&lt;h3&gt;3. Create a Deep Learning Model to Predict Human Pose&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; , &lt;code&gt;--dataset&lt;/code&gt; __ Path to &lt;strong&gt;CSV&lt;/strong&gt; Data&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-o&lt;/code&gt; , &lt;code&gt;--save&lt;/code&gt; __ Path to save &lt;strong&gt;model.h5&lt;/strong&gt; file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 poseModel.py -i &amp;lt;path_to_save_csv&amp;gt; -o &amp;lt;path_to_save_model&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model will be saved to &amp;lt;&lt;strong&gt;path_to_save_model&lt;/strong&gt;&amp;gt; and the model metrics will be saved to &lt;strong&gt;metrics.png&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;4. Inference&lt;/h3&gt;

&lt;p&gt;Open &lt;strong&gt;inference.py&lt;/strong&gt;&lt;br&gt;
and change &lt;strong&gt;Line-43&lt;/strong&gt;: write your class names there, in the same order as the classes in your CSV data.&lt;/p&gt;
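&lt;p&gt;For example (the variable name below is hypothetical; match whatever Line-43 actually contains):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# hypothetical name; the order must match your CSV classes
class_names = ['Pose_1', 'Pose_2', 'Pose_3']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;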

&lt;p&gt;Shows the predicted pose class on a test image, video or web-cam feed&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-m&lt;/code&gt; , &lt;code&gt;--model&lt;/code&gt; __ Path to saved &lt;strong&gt;model.h5&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-c&lt;/code&gt; , &lt;code&gt;--conf&lt;/code&gt; __ Min prediction &lt;strong&gt;conf&lt;/strong&gt; to detect pose class (&lt;strong&gt;0&amp;lt;conf&amp;lt;1&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; , &lt;code&gt;--source&lt;/code&gt; __ Path to Image or Video; for web-cam use zero (0)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--save&lt;/code&gt; __ Saves images (in the &lt;strong&gt;ImageOutput&lt;/strong&gt; directory) or videos (“&lt;strong&gt;output.avi&lt;/strong&gt;”)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 inference.py --model &amp;lt;path_to_model&amp;gt; --conf &amp;lt;model_prediction_confidence&amp;gt; --source &amp;lt;image or video or web-cam&amp;gt;
# to save
python3 inference.py --model &amp;lt;path_to_model&amp;gt; --conf &amp;lt;model_prediction_confidence&amp;gt; --source &amp;lt;image or video or web-cam&amp;gt; --save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;To Exit Window — Press Q-key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank You…&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>mediapipe</category>
      <category>keras</category>
      <category>tensorflow</category>
      <category>humanpose</category>
    </item>
    <item>
      <title>Auto Annotation using ONNX and YOLOv7 model (Object Detection)</title>
      <dc:creator>NASEEM A P</dc:creator>
      <pubDate>Fri, 23 Sep 2022 04:33:39 +0000</pubDate>
      <link>https://dev.to/naseemap47/auto-annotation-using-onnx-and-yolov7-model-object-detection-2pii</link>
      <guid>https://dev.to/naseemap47/auto-annotation-using-onnx-and-yolov7-model-object-detection-2pii</guid>
      <description>&lt;p&gt;Annotation is very boring work, so I think that can we use our custom trained model (&lt;strong&gt;ONNX&lt;/strong&gt; model) to annotate our new Data.&lt;/p&gt;

&lt;p&gt;So I created a Python module that can &lt;strong&gt;Auto-Annotate&lt;/strong&gt; your dataset using your &lt;strong&gt;ONNX&lt;/strong&gt; model.&lt;/p&gt;

&lt;p&gt;I also added a new &lt;strong&gt;Auto-Annotator&lt;/strong&gt; using a &lt;strong&gt;YOLOv7&lt;/strong&gt; model (.pt)&lt;/p&gt;

&lt;p&gt;Link to GitHub Repository:-&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/naseemap47/autoAnnoter"&gt;https://github.com/naseemap47/autoAnnoter&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can convert other types of models, such as &lt;strong&gt;Tensorflow (.pb)&lt;/strong&gt; or &lt;strong&gt;PyTorch (.pth)&lt;/strong&gt;, into &lt;strong&gt;ONNX&lt;/strong&gt;. That’s why I chose the &lt;strong&gt;ONNX&lt;/strong&gt; format to build my &lt;strong&gt;Auto-Annotator&lt;/strong&gt; module.&lt;/p&gt;

&lt;h2&gt;Convert To ONNX Model&lt;/h2&gt;

&lt;h3&gt;1. From Tensorflow (.pb)&lt;/h3&gt;

&lt;p&gt;I am assuming that you are using the Tensorflow Object Detection API for this.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html"&gt;https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When exporting,&lt;br&gt;
use&lt;br&gt;
&lt;code&gt;--input_type image_tensor&lt;/code&gt;&lt;br&gt;
NOT&lt;br&gt;
&lt;code&gt;--input_type float_image_tensor&lt;/code&gt;&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python .\exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\my_ssd_resnet50_v1_fpn\pipeline.config --trained_checkpoint_dir .\models\my_ssd_resnet50_v1_fpn\ --output_directory .\exported-models\my_model
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;pb to ONNX&lt;/h3&gt;

&lt;p&gt;Follow tensorflow-onnx:-&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/onnx/tensorflow-onnx"&gt;https://github.com/onnx/tensorflow-onnx&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -U tf2onnx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 16 --output model.onnx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;tf2onnx uses a default &lt;strong&gt;ONNX&lt;/strong&gt; opset of &lt;code&gt;13&lt;/code&gt;. If you need a newer opset, or want to limit your model to an older one, you can provide the &lt;code&gt;--opset&lt;/code&gt; argument to the command. If you are unsure which opset to use, refer to the &lt;strong&gt;ONNX&lt;/strong&gt; operator documentation.&lt;/p&gt;
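&lt;p&gt;If you want to confirm which opset a converted model actually uses, a quick check with the &lt;code&gt;onnx&lt;/code&gt; package looks like this (assuming &lt;code&gt;pip install onnx&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Quick opset check for a converted model (assumes `pip install onnx`).
import onnx

model = onnx.load('model.onnx')
for opset in model.opset_import:
    print(opset.domain or 'ai.onnx', opset.version)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;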

&lt;h3&gt;2. From PyTorch (.pth) [pth to ONNX]&lt;/h3&gt;

&lt;p&gt;Check out the links below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/danielgatis/rembg/issues/193"&gt;https://github.com/danielgatis/rembg/issues/193&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html"&gt;https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
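&lt;p&gt;The links above cover the details; the core of a pth-to-ONNX conversion is &lt;code&gt;torch.onnx.export&lt;/code&gt;, roughly like this (the model and input shape below are placeholders for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Rough pth-to-ONNX sketch; the model and input shape are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())  # stand-in for your model
# model.load_state_dict(torch.load('model.pth', map_location='cpu'))
model.eval()

dummy = torch.randn(1, 3, 640, 640)  # one example input (assumed shape)
torch.onnx.export(model, dummy, 'model.onnx',
                  input_names=['images'], output_names=['output'],
                  opset_version=13)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;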

&lt;h2&gt;Auto-Annotate — ONNX Model&lt;/h2&gt;

&lt;p&gt;Clone Git Repository:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/naseemap47/autoAnnoter
cd autoAnnoter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Required Libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-x&lt;/code&gt; , &lt;code&gt;--xml&lt;/code&gt; _____ for XML Annotations&lt;br&gt;
&lt;code&gt;-t&lt;/code&gt;, &lt;code&gt;--txt&lt;/code&gt; _____ to annotate in (.txt) format&lt;br&gt;
&lt;code&gt;-i&lt;/code&gt;, &lt;code&gt;--dataset&lt;/code&gt; _ path to Dataset&lt;br&gt;
&lt;code&gt;-c&lt;/code&gt;, &lt;code&gt;--classes&lt;/code&gt; _ path to &lt;strong&gt;classes.txt&lt;/strong&gt; file (names of object detection classes)&lt;br&gt;
Example for &lt;strong&gt;classes.txt&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;car
person
book
apple
mobile
bottle
....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;-m&lt;/code&gt;, &lt;code&gt;--model&lt;/code&gt;__ path to &lt;strong&gt;ONNX&lt;/strong&gt; model&lt;br&gt;
&lt;code&gt;-s&lt;/code&gt;, &lt;code&gt;--size&lt;/code&gt;__ Size of the images used to train your object detection model&lt;br&gt;
&lt;code&gt;-conf&lt;/code&gt;, &lt;code&gt;--confidence&lt;/code&gt; __ Model detection Confidence (0&amp;lt;confidence&amp;lt;1)&lt;/p&gt;
&lt;h3&gt;For XML Annotations:&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 autoAnnot.py -x -i &amp;lt;PATH_TO_DATA&amp;gt; -c &amp;lt;PATH_TO_classes.txt&amp;gt; -m &amp;lt;ONNX_MODEL_PATH&amp;gt; -s &amp;lt;SIZE_OF_IMAGE_WHEN_TRAIN_YOUR_MODEL&amp;gt; -conf &amp;lt;MODEL_OBJECT_DETECTION_CONFIDENCE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
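&lt;p&gt;For intuition, here is a compressed sketch of what one XML-annotation step involves (not the repo’s code; the pre-processing and output layout below are placeholders, since they depend entirely on how your model was exported):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Compressed sketch of an ONNX auto-annotation step (not the repo's code).
# Pre/post-processing is model-specific; treat the layout as a placeholder.
import cv2
import numpy as np
import onnxruntime as ort
from xml.etree import ElementTree as ET

classes = [line.strip() for line in open('classes.txt')]
sess = ort.InferenceSession('model.onnx')
input_name = sess.get_inputs()[0].name

image = cv2.imread('data/1.jpg')
blob = cv2.resize(image, (640, 640))[None]          # image_tensor: uint8 NHWC
detections = sess.run(None, {input_name: blob})[0]  # output layout assumed

root = ET.Element('annotation')
ET.SubElement(root, 'filename').text = '1.jpg'
for cls_id, score, x1, y1, x2, y2 in detections:    # assumed columns
    if score &amp;lt; 0.75:                              # confidence filter
        continue
    obj = ET.SubElement(root, 'object')
    ET.SubElement(obj, 'name').text = classes[int(cls_id)]
    box = ET.SubElement(obj, 'bndbox')
    for tag, val in zip(('xmin', 'ymin', 'xmax', 'ymax'), (x1, y1, x2, y2)):
        ET.SubElement(box, tag).text = str(int(val))
ET.ElementTree(root).write('data/1.xml')  # saved next to the image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;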

&lt;h3&gt;For TXT Format Annotations:&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 autoAnnot.py -t -i &amp;lt;PATH_TO_DATA&amp;gt; -c &amp;lt;PATH_TO_classes.txt&amp;gt; -m &amp;lt;ONNX_MODEL_PATH&amp;gt; -s &amp;lt;SIZE_OF_IMAGE_WHEN_TRAIN_YOUR_MODEL&amp;gt; -conf &amp;lt;MODEL_OBJECT_DETECTION_CONFIDENCE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
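&lt;p&gt;For the (.txt) output, each line follows the standard YOLO label format: class id plus a normalized centre and size. Converting a pixel box is just normalization (a small sketch, not the repo’s code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# YOLO .txt label line: class_id cx cy w h, all normalized to 0..1.
def yolo_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f'{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}'

# e.g. a 100x200 box at (50, 40) in a 640x480 image, class 0:
print(yolo_line(0, 50, 40, 150, 240, 640, 480))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;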


&lt;p&gt;It will run the whole operation, and at the end of the process you will find the &lt;strong&gt;Auto-Annotated&lt;/strong&gt; labels inside your data directory.&lt;br&gt;
Each &lt;strong&gt;Auto-Annotated&lt;/strong&gt; file is saved next to its corresponding image data.&lt;br&gt;
This way you can easily check whether the annotation is correct or not.&lt;/p&gt;
&lt;h2&gt;Auto-Annotate YOLOv7 Model&lt;/h2&gt;

&lt;p&gt;Clone Git Repository:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/naseemap47/autoAnnoter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-i&lt;/code&gt;, &lt;code&gt;--dataset&lt;/code&gt; ____ path to Dataset&lt;br&gt;
&lt;code&gt;-m&lt;/code&gt;, &lt;code&gt;--model&lt;/code&gt; ____ path to &lt;strong&gt;YOLOv7&lt;/strong&gt; model (.pt)&lt;br&gt;
&lt;code&gt;-c&lt;/code&gt;, &lt;code&gt;--confidence&lt;/code&gt; ___ Model detection Confidence (0&amp;lt;confidence&amp;lt;1)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 autoAnotYolov7.py -i &amp;lt;PATH_TO_DATA&amp;gt; -m &amp;lt;YOLOv7_MODEL_PATH&amp;gt; -c &amp;lt;MODEL_OBJECT_DETECTION_CONFIDENCE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will run the whole operation, and at the end of the process you will find the &lt;strong&gt;Auto-Annotated&lt;/strong&gt; labels inside your data directory.&lt;br&gt;
Each &lt;strong&gt;Auto-Annotated&lt;/strong&gt; file is saved next to its corresponding image data.&lt;br&gt;
This way you can easily check whether the annotation is correct or not.&lt;/p&gt;
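&lt;p&gt;Under the hood, a &lt;strong&gt;YOLOv7&lt;/strong&gt; .pt checkpoint is typically loaded through the yolov7 repo’s &lt;code&gt;torch.hub&lt;/code&gt; entry point. A hedged sketch (the &lt;code&gt;custom&lt;/code&gt; entry point and the results API are assumptions based on the WongKinYiu/yolov7 hubconf, not this module’s code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hedged sketch: loading a YOLOv7 .pt for inference via torch.hub.
# Verify the 'custom' entry point against the yolov7 repo before relying on it.
import torch

model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'model.pt')
model.conf = 0.75                # detection confidence threshold
results = model('data/1.jpg')    # autoshaped inference on one image
print(results.pandas().xyxy[0])  # boxes, scores and class names
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;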

&lt;p&gt;&lt;strong&gt;Auto-Annotated data accuracy completely depends on your custom model&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So it’s better to check whether the annotations are correct or not.&lt;br&gt;
Let me know your feedback about my &lt;strong&gt;Auto-Annotator&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;Thank you…&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
  </channel>
</rss>
