DEV Community

NASEEM A P

Auto Annotation using ONNX and YOLOv7 model (Object Detection)

Annotation is tedious work, so I wondered: can we use our own custom-trained model (an ONNX model) to annotate new data?

So I created a Python module that can auto-annotate your dataset using your ONNX model.

I also added a new auto-annotator that uses a YOLOv7 model (.pt).

Link to GitHub Repository:-
https://github.com/naseemap47/autoAnnoter

We can convert other model formats, such as TensorFlow (.pb) or PyTorch (.pth), into ONNX. That's why I chose the ONNX format for my auto-annotator module.

Convert To ONNX Model

  1. From TensorFlow (.pb): I am assuming that you are using the Object Detection API for this.

https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html

At export time, use:-
--input_type image_tensor
NOT
--input_type float_image_tensor
Example:-

python .\exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\my_ssd_resnet50_v1_fpn\pipeline.config --trained_checkpoint_dir .\models\my_ssd_resnet50_v1_fpn\ --output_directory .\exported-models\my_model

pb to ONNX
Follow tensorflow-onnx:-
https://github.com/onnx/tensorflow-onnx

pip install -U tf2onnx
python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 16 --output model.onnx
tf2onnx defaults to ONNX opset 13. If you need a newer opset, or want to limit your model to an older one, pass the --opset argument as shown above. If you are unsure which opset to use, refer to the ONNX operator documentation.

  2. From PyTorch (.pth) [pth to ONNX]: check out the links below:

How to convert custom .pth Model to .onnx? · Issue #193 · danielgatis/rembg (github.com)

(optional) Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime (pytorch.org)

Auto-Annotate — ONNX Model
Clone Git Repository:-

git clone https://github.com/naseemap47/autoAnnoter
cd autoAnnoter
Install Required Libraries:

pip3 install -r requirements.txt
Variables:

-x, --xml ______ for XML annotations
-t, --txt ______ to annotate in (.txt) format
-i, --dataset __ path to dataset
-c, --classes __ path to classes.txt file (names of object detection classes)
Example for classes.txt :

car
person
book
apple
mobile
bottle
....
-m, --model ____ path to ONNX model
-s, --size _____ size of the images used to train your object detection model
-conf, --confidence __ model detection confidence (0 < confidence < 1)
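The (.txt) option produces YOLO-style label files: one line per box, holding the class index and a normalized center/size. A minimal sketch of that conversion (the pixel box and image size below are hypothetical):

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel box (xmin, ymin, xmax, ymax) to a YOLO txt line:
    class_id x_center y_center width height, all normalized to [0, 1]."""
    xmin, ymin, xmax, ymax = box
    xc = (xmin + xmax) / 2 / img_w
    yc = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# e.g. a "car" (class 0) detected at (100, 200)-(300, 400) in a 640x640 image
print(to_yolo_line(0, (100, 200, 300, 400), 640, 640))
# → 0 0.312500 0.468750 0.312500 0.312500
```

The class index on each line corresponds to the line number of the class name in your classes.txt file.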
For XML Annotations:

python3 autoAnnot.py -x -i <dataset-path> -c <classes.txt> -m <model.onnx> -s <image-size> -conf <confidence>
For TXT Format Annotations:

python3 autoAnnot.py -t -i <dataset-path> -c <classes.txt> -m <model.onnx> -s <image-size> -conf <confidence>
It will do all the work, and at the end of the process you will find the auto-annotated data inside your dataset folder.
Each annotation file is placed next to its corresponding image.
This way you can easily check whether the annotation is correct or not.
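The XML option writes one annotation file per image in the usual detection layout, which is the PASCAL VOC convention. Below is a sketch of building such a file with the standard library; the exact fields the tool emits are an assumption here, but these are the VOC basics most training pipelines read:

```python
import xml.etree.ElementTree as ET

def voc_annotation(img_name, img_w, img_h, objects):
    """Build a PASCAL VOC-style annotation tree.
    objects: list of (label, xmin, ymin, xmax, ymax) in pixels."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = img_name
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(img_w)
    ET.SubElement(size, "height").text = str(img_h)
    ET.SubElement(size, "depth").text = "3"
    for label, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        box = ET.SubElement(obj, "bndbox")
        ET.SubElement(box, "xmin").text = str(xmin)
        ET.SubElement(box, "ymin").text = str(ymin)
        ET.SubElement(box, "xmax").text = str(xmax)
        ET.SubElement(box, "ymax").text = str(ymax)
    return ET.ElementTree(root)

# One image with a single detected car
tree = voc_annotation("img_001.jpg", 640, 480, [("car", 100, 120, 300, 360)])
tree.write("img_001.xml")
```

Opening the .xml next to its image in a tool like labelImg is an easy way to eyeball the boxes.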

Auto-Annotate YOLOv7 Model
Clone Git Repository:-

git clone https://github.com/naseemap47/autoAnnoter
Variables:

-i, --dataset _____ path to dataset
-m, --model _______ path to YOLOv7 model (.pt)
-c, --confidence __ model detection confidence (0 < confidence < 1)

python3 autoAnotYolov7.py -i <dataset-path> -m <model.pt> -c <confidence>
It will do all the work, and at the end of the process you will find the auto-annotated data inside your dataset folder.
Each annotation file is placed next to its corresponding image.
This way you can easily check whether the annotation is correct or not.

The accuracy of the auto-annotated data depends entirely on your custom model.
So it's better to check whether each annotation is correct or not.
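One quick sanity check is to confirm that every image actually got a label file; images the model produced no detections for are the first ones to review by hand. A small sketch with the standard library (the directory name and extensions here are assumptions about your layout):

```python
from pathlib import Path

def missing_annotations(dataset_dir, label_ext=".txt"):
    """Return image files that have no annotation file next to them."""
    image_exts = {".jpg", ".jpeg", ".png"}
    root = Path(dataset_dir)
    return [
        p for p in sorted(root.iterdir())
        if p.suffix.lower() in image_exts
        and not p.with_suffix(label_ext).exists()
    ]

# Example usage against a local "dataset" folder, if present
root = Path("dataset")
if root.is_dir():
    for img in missing_annotations(root):
        print("no annotation for", img.name)
```

For XML annotations, pass `label_ext=".xml"` instead.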
Let me know your feedback on my auto-annotator.
Thank you…
