NASEEM A P

Auto Annotation using ONNX and YOLOv7 model (Object Detection)

Annotation is very tedious work, so I wondered: can we use our custom-trained model (an ONNX model) to annotate our new data?

So I created a Python module that can auto-annotate your dataset using your ONNX model.

I also added a new auto-annotator that uses a YOLOv7 model (.pt)

Link to GitHub Repository:-

https://github.com/naseemap47/autoAnnoter

We can convert other types of models, such as TensorFlow (.pb) or PyTorch (.pth), into ONNX. That's why I chose ONNX to build my auto-annotator module.

Convert To ONNX Model

1. From Tensorflow (.pb)

I am assuming that you are using the TensorFlow Object Detection API for this.

https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html

When exporting, use:-
--input_type image_tensor
NOT
--input_type float_image_tensor
Example:-

python .\exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\my_ssd_resnet50_v1_fpn\pipeline.config --trained_checkpoint_dir .\models\my_ssd_resnet50_v1_fpn\ --output_directory .\exported-models\my_model

pb to ONNX

Follow tensorflow-onnx:-

https://github.com/onnx/tensorflow-onnx

pip install -U tf2onnx
python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 16 --output model.onnx

tf2onnx uses a default ONNX opset of 13. If you need a newer opset, or want to limit your model to an older opset, you can pass the --opset argument to the command. If you are unsure which opset to use, refer to the ONNX operator documentation.
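To confirm which opset your exported model actually uses, you can inspect it with the onnx package. This is a minimal sketch (the helper name and the "model.onnx" path are illustrative, not part of the repo):

```python
def get_opset_version(onnx_path):
    """Return the default-domain opset version of an ONNX model file."""
    import onnx  # imported lazily: only needed when the check is actually run

    model = onnx.load(onnx_path)
    for opset in model.opset_import:
        # the empty string (or "ai.onnx") is the default ONNX operator domain
        if opset.domain in ("", "ai.onnx"):
            return opset.version
    return None

# Usage (path is a placeholder for your exported model):
# get_opset_version("model.onnx")
```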

2. From PyTorch (.pth) [pth to ONNX]

Check out the links below:

https://github.com/danielgatis/rembg/issues/193

https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
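As a rough sketch of the PyTorch route (following the pattern in the tutorial linked above; the input shape, tensor names, and helper name here are illustrative, so adjust them to your own model):

```python
def export_to_onnx(model, onnx_path, input_shape=(1, 3, 640, 640), opset=16):
    """Export a loaded PyTorch model to ONNX (illustrative sketch)."""
    import torch  # imported lazily: torch is only needed when exporting

    model.eval()                       # inference mode: freezes dropout/batchnorm
    dummy = torch.randn(*input_shape)  # dummy input used to trace the forward pass
    torch.onnx.export(
        model, dummy, onnx_path,
        opset_version=opset,
        input_names=["images"],
        output_names=["output"],
    )
```

Note that a .pth file is often just a state_dict, so you usually have to instantiate your model class and call model.load_state_dict(torch.load(path)) before exporting.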

Auto-Annotate — ONNX Model

Clone Git Repository:-

git clone https://github.com/naseemap47/autoAnnoter
cd autoAnnoter

Install Required Libraries:

pip3 install -r requirements.txt

Variables:

-x, --xml _____ for XML annotations
-t, --txt _____ to annotate in (.txt) format
-i, --dataset _ path to dataset
-c, --classes _ path to classes.txt file (names of the object detection classes)
Example for classes.txt:

car
person
book
apple
mobile
bottle
....

-m, --model ________ path to ONNX model
-s, --size _________ size of the images used to train your object detection model
-conf, --confidence _ model detection confidence (0 < confidence < 1)
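The classes.txt file is simply one class name per line, and the annotator maps detection class indices to those names. A tiny helper (hypothetical, not part of the repo) makes the mapping explicit:

```python
def load_classes(lines):
    """Map class index -> class name, skipping blank lines."""
    return {i: name.strip() for i, name in enumerate(l for l in lines if l.strip())}

# In practice the lines would come from open("classes.txt") (path is a placeholder):
classes = load_classes(["car", "person", "book"])
# classes[0] == "car", classes[1] == "person", classes[2] == "book"
```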

For XML Annotations:

python3 autoAnnot.py -x -i <PATH_TO_DATA> -c <PATH_TO_classes.txt> -m <ONNX_MODEL_PATH> -s <SIZE_OF_IMAGE_WHEN_TRAIN_YOUR_MODEL> -conf <MODEL_OBJECT_DETECTION_CONFIDENCE>
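The -x flag produces Pascal VOC-style XML files. As a sketch of what such an annotation contains (the repo's exact fields may differ slightly; this builder is illustrative):

```python
import xml.etree.ElementTree as ET

def voc_xml(filename, width, height, boxes):
    """Build a minimal Pascal VOC annotation.

    boxes: list of (name, xmin, ymin, xmax, ymax) tuples in pixel coordinates.
    """
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"  # assuming 3-channel RGB images
    for name, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(val)
    return ET.tostring(ann, encoding="unicode")

# Example: one "car" box in a 640x480 image
xml_str = voc_xml("img.jpg", 640, 480, [("car", 10, 20, 110, 220)])
```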

For TXT Format Annotations:

python3 autoAnnot.py -t -i <PATH_TO_DATA> -c <PATH_TO_classes.txt> -m <ONNX_MODEL_PATH> -s <SIZE_OF_IMAGE_WHEN_TRAIN_YOUR_MODEL> -conf <MODEL_OBJECT_DETECTION_CONFIDENCE>
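The -t flag writes YOLO-format .txt files: one "class_id x_center y_center width height" line per box, with all coordinates normalized to [0, 1] by the image size. A sketch of that conversion from a pixel-space box:

```python
def to_yolo_line(class_id, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-space box to a normalized YOLO annotation line."""
    x_c = (xmin + xmax) / 2 / img_w   # box center x, normalized by image width
    y_c = (ymin + ymax) / 2 / img_h   # box center y, normalized by image height
    w = (xmax - xmin) / img_w         # box width, normalized
    h = (ymax - ymin) / img_h         # box height, normalized
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A box covering the top-left quarter of a 640x480 image:
line = to_yolo_line(0, 0, 0, 320, 240, 640, 480)
# line == "0 0.250000 0.250000 0.500000 0.500000"
```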

It will run all the operations, and at the end of the process you will get your auto-annotated data inside your dataset directory.
Each annotation file is saved alongside its corresponding image.
This way you can easily check whether the annotation is correct or not.

Auto-Annotate YOLOv7 Model

Clone Git Repository:-

git clone https://github.com/naseemap47/autoAnnoter
cd autoAnnoter

Variables:

-i, --dataset ____ path to Dataset
-m, --model ____ path to YOLOv7 model (.pt)
-c, --confidence ___ Model detection Confidence (0<confidence<1)

python3 autoAnotYolov7.py -i <PATH_TO_DATA> -m <YOLOv7_MODEL_PATH> -c <MODEL_OBJECT_DETECTION_CONFIDENCE>
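The confidence threshold simply discards low-score detections before annotations are written. Conceptually (the detection tuple format here is illustrative, not the repo's actual data structure):

```python
def filter_detections(detections, confidence=0.5):
    """Keep only detections whose score meets the threshold.

    detections: list of (class_id, score, box) tuples -- illustrative format.
    """
    return [d for d in detections if d[1] >= confidence]

# Two detections, one above and one below a 0.5 threshold:
dets = [(0, 0.9, (10, 10, 50, 50)), (1, 0.3, (5, 5, 20, 20))]
kept = filter_detections(dets, confidence=0.5)  # only the 0.9 detection survives
```

Setting the threshold too low fills your dataset with false positives to delete by hand; too high, and real objects go unannotated. Tune it per model.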

It will run all the operations, and at the end of the process you will get your auto-annotated data inside your dataset directory.
Each annotation file is saved alongside its corresponding image.
This way you can easily check whether the annotation is correct or not.

The accuracy of the auto-annotated data depends entirely on your custom model.

So it is better to check whether the annotations are correct or not.
Let me know your feedback about my Auto-Annotator
Thank you…

