In the world of computer vision, working with precise annotations is key to training machine learning models effectively. LabelMe is a popular tool for creating object annotations in images, but many models—like YOLO—require labels in a specific format. Enter labelme-to-yolo, a new Python library that makes converting LabelMe annotations into YOLO format easier than ever.
What is labelme-to-yolo?
labelme-to-yolo is a Python library that allows you to convert annotation files generated by the LabelMe tool into the label format used by YOLO. With this library, you can automate the conversion process and save time when working with object detection models.
Key Features:
- Fast conversion: Convert LabelMe annotations to YOLO format in seconds.
- Multi-class support: Supports multiple object classes, allowing you to train detection models with various categories.
- Easy integration: Simply install the library and use the conversion command in your project.
How to Use labelme-to-yolo
Once the library is installed, the process is simple. To convert your LabelMe annotations into YOLO format, run the following command, adjusting the paths to your dataset and output folders:
labelme2yolo --source-path ./path/to/labelme/dataset --output-path ./path/to/output/folder
This will convert all annotations in the specified source folder into the YOLO format and save them in the output folder you’ve chosen.
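Conceptually, the conversion normalizes each LabelMe polygon by the image's width and height and writes the points as a single YOLO segmentation line. Here is a minimal sketch of that idea in Python (illustrative only; `to_yolo_line` is a hypothetical helper, not the library's internal API):

```python
def to_yolo_line(shape, class_id, img_w, img_h):
    """Turn one LabelMe-style polygon shape into a YOLO segmentation line:
    the class index followed by x/y pairs normalized to [0, 1]."""
    coords = []
    for x, y in shape["points"]:
        coords.append(f"{x / img_w:.6f}")
        coords.append(f"{y / img_h:.6f}")
    return f"{class_id} " + " ".join(coords)

# A triangle annotated in a 100x200 image
shape = {"label": "cat", "points": [[10, 20], [50, 20], [30, 80]]}
print(to_yolo_line(shape, 0, 100, 200))
# 0 0.100000 0.100000 0.500000 0.100000 0.300000 0.400000
```

The real tool does this for every shape in every LabelMe JSON file, assigning class indices from the set of labels it discovers.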
Output Structure
After the conversion, your output directory will have the following structure:
datasets
├── images
│   ├── train
│   │   ├── img_1.jpg
│   │   ├── img_2.jpg
│   │   ├── img_3.jpg
│   │   ├── img_4.jpg
│   │   └── img_5.jpg
│   └── val
│       ├── img_6.jpg
│       └── img_7.jpg
├── labels
│   ├── train
│   │   ├── img_1.txt
│   │   ├── img_2.txt
│   │   ├── img_3.txt
│   │   ├── img_4.txt
│   │   └── img_5.txt
│   └── val
│       ├── img_6.txt
│       └── img_7.txt
├── labels.txt
├── test.txt
├── train.txt
└── proyect.yml
The tool automatically detects all the classes present in the LabelMe annotations, so you won't need to enter any class labels manually. The labels.txt file lists all the class names, while train.txt and test.txt contain the paths to the training and testing images, respectively. Each image's YOLO annotations are stored in a .txt file of the same name inside the labels/train and labels/val directories.
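For illustration, a labels.txt with two classes and a matching per-image label file might look like this (hypothetical class names and coordinates, shown only to make the format concrete):

```
# labels.txt — one class name per line; the line number is the class index
cat
dog

# labels/train/img_1.txt — class index, then normalized x/y polygon pairs
0 0.100000 0.100000 0.500000 0.100000 0.300000 0.400000
```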
Give It a Star! 🌟
If you found labelme-to-yolo useful, I’d really appreciate it if you could give the repository a star on GitHub! 🌟 It helps others discover the project and shows your support for its development.
Tlaloc-Es / labelme-to-yolo
Convert LabelMe Annotation Format to YOLO Annotation Format for Segmentation
LabelMe to YOLO
Convert the LabelMe format into the YOLOv7 format for instance segmentation.
You can install labelme2yolo from PyPI. This installs the library along with its dependencies.
pip install labelme2yolo
You can also install labelme2yolo from its source code.
git clone https://github.com/Tlaloc-Es/labelme-to-yolo.git
cd labelme-to-yolo
pip install -e .
Usage
First, create your dataset with LabelMe; then run the following command:
labelme2yolo --source-path /labelme/dataset --output-path /another/path
The arguments are:
- --source-path: the folder containing the JSON files produced by LabelMe and their images (both must be in the same folder).
- --output-path: the folder where the converted label files, along with a copy of the images, will be saved following the YOLOv7 folder structure.
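After a run, you can sanity-check the output layout with a few lines of Python. This is a sketch, assuming the folder structure shown earlier in the article; `missing_dataset_dirs` is a hypothetical helper, not part of the library:

```python
from pathlib import Path

def missing_dataset_dirs(root):
    """Return any expected YOLO dataset subdirectories that are absent."""
    required = ["images/train", "images/val", "labels/train", "labels/val"]
    return [p for p in required if not (Path(root) / p).is_dir()]

# An empty result means the layout looks complete:
# missing_dataset_dirs("datasets")
```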