Jimmy Guerrero for Voxel51

Originally published at voxel51.com

FiftyOne Computer Vision Tips and Tricks - April 5, 2024

Welcome to our weekly FiftyOne tips and tricks blog where we recap interesting questions and answers that have recently popped up on Slack, GitHub, Stack Overflow, and Reddit.

As an open source project, FiftyOne has a community that is open to all. Everyone is welcome to ask questions, and everyone is welcome to answer them. Continue reading to see the latest questions asked and answers provided!

Wait, what’s FiftyOne?

FiftyOne is an open source machine learning toolset that enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster.

Ok, let’s dive into this week’s tips and tricks!

Annotating and labeling images with FiftyOne

Can FiftyOne be used to annotate or label images?

FiftyOne is not an annotation tool per se, but it does integrate with a variety of popular annotation tools like CVAT, Label Studio, Labelbox and V7.

With these integrations, FiftyOne provides an API to create tasks and jobs, upload data, define label schemas, and download annotations programmatically in Python. For example, let’s create annotation tasks in CVAT:

import fiftyone as fo
import fiftyone.zoo as foz
from fiftyone import ViewField as F

# Step 1: Load your data into FiftyOne

dataset = foz.load_zoo_dataset(
    "quickstart", dataset_name="cvat-annotation-example"
)
dataset.persistent = True

dataset.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)

# Step 2: Locate a subset of your data requiring annotation

# Create a view that contains only high confidence false positive model
# predictions, with samples containing the most false positives first
most_fp_view = (
    dataset
    .filter_labels("predictions", (F("confidence") > 0.8) & (F("eval") == "fp"))
    .sort_by(F("predictions.detections").length(), reverse=True)
)

# Let's edit the ground truth annotations for the sample with the most
# high confidence false positives
sample_id = most_fp_view.first().id
view = dataset.select(sample_id)

# Step 3: Send samples to CVAT

# A unique identifier for this run
anno_key = "cvat_basic_recipe"

view.annotate(
    anno_key,
    label_field="ground_truth",
    attributes=["iscrowd"],
    launch_editor=True,
)
print(dataset.get_annotation_info(anno_key))

# Step 4: Perform annotation in CVAT and save the tasks

Then, once the annotation work is complete, we merge the annotations back into FiftyOne:

import fiftyone as fo

anno_key = "cvat_basic_recipe"

# Step 5: Merge annotations back into FiftyOne dataset

dataset = fo.load_dataset("cvat-annotation-example")
dataset.load_annotations(anno_key)

# Load the view that was annotated in the App
view = dataset.load_annotation_view(anno_key)
session = fo.launch_app(view=view)

# Step 6: Cleanup

# Delete tasks from CVAT
results = dataset.load_annotation_results(anno_key)
results.cleanup()

# Delete run record (not the labels) from FiftyOne
dataset.delete_annotation_run(anno_key)

Using a FiftyOne Plugin to save image files

Community Slack member Victoria asked:

I'm trying to build a custom FiftyOne Plugin and want to execute a bash command that saves images to the file system. Where are the files actually being stored? My command looks like this:

command = f"ffmpeg -skip_frame nokey -i {file_path} -vsync vfr -frame_pts true out-%02d.jpeg"
subprocess.run(command, shell = True, executable="/bin/bash")

If you are using a virtual environment (recommended), the relative output pattern in your command resolves against the working directory of the FiftyOne server process, so by default the frames should be saved somewhere similar to:

/Users/username/miniconda3/envs/fo/lib/python3.10/site-packages/fiftyone/server

For more information on how to use the file explorer capabilities of plugins, check out the FiftyOne Plugin code for IO on GitHub.
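
If you want the frames to land in a location you control, one option is to pass an absolute output pattern to ffmpeg rather than relying on the server's working directory. Here is a minimal sketch; the paths are hypothetical placeholders:

import os
import subprocess

# Hypothetical paths for illustration; in a plugin, file_path would come
# from the sample or execution context you are processing
file_path = "/path/to/video.mp4"
output_dir = "/path/to/frames"
os.makedirs(output_dir, exist_ok=True)

# An absolute output pattern makes the destination explicit, regardless of
# the working directory of the FiftyOne server process
output_pattern = os.path.join(output_dir, "out-%02d.jpeg")
command = (
    f"ffmpeg -skip_frame nokey -i {file_path} -vsync vfr "
    f"-frame_pts true {output_pattern}"
)
subprocess.run(command, shell=True, executable="/bin/bash", check=True)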

Returning JSON when working with the AnnotationResults class

Community Slack member ZKW asked:

It would be great if the class [AnnotationResults](https://docs.voxel51.com/api/fiftyone.core.annotation.html?highlight=annotationresults#fiftyone.core.annotation.AnnotationResults) had an instance method like to_json() that would return JSON that could be converted into AnnotationResults. Is there a possible workaround you can suggest?

There are two options to try here. You can call results.serialize() to get a JSON dict of the results object, or call results.write_json() to write the results to a file. To see the docstring in a notebook, use:

results.write_json?
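
For example, here is a minimal sketch building on the CVAT run above; the dataset name and annotation key are assumed to already exist from that example:

import fiftyone as fo

dataset = fo.load_dataset("cvat-annotation-example")
results = dataset.load_annotation_results("cvat_basic_recipe")

# Option 1: get a JSON-serializable dict of the results object
results_dict = results.serialize()

# Option 2: write the results to a JSON file on disk
results.write_json("/path/to/annotation_results.json")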

Adding a detector model to FiftyOne

Community Slack member John T asked:

Is it possible to use my own detector model and add it to FiftyOne's inference functionality?

Yes! Check out the “Evaluate Object Detections” tutorial to learn how to use FiftyOne to perform an evaluation of your detection model. Also have a look at the “Object detection” section of the User Guide in the Docs.

A rough code example might look something like this:

import fiftyone as fo

# Load your dataset
dataset = fo.load_dataset("your_dataset")

model = ...  # load your detector model here

for sample in dataset:
    # Run inference as you normally do
    results = model.infer(sample.filepath)

    detections = []
    for detection in results:
        # Convert each result to a FiftyOne Detection. FiftyOne expects
        # bounding boxes as [x, y, width, height] in relative coordinates
        # in [0, 1], so normalize pixel values by the image dimensions
        bbox = [nx, ny, nw, nh]
        detections.append(
            fo.Detection(
                label=label,
                bounding_box=bbox,
                confidence=confidence,
            )
        )

    sample["model_predictions"] = fo.Detections(detections=detections)
    sample.save()
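
Once the model_predictions field is populated, you can evaluate it against your ground truth labels as described in the “Evaluate Object Detections” tutorial. A minimal sketch, assuming your dataset also has a "ground_truth" detections field:

results = dataset.evaluate_detections(
    "model_predictions",
    gt_field="ground_truth",
    eval_key="eval_model",
)
results.print_report()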

Adding RGB colors to classes in a point cloud

Community Slack member Nadu asked:

I'm using FiftyOne with a point cloud dataset. I want to add labels and give a color to each class in the point cloud. Is this possible?


Yes it is! You can color by RGB using fiftyone/utilities. You may also want to check out the “Build a 3D Self Driving Dataset from Scratch with OpenAI’s Point-E and FiftyOne” tutorial in the Docs.
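
For example, one way to bake per-class colors into the point cloud itself is to write an RGB value for each point based on its class, then color by RGB in the App's 3D visualizer. A minimal sketch using open3d (not part of FiftyOne; the file paths, class IDs, and palette are hypothetical):

import numpy as np
import open3d as o3d

# Hypothetical inputs: a point cloud and a per-point class ID array
pcd = o3d.io.read_point_cloud("/path/to/points.pcd")
class_ids = np.load("/path/to/class_ids.npy")  # shape (num_points,)

# One RGB color (values in [0, 1]) per class
palette = {
    0: [1.0, 0.0, 0.0],  # class 0 -> red
    1: [0.0, 1.0, 0.0],  # class 1 -> green
    2: [0.0, 0.0, 1.0],  # class 2 -> blue
}

# Assign each point its class color and write a colored copy of the cloud
colors = np.array([palette[int(c)] for c in class_ids])
pcd.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("/path/to/points_rgb.pcd", pcd)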
