Visualize the Google Research Objectron dataset, including camera poses, sparse point clouds, and surface characterization.
This example visualizes the Objectron dataset, a rich collection of object-centric video clips accompanied by AR session metadata. With high-resolution images, object pose, camera pose, point-cloud, and surface-plane information available for each sample, the visualization offers a comprehensive view of each object from various angles. Additionally, the dataset provides manually annotated 3D bounding boxes, enabling precise object localization and orientation.
Logging and visualizing with Rerun
The visualizations in this example were created with the following Rerun code:
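All snippets below assume an initialized SDK; a minimal sketch (the application id here is an assumption, not necessarily the one the example uses):

import rerun as rr

rr.init("rerun_example_objectron", spawn=True)  # spawn a local viewer and connect to it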
Timelines
For each processed frame, all data sent to Rerun is associated with the two timelines time and frame.
rr.set_time_sequence("frame", sample.index)
rr.set_time_seconds("time", sample.timestamp)
Video
A pinhole camera is used to embed the video in the 3D view; its pose and projection are logged with the Transform3D and Pinhole archetypes.
rr.log(
    "world/camera",
    rr.Transform3D(translation=translation, rotation=rr.Quaternion(xyzw=rot.as_quat())),
)
rr.log(
    "world/camera",
    rr.Pinhole(
        resolution=[w, h],
        image_from_camera=intrinsics,
        camera_xyz=rr.ViewCoordinates.RDF,
    ),
)
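The intrinsics passed as image_from_camera is the usual 3×3 pinhole projection matrix. A sketch of how it could be assembled, assuming focal lengths fx, fy and principal point cx, cy are given in pixels:

import numpy as np

# Standard pinhole intrinsics: focal lengths on the diagonal,
# principal point in the last column.
intrinsics = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

The camera_xyz=rr.ViewCoordinates.RDF argument declares the camera's axis convention as X-right, Y-down, Z-forward.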
The frames of the input video are logged as ImageEncoded objects to the world/camera entity.
rr.log("world/camera", rr.ImageEncoded(path=sample.image_path))
Sparse point clouds
Sparse point clouds from the ARFrame are logged with the Points3D archetype to the world/points entity.
rr.log("world/points", rr.Points3D(positions, colors=[255, 255, 255, 255]))
Annotated bounding boxes
The bounding boxes annotated in the ARFrame are logged as Boxes3D, specifying each box's center, size, rotation, and label.
rr.log(
    f"world/annotations/box-{bbox.id}",
    rr.Boxes3D(
        half_sizes=0.5 * np.array(bbox.scale),
        centers=bbox.translation,
        rotations=rr.Quaternion(xyzw=rot.as_quat()),
        colors=[160, 230, 130, 255],
        labels=bbox.category,
    ),
    timeless=True,
)
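rot above is a SciPy rotation built from the annotation's rotation matrix; a sketch, assuming the rotation is stored as nine row-major floats:

import numpy as np
from scipy.spatial.transform import Rotation as R

# Rebuild the 3x3 rotation matrix from the annotation's flat float list.
rot = R.from_matrix(np.asarray(bbox.rotation).reshape(3, 3))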
Setting up the default blueprint
The default blueprint is configured with the following code:
blueprint = rrb.Horizontal(
    rrb.Spatial3DView(origin="/world", name="World"),
    rrb.Spatial2DView(origin="/world/camera", name="Camera", contents=["/world/**"]),
)
In particular, we want to reproject the points and the 3D annotation boxes into the 2D camera view corresponding to the pinhole logged at "/world/camera". This is achieved by setting the view's contents to the entire "/world/**" subtree, which includes the pinhole transform and the image data, as well as the point cloud and the 3D annotation boxes.
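For the layout to take effect, the blueprint has to be handed to the viewer; a minimal sketch, assuming rrb is the usual alias for rerun.blueprint:

import rerun.blueprint as rrb  # the `rrb` alias used above

# Send the layout to the active viewer.
rr.send_blueprint(blueprint)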
Join us on GitHub
rerun-io / rerun
Visualize streams of multimodal data. Fast, easy to use, and simple to integrate. Built in Rust using egui.
Build time-aware visualizations of multimodal data
Use the Rerun SDK (available for C++, Python and Rust) to log data like images, tensors, point clouds, and text. Logs are streamed to the Rerun Viewer for live visualization or to file for later use.
A short taste
import rerun as rr # pip install rerun-sdk
rr.init("rerun_example_app")
rr.connect() # Connect to a remote viewer
# rr.spawn() # Spawn a child process with a viewer and connect
# rr.save("recording.rrd") # Stream all logs to disk
# Associate subsequent data with 42 on the “frame” timeline
rr.set_time_sequence("frame", 42)
# Log colored 3D points to the entity at `path/to/points`
rr.log("path/to/points", rr.Points3D(positions, colors=colors
…