This week's CVPR conference was AWESOME! Here are quick spotlights on some of the papers we found most insightful at this year's show.
📙 Good Reads by Jacob Marks
🔥 CVPR 2024 Paper Spotlight: CoDeF 🔥
Recent progress in video editing/translation has been driven by techniques like Tune-A-Video and FateZero, which utilize text-to-image generative models.
Because a generative model (with inherent randomness) is applied to each frame of the input video, these methods are susceptible to breaks in temporal consistency.
Content Deformation Fields (CoDeF) overcome this challenge by representing any video with a flattened canonical image, which captures the textures in the video, and a deformation field, which describes how each frame in the video is deformed relative to the canonical image. This allows for image algorithms like image translation to be “lifted” to the video domain, applying the algorithm to the canonical image and propagating the effect to each frame using the deformation field.
Through lifting image translation algorithms, CoDeF achieves unprecedented cross-frame consistency in video-to-video translation. CoDeF can also be applied for point-based tracking (even with non-rigid entities like water), segmentation-based tracking, and video super-resolution!
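To make the "lifting" idea concrete, here is a minimal conceptual sketch in PyTorch (illustrative only, not the authors' implementation). It assumes you already have the reconstructed canonical image and per-frame deformation fields expressed as sampling grids, and it uses grid_sample to propagate a single image-space edit to every frame:

```python
import torch
import torch.nn.functional as F

def lift_image_edit_to_video(canonical_rgb, deformation_grids, image_edit_fn):
    """Conceptual CoDeF-style lifting (illustrative sketch, not the official code).

    canonical_rgb:     (1, 3, H, W) canonical image reconstructed from the video
    deformation_grids: (T, H, W, 2) per-frame sampling grids in [-1, 1] mapping each
                       frame's pixels back to canonical-image coordinates
    image_edit_fn:     any 2D image algorithm, e.g. a style-transfer or translation model
    """
    # 1. Apply the image algorithm ONCE, to the canonical image only.
    edited_canonical = image_edit_fn(canonical_rgb)            # (1, 3, H, W)

    # 2. Propagate the edit to every frame by warping the edited canonical
    #    image with that frame's deformation field.
    num_frames = deformation_grids.shape[0]
    edited_frames = F.grid_sample(
        edited_canonical.expand(num_frames, -1, -1, -1),       # reuse the single edit
        deformation_grids,
        mode="bilinear",
        align_corners=True,
    )
    return edited_frames                                        # (T, 3, H, W)
```

Because the generative model only ever sees the single canonical image, every frame inherits exactly the same edit, which is where the cross-frame consistency comes from.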
🔥 CVPR 2024 Paper Spotlight: Depth Anything 🔥
How do you estimate depth using just a single image? Technically, calculating 3D characteristics of objects like depth requires comparing images from multiple perspectives — humans, for instance, perceive depth by merging images from two eyes.
Computer vision applications, however, are often constrained to a single camera. In these scenarios, deep learning models are used to estimate depth from one vantage point. Convolutional neural networks (CNNs) and, more recently, transformers and diffusion models employed for this task typically need to be trained on highly specific data.
Depth Anything revolutionizes relative and absolute depth estimation. Like Meta AI’s Segment Anything, Depth Anything is trained on an enormous quantity and diversity of data (62 million images), giving the model unparalleled generality and robustness for zero-shot depth estimation, as well as state-of-the-art fine-tuned performance on datasets like NYUv2 and KITTI. (The accompanying video shows raw footage, MiDaS (the previous best), and Depth Anything.)
The model uses a Dense Prediction Transformer (DPT) architecture and is already integrated into Hugging Face's Transformers library and FiftyOne!
- arXiv
- Project page
- GitHub
- Depth Anything Transformers Docs
- Monocular Depth Estimation Tutorial
- Depth Anything FiftyOne Integration
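If you want to try Depth Anything yourself, here is a minimal sketch using the Transformers depth-estimation pipeline. The checkpoint name below was one of the official Depth Anything checkpoints on the Hugging Face Hub at the time of writing, so treat it as an assumption and check the model card if it has moved:

```python
from transformers import pipeline
from PIL import Image

# "LiheYoung/depth-anything-small-hf" was one of the official Hub checkpoints
# at the time of writing; swap in whichever Depth Anything variant you prefer.
depth_estimator = pipeline(
    task="depth-estimation",
    model="LiheYoung/depth-anything-small-hf",
)

image = Image.open("street_scene.jpg")   # any RGB image on disk
result = depth_estimator(image)

# The pipeline returns the raw depth tensor plus a ready-to-save PIL depth map.
print(result["predicted_depth"].shape)
result["depth"].save("street_scene_depth.png")
```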
🔥 CVPR 2024 Paper Spotlight: YOLO-World 🔥
Over the past few years, object detection has been cleanly divided into two camps.
- Real-time, closed-vocabulary detection: single-stage models like those from the You-Only-Look-Once (YOLO) family made it possible to detect objects from a preset list of classes in mere milliseconds on a GPU.
- Open-vocabulary detection: transformer-based models like Grounding DINO and OWL-ViT brought open-world knowledge to detection, giving you the power to detect objects from arbitrary text prompts, at the expense of speed.
YOLO-World bridges this gap! YOLO-World uses a YOLO backbone for rapid detection and introduces semantic information via a CLIP text encoder. The two are connected through a new lightweight module called a Re-parameterizable Vision-Language Path Aggregation Network.
What you get is a family of strong zero-shot detection models that can process up to 74 images per second! YOLO-World is already integrated into Ultralytics (along with YOLOv5, YOLOv8, and YOLOv9), and FiftyOne!
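Here is a quick sketch of what the Ultralytics integration looks like: load a YOLO-World checkpoint, set your open-vocabulary classes as plain-text prompts, and run prediction. The checkpoint filename is an assumption on my part; check the Ultralytics docs for the currently published weights:

```python
from ultralytics import YOLOWorld

# "yolov8s-world.pt" is assumed to be one of the published YOLO-World
# checkpoints; consult the Ultralytics docs for current weight names.
model = YOLOWorld("yolov8s-world.pt")

# Open-vocabulary prompting: describe the classes you care about in plain text.
model.set_classes(["person", "bicycle", "traffic cone"])

# Zero-shot detection on a local image, keeping detections above 25% confidence.
results = model.predict("street.jpg", conf=0.25)
results[0].show()   # visualize boxes for the prompted classes
```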
🔥 CVPR 2024 Paper Spotlight: DeepCache 🔥
Diffusion models dominate the discourse around visual genAI these days: Stable Diffusion, Midjourney, DALL-E 3, and Sora are just a few of the diffusion-based models producing breathtaking visuals.
If you’ve ever tried to run a diffusion model locally, you’ve probably seen for yourself how slow these models can be. That's because diffusion models generate by iteratively denoising an image (or latent representation), which requires many sequential forward passes through the model.
DeepCache accelerates diffusion model inference by up to 10x with minimal quality drop-off. The technique is training-free: it leverages the fact that high-level features stay fairly consistent throughout the denoising process, so they can be computed once, cached, and reused in subsequent steps.
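To give a sense of how lightweight the technique is to adopt, here is a sketch based on my recollection of the project's README: you wrap a standard diffusers pipeline with the DeepCache helper and generate as usual. Treat the exact class and parameter names as assumptions and check the repo for the current API:

```python
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper  # from the DeepCache repo (pip install DeepCache)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Wrap the pipeline: high-level U-Net features are cached and reused for
# several consecutive denoising steps instead of being recomputed each time.
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]

helper.disable()
image.save("lighthouse.png")
```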
🔥 CVPR 2024 Paper Spotlight: PhysGaussian 🔥
I’m a sucker for some physics-based machine learning, and this new approach from researchers at UCLA, Zhejiang University, and the University of Utah is pretty insane.
3D Gaussian splatting is a rasterization technique that generates realistic new views of a scene from a set of photos or an input video. It has rapidly risen to prominence because it is simple, trains relatively quickly, and can synthesize novel views in real time.
However, to simulate dynamics (i.e., to synthesize motion), scenes reconstructed with Gaussian splatting previously had to be converted into meshes before physical simulation and final rendering could be performed.
PhysGaussian cuts through these intermediate steps by embedding physical concepts like stress, plasticity, and elasticity into the model itself. At a high level, the model leverages the deep relationships between physical behavior and visual appearance, following Nvidia’s “what you see is what you simulate” (WS2) approach.
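To illustrate the core idea with a toy example (this is purely a conceptual sketch, not the paper's Material Point Method solver): each Gaussian kernel carries physical state alongside its appearance parameters, the simulator updates that state directly, and the very same kernels are then splatted for rendering, with no mesh in between.

```python
import numpy as np

# Toy illustration only: each splat carries appearance AND physical state.
# PhysGaussian itself evolves these kernels with a Material Point Method
# solver that models stress, elasticity, and plasticity; here we simply
# advect them under gravity to show that no mesh conversion is needed.
class PhysicalGaussian:
    def __init__(self, position, covariance, color, opacity, velocity, mass):
        self.position = np.asarray(position, dtype=float)       # kernel center
        self.covariance = np.asarray(covariance, dtype=float)   # 3x3 shape matrix
        self.color = color
        self.opacity = opacity
        self.velocity = np.asarray(velocity, dtype=float)       # physical state
        self.mass = mass

def step(gaussians, dt=1e-3, gravity=np.array([0.0, -9.81, 0.0])):
    """One explicit integration step applied directly to the splats."""
    for g in gaussians:
        g.velocity += gravity * dt
        g.position += g.velocity * dt
        # A full simulator would also update g.covariance from the local
        # deformation gradient so splats stretch and shear realistically.
    return gaussians

# The updated kernels go straight back into a splatting renderer; no
# intermediate mesh is ever built.
```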
Very excited to see where this line of work goes!
🎙️ Good Listens - All About CVPR!
LINK - YouTube
LINK - YouTube
LINK - YouTube
LINK - YouTube
🗓️ Upcoming Events
Check out these upcoming AI, machine learning, and computer vision events! View the full calendar and register for an event.