April 9, 2024, 4:46 a.m. | Duy-Tho Le, Hengcan Shi, Jianfei Cai, Hamid Rezatofighi

cs.CV updates on arXiv.org

arXiv:2404.04629v1 Announce Type: new
Abstract: Diffusion models have recently gained prominence as powerful deep generative models, demonstrating unmatched performance across various domains. However, their potential in multi-sensor fusion remains largely unexplored. In this work, we introduce DifFUSER, a novel approach that leverages diffusion models for multi-modal fusion in 3D object detection and BEV map segmentation. Benefiting from the inherent denoising property of diffusion, DifFUSER is able to refine or even synthesize sensor features in case of sensor malfunction, thereby improving …
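The abstract's core idea is to treat the fused multi-sensor BEV features as something a diffusion model can denoise, so that degraded or missing sensor inputs can be refined or re-synthesized. Below is a minimal, hypothetical sketch of that idea: a small convolutional denoiser over concatenated camera and LiDAR BEV features with a timestep embedding. All module names, shapes, and hyperparameters are illustrative assumptions, not the paper's actual DifFUSER architecture.

```python
# Hypothetical sketch of diffusion-style denoising over fused BEV features.
# This is NOT the paper's DifFUSER implementation; names and sizes are assumptions.
import torch
import torch.nn as nn

class ToyBEVDenoiser(nn.Module):
    """Predicts a noise residual on a fused BEV feature map at timestep t."""
    def __init__(self, cam_ch=64, lidar_ch=64, hidden=128):
        super().__init__()
        fused_ch = cam_ch + lidar_ch
        self.time_embed = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, fused_ch)
        )
        self.net = nn.Sequential(
            nn.Conv2d(fused_ch, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, fused_ch, 3, padding=1),
        )

    def forward(self, fused_bev, t):
        # Broadcast a simple timestep embedding over the spatial dimensions.
        emb = self.time_embed(t.float().view(-1, 1))[:, :, None, None]
        return self.net(fused_bev + emb)

# Toy usage: corrupt the fused features (simulating a degraded sensor) and run
# one refinement step. A real pipeline would iterate over a full noise schedule
# and train with a standard epsilon-prediction diffusion loss.
cam_bev = torch.randn(2, 64, 128, 128)    # camera-derived BEV features (assumed shape)
lidar_bev = torch.randn(2, 64, 128, 128)  # LiDAR-derived BEV features (assumed shape)
fused = torch.cat([cam_bev, lidar_bev], dim=1)

t = torch.tensor([10, 10])                # diffusion timestep per sample
noisy_fused = fused + 0.1 * torch.randn_like(fused)  # simplistic corruption

model = ToyBEVDenoiser()
pred_noise = model(noisy_fused, t)
refined = noisy_fused - 0.1 * pred_noise  # crude single-step refinement
```

The design choice this illustrates is that denoising operates in feature space rather than on raw sensor data, which is what lets the network compensate when one modality is corrupted or absent.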

