April 19, 2024, 4:42 a.m. | Thomas Monninger, Vandana Dokkadi, Md Zafar Anwar, Steffen Staab

cs.LG updates on arXiv.org

arXiv:2404.11803v1 Announce Type: cross
Abstract: Autonomous driving requires an accurate representation of the environment. A strategy toward high accuracy is to fuse data from several sensors. Learned Bird's-Eye View (BEV) encoders can achieve this by mapping data from individual sensors into one joint latent space. For cost-efficient camera-only systems, this provides an effective mechanism to fuse data from multiple cameras with different views. Accuracy can further be improved by aggregating sensor information over time. This is especially important in monocular …
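To make the fusion idea from the abstract concrete, below is a minimal, self-contained PyTorch sketch. It is not the paper's architecture: the `ToyBEVFusion` module, the learned `lift` layer standing in for a geometric camera-to-BEV view transform, the simple mean over cameras, and the moving-average temporal blend are all illustrative assumptions about how multi-camera features can land in one joint BEV latent space and be aggregated over time.

```python
import torch
import torch.nn as nn


class ToyBEVFusion(nn.Module):
    """Illustrative multi-camera BEV fusion with temporal aggregation.

    A sketch only: real BEV encoders use camera intrinsics/extrinsics to
    lift image features into the BEV grid; here a learned linear map
    stands in for that geometry.
    """

    def __init__(self, feat_dim=32, img_hw=(16, 16), bev_hw=(8, 8)):
        super().__init__()
        # Per-camera image encoder (weights shared across cameras).
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        # Placeholder "view transform": maps one camera's flattened
        # image features onto the flattened BEV grid.
        self.lift = nn.Linear(img_hw[0] * img_hw[1], bev_hw[0] * bev_hw[1])
        self.bev_hw = bev_hw

    def forward(self, images, prev_bev=None, momentum=0.5):
        # images: (num_cams, 3, H, W) -- one timestep of a camera rig.
        n = images.shape[0]
        feats = self.encoder(images)            # (N, C, H, W)
        flat = feats.flatten(2)                 # (N, C, H*W)
        bev = self.lift(flat)                   # (N, C, Hb*Wb)
        bev = bev.view(n, -1, *self.bev_hw)     # (N, C, Hb, Wb)
        # Fuse all cameras into one joint latent BEV representation.
        bev = bev.mean(dim=0)                   # (C, Hb, Wb)
        # Temporal aggregation: exponential moving average over timesteps
        # (one simple stand-in for learned temporal fusion).
        if prev_bev is not None:
            bev = momentum * prev_bev + (1.0 - momentum) * bev
        return bev


# Usage: run a six-camera rig over three timesteps on synthetic images.
model = ToyBEVFusion()
bev_state = None
for t in range(3):
    cams = torch.randn(6, 3, 16, 16)
    bev_state = model(cams, prev_bev=bev_state)
print(bev_state.shape)  # torch.Size([32, 8, 8])
```

The recurrent `prev_bev` argument is where temporal aggregation happens in this sketch; carrying BEV state across frames is what lets a camera-only system accumulate evidence over time, which the abstract notes is especially important in the monocular setting.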
