May 19, 2022, 1:10 a.m. | Jie Xiang, Yun Wang, Lifeng An, Haiyang Liu, Zijun Wang, Jian Liu

cs.CV updates on arXiv.org

Although existing monocular depth estimation methods have made great
progress, predicting an accurate absolute depth map from a single image
remains challenging due to the limited modeling capacity of networks and the
scale ambiguity issue. In this paper, we introduce a fully Visual
Attention-based Depth (VADepth) network, in which spatial attention and
channel attention are applied at all stages. By continuously extracting
long-range dependencies of features along the spatial and channel dimensions,
the VADepth network can effectively …
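The abstract describes applying attention along both the spatial and channel dimensions of a feature map. As a rough illustration only (not the authors' VADepth architecture), the sketch below shows the two gating mechanisms in plain NumPy: an SE-style channel gate and a pooled spatial gate. The bottleneck weights `w1`/`w2` are random stand-ins for learned parameters.

```python
import numpy as np

def channel_attention(x, reduction=2):
    # x: (C, H, W). Squeeze spatial dims, then gate each channel (SE-style).
    c = x.shape[0]
    desc = x.mean(axis=(1, 2))                    # (C,) global average pool
    # Hypothetical random weights stand in for a learned two-layer bottleneck.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ desc, 0)             # ReLU
    gate = 1 / (1 + np.exp(-(w2 @ hidden)))       # sigmoid weight per channel
    return x * gate[:, None, None]

def spatial_attention(x):
    # x: (C, H, W). Pool over channels, then gate each spatial location.
    avg = x.mean(axis=0)                          # (H, W) average over channels
    mx = x.max(axis=0)                            # (H, W) max over channels
    gate = 1 / (1 + np.exp(-(avg + mx)))          # sigmoid map over (H, W)
    return x * gate[None, :, :]

feat = np.random.default_rng(1).standard_normal((8, 4, 4))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 4, 4): shape is preserved, only magnitudes are rescaled
```

Both gates multiply the input by factors in (0, 1), so attention rescales feature responses without changing the tensor's shape; a real network would learn the bottleneck weights end-to-end.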

