March 13, 2024, 4:43 a.m. | Quoc-Vinh Lai-Dang

cs.LG updates on arXiv.org

arXiv:2403.07542v1 Announce Type: cross
Abstract: This survey explores the adaptation of visual transformer models in Autonomous Driving, a transition inspired by their success in Natural Language Processing. Surpassing traditional Recurrent Neural Networks in tasks like sequential image processing and outperforming Convolutional Neural Networks in global context capture, as evidenced in complex scene recognition, Transformers are gaining traction in computer vision. These capabilities are crucial in Autonomous Driving for real-time, dynamic visual scene processing. Our survey provides a comprehensive overview of …
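The abstract contrasts CNNs, whose convolutions aggregate local neighborhoods, with Transformers' global context capture. The mechanism behind that claim is self-attention, where every image patch attends to every other patch in a single layer. A minimal single-head sketch in NumPy (illustrative only; the patch count, embedding size, and random weights are hypothetical and not taken from the paper):

```python
# Minimal single-head self-attention over image patches -- an illustrative
# sketch of the mechanism, not the survey's implementation.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches, Wq, Wk, Wv):
    """patches: (n_patches, d) patch embeddings; Wq/Wk/Wv: (d, d) projections."""
    Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
    # Every patch scores every other patch -> global context in one step,
    # unlike a convolution's fixed local receptive field.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=(64, d))  # e.g. an 8x8 grid of flattened patch embeddings
out = self_attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)  # (64, 16): one context-mixed vector per patch
```

In a full vision transformer this block is stacked with multi-head splits, residual connections, and MLP layers, but the global all-pairs interaction shown here is what the abstract's comparison rests on.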

