June 20, 2022, 1:13 a.m. | Yao Mu, Shoufa Chen, Mingyu Ding, Jianyu Chen, Runjian Chen, Ping Luo

cs.CV updates on arXiv.org

Transformers have achieved great success in learning vision and language representations that generalize across various downstream tasks. In visual control, learning state representations that transfer between different control tasks is important for reducing the number of training samples. However, porting the Transformer to sample-efficient visual control remains a challenging and unsolved problem. To this end, we propose a novel Control Transformer (CtrlFormer), which possesses many appealing benefits that prior arts do not have. Firstly, CtrlFormer jointly learns self-attention mechanisms between …

arxiv cv learning representation state transformer
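The abstract is cut off before it spells out the mechanism, but the general idea it describes, a Transformer that learns visual state representations shared across several control tasks, can be sketched roughly as follows. Everything in the snippet is an illustrative assumption rather than CtrlFormer's actual architecture or API: the class name PolicyTokenViT, the use of one learnable per-task token attended jointly with the image patches, and all dimensions are made up for the example.

```python
# Minimal sketch (not the authors' code): a shared Vision Transformer encoder
# with one learnable token per control task. Each task reads the output at its
# own token as its state representation, so the visual backbone can be reused
# across tasks. All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class PolicyTokenViT(nn.Module):
    def __init__(self, num_tasks, img_size=84, patch=12, dim=128, depth=4, heads=4):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Patchify the image with a strided convolution.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # One learnable token per task, attended jointly with the visual tokens.
        self.policy_tokens = nn.Parameter(torch.zeros(num_tasks, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, images, task_id):
        # images: (B, 3, H, W) -> visual tokens of shape (B, N, dim)
        x = self.patch_embed(images).flatten(2).transpose(1, 2) + self.pos_embed
        # Prepend the task-specific token to the patch sequence.
        tok = self.policy_tokens[task_id].expand(x.size(0), 1, -1)
        out = self.encoder(torch.cat([tok, x], dim=1))
        # The task's state representation is the encoder output at its token.
        return out[:, 0]


if __name__ == "__main__":
    model = PolicyTokenViT(num_tasks=3)
    obs = torch.randn(8, 3, 84, 84)
    state = model(obs, task_id=1)  # (8, 128) state features for task 1
    print(state.shape)
```

Under this reading, the per-task tokens keep task-specific information out of the shared backbone, which is one plausible way a single encoder could be transferred between control tasks without retraining it from scratch; the paper itself should be consulted for how CtrlFormer actually realizes this.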
