Web: http://arxiv.org/abs/2206.08883

June 20, 2022, 1:13 a.m. | Yao Mu, Shoufa Chen, Mingyu Ding, Jianyu Chen, Runjian Chen, Ping Luo

cs.CV updates on arXiv.org

Transformers have achieved great success in learning vision and language
representations that generalize across various downstream tasks. In visual
control, learning transferable state representations that carry over between
different control tasks is important for reducing the number of training
samples required. However, porting Transformers to sample-efficient visual
control remains a challenging and unsolved problem. To this end, we propose a
novel Control Transformer (CtrlFormer) that possesses many appealing benefits
prior arts do not have. Firstly, CtrlFormer jointly learns self-attention
mechanisms between …

