Web: http://arxiv.org/abs/2205.01972

May 5, 2022, 1:12 a.m. | Yuki Tatsunami, Masato Taki

cs.LG updates on arXiv.org

In recent computer vision research, the advent of the Vision Transformer
(ViT) has rapidly revolutionized architectural design: ViT achieved
state-of-the-art image classification performance using self-attention
borrowed from natural language processing, and MLP-Mixer achieved competitive
performance using simple multi-layer perceptrons. Meanwhile, several studies
have suggested that carefully redesigned convolutional neural networks (CNNs)
can reach performance comparable to ViT without resorting to these new ideas.
Against this background, there is growing interest in what inductive bias
is …
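The excerpt above contrasts self-attention and MLP-based token mixing; given the paper's "lstm" tag, the sketch below illustrates, purely as an assumption and not the authors' architecture, how patch tokens could instead be mixed by a recurrent layer. The module name LSTMTokenMixer, the hidden size, and the shapes are illustrative choices.

```python
# Hypothetical sketch: mixing flattened image-patch tokens with a
# bidirectional LSTM instead of self-attention. Not the paper's code.
import torch
import torch.nn as nn


class LSTMTokenMixer(nn.Module):
    """Mixes a sequence of patch tokens with a bidirectional LSTM."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Bidirectional LSTM runs over the token dimension; a linear layer
        # projects the concatenated forward/backward states back to `dim`.
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim), e.g. flattened image patches.
        mixed, _ = self.lstm(self.norm(x))
        # Residual connection, as in Transformer-style blocks.
        return x + self.proj(mixed)


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 192)  # 14x14 patches, 192-dim embeddings
    block = LSTMTokenMixer(dim=192)
    print(block(tokens).shape)  # torch.Size([2, 196, 192])
```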

arxiv classification cv deep image lstm
