Feb. 1, 2024, 12:42 p.m. | Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak

cs.CV updates on arXiv.org

Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications. Yet, the superior performance and modeling flexibility of transformers come at the cost of a severe increase in computation, and several works have therefore proposed methods to reduce this burden. Inspired by Data Multiplexing (DataMUX), a cost-cutting method originally proposed for language models, we propose a novel approach for efficient visual recognition that employs additional …
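The truncated abstract names DataMUX but not its mechanism. For orientation, below is a minimal PyTorch sketch of the general data-multiplexing idea: N inputs are combined into one sequence via instance-specific projections, a shared transformer backbone runs a single forward pass, and instance-specific heads demultiplex the output. The module names, averaging scheme, and MLP heads here are illustrative assumptions about the general technique, not this paper's exact method.

```python
import torch
import torch.nn as nn

class Multiplexer(nn.Module):
    """Combine N token sequences into one multiplexed sequence by
    applying an instance-specific linear map to each input and
    averaging. (Illustrative sketch of the DataMUX idea; the paper's
    exact multiplexing scheme may differ.)"""
    def __init__(self, num_instances: int, dim: int):
        super().__init__()
        self.proj = nn.ModuleList(
            nn.Linear(dim, dim, bias=False) for _ in range(num_instances))

    def forward(self, xs):
        # xs: list of N tensors, each of shape (batch, tokens, dim)
        mixed = [p(x) for p, x in zip(self.proj, xs)]
        return torch.stack(mixed, dim=0).mean(dim=0)  # one shared sequence

class Demultiplexer(nn.Module):
    """Recover N per-instance outputs from the shared representation
    with instance-specific MLP heads (a common demux choice)."""
    def __init__(self, num_instances: int, dim: int):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_instances))

    def forward(self, h):
        # h: (batch, tokens, dim) -> list of N tensors of the same shape
        return [head(h) for head in self.heads]

# Usage: one backbone forward pass now serves N inputs at once.
N, D = 4, 768  # hypothetical multiplexing width and embedding size
mux, demux = Multiplexer(N, D), Demultiplexer(N, D)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(D, nhead=8, batch_first=True), num_layers=2)
xs = [torch.randn(2, 196, D) for _ in range(N)]  # e.g. ViT patch tokens
h = backbone(mux(xs))  # single pass over the multiplexed sequence
ys = demux(h)          # N output sequences, one per original input
```

The cost saving comes from the single backbone pass: the expensive transformer runs once over one sequence instead of N times, while the lightweight mux/demux layers scale linearly with N.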

