Aug. 16, 2022, 1:13 a.m. | Alexander Wong, Mohammad Javad Shafiee, Saad Abbasi, Saeejith Nair, Mahmoud Famouri

cs.CV updates on arXiv.org

With the growing adoption of deep learning for on-device TinyML applications,
there has been an ever-increasing demand for more efficient neural network
backbones optimized for the edge. Recently, the introduction of attention
condenser networks has resulted in low-footprint, highly efficient,
self-attention neural networks that strike a strong balance between accuracy
and speed. In this study, we introduce a new, faster attention condenser design,
double-condensing attention condensers, that enables more condensed
feature embeddings. We further employ a machine-driven design exploration
strategy …
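For readers unfamiliar with the underlying idea, the sketch below shows a minimal, single-condensing attention condenser block in PyTorch: spatially condense the activations, learn a small joint embedding from the condensed representation, expand it back, and use it to selectively attend to the input. The module name, layer choices, and hyperparameters here are illustrative assumptions only and do not reproduce the authors' released architecture or the double-condensing variant described in the paper.

```python
import torch
import torch.nn as nn

class AttentionCondenser(nn.Module):
    """Illustrative (hypothetical) single-condensing attention condenser.

    Condenses spatial activations, learns a compact joint embedding,
    expands it back to the input resolution, and uses the result to
    selectively attend to the input features.
    """

    def __init__(self, channels: int, mid_channels: int = 8):
        super().__init__()
        # Spatial condensation of the activations.
        self.condense = nn.MaxPool2d(kernel_size=2, stride=2)
        # Condensed joint local / cross-channel embedding.
        self.embed = nn.Sequential(
            nn.Conv2d(channels, mid_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, kernel_size=1),
        )
        # Expand attention values back to the input resolution.
        self.expand = nn.Upsample(scale_factor=2, mode="nearest")
        # Learned scale on the attended features.
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention values derived from the condensed embedding.
        a = torch.sigmoid(self.embed(self.condense(x)))
        a = self.expand(a)
        # Selectively attend to the input activations (residual form).
        return x + self.scale * (x * a)


if __name__ == "__main__":
    block = AttentionCondenser(channels=32)
    y = block(torch.randn(1, 32, 56, 56))
    print(y.shape)  # torch.Size([1, 32, 56, 56])
```

The design intent, as described in the abstract, is that attending via a condensed embedding keeps the self-attention footprint small relative to full pairwise attention; a double-condensing variant would presumably condense the embedding further before expansion, though its exact structure is not specified in the excerpt above.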

