Aug. 11, 2023, 6:50 a.m. | André Peter Kelm, Lucas Schmidt, Tim Rolff, Christian Wilms, Ehsan Yaghoubi, Simone Frintrop

cs.CV updates on arXiv.org arxiv.org

In this work, we parallelize high-level features in deep networks so that class-specific features can be selectively skipped or selected, reducing inference costs. This is challenging for most deep learning methods, which have only a limited ability to focus efficiently and effectively on selected class-specific features without retraining. We propose a serial-parallel hybrid architecture with serial generic low-level features and parallel high-level features. This accounts for the fact that many high-level features are class-specific rather than generic, and it connects to recent neuroscientific findings that …
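The serial-parallel split described above can be sketched in a minimal, illustrative way: a shared serial trunk computes generic low-level features once, and independent per-class high-level branches are then evaluated only for the classes of interest, skipping the rest at inference time. All names, shapes, and the single-layer stand-ins below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def trunk(x, w_low):
    """Serial generic low-level features, shared by every class branch."""
    return np.maximum(x @ w_low, 0.0)  # one ReLU layer as a stand-in

def branch(h, w_high):
    """Class-specific high-level feature branch reduced to a scalar score."""
    return float(np.maximum(h @ w_high, 0.0).sum())

# Hypothetical dimensions: input, low-level, high-level, number of classes.
d_in, d_low, d_high, n_classes = 8, 16, 4, 10
w_low = rng.normal(size=(d_in, d_low))
w_highs = [rng.normal(size=(d_low, d_high)) for _ in range(n_classes)]

def infer(x, selected_classes):
    """Run the serial trunk once, then only the selected parallel branches."""
    h = trunk(x, w_low)
    return {c: branch(h, w_highs[c]) for c in selected_classes}

x = rng.normal(size=d_in)
scores = infer(x, selected_classes=[2, 7])  # skips 8 of the 10 branches
```

Because each high-level branch is independent of the others, the cost of inference scales with the number of selected classes rather than the total number of classes, which is the cost reduction the abstract points to.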

