Aug. 17, 2022, 1:12 a.m. | Vedant Nanda, Ayan Majumdar, Camila Kolling, John P. Dickerson, Krishna P. Gummadi, Bradley C. Love, Adrian Weller

cs.CV updates on arXiv.org

An evaluation criterion for safe and trustworthy deep learning is how well
the invariances captured by representations of deep neural networks (DNNs) are
shared with humans. We identify challenges in measuring these invariances.
Prior work used gradient-based methods to generate identically represented
inputs (IRIs), i.e., inputs that have identical representations (at a given
layer of a neural network) and thus capture the invariances of that
network. One necessary criterion for a network's invariances to align with
human perception is for …
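
The abstract cuts off above, but the IRI idea itself is concrete enough to sketch. Below is a minimal, hypothetical PyTorch sketch of gradient-based IRI generation, not the authors' exact procedure: the toy model, the layer choice, the optimizer, and all hyperparameters are illustrative assumptions. Starting from noise, we optimize an input so that its activations at a chosen layer match those of a reference input.

```python
# Hypothetical sketch of gradient-based IRI generation (not the paper's exact
# method): optimize a noise input until its activations at a chosen layer
# match those of a reference input.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a DNN; any feature extractor with a chosen layer works.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)
model.eval()
layer = model[3]  # representation layer whose invariances we probe

# Capture the layer's activations with a forward hook.
acts = {}
layer.register_forward_hook(lambda mod, inp, out: acts.update(out=out))

x_ref = torch.rand(1, 3, 32, 32)  # reference input
with torch.no_grad():
    model(x_ref)
target = acts["out"].detach()     # reference representation at the layer

# Candidate IRI, initialized as noise and optimized directly.
x = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for step in range(500):
    opt.zero_grad()
    model(x)                                      # refreshes acts["out"]
    loss = (acts["out"] - target).pow(2).mean()   # match representations
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)                        # keep x a valid image
```

If the optimization converges, x and x_ref are (near-)identically represented at that layer, yet x may look nothing like x_ref to a human; whether humans share the invariance such IRIs expose is exactly the alignment question the abstract raises.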

