Aug. 19, 2022, 1:11 a.m. | Alessandro Ingrosso, Sebastian Goldt

stat.ML updates on arXiv.org

Exploiting data invariances is crucial for efficient learning in both
artificial and biological neural circuits. Understanding how neural networks
can discover representations that harness the underlying symmetries of their
inputs is thus a central question in machine learning and neuroscience.
Convolutional neural networks, for example, were designed to exploit
translation symmetry, and their capabilities triggered the first wave of deep
learning successes. However, learning convolutions directly from
translation-invariant data with a fully-connected network has so far proven
elusive. Here, we …
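The translation symmetry that convolutional architectures build in via weight sharing can be stated as an equivariance property: convolving a shifted input gives the same result as shifting the convolved output. A minimal numerical sketch (illustrative only, not from the paper, using a periodic 1-D signal):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # input signal
w = rng.normal(size=5)    # convolutional filter

def circular_conv(x, w):
    """Circular (periodic) convolution via explicit index shifts."""
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

shift = 3
x_shifted = np.roll(x, shift)

# Equivariance: conv(shift(x)) == shift(conv(x))
lhs = circular_conv(x_shifted, w)
rhs = np.roll(circular_conv(x, w), shift)
assert np.allclose(lhs, rhs)
```

A fully-connected layer has no such constraint on its weights, which is why discovering this structure from translation-invariant data alone is the nontrivial problem the abstract refers to.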
