March 5, 2024, 2:41 p.m. | Olivia T. Zahn, Thomas L. Daniel, J. Nathan Kutz

cs.LG updates on arXiv.org

arXiv:2403.00974v1 Announce Type: new
Abstract: We characterize the connectivity structure of feed-forward, deep neural networks (DNNs) using network motif theory. To address whether a particular motif distribution is characteristic of the training task or a function of the DNN, we compare the connectivity structure of 350 DNNs, each trained from different randomly initialized parameters to simulate a bio-mechanical flight control system. We develop and implement algorithms for counting second- and third-order motifs and quantify their significance using Z-scores. The DNNs are …
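The abstract does not spell out the counting or significance procedure, so the following is only a minimal sketch of the general idea: convert a trained feed-forward DNN into a directed graph by thresholding its weight matrices, count three-node motifs, and Z-score each count against degree-matched random graphs. The `dnn_to_digraph` helper, the thresholding rule, and the use of networkx's triad census as a stand-in for the paper's second- and third-order motif counters are all assumptions, not the authors' implementation.

```python
# Sketch only: motif-significance analysis of a feed-forward DNN's
# connectivity graph, using networkx's triad census as a stand-in
# for the paper's motif-counting algorithms.
import numpy as np
import networkx as nx

def dnn_to_digraph(weights, threshold=0.0):
    """Build a directed graph from a list of layer weight matrices.

    weights[k] has shape (n_out, n_in); an edge (k, i) -> (k+1, j) is kept
    when |W[j, i]| > threshold. Node ids are (layer, unit) tuples.
    """
    G = nx.DiGraph()
    for k, W in enumerate(weights):
        n_out, n_in = W.shape
        for j in range(n_out):
            for i in range(n_in):
                if abs(W[j, i]) > threshold:
                    G.add_edge((k, i), (k + 1, j))
    return G

def triad_zscores(G, n_random=100, seed=0):
    """Z-score each 3-node motif count against degree-matched random graphs."""
    rng = np.random.default_rng(seed)
    real = nx.triadic_census(G)              # counts of the 16 triad types
    din = [d for _, d in G.in_degree()]
    dout = [d for _, d in G.out_degree()]
    rand_counts = {m: [] for m in real}
    for _ in range(n_random):
        # Degree-preserving null model: directed configuration model,
        # collapsed to a simple DiGraph with self-loops removed.
        R = nx.directed_configuration_model(
            din, dout, seed=int(rng.integers(1 << 31)))
        R = nx.DiGraph(R)
        R.remove_edges_from(nx.selfloop_edges(R))
        for m, c in nx.triadic_census(R).items():
            rand_counts[m].append(c)
    z = {}
    for m in real:
        mu, sd = np.mean(rand_counts[m]), np.std(rand_counts[m])
        z[m] = (real[m] - mu) / sd if sd > 0 else 0.0
    return z
```

Under this reading, a motif whose |Z| is well above ~2 appears significantly more (or less) often in the trained network than expected from its degree sequence alone.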
