Web: http://arxiv.org/abs/2209.07080

Sept. 16, 2022, 1:11 a.m. | Ehsan Amid, Rohan Anil, Christopher Fifty, Manfred K. Warmuth

cs.LG updates on arXiv.org

In this work, we propose a novel approach for layerwise representation
learning of a trained neural network. In particular, we form a Bregman
divergence based on the layer's transfer function and construct an extension of
the original Bregman PCA formulation by incorporating a mean vector and
normalizing the principal directions with respect to the geometry of the local
convex function around the mean. This generalization allows exporting the
learned representation as a fixed layer with a non-linearity. As an application …
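For context (background not taken from the abstract itself): a Bregman divergence generated by a strictly convex, differentiable function F is D_F(x, y) = F(x) - F(y) - <∇F(y), x - y>. Choosing F(x) = ½‖x‖² makes D_F the squared Euclidean distance, and mean-centered Bregman PCA then reduces to ordinary PCA. The NumPy sketch below illustrates only that Euclidean special case; the function name and all details are illustrative, not from the paper, whose method replaces this flat geometry with the local geometry of F around the mean.

import numpy as np

def euclidean_pca(X, k):
    """Ordinary PCA as the Euclidean special case of Bregman PCA.

    With F(x) = 0.5 * ||x||^2, the Bregman divergence is the squared
    Euclidean distance, the mean vector is the ordinary sample mean, and
    the principal directions come from the SVD of the centered data.
    X: (n, d) data matrix; k: number of principal directions to keep.
    """
    mean = X.mean(axis=0)            # the mean vector of the formulation
    Xc = X - mean                    # center the data around the mean
    # Right singular vectors of the centered data are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                     # (d, k) orthonormal directions
    codes = Xc @ W                   # low-dimensional representation
    recon = mean + codes @ W.T       # reconstruction from the codes
    return mean, W, codes, recon

# Example: project 100 random 10-d points onto their top-3 directions.
X = np.random.randn(100, 10)
mean, W, codes, recon = euclidean_pca(X, k=3)
print(codes.shape, recon.shape)      # (100, 3) (100, 10)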

Tags: applications, arxiv, distillation, knowledge, representation, representation learning
