Web: http://arxiv.org/abs/2209.05732

Sept. 16, 2022, 1:13 a.m. | Weipeng Huang, Junjie Tao, Changbo Deng, Ming Fan, Wenqiang Wan, Qi Xiong, Guangyuan Piao

cs.LG updates on arXiv.org

This paper revisits an incredibly simple yet exceedingly effective computing paradigm, Deep Mutual Learning (DML). We observe that its effectiveness correlates highly with its excellent generalization quality. In the paper, we interpret the performance improvement of DML from a novel perspective: it is roughly an approximate Bayesian posterior sampling procedure. This also establishes the foundation for applying the Rényi divergence to improve the original DML, as it brings in variance control of the prior (in the context of …
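The abstract describes the classic DML setup, where two peer networks train jointly and each mimics the other's predictions, and suggests replacing the usual KL mimicry term with a Rényi divergence. Below is a minimal, hypothetical PyTorch sketch of that idea; the network architectures, the divergence order alpha, the detached peer targets, and the toy data are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions throughout): two-peer Deep Mutual Learning with
# the KL mimicry term swapped for a Renyi divergence of order alpha.
import torch
import torch.nn as nn
import torch.nn.functional as F

def renyi_divergence(p, q, alpha=2.0, eps=1e-8):
    """D_alpha(p || q) = 1/(alpha - 1) * log sum_i p_i^alpha * q_i^(1 - alpha)."""
    p = p.clamp_min(eps)
    q = q.clamp_min(eps)
    inner = (p.pow(alpha) * q.pow(1.0 - alpha)).sum(dim=-1)
    return (inner.log() / (alpha - 1.0)).mean()

# Two peer classifiers trained jointly (the "mutual" part of DML); sizes are arbitrary.
net1 = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
net2 = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
opt1 = torch.optim.SGD(net1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(net2.parameters(), lr=0.1)

x = torch.randn(32, 20)              # toy batch (assumption)
y = torch.randint(0, 5, (32,))       # toy labels (assumption)

for _ in range(100):
    logits1, logits2 = net1(x), net2(x)
    p1 = F.softmax(logits1, dim=-1)
    p2 = F.softmax(logits2, dim=-1)

    # Each peer fits the labels and additionally matches its (detached) peer's
    # predictions via the Renyi divergence instead of KL.
    loss1 = F.cross_entropy(logits1, y) + renyi_divergence(p2.detach(), p1)
    loss2 = F.cross_entropy(logits2, y) + renyi_divergence(p1.detach(), p2)

    opt1.zero_grad(); loss1.backward(); opt1.step()
    opt2.zero_grad(); loss2.backward(); opt2.step()
```

As alpha approaches 1 the Rényi divergence recovers the KL term of standard DML, so the order alpha acts as the extra knob the abstract alludes to for controlling variance.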

