Aug. 12, 2022, 1:11 a.m. | Hyoje Lee, Yeachan Park, Hyun Seo, Myungjoo Kang

cs.CV updates on arXiv.org

To boost performance, deep neural networks require deeper or wider network structures that incur massive computational and memory costs. To alleviate this issue, self-knowledge distillation regularizes a model by distilling its own internal knowledge. Conventional self-knowledge distillation methods require additional trainable parameters or are data-dependent. In this paper, we propose a simple and effective self-knowledge distillation method using dropout (SD-Dropout). SD-Dropout distills the posterior distributions of multiple models through a …

arxiv cv distillation dropout knowledge
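
The abstract is truncated, but the stated idea of distilling the posterior distributions of multiple dropout-sampled sub-models without extra trainable parameters can be illustrated as a consistency loss between two forward passes made with different dropout masks. The sketch below is a minimal PyTorch illustration under that assumption; the function name `sd_dropout_loss`, the symmetric-KL form, and the `alpha`/`temperature` parameters are hypothetical choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sd_dropout_loss(model, x, target, alpha=1.0, temperature=1.0):
    """Cross-entropy on one dropout sample plus a symmetric-KL consistency
    term between the posteriors of two dropout-sampled sub-models.
    (Hypothetical sketch; not the paper's exact loss.)"""
    model.train()                       # keep dropout layers active
    logits_a = model(x)                 # first sub-model (one dropout mask)
    logits_b = model(x)                 # second sub-model (another mask)

    # Standard supervised loss on one forward pass.
    ce = F.cross_entropy(logits_a, target)

    # Posterior (log-)distributions of the two dropout-sampled sub-models.
    log_p_a = F.log_softmax(logits_a / temperature, dim=-1)
    log_p_b = F.log_softmax(logits_b / temperature, dim=-1)

    # Symmetric KL divergence pulls the two posteriors toward each other.
    kl = 0.5 * (
        F.kl_div(log_p_a, log_p_b, reduction="batchmean", log_target=True)
        + F.kl_div(log_p_b, log_p_a, reduction="batchmean", log_target=True)
    )
    return ce + alpha * kl
```

In a training loop this term would replace the plain cross-entropy objective; both forward passes reuse the same weights, so no additional trainable parameters are introduced, which matches the abstract's motivation.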
