MED-TEX: Transferring and Explaining Knowledge with Less Data from Pretrained Medical Imaging Models. (arXiv:2008.02593v2 [cs.CV] UPDATED)
Jan. 12, 2022, 2:10 a.m. | Thanh Nguyen-Duc, He Zhao, Jianfei Cai, Dinh Phung
cs.LG updates on arXiv.org
Deep learning methods usually require a large amount of training data and
lack interpretability. In this paper, we propose a novel knowledge distillation
and model interpretation framework for medical image classification that
jointly solves the above two issues. Specifically, to address the data-hungry
issue, a small student model is learned with less data by distilling knowledge
from a cumbersome pretrained teacher model. To interpret the teacher model and
assist the learning of the student, an explainer module is introduced to …
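The abstract's core mechanism — training a small student on less data by matching the softened outputs of a large pretrained teacher — is the standard knowledge-distillation objective. Below is a minimal numpy sketch of that generic objective (Hinton-style temperature-softened KL plus hard-label cross-entropy), not MED-TEX itself; the function names, temperature `T`, and weight `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Blend of (a) KL divergence between softened teacher and student
    # predictions and (b) ordinary cross-entropy on the hard labels.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 as in Hinton et al. (2015).
    kd = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1) * T**2
    # Hard-label cross-entropy for the student at T=1.
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * kd + (1 - alpha) * hard))
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label term remains, which is what lets the student train on fewer labeled images: the teacher's soft targets carry extra inter-class information per example.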