Deep Reference Priors: What is the best way to pretrain a model? (arXiv:2202.00187v2 [stat.ML] UPDATED)
Web: http://arxiv.org/abs/2202.00187
June 17, 2022, 1:12 a.m. | Yansong Gao, Rahul Ramesh, Pratik Chaudhari
stat.ML updates on arXiv.org
What is the best way to exploit extra data -- be it unlabeled data from the same task, or labeled data from a related task -- to learn a given task? This paper formalizes the question using the theory of reference priors. Reference priors are objective, uninformative Bayesian priors that maximize the mutual information between the task and the weights of the model. Such priors enable the task to maximally affect the Bayesian posterior, e.g., reference priors depend upon the …
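The core construction can be illustrated on a toy problem. For a finite parameter set, the reference prior is the distribution over parameters that maximizes the mutual information between parameter and data, which is exactly the capacity-achieving input distribution of the corresponding channel and can be computed with the standard Blahut-Arimoto iteration. The sketch below is a generic illustration of that classical definition, not code from the paper; the parameter grid and Bernoulli likelihood are chosen arbitrarily for the example.

```python
import numpy as np

# Toy reference prior: find pi(theta) maximizing I(theta; X) for a single
# Bernoulli observation X with success probability theta, theta on a grid.
thetas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # candidate parameter values
lik = np.stack([1 - thetas, thetas], axis=1)   # p(x | theta), x in {0, 1}

pi = np.full(len(thetas), 1.0 / len(thetas))   # start from the uniform prior
for _ in range(200):                           # Blahut-Arimoto updates
    marg = pi @ lik                            # marginal p(x) under current prior
    # KL divergence of each likelihood p(x|theta) from the marginal p(x)
    kl = np.sum(lik * np.log(lik / marg), axis=1)
    pi = pi * np.exp(kl)                       # multiplicative update
    pi /= pi.sum()                             # renormalize to a distribution

# Mutual information I(theta; X) in nats at the fixed point
mutual_info = float(pi @ np.sum(lik * np.log(lik / (pi @ lik)), axis=1))
print(np.round(pi, 3), round(mutual_info, 3))
```

Consistent with the abstract's description, the maximizing prior puts its mass on the parameters the data can distinguish best (here the extreme values of the grid), i.e., the prior lets the observations "maximally affect" the posterior.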
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY