March 16, 2022, 2:49 p.m. | /u/KonArtist01

Machine Learning www.reddit.com

There is a lot of research going on in Active Learning, but I feel nothing conclusive is coming out of this field. Many methods struggle to beat the random-sampling baseline.
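To make that baseline concrete, here is a rough, self-contained sketch (my own toy example, not from any particular paper) of a pool-based active learning loop, comparing random acquisition against simple entropy-based uncertainty sampling. Dataset, model, and batch sizes are all placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def entropy(probs):
    """Predictive entropy per sample; higher means more uncertain."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)


def run_active_learning(strategy="random", rounds=10, batch=20, seed=0):
    rng = np.random.default_rng(seed)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=seed)
    X_pool, X_test, y_pool, y_test = train_test_split(
        X, y, test_size=0.5, random_state=seed
    )

    # Start with a small randomly labeled seed set.
    labeled = list(rng.choice(len(X_pool), size=batch, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_pool[labeled], y_pool[labeled])
        if strategy == "random":
            # The random-sampling baseline: label an arbitrary batch.
            picked = rng.choice(unlabeled, size=batch, replace=False)
        else:
            # Uncertainty sampling: label the highest-entropy pool points.
            probs = clf.predict_proba(X_pool[unlabeled])
            picked = np.array(unlabeled)[np.argsort(-entropy(probs))[:batch]]
        picked = {int(i) for i in picked}
        labeled.extend(picked)
        unlabeled = [i for i in unlabeled if i not in picked]

    return clf.score(X_test, y_test)


print("random     :", run_active_learning("random"))
print("uncertainty:", run_active_learning("uncertainty"))
```

The complaint in the thread is essentially that, on real deep learning problems, the "uncertainty" branch often fails to beat the "random" branch by any meaningful margin.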

For example, there was this promising paper on using dropout for uncertainty estimation:
[https://arxiv.org/pdf/1506.02142.pdf](https://arxiv.org/pdf/1506.02142.pdf)
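For context, here is a minimal sketch of the MC-dropout idea from that paper (Gal & Ghahramani, arXiv:1506.02142) as I understand it: keep dropout active at test time, run several stochastic forward passes, and treat the spread of the predictions as an uncertainty estimate. The model, shapes, and sample counts below are placeholders.

```python
import torch
import torch.nn as nn

# Toy classifier with a dropout layer that will stay stochastic at test time.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)


def mc_dropout_predict(model, x, n_samples=30):
    """Mean softmax prediction and predictive entropy over MC dropout passes."""
    model.train()  # .train() keeps dropout on; beware of batchnorm in real models
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                  # (n_samples, batch, classes)
    mean = probs.mean(dim=0)               # averaged prediction
    uncertainty = -(mean * (mean + 1e-12).log()).sum(dim=-1)  # entropy per input
    return mean, uncertainty


x = torch.randn(8, 20)                     # dummy batch of pool points
mean, uncertainty = mc_dropout_predict(model, x)
print(uncertainty)                         # higher entropy -> candidate for labeling
```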

But when people tried to use it, it just did not perform well. Since then, I am not sure whether DL methods can properly bootstrap themselves and identify …

Tags: active learning, learning, machinelearning
