Goldilocks: Just-Right Tuning of BERT for Technology-Assisted Review. (arXiv:2105.01044v2 [cs.IR] UPDATED)
Jan. 21, 2022, 2:10 a.m. | Eugene Yang, Sean MacAvaney, David D. Lewis, Ophir Frieder
cs.CL updates on arXiv.org arxiv.org
Technology-assisted review (TAR) refers to iterative active learning
workflows for document review in high recall retrieval (HRR) tasks. TAR
research and most commercial TAR software have applied linear models such as
logistic regression to lexical features. Transformer-based models with
supervised tuning are known to improve effectiveness on many text
classification tasks, suggesting their use in TAR. We indeed find that the
pre-trained BERT model reduces review cost by 10% to 15% in TAR workflows
simulated on the RCV1-v2 newswire collection. …
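The TAR workflow the abstract describes — iteratively training a classifier on reviewed documents and prioritizing the rest for review — can be sketched with the linear-model baseline it mentions (logistic regression over lexical features). This is a minimal illustrative loop, not the paper's experimental setup: the toy corpus, seed set, and batch size are all assumptions.

```python
# Minimal sketch of a TAR-style active-learning loop (illustrative only;
# the corpus, seed documents, and batch size are assumptions, not the
# paper's RCV1-v2 setup).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: "relevant" (sports) vs. non-relevant documents.
docs = [
    "football match score goal league",
    "basketball playoffs team win season",
    "tennis open final champion serve",
    "stock market earnings shares fall",
    "election vote parliament policy bill",
    "weather rain forecast storm wind",
    "soccer cup tournament striker goal",
    "bond yields inflation rates economy",
]
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # reviewer judgments (oracle)

X = TfidfVectorizer().fit_transform(docs)    # lexical (TF-IDF) features

reviewed = [0, 3]   # seed set: one relevant, one non-relevant document
batch_size = 2

# Iterate: train on reviewed docs, score the rest, review the top batch.
while len(reviewed) < len(docs):
    clf = LogisticRegression().fit(X[reviewed], labels[reviewed])
    unreviewed = [i for i in range(len(docs)) if i not in reviewed]
    scores = clf.predict_proba(X[unreviewed])[:, 1]
    ranked = [unreviewed[j] for j in np.argsort(-scores)]
    reviewed.extend(ranked[:batch_size])     # relevance-feedback step

found = sum(labels[i] for i in reviewed)
print(f"Reviewed {len(reviewed)} docs, found {found} relevant")
```

Swapping the logistic-regression scorer for a fine-tuned BERT classifier is the substitution the paper evaluates; the surrounding review loop is unchanged.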