Web: http://arxiv.org/abs/2205.05435

May 12, 2022, 1:11 a.m. | Rabab Alkhalifa, Elena Kochkina, Arkaitz Zubiaga

cs.CL updates on arXiv.org

The performance of text classification models can drop over time as the data to
be classified grows more temporally distant from the data used for training,
due to naturally occurring changes in the data such as vocabulary shift. One
solution is to continually label new data and retrain the model, but this is
often too costly to do regularly. This raises important research questions on
the design of text classification models that are …
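As a minimal illustration of the kind of temporal degradation the abstract describes (not the paper's models or data), the sketch below trains a scikit-learn classifier on older labelled examples and measures accuracy separately on later years; the corpus, labels, and years are toy placeholders chosen only to make the example runnable.

```python
# Illustrative sketch: measuring classification accuracy on temporally newer
# test splits to expose performance decay. Data below is invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical (text, label, year) triples standing in for a real corpus.
corpus = [
    ("great phone battery", 1, 2015),
    ("terrible battery life", 0, 2015),
    ("love the camera", 1, 2016),
    ("screen cracked quickly", 0, 2016),
    ("battery lasts all day, awesome", 1, 2020),
    ("app keeps crashing, awful", 0, 2020),
    ("face unlock works great", 1, 2021),
    ("bloatware everywhere, hate it", 0, 2021),
]

# Train only on the older years; hold out the newer years for evaluation.
train = [(t, y) for t, y, yr in corpus if yr <= 2016]
test_by_year = {yr: [(t, y) for t, y, y2 in corpus if y2 == yr]
                for yr in (2020, 2021)}

vec = TfidfVectorizer()
X_train = vec.fit_transform([t for t, _ in train])
clf = LogisticRegression().fit(X_train, [y for _, y in train])

# Per-year accuracy: a drop for later years would indicate temporal decay
# of the model relative to its (older) training data.
for yr, items in test_by_year.items():
    X = vec.transform([t for t, _ in items])
    acc = accuracy_score([y for _, y in items], clf.predict(X))
    print(f"{yr}: accuracy = {acc:.2f}")
```

In practice the same per-year evaluation loop would be run over a real, much larger labelled corpus; the point of the sketch is only the temporal train/test split, not the particular features or classifier.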
