Nov. 7, 2022, 2:15 a.m. | Yasmen Wahba, Nazim Madhavji, John Steinbacher

cs.CL updates on arXiv.org

Pre-trained language models (PLMs) have shown great success in many Natural
Language Processing (NLP) tasks, including text classification. Because these
models require minimal to no feature engineering, PLMs are becoming the de
facto choice for any NLP task. However, for domain-specific corpora (e.g.,
financial, legal, and industrial), fine-tuning a pre-trained model for a
specific task has been shown to provide a performance improvement. In this
paper, we compare the performance of four different PLMs …

