Feb. 27, 2024, 5:50 a.m. | Itay Etelis, Avi Rosenfeld, Abraham Itzhak Weinberg, David Sarne

cs.CL updates on arXiv.org arxiv.org

arXiv:2402.16700v1 Announce Type: new
Abstract: In recent years, transformer models have revolutionized Natural Language Processing (NLP), achieving exceptional results across various tasks, including Sentiment Analysis (SA). As such, current state-of-the-art approaches for SA predominantly rely on transformer models alone, achieving impressive accuracy levels on benchmark datasets. In this paper, we show that the key to further improving the accuracy of such ensembles for SA is to include not only transformers, but also traditional NLP models, despite the inferiority of the …
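The truncated abstract does not specify how the transformer and traditional models are combined. As a rough illustration of the general idea (not the paper's method), the sketch below ensembles a pretrained transformer sentiment pipeline with a TF-IDF + logistic regression baseline by averaging their positive-class probabilities; the model choice, the toy training data, and the averaging rule are all assumptions made here for demonstration.

```python
# Minimal sketch of a transformer + traditional-NLP ensemble for sentiment analysis.
# NOTE: the paper's actual ensemble scheme is unknown from the abstract; this is
# an illustrative soft-voting combination, not a reproduction of the authors' method.
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder training set for the traditional model (illustrative only).
train_texts = [
    "great movie, loved it",
    "absolutely terrible",
    "what a fantastic film",
    "boring and badly acted",
]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Traditional NLP model: TF-IDF features + logistic regression.
traditional = make_pipeline(TfidfVectorizer(), LogisticRegression())
traditional.fit(train_texts, train_labels)

# Transformer model: off-the-shelf sentiment-analysis pipeline (default checkpoint).
transformer = pipeline("sentiment-analysis")

def ensemble_positive_prob(text: str) -> float:
    """Average the positive-class probabilities of both models (soft voting)."""
    trad_p = traditional.predict_proba([text])[0][1]          # P(positive) from sklearn
    out = transformer(text)[0]                                # e.g. {'label': 'POSITIVE', 'score': 0.99}
    trans_p = out["score"] if out["label"] == "POSITIVE" else 1.0 - out["score"]
    return (trad_p + trans_p) / 2.0

if __name__ == "__main__":
    print(ensemble_positive_prob("A surprisingly delightful film."))
```

In practice, soft voting is only one option; weighted averaging or a stacked meta-classifier over the individual model outputs are common alternatives for this kind of heterogeneous ensemble.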

