BERTifying Sinhala -- A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification. (arXiv:2208.07864v2 [cs.CL] UPDATED)
Aug. 18, 2022, 1:11 a.m. | Vinura Dhananjaya, Piyumal Demotte, Surangika Ranathunga, Sanath Jayasena
cs.CL updates on arXiv.org arxiv.org
This research provides the first comprehensive analysis of the performance of
pre-trained language models for Sinhala text classification. We evaluate a set
of Sinhala text classification tasks, and our analysis shows that among the
pre-trained multilingual models that include Sinhala (XLM-R, LaBSE, and
LASER), XLM-R is by far the best model for Sinhala text classification. We also
pre-train two RoBERTa-based monolingual Sinhala models, which are far superior
to the existing pre-trained language models for Sinhala. We show …
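The abstract's approach — fine-tuning a pre-trained multilingual model such as XLM-R for sequence classification — can be sketched with the Hugging Face `transformers` API. This is a minimal, offline illustration: it builds a tiny randomly initialized XLM-R classification head rather than downloading weights, and the three-label task size is a hypothetical choice. Real use would load the actual checkpoint with `XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=...)` and then fine-tune on labeled Sinhala text.

```python
import torch
from transformers import XLMRobertaConfig, XLMRobertaForSequenceClassification

# Tiny config so the sketch runs without downloading pretrained weights.
# Hyperparameters here are illustrative, not those of xlm-roberta-base.
config = XLMRobertaConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=3,  # hypothetical 3-class Sinhala classification task
)
model = XLMRobertaForSequenceClassification(config)
model.eval()

# Dummy token ids standing in for a tokenized Sinhala sentence
# (batch of 1, sequence length 8).
input_ids = torch.randint(0, config.vocab_size, (1, 8))

with torch.no_grad():
    logits = model(input_ids=input_ids).logits

print(logits.shape)  # one logit per class: torch.Size([1, 3])
```

Fine-tuning then amounts to minimizing cross-entropy over these logits on task data, typically via `Trainer` or a standard PyTorch training loop.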