BERTifying Sinhala -- A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification. (arXiv:2208.07864v1 [cs.CL])
Aug. 17, 2022, 1:11 a.m. | Vinura Dhananjaya, Piyumal Demotte, Surangika Ranathunga, Sanath Jayasena
cs.CL updates on arXiv.org
This research provides the first comprehensive analysis of the performance of pre-trained language models for Sinhala text classification. We evaluate the models on a set of Sinhala text classification tasks, and our analysis shows that among the pre-trained multilingual models that include Sinhala (XLM-R, LaBSE, and LASER), XLM-R is by far the best model for Sinhala text classification. We also pre-train two RoBERTa-based monolingual Sinhala models, which are far superior to the existing pre-trained language models for Sinhala. We show …