JABER and SABER: Junior and Senior Arabic BERt. (arXiv:2112.04329v2 [cs.CL] UPDATED)
Jan. 7, 2022, 2:10 a.m. | Abbas Ghaddar, Yimeng Wu, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang
cs.CL updates on arXiv.org arxiv.org
Language-specific pre-trained models have proven to be more accurate than
multilingual ones in a monolingual evaluation setting, and Arabic is no exception.
However, we found that previously released Arabic BERT models were
significantly under-trained. In this technical report, we present JABER and
SABER, Junior and Senior Arabic BERt respectively, our pre-trained language
model prototypes dedicated to Arabic. We conduct an empirical study to
systematically evaluate the performance of models across a diverse set of
existing Arabic NLU tasks. Experimental results show …