Oct. 18, 2022, 1:12 a.m. | Muhammad AL-Qurishi, Sarah AlQaseemi, Riad Soussi

cs.CL updates on arXiv.org

The effectiveness of the BERT model on multiple linguistic tasks is well documented. Its potential for narrow, specialized domains such as law, however, has not been fully explored. In this paper, we examine how BERT can be applied to the Arabic legal domain, customizing the language model for several downstream tasks and training it from scratch on domain-relevant training and testing datasets. We introduce AraLegal-BERT, a bidirectional encoder Transformer-based model …
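As a rough illustration of the from-scratch setup the abstract describes, a masked-language-model pretraining run with the Hugging Face transformers library might look like the sketch below. The corpus path, tokenizer path, and all hyperparameters are illustrative assumptions, not the authors' actual settings.

# Minimal sketch of pretraining a BERT-style encoder from scratch on
# domain text. Paths and hyperparameters are hypothetical.
from transformers import (
    BertConfig,
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Hypothetical corpus of Arabic legal text, one document per line.
dataset = load_dataset("text", data_files={"train": "arabic_legal_corpus.txt"})

# A domain tokenizer would normally be trained on the same corpus;
# here we assume one already exists at this (hypothetical) path.
tokenizer = BertTokenizerFast.from_pretrained("./aralegal-tokenizer")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# BERT-base-sized configuration; the sizes are assumptions.
config = BertConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
)
model = BertForMaskedLM(config)  # randomly initialized, trained from scratch

# Standard 15% masking for the masked-language-modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./aralegal-bert", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()

After pretraining, the resulting checkpoint can be loaded with a task head (e.g. AutoModelForSequenceClassification) and fine-tuned on each downstream legal task, which matches the customization workflow the paper outlines.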

arxiv, bert, language model, legal, pretrained language model, text
