Oct. 7, 2022, 1:17 a.m. | Shounak Paul, Arpan Mandal, Pawan Goyal, Saptarshi Ghosh

cs.CL updates on arXiv.org

Natural Language Processing in the legal domain has benefited hugely from the
emergence of Transformer-based Pre-trained Language Models (PLMs) pre-trained
on legal text. There exist PLMs trained over European and US legal text, most
notably LegalBERT. However, with the rapidly increasing volume of NLP
applications on Indian legal documents, and the distinguishing characteristics
of Indian legal text, it has become necessary to pre-train LMs over Indian
legal text as well. In this work, we introduce transformer-based PLMs
pre-trained over a …
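
The abstract is truncated here, so the paper's exact training setup is not shown. As a rough illustration of the general technique it describes, the sketch below shows continued masked-language-model pre-training of a BERT-style model on a legal-text corpus using the Hugging Face transformers and datasets libraries; the base model, corpus file name, and hyperparameters are placeholders, not the authors' actual configuration.

    # Minimal sketch (assumptions: generic MLM continued pre-training,
    # placeholder model/corpus names; not the paper's actual recipe).
    from datasets import load_dataset
    from transformers import (
        AutoTokenizer,
        AutoModelForMaskedLM,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    # Start from an existing BERT-style checkpoint (placeholder).
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

    # Hypothetical plain-text corpus of Indian legal documents, one per line.
    dataset = load_dataset("text", data_files={"train": "indian_legal_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    # Standard MLM objective: randomly mask 15% of tokens and predict them.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    args = TrainingArguments(
        output_dir="legal-mlm-sketch",
        per_device_train_batch_size=16,
        num_train_epochs=1,
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=tokenized,
        data_collator=collator,
    )
    trainer.train()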

arxiv legal pre-training text training transformers
