Web: http://arxiv.org/abs/2205.06130

May 13, 2022, 1:11 a.m. | Kabir Ahuja, Shanu Kumar, Sandipan Dandapat, Monojit Choudhury

cs.CL updates on arXiv.org

Massively Multilingual Transformer-based Language Models have been observed
to be surprisingly effective at zero-shot transfer across languages, though
performance varies from language to language depending on the pivot
language(s) used for fine-tuning. In this work, we build upon existing
techniques for predicting zero-shot performance on a task by modeling it
as a multi-task learning problem. We jointly train predictive models for
different tasks, which helps us build more accurate predictors for tasks where
we …
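The core idea, jointly training performance predictors so that tasks share information, can be sketched with a toy multi-task ridge regression. Everything below is an illustrative assumption, not the paper's actual method: the features, the number of tasks, and the soft-sharing penalty (pulling each task's weight vector toward the across-task mean) are hypothetical stand-ins for whatever predictors the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each row describes a (pivot, target) language pair with
# d features (e.g. typological similarity, pretraining corpus size); each
# column of Y is the zero-shot score on one downstream task. Data is synthetic.
n, d, n_tasks = 200, 8, 3
X = rng.normal(size=(n, d))
true_shared = rng.normal(size=(d, 1))          # component common to all tasks
true_heads = rng.normal(scale=0.3, size=(d, n_tasks))  # task-specific parts
Y = X @ (true_shared + true_heads) + 0.05 * rng.normal(size=(n, n_tasks))

def fit_multitask(X, Y, lam=1.0, mu=1.0, iters=500, lr=0.01):
    """Multi-task ridge via gradient descent.

    lam: standard ridge penalty on each task's weights.
    mu:  soft-sharing penalty pulling every task's column of W toward the
         column mean, so related tasks inform one another's predictors.
    """
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        resid = X @ W - Y
        mean_col = W.mean(axis=1, keepdims=True)
        grad = X.T @ resid / len(X) + lam * W + mu * (W - mean_col)
        W -= lr * grad
    return W

W = fit_multitask(X, Y)
mse = np.mean((X @ W - Y) ** 2)
print(f"joint train MSE: {mse:.4f}")
```

The soft-sharing term `mu * (W - mean_col)` is the exact gradient of a penalty on each task's deviation from the mean weight vector; setting `mu=0` recovers independent per-task ridge models, which is the baseline that joint training is meant to improve on for data-poor tasks.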

