April 1, 2024, 4:47 a.m. | Shadi Manafi, Nikhil Krishnaswamy

cs.CL updates on arXiv.org

arXiv:2403.20056v1 Announce Type: new
Abstract: Multilingual Language Models (MLLMs) exhibit robust cross-lingual transfer capabilities, i.e., the ability to leverage information acquired in a source language and apply it to a target language. These capabilities find practical applications in well-established Natural Language Processing (NLP) tasks such as Named Entity Recognition (NER). This study investigates how effectively knowledge transfers from a source language to a target language, particularly when the input test set is perturbed. We evaluate on …
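The truncated abstract does not specify the perturbation scheme, but a minimal sketch of the general setup it describes, evaluating NER robustness under noisy test inputs, might look like the following. The `perturb_tokens` function and its parameters are illustrative assumptions, not the paper's method:

```python
import random

def perturb_tokens(tokens, swap_prob=0.1, seed=0):
    """Randomly swap adjacent characters inside tokens to simulate a
    noisy NER test set. Illustrative only: the exact perturbation used
    in the paper is not given in the truncated abstract."""
    rng = random.Random(seed)
    perturbed = []
    for tok in tokens:
        if len(tok) > 3 and rng.random() < swap_prob:
            # Swap two interior characters, keeping first/last intact
            i = rng.randrange(1, len(tok) - 2)
            tok = tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:]
        perturbed.append(tok)
    return perturbed

# Example: a Spanish NER test sentence; gold labels stay aligned because
# the perturbation never splits or merges tokens.
tokens = ["Gabriel", "García", "Márquez", "nació", "en", "Aracataca", "."]
print(perturb_tokens(tokens, swap_prob=0.5))
```

A cross-lingual transfer evaluation would then compare an MLLM's NER scores on the clean versus perturbed target-language test set after fine-tuning only on source-language data.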

