April 1, 2024, 4:47 a.m. | Shadi Manafi, Nikhil Krishnaswamy

cs.CL updates on arXiv.org arxiv.org

arXiv:2403.20056v1 Announce Type: new
Abstract: Multilingual Language Models (MLLMs) exhibit robust cross-lingual transfer capabilities, i.e., the ability to leverage information acquired in a source language and apply it to a target language. These capabilities find practical applications in well-established Natural Language Processing (NLP) tasks such as Named Entity Recognition (NER). This study investigates how effectively knowledge transfers from a source language to a target language, particularly when the input test set is perturbed. We evaluate on …
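To make the setup concrete, here is a minimal sketch (not the authors' code) of this kind of robustness evaluation: a multilingual NER pipeline is run over a target-language sentence twice, once clean and once with a simple character-swap perturbation, and we measure how many predicted entity spans survive. The checkpoint name, the example sentence, and the perturbation are illustrative assumptions; the paper's actual models and perturbations may differ.

```python
# Minimal sketch of a cross-lingual NER robustness check (illustrative only).
# Assumptions: a Hugging Face multilingual NER checkpoint (name hypothetical)
# and a simple adjacent-character-swap perturbation of the test input.
import random

from transformers import pipeline


def perturb(sentence: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap adjacent alphabetic characters at the given rate.

    The swap preserves string length, so the character offsets of entity
    spans remain comparable between the clean and perturbed runs.
    """
    rng = random.Random(seed)
    chars = list(sentence)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


# Hypothetical multilingual NER checkpoint; any XLM-R-style NER model works.
ner = pipeline("ner", model="Davlan/xlm-roberta-base-ner-hrl",
               aggregation_strategy="simple")

# Target-language (here: German) test sentence; the model is assumed to have
# been fine-tuned on source-language NER data.
sentence = "Angela Merkel besuchte Paris im Juli."

clean = {(e["start"], e["end"], e["entity_group"]) for e in ner(sentence)}
noisy = {(e["start"], e["end"], e["entity_group"]) for e in ner(perturb(sentence))}

survived = len(clean & noisy) / max(len(clean), 1)
print(f"{survived:.0%} of predicted entity spans survive the perturbation")
```

Comparing (start, end, entity_group) triples works here only because the adjacent-swap perturbation keeps the string length unchanged; a perturbation that inserts or deletes characters would require span alignment instead.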
