April 9, 2024, 4:50 a.m. | Poulami Ghosh, Shikhar Vashishth, Raj Dabre, Pushpak Bhattacharyya

cs.CL updates on arXiv.org

arXiv:2404.04530v1 Announce Type: new
Abstract: How does the importance of positional encoding in pre-trained language models (PLMs) vary across languages with different morphological complexity? In this paper, we offer the first study addressing this question, encompassing 23 morphologically diverse languages and 5 different downstream tasks. We choose two categories of tasks: syntactic tasks (part-of-speech tagging, named entity recognition, dependency parsing) and semantic tasks (natural language inference, paraphrasing). We consider language-specific BERT models trained on monolingual corpora for our investigation. The …
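
The abstract is truncated before the method details, but a standard way to probe the importance of positional encoding is an ablation: zero out a model's learned position embeddings and measure the resulting drop on a downstream task. The sketch below illustrates that idea with the Hugging Face transformers API; the multilingual checkpoint is a stand-in assumption for the paper's language-specific BERT models, and the ablation shown is a common probing technique, not confirmed as the authors' actual method.

```python
# Minimal sketch of a positional-encoding ablation probe (illustrative only;
# not confirmed as the paper's method). We zero BERT's learned position
# embedding table so the encoder sees no positional signal, then compare
# outputs or downstream scores against the intact model.
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical stand-in for one of the paper's language-specific BERT models.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# Ablate positional information: BertModel stores position embeddings as an
# nn.Embedding at model.embeddings.position_embeddings.
with torch.no_grad():
    model.embeddings.position_embeddings.weight.zero_()

inputs = tokenizer("Positional encoding ablation test.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token representations now depend only on token identity and segment,
# so any downstream performance drop can be attributed to lost position info.
print(outputs.last_hidden_state.shape)
```

In practice such a probe would be run per language and per task (e.g., the syntactic vs. semantic tasks listed above), with the performance gap between the intact and ablated models serving as a proxy for how much that language relies on positional encoding.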
