Bootstrapping Multilingual Semantic Parsers using Large Language Models. (arXiv:2210.07313v1 [cs.CL])
cs.CL updates on arXiv.org
Despite the cross-lingual generalization demonstrated by pre-trained multilingual
models, the translate-train paradigm of transferring English datasets across
multiple languages remains the key ingredient for training task-specific
multilingual models. However, for many low-resource languages, a reliable
translation service is available only at the cost of significant amounts of
human-annotated translation pairs. Further, translation services for
low-resource languages may remain brittle due to the domain mismatch
between the task-specific input text and the general-purpose text used to
train the translation models. …