From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency
April 19, 2024, 4:47 a.m. | Xenia Ohmer, Elia Bruni, Dieuwke Hupkes
cs.CL updates on arXiv.org
Abstract: The staggering pace with which the capabilities of large language models (LLMs) are increasing, as measured by a range of commonly used natural language understanding (NLU) benchmarks, raises many questions regarding what "understanding" means for a language model and how it compares to human understanding. This is especially true since many LLMs are exclusively trained on text, casting doubt on whether their stellar benchmark performances are reflective of a true understanding of the problems represented …