March 6, 2024, 5:48 a.m. | Yuxin Zi, Hariram Veeramani, Kaushik Roy, Amit Sheth

cs.CL updates on arXiv.org

arXiv:2312.09932v2 Announce Type: replace
Abstract: Natural language understanding (NLU) using neural network pipelines often requires additional context that is not solely present in the input data. Through Prior research, it has been evident that NLU benchmarks are susceptible to manipulation by neural models, wherein these models exploit statistical artifacts within the encoded external knowledge to artificially inflate performance metrics for downstream tasks. Our proposed approach, known as the Recap, Deliberate, and Respond (RDR) paradigm, addresses this issue by incorporating three …

