May 26, 2022, 1:12 a.m. | Michael Saxon, Xinyi Wang, Wenda Xu, William Yang Wang

cs.CL updates on arXiv.org

Many believe human-level natural language inference (NLI) has already been
achieved. In reality, modern NLI benchmarks have serious flaws, rendering
progress questionable. Chief among them is the problem of single sentence label
leakage, where spurious correlations and biases in datasets enable the accurate
prediction of a sentence pair relation from only a single sentence, something
that should in principle be impossible. This leakage enables models to cheat
rather than learn the desired reasoning capabilities, and hasn't gone away
since its …

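To make the idea of single sentence label leakage concrete, the sketch below trains a hypothesis-only classifier: it predicts the entailment label without ever seeing the premise, so any accuracy above the roughly 33% chance level must come from spurious cues in a single sentence. This is only an illustration of the phenomenon, not the authors' method, and it assumes the HuggingFace `datasets` library (with the public `snli` dataset), scikit-learn, and an arbitrary 50,000-example training subsample to keep the run quick.

```python
# Hypothesis-only NLI baseline: a minimal sketch of single sentence label
# leakage. The classifier sees ONLY the hypothesis, never the premise, so
# in principle it should score ~0.333 on the 3-way label; spurious
# correlations in the dataset push it well above that.
# Assumptions: `datasets` (HuggingFace) and scikit-learn are installed;
# the 50,000-example subsample is an arbitrary choice for speed.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def hypotheses_and_labels(split, limit=None):
    # Drop examples with no gold label (SNLI marks them with label == -1).
    split = split.filter(lambda ex: ex["label"] != -1)
    if limit is not None:
        split = split.select(range(min(limit, len(split))))
    return split["hypothesis"], split["label"]


snli = load_dataset("snli")
train_text, train_y = hypotheses_and_labels(snli["train"], limit=50_000)
test_text, test_y = hypotheses_and_labels(snli["test"])

# TF-IDF bag-of-words features over the hypothesis alone.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
train_X = vectorizer.fit_transform(train_text)
test_X = vectorizer.transform(test_text)

clf = LogisticRegression(max_iter=1000)
clf.fit(train_X, train_y)

accuracy = accuracy_score(test_y, clf.predict(test_X))
print(f"Hypothesis-only test accuracy: {accuracy:.3f} (chance is ~0.333)")
```

Hypothesis-only baselines of this kind are widely reported to land far above chance on SNLI, which is exactly the cheating the abstract describes: the sentence pair relation is being predicted from one sentence alone.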
