March 27, 2024, 4:48 a.m. | Philip Lippmann, Matthijs Spaan, Jie Yang

cs.CL updates on arXiv.org

arXiv:2403.17860v1 Announce Type: new
Abstract: Natural Language Processing (NLP) models optimized for predictive performance often make high-confidence errors and are vulnerable to adversarial and out-of-distribution data. Existing work has mainly focused on mitigating such errors using either humans or automated approaches. In this study, we explore the use of large language models (LLMs) for data augmentation as a potential solution to the issue of NLP models making wrong predictions with high confidence during classification tasks. We …
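The truncated abstract does not spell out the augmentation pipeline, but a minimal sketch of LLM-based data augmentation for a text classifier might look as follows. The chat API, the model name `gpt-4o-mini`, and the paraphrase prompt are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: LLM-driven data augmentation for classification.
# Assumptions (not from the paper): an OpenAI-style chat API, the model
# name "gpt-4o-mini", and a simple label-preserving paraphrase prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def augment(text: str, label: str, n: int = 3) -> list[tuple[str, str]]:
    """Generate up to n label-preserving paraphrases of one training example."""
    prompt = (
        f"Paraphrase the following text {n} times, one paraphrase per line, "
        f"without changing its meaning:\n\n{text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,  # higher temperature -> more diverse paraphrases
    )
    lines = [
        line.strip()
        for line in resp.choices[0].message.content.splitlines()
        if line.strip()
    ]
    # Each paraphrase inherits the original label, so the synthetic examples
    # can simply be appended to the classifier's training set.
    return [(para, label) for para in lines[:n]]


if __name__ == "__main__":
    for new_text, new_label in augment("The plot was dull and the acting worse.", "negative"):
        print(new_label, "|", new_text)
```

In this kind of setup, the augmented examples are mixed into the original training data before fine-tuning the classifier; whether that reduces high-confidence errors on adversarial or out-of-distribution inputs is exactly the question the paper studies.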

