Feb. 26, 2024, 5:48 a.m. | Yongqi Li, Mayi Xu, Xin Miao, Shen Zhou, Tieyun Qian

cs.CL updates on arXiv.org

arXiv:2305.14791v2 Announce Type: replace
Abstract: Large language models (LLMs) have made remarkable progress in a wide range of natural language understanding and generation tasks. However, their ability to generate counterfactuals has not been examined systematically. To bridge this gap, we present a comprehensive evaluation framework covering various types of NLU tasks, which spans the key factors that determine LLMs' capability to generate counterfactuals. Based on this framework, we 1) investigate the strengths and weaknesses of LLMs as counterfactual generators, …
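The abstract does not reproduce the paper's prompt templates, so the sketch below is only a rough illustration of what counterfactual generation for an NLU task can look like in practice: an LLM is asked to minimally edit a sentiment-classification example so that its label flips. The `query_llm` helper and the prompt wording are hypothetical placeholders, not the authors' framework.

```python
# Illustrative sketch only; the paper's actual prompts and evaluation setup may differ.

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    raise NotImplementedError("plug in an actual LLM client here")


def build_counterfactual_prompt(text: str, original_label: str, target_label: str) -> str:
    """Ask the model to minimally edit `text` so that its label becomes `target_label`."""
    return (
        "You are generating counterfactual examples for a sentiment classification task.\n"
        f"Original text: {text}\n"
        f"Original label: {original_label}\n"
        f"Rewrite the text with minimal edits so that its label becomes '{target_label}'.\n"
        "Counterfactual text:"
    )


if __name__ == "__main__":
    prompt = build_counterfactual_prompt(
        text="The movie was a delightful surprise from start to finish.",
        original_label="positive",
        target_label="negative",
    )
    print(prompt)  # inspect the prompt that would be sent to the model
    # counterfactual = query_llm(prompt)  # call a real LLM to obtain the rewrite
```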

