April 4, 2024, 4:47 a.m. | Julia Rozanova, Marco Valentino, André Freitas

cs.CL updates on arXiv.org

arXiv:2404.02622v1 Announce Type: new
Abstract: Rigorous evaluation of the causal effects of semantic features on language model predictions can be hard to achieve for natural language reasoning problems. However, this form of analysis is so desirable from both an interpretability and a model evaluation perspective that it is valuable to investigate specific patterns of reasoning with enough structure and regularity to identify and quantify systematic reasoning failures in widely-used models. In this vein, we pick a portion of the NLI …
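The abstract frames the problem as measuring the causal effect of a semantic feature on a model's NLI predictions via controlled interventions. As a rough illustration only (not the paper's method), the sketch below flips a single semantic feature, the quantifier in the premise, while holding the rest of the example fixed, and compares the label distribution of an off-the-shelf NLI model before and after the intervention. The model name `roberta-large-mnli` and the example sentences are assumptions chosen for illustration.

```python
# Minimal sketch of a single-feature intervention on an NLI model.
# Assumptions: roberta-large-mnli as the probe target; toy quantifier example.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed off-the-shelf NLI model, not from the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def nli_probs(premise: str, hypothesis: str) -> dict:
    """Return the model's label distribution for a premise/hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    return {model.config.id2label[i]: probs[i].item() for i in range(probs.shape[0])}

# Intervention: swap the quantifier ("All" -> "None") and keep everything else fixed,
# then compare how the predicted label distribution shifts.
base = nli_probs("All of the students passed the exam.",
                 "Some students passed the exam.")
intervened = nli_probs("None of the students passed the exam.",
                       "Some students passed the exam.")

for label in base:
    print(f"{label}: {base[label]:.3f} -> {intervened[label]:.3f}")
```

Aggregating the shift in predicted labels over many such minimally different pairs is one simple way to quantify whether the model responds systematically to the intervened feature.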

