May 26, 2022, 1:12 a.m. | Soumya Sanyal, Zeyi Liao, Xiang Ren

cs.CL updates on arXiv.org

Transformers have been shown to perform deductive reasoning over a logical rulebase containing rules and statements written in natural-language English. While this progress is promising, it is currently unclear whether these models indeed perform logical reasoning by understanding the underlying logical semantics of the language. To this end, we propose RobustLR, a suite of evaluation datasets that evaluate the robustness of these models to minimal logical edits in rulebases and to some standard logical equivalence conditions. In …
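As a rough illustration of the two kinds of checks mentioned in the abstract, the sketch below shows, in Python, what a logical-equivalence rewrite (here, a contrapositive) and a minimal logical edit (here, negating a rule's antecedent) of a natural-language rule might look like. The rule strings, helper function, and expected behavior are hypothetical examples and are not taken from the RobustLR data format, which is specified in the paper.

```python
# Hypothetical sketch only: RobustLR's actual perturbation types and data format
# are defined in the paper; the rule text, helper, and labels here are illustrative.

def contrapositive(antecedent: str, consequent: str) -> str:
    """Rewrite 'If A, then B' as the logically equivalent 'If not B, then not A'."""
    return (f"If it is not the case that {consequent}, "
            f"then it is not the case that {antecedent}.")

# A toy rulebase entry (hypothetical).
rule = ("the dog is big", "the dog is strong")   # "If the dog is big, then the dog is strong."
fact = "The dog is big."
conclusion = "The dog is strong."                # entailed by the fact and the rule

# Logical-equivalence condition: replacing the rule with its contrapositive should
# not change a model's prediction for the conclusion.
equivalent_rule = contrapositive(*rule)

# Minimal logical edit: a small surface change, such as negating the antecedent,
# alters the logical semantics, so a robust model's prediction should change.
edited_rule = "If the dog is not big, then the dog is strong."

print(equivalent_rule)
print(edited_rule)
```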

arxiv reasoning robustness
