Feb. 28, 2024, 5:42 a.m. | Santosh T. Y. S. S, Nina Baumgartner, Matthias Stürmer, Matthias Grabmair, Joel Niklaus

cs.LG updates on arXiv.org

arXiv:2402.17013v1 Announce Type: cross
Abstract: The assessment of explainability in Legal Judgement Prediction (LJP) systems is of paramount importance in building trustworthy and transparent systems, particularly considering the reliance of these systems on factors that may lack legal relevance or involve sensitive attributes. This study delves into the realm of explainability and fairness in LJP models, utilizing Swiss Judgement Prediction (SJP), the only available multilingual LJP dataset. We curate a comprehensive collection of rationales that 'support' and 'oppose' judgement from …
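One common way to use such 'support' rationales when assessing explainability is an occlusion-style faithfulness check: remove the rationale spans from the case facts and measure how much the model's prediction changes. The sketch below is only an illustration of that general idea, not the authors' pipeline; `predict_approval_prob` is a hypothetical stand-in for any LJP classifier (e.g. a multilingual transformer fine-tuned on SJP).

```python
# Minimal sketch (assumed setup, not the paper's implementation): occlusion-based
# faithfulness check for a judgement-prediction classifier.
from typing import Callable, List


def predict_approval_prob(text: str) -> float:
    # Hypothetical placeholder: swap in a real LJP model that maps
    # case facts to P(approval).
    return 0.5


def occlusion_score(facts: str,
                    rationale_spans: List[str],
                    predict: Callable[[str], float] = predict_approval_prob) -> float:
    """Drop in predicted probability after removing rationale spans.

    A large drop for 'support' rationales (or a rise for 'oppose' ones)
    suggests the model actually relies on the annotated legal reasoning.
    """
    baseline = predict(facts)
    occluded = facts
    for span in rationale_spans:
        occluded = occluded.replace(span, " ")
    return baseline - predict(occluded)


if __name__ == "__main__":
    facts = "The appellant filed the complaint after the statutory deadline ..."
    supporting = ["after the statutory deadline"]
    print(f"confidence drop when occluding rationale: {occlusion_score(facts, supporting):.3f}")
```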

