March 26, 2024, 4:51 a.m. | Josef Valvoda, Ryan Cotterell

cs.CL updates on arXiv.org

arXiv:2403.16852v1 Announce Type: new
Abstract: Current legal outcome prediction models - a staple of legal NLP - do not explain their reasoning. However, to employ these models in the real world, human legal actors need to be able to understand their decisions. In the case of common law, legal practitioners reason towards the outcome of a case by referring to past case law, known as precedent. We contend that precedent is, therefore, a natural way of facilitating explainability for legal …
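The abstract is truncated here and does not detail the paper's actual method, but the core idea, explaining an outcome prediction by pointing to similar past cases, can be illustrated with a minimal sketch. The snippet below is a hypothetical toy, not the authors' approach: it uses TF-IDF similarity from scikit-learn to retrieve the most similar precedent candidates for a query case, and all case names and texts are invented.

```python
# Minimal sketch of precedent-based explanation (hypothetical, not the
# paper's method): alongside a predicted outcome, surface the most similar
# past cases as candidate precedents a practitioner could inspect.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of past cases serving as precedent candidates.
precedents = {
    "Case A v. B": "The tenant withheld rent after the landlord failed to repair the heating.",
    "Case C v. D": "The employee was dismissed without notice following a workplace dispute.",
    "Case E v. F": "The landlord was found liable for failing to maintain the premises.",
}

def explain_by_precedent(query_case: str, k: int = 2) -> list[tuple[str, float]]:
    """Return the k past cases most similar to the query, as a proxy explanation."""
    names = list(precedents)
    vectorizer = TfidfVectorizer()
    # Fit on precedents plus the query so all texts share one vocabulary.
    matrix = vectorizer.fit_transform([precedents[n] for n in names] + [query_case])
    # Similarity of the query (last row) to every precedent.
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sorted(zip(names, sims), key=lambda p: p[1], reverse=True)[:k]

if __name__ == "__main__":
    query = "The landlord ignored repeated repair requests and the tenant stopped paying rent."
    for name, score in explain_by_precedent(query):
        print(f"{name}: similarity {score:.2f}")
```

A real system in this vein would replace TF-IDF with learned case embeddings and tie retrieval to the outcome model itself; the sketch only shows the retrieve-similar-cases-as-explanation pattern the abstract gestures at.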
