April 24, 2023, 12:45 a.m. | Julian Coda-Forno, Kristin Witte, Akshay K. Jagadish, Marcel Binz, Zeynep Akata, Eric Schulz

cs.LG updates on arXiv.org

Large language models are transforming machine learning research while
galvanizing public debate. Understanding not only when these models work well
but also why they fail and misbehave is of great societal relevance. We
propose turning the lens of computational psychiatry, a framework used to
computationally describe and modify aberrant behavior, onto the outputs
produced by these models. We focus on the Generative Pre-Trained Transformer
3.5 (GPT-3.5) and subject it to tasks commonly studied in psychiatry. Our
results …
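The following is a minimal illustrative sketch, not code from the paper: it shows one way to administer a single psychiatric questionnaire item to GPT-3.5 through the OpenAI Python client (assuming openai >= 1.0 and an OPENAI_API_KEY in the environment). The item text and response scale are hypothetical placeholders, not the instrument the authors used.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical questionnaire item and response scale, for illustration only.
    ITEM = "I feel tense or wound up."
    SCALE = "1 = not at all, 2 = somewhat, 3 = moderately, 4 = very much"

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # make the scored answer reproducible
        messages=[
            {"role": "system",
             "content": "Answer the questionnaire item with a single number."},
            {"role": "user", "content": f"{ITEM}\nScale: {SCALE}\nAnswer:"},
        ],
    )

    print(response.choices[0].message.content.strip())

Scoring many such items and comparing the model's totals against human norms is the kind of analysis the computational-psychiatry framing suggests.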
