Oct. 6, 2023, 11:25 p.m. | Allen Institute for AI

Allen Institute for AI (www.youtube.com)

Abstract: Large language models have permeated our everyday lives and are used in critical decision-making scenarios that can affect millions of people. Despite their impressive progress, model deficiencies may exacerbate harmful biases or lead to catastrophic failures. In this talk, I discuss several important considerations for reliable model deployment that engender user trust. Beyond improved accuracy on new and complex tasks, users want more transparent models that better explain their predictions and are robust to data biases …

