April 26, 2023, 12:09 a.m. | Allen Institute for AI | www.youtube.com

Abstract:
Large generative models create the potential for many beneficial use cases, but they also raise significant ethical and social risks of harm. How can we know whether a large model is aligned with our expectations? In this talk, we first present a taxonomy of ethical and social risks developed by a multidisciplinary group of DeepMind researchers. We then turn to evaluation approaches to measure these risks. Surveying existing evaluation approaches, such as automated benchmarking and human adversarial testing, …
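
To make the idea of automated benchmarking concrete, here is a minimal sketch of what such an evaluation loop might look like, assuming a hypothetical generate function and a hypothetical risk_score classifier; it is an illustration only, not the evaluation framework discussed in the talk.

```python
from statistics import mean
from typing import Callable, Dict, List

def run_risk_benchmark(prompts: List[str],
                       generate: Callable[[str], str],
                       risk_score: Callable[[str], float],
                       threshold: float = 0.5) -> Dict[str, float]:
    """Score a model's completions on a fixed prompt set.

    `generate` and `risk_score` are hypothetical placeholders: any
    text-generation function and any classifier mapping text to a
    risk probability in [0, 1] can be plugged in.
    """
    scores = [risk_score(generate(p)) for p in prompts]
    return {
        "mean_risk": mean(scores),
        "flagged_rate": sum(s > threshold for s in scores) / len(scores),
        "num_prompts": float(len(prompts)),
    }

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; they are not real models.
    demo_prompts = ["Describe your day.", "Tell me a joke."]
    echo_model = lambda p: "Echo: " + p
    length_score = lambda text: min(len(text) / 100.0, 1.0)
    print(run_risk_benchmark(demo_prompts, echo_model, length_score))
```

In practice the prompt set, the generation settings, and the risk classifier all encode evaluation choices of their own, which is part of why the talk pairs automated benchmarking with human adversarial testing.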
