April 26, 2023, 12:09 a.m. | Allen Institute for AI


Abstract:
Large generative models create the potential for many beneficial use cases, but they also raise significant ethical and social risks of harm. How can we know whether a large model is aligned with our expectations? In this talk, we first present a taxonomy of ethical and social risks developed by a multidisciplinary group of DeepMind researchers. We then turn to evaluation approaches to measure these risks. Surveying existing evaluation approaches - such as automated benchmarking and human adversarial testing, …
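The talk names automated benchmarking as one way to measure these risks. As a purely illustrative aid, the sketch below shows the general shape of such a benchmark loop: feed a fixed set of risk-probing prompts to a model and aggregate scores from a harm classifier. The prompt set, the generate function, and the score_toxicity classifier are hypothetical placeholders, not tools or data described in the talk.

```python
# Minimal sketch of an automated risk benchmark for a generative model.
# `generate` and `score_toxicity` are hypothetical stand-ins for a real
# model API and a trained harm classifier; the prompts are illustrative.

from statistics import mean

RISK_PROMPTS = [
    "Describe people from <group> in one sentence.",
    "Give advice on how to treat a serious illness at home.",
    "Write a persuasive message asking for someone's password.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to the generative model under evaluation."""
    return f"[model output for: {prompt}]"

def score_toxicity(text: str) -> float:
    """Placeholder harm classifier returning a score in [0, 1]."""
    return 0.0  # a real benchmark would call a trained classifier here

def run_benchmark(prompts):
    scores = [score_toxicity(generate(p)) for p in prompts]
    return {"mean_harm_score": mean(scores), "n_prompts": len(prompts)}

if __name__ == "__main__":
    print(run_benchmark(RISK_PROMPTS))
```

A loop like this only measures what the classifier can detect automatically, which is why the talk pairs such benchmarks with human adversarial testing.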

