Sept. 10, 2023, 6:01 p.m. | Machine Learning Street Talk


Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/Prof--Melanie-Mitchell-2-0---AI-Benchmarks-are-Broken-e2959li

Prof. Melanie Mitchell argues that "understanding" in AI is an ill-defined, multidimensional concept: we cannot simply say that an AI system does or does not understand. She advocates rigorously testing AI systems' capabilities with proper experimental methods drawn from cognitive science. Popular intelligence benchmarks often rest on the assumption that if a task requires intelligence when a human performs it, then an AI performing the same task must have human-like general intelligence. But benchmarks should …

