Sept. 27, 2023, 4:34 p.m. | Allen Institute for AI

Source: Allen Institute for AI, www.youtube.com

Abstract: As models rapidly grow in complexity and scale, the way they are evaluated and the quality of the benchmarks they are evaluated on have changed little. This inertia leaves challenges in evaluation and data quality unaddressed and can lead to erroneous conclusions. In this talk, I highlight these challenges in the context of answering information-seeking questions. First, I discuss the failures of standard evaluation techniques such as lexical matching, as well as other automated evaluation alternatives. Our study reveals the …
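The lexical-matching failure mode the abstract points to is easy to see in code. Below is a minimal sketch of SQuAD-style exact match and token F1, the standard lexical metrics for extractive QA; the helper names and the example QA pair are illustrative, not from the talk. A prediction that is semantically correct but phrased differently from the gold answer scores zero on both metrics.

```python
# Minimal sketch of lexical-matching QA evaluation (SQuAD-style EM and F1).
# The example pair below is hypothetical, chosen to show the failure mode.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 iff the normalized strings are identical."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall after normalization."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A semantically correct answer phrased differently scores 0 on both metrics.
gold = "the Allen Institute for AI"
prediction = "AI2"  # common shorthand for the same organization
print(exact_match(prediction, gold))  # 0.0
print(token_f1(prediction, gold))     # 0.0
```

Because both metrics reduce correctness to string overlap, any abbreviation, paraphrase, or differently-scoped but valid answer is counted as wrong, which is the kind of erroneous conclusion the abstract warns about.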

Tags: benchmarks, building, challenges, complexity, data quality, evaluation, question answering, scale, systems
