Sept. 27, 2023, 4:34 p.m. | Allen Institute for AI

Allen Institute for AI www.youtube.com

Abstract: While models continue to evolve rapidly in complexity and scale, the way they are evaluated and the quality of the benchmarks used to evaluate them have not changed significantly. This inertia leaves challenges in evaluation and data quality unaddressed and can lead to erroneous conclusions. In this talk, I highlight these challenges in the context of answering information-seeking questions. First, I discuss the failures of standard evaluation techniques such as lexical matching, along with other automated evaluation alternatives. Our study reveals the …
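To make the lexical-matching failure mode concrete, the following is a minimal sketch (not the talk's own code) of SQuAD-style exact-match and token-F1 scoring, the standard automated evaluation for information-seeking QA. The `normalize`, `exact_match`, and `token_f1` helper names are illustrative; the normalization steps (lowercasing, stripping punctuation and articles) follow common practice.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> bool:
    """Strict lexical match after normalization."""
    return normalize(pred) == normalize(gold)

def token_f1(pred: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# A semantically correct but differently phrased answer is penalized:
exact_match("President Joe Biden", "Joe Biden")   # False, though the answer is right
token_f1("President Joe Biden", "Joe Biden")      # partial credit only
```

This gap, where a correct paraphrase scores zero exact match, is exactly the kind of evaluation failure the abstract refers to.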

