March 31, 2023, 2:36 p.m. | Edd Gent

IEEE Spectrum spectrum.ieee.org



Building ever larger language models has led to groundbreaking jumps in performance. But it’s also pushing state-of-the-art AI beyond the reach of all but the most well-resourced AI labs. That makes efforts to shrink models down to more manageable sizes more important than ever, say researchers.

In 2020, researchers at OpenAI proposed AI scaling laws that suggested increasing model size led to reliable and predictable improvements in capability. But this trend is quickly putting the cutting edge of AI research …
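The scaling laws referenced here describe loss as a power law in model size. As an illustrative sketch only: the 2020 OpenAI paper (Kaplan et al.) fit cross-entropy loss against parameter count with a form like L(N) = (N_c / N)^α; the constants below are the paper's reported fit, used here purely for illustration.

```python
def loss_from_params(n_params: float,
                     n_c: float = 8.8e13,   # fitted constant (non-embedding params), per Kaplan et al.
                     alpha: float = 0.076   # fitted exponent, per Kaplan et al.
                     ) -> float:
    """Predicted cross-entropy loss under the power law L(N) = (N_c / N)**alpha.

    An illustrative sketch of the scaling-law form, not an authoritative model.
    """
    return (n_c / n_params) ** alpha

# Scaling from ~1B to ~10B parameters yields a small but predictable loss drop:
small_model_loss = loss_from_params(1e9)
large_model_loss = loss_from_params(1e10)
```

The slow exponent is the crux of the article's point: each predictable gain in capability demands a roughly tenfold increase in parameters, which is what prices out all but the best-resourced labs.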
