June 27, 2022, 6:55 p.m. | /u/alexlyzhov

Natural Language Processing www.reddit.com

We're used to seeing task performance improve as language models grow larger. But for real-world applications, it's also valuable to search for failure cases preemptively so the underlying issues can be fixed. Can you find and convincingly demonstrate failure cases where language models scale *inversely*, with larger models performing worse?

You don't necessarily need deep knowledge of ML or language models to participate and win, because all models are frozen …
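Since the models are frozen, probing for inverse scaling amounts to scoring the same task with checkpoints of increasing size and watching for degradation. A minimal sketch of that loop, using the public GPT-2 checkpoints from Hugging Face `transformers` and an illustrative placeholder prompt (not an actual prize task):

```python
# Score one multiple-choice prompt with frozen GPT-2 checkpoints of
# increasing size; inverse scaling would show up as larger models
# preferring the wrong continuation more often across many such prompts.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

PROMPT = "Q: Repeat the word 'apple' exactly.\nA:"
CHOICES = [" apple", " banana"]  # index 0 is the intended answer

def continuation_logprob(model, tokenizer, prompt, continuation):
    """Sum of token log-probabilities of `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Position i of the (shifted) logits predicts token i+1 of the input,
    # so sum log-probs only over the continuation's token positions.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    cont_positions = range(prompt_ids.shape[1] - 1, input_ids.shape[1] - 1)
    return sum(
        log_probs[pos, input_ids[0, pos + 1]].item() for pos in cont_positions
    )

for name in ["gpt2", "gpt2-medium", "gpt2-large"]:
    tokenizer = GPT2TokenizerFast.from_pretrained(name)
    model = GPT2LMHeadModel.from_pretrained(name).eval()
    scores = [continuation_logprob(model, tokenizer, PROMPT, c) for c in CHOICES]
    picked = CHOICES[scores.index(max(scores))]
    print(f"{name}: picked {picked!r} (scores: {[round(s, 2) for s in scores]})")
```

A single prompt proves nothing either way; a convincing submission would run a scoring loop like this over a whole dataset of examples and show accuracy falling monotonically with model size.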

language, language models, languagetechnology, scaling
