June 7, 2024, 7:01 a.m. | /u/sam_the_tomato

r/MachineLearning | www.reddit.com

When I heard about the Chinchilla scaling laws a while back, they seemed to suggest that many of the mainstream LLMs were significantly undertrained, and that they should be trained on far more data while keeping their model size fixed.
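For concreteness, my understanding of the Chinchilla rule of thumb is roughly 20 training tokens per parameter, with training compute approximated as C ≈ 6·N·D FLOPs. A quick sketch of what that implies for a fixed compute budget (the constants are the commonly quoted rule of thumb, not the paper's exact fitted values):

    # Rough sketch of the Chinchilla rule of thumb (Hoffmann et al., 2022):
    # training compute is approximated as C ~= 6 * N * D FLOPs, and the
    # compute-optimal allocation works out to roughly 20 tokens per parameter.

    def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
        """Return (params, tokens) that roughly balance a given compute budget."""
        # Solve C = 6 * N * D with D = tokens_per_param * N,
        # i.e. N = sqrt(C / (6 * tokens_per_param)).
        n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
        n_tokens = tokens_per_param * n_params
        return n_params, n_tokens

    if __name__ == "__main__":
        # GPT-3's training budget is commonly quoted as ~3.14e23 FLOPs.
        n, d = chinchilla_optimal(3.14e23)
        print(f"compute-optimal: ~{n:.2e} params, ~{d:.2e} tokens")
        # -> roughly 5e10 params and 1e12 tokens, versus GPT-3's
        #    ~1.75e11 params on ~3e11 tokens (under 2 tokens per parameter),
        #    which is the sense in which it was "undertrained".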

However, I also recently came across the concept of 'double descent', which seems to argue the opposite: that you should just increase the number of parameters in your model as much as your compute budget will allow, even if the …
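For reference, the picture I have of double descent is test error falling, spiking around the point where the model can exactly interpolate the training set, and then falling again as capacity keeps growing. A minimal random-features sweep of that kind (the widths, noise level, and the choice of a minimum-norm fit via the pseudoinverse are illustrative assumptions, and the exact curve shape depends on the setup):

    # Minimal double-descent style sweep: random ReLU features of increasing
    # width, fit with the minimum-norm least-squares solution, test error
    # measured on either side of the interpolation threshold (width = n_train).
    import numpy as np

    rng = np.random.default_rng(0)
    d = 10
    w_true = rng.normal(size=d)  # one fixed target function for train and test

    def make_data(n, noise=0.5):
        X = rng.normal(size=(n, d))
        return X, X @ w_true + noise * rng.normal(size=n)

    def relu_features(X, width, seed=0):
        # Fixed random projection followed by ReLU (a random-features model).
        proj = np.random.default_rng(seed).normal(size=(X.shape[1], width))
        return np.maximum(X @ proj, 0.0)

    X_tr, y_tr = make_data(50)
    X_te, y_te = make_data(1000)

    for width in [5, 10, 25, 50, 75, 100, 200, 500, 1000]:
        Phi_tr, Phi_te = relu_features(X_tr, width), relu_features(X_te, width)
        # pinv gives the minimum-norm solution once width > n_train,
        # which is the regime where the second descent typically shows up.
        w = np.linalg.pinv(Phi_tr) @ y_tr
        print(f"width={width:5d}  test MSE={np.mean((Phi_te @ w - y_te) ** 2):.3f}")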
