I pretrained 16 language models from scratch with different tokenizers to benchmark the difference. Here are the results. [Research]
Sept. 3, 2023, 12:56 p.m. | /u/Pan000
Machine Learning www.reddit.com
Well, here it is. I spent $8,000 out of my own pocket and two months pretraining from scratch, finetuning, and evaluating 16 language models: 12 small models of 91–124M parameters and 4 medium models of 354M parameters.
[Here is the …
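The post's premise is that tokenizer choice changes how efficiently text is packed into tokens, which in turn affects what a fixed-size model learns per training step. The original post does not show its evaluation code; as a minimal, hypothetical sketch of the kind of metric such comparisons use, here are two toy tokenizers scored on "fertility" (average tokens emitted per word; lower means more compressive). The function names and sample text are illustrative assumptions, not the author's setup:

```python
# Illustrative sketch only -- NOT the author's benchmark. Real comparisons
# would use trained BPE/unigram vocabularies, not these toy tokenizers.

def char_tokenize(text: str) -> list[str]:
    # Character-level: one token per character (worst-case compression).
    return list(text)

def whitespace_tokenize(text: str) -> list[str]:
    # Whitespace split: one token per word (best case, but unbounded vocab).
    return text.split()

def fertility(tokenize, text: str) -> float:
    # Tokens emitted per whitespace-delimited word; lower = more compressive.
    words = text.split()
    return len(tokenize(text)) / len(words)

sample = "Tokenizer choice changes how many tokens a model must process."

print(f"char-level fertility: {fertility(char_tokenize, sample):.2f}")
print(f"whitespace fertility: {fertility(whitespace_tokenize, sample):.2f}")
```

A real BPE tokenizer would land between these two extremes, and a lower fertility means a model sees more text within the same context window and training budget.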