Aug. 19, 2023, 4:14 a.m. | /u/Azlarks

Machine Learning www.reddit.com

He built an LM architecture inspired by the Transformer and RWKV. Its validation loss reached 1.69 after a few minutes of character-level training on Shakespeare, for a 0.4 million-parameter model. That beats the numbers NanoGPT's GitHub page reports for its small model (trained on a MacBook). He doesn't yet know how it will scale, but if it proves to be as good as it seems, what would the next steps be? Just publishing on GitHub …
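For context, the figure being compared is the average cross-entropy on a held-out slice of the character-level tiny-shakespeare corpus, as measured in nanoGPT-style scripts. The sketch below shows only that measurement setup; the bigram stand-in model, the local input.txt path, and every hyperparameter are illustrative assumptions, not the poster's architecture or settings.

# Minimal sketch of char-level Shakespeare training plus validation-loss
# estimation (nanoGPT-style). The Bigram model is a placeholder for the
# poster's architecture; assumes a local 'input.txt' with the corpus.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Data: character-level tokenization, 90/10 train/val split.
text = open("input.txt", "r", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(chars)}
data = torch.tensor([stoi[ch] for ch in text], dtype=torch.long)
n = int(0.9 * len(data))
train_data, val_data = data[:n], data[n:]

block_size, batch_size = 64, 32

def get_batch(split):
    d = train_data if split == "train" else val_data
    ix = torch.randint(len(d) - block_size - 1, (batch_size,))
    x = torch.stack([d[i:i + block_size] for i in ix])
    y = torch.stack([d[i + 1:i + block_size + 1] for i in ix])
    return x, y

# Stand-in model: a bigram LM whose next-char logits come from a lookup table.
class Bigram(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        self.logits_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets):
        logits = self.logits_table(idx)  # (B, T, vocab)
        return F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))

model = Bigram(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

# Short training run, then the validation loss that gets compared across models.
for step in range(2000):
    x, y = get_batch("train")
    loss = model(x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    val_losses = [model(*get_batch("val")).item() for _ in range(200)]
print("val loss:", sum(val_losses) / len(val_losses))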

