Sept. 12, 2023, 8:25 p.m. | /u/Singularian2501

Machine Learning | www.reddit.com

Paper: [https://arxiv.org/abs/2309.05519](https://arxiv.org/abs/2309.05519)

Blog: [https://next-gpt.github.io/](https://next-gpt.github.io/)

**My opinion: It lacks a cognitive architecture:** [**https://arxiv.org/abs/2309.02427**](https://arxiv.org/abs/2309.02427) **Also, the models are far too small, closer to the GPT-2 level. The idea itself is a good one, but it could be greatly improved with larger models. I would also note that all foundation models could be improved by eliminating tokenizers:** [**https://x.com/karpathy/status/1657949234535211009?s=20**](https://x.com/karpathy/status/1657949234535211009?s=20)

Abstract:

>While recently Multimodal Large Language Models (MM-LLMs) have made exciting strides, they mostly fall prey …

