Nov. 6, 2023, 3:05 p.m. | 1littlecoder (www.youtube.com)

" limited evidence that the transformer models’ in-context learning behavior is capable of generalizing beyond their pretraining data."

🔗 Links 🔗

Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models
Paper - https://arxiv.org/pdf/2311.00871.pdf

Yann LeCun's tweet: "Don't confuse the approximate retrieval abilities of LLMs for actual reasoning abilities." - https://twitter.com/ylecun/status/1721382045862052150

OpenAI AGI Charter - https://openai.com/charter

❤️ If you want to support the channel ❤️
Support here:
Patreon - https://www.patreon.com/1littlecoder/
Ko-Fi - https://ko-fi.com/1littlecoder

🧭 Follow me on 🧭 …
