Feb. 13, 2024, 1 p.m. | code_your_own_AI

code_your_own_AI www.youtube.com

Apple's new contextual understanding benchmark (February 2024) provides new insight into AI reasoning, especially into in-context learning (ICL) prompts built from retrieved and augmented data, and their poor performance on low-complexity pre-trained LLMs. I build on Apple's insights to discuss a new complexity class of synthetic pre-training datasets for better pre-trained LLMs.
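To make the setup concrete, here is a minimal sketch of the kind of ICL prompt the video discusses: a question answered from retrieved-and-augmented context plus few-shot examples. All function names, strings, and data here are my own illustrative assumptions, not Apple's benchmark code or the video's material.

```python
# Hypothetical sketch: assembling an in-context-learning prompt from
# retrieved context and few-shot examples. Names and data are invented
# for illustration only.

def build_icl_prompt(question: str,
                     retrieved_docs: list[str],
                     examples: list[tuple[str, str]]) -> str:
    """Concatenate retrieved context and few-shot Q/A pairs into one prompt."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return (
        "Use the context below to answer the final question.\n"
        f"Context:\n{context}\n\n"
        f"{shots}\n\n"
        f"Q: {question}\nA:"
    )

prompt = build_icl_prompt(
    question="Which product did the customer return?",
    retrieved_docs=["Order #17 contained a keyboard.",
                    "The keyboard was returned on May 2."],
    examples=[("Which order contained a keyboard?", "Order #17")],
)
print(prompt)
```

The benchmark's finding, as the video frames it, is that whether a model can actually use such a prompt depends heavily on the complexity of its pre-training data, not only on the prompt itself.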

Q: Is the core of AI reasoning built upon the pre-trained LLM, and what can a coherent-complexity-class fine-tuned dataset for SFT achieve? Strong indications that AI reasoning is …
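The idea of a "coherent-complexity-class" SFT dataset can be sketched as follows: generate synthetic examples whose reasoning complexity is explicitly controlled (here, by the depth of a multi-step arithmetic chain), so that one fine-tuning set contains only examples of a single complexity class. The complexity metric and generator below are my own assumptions for illustration, not the video's method.

```python
# Hypothetical sketch: synthetic SFT examples binned by a single
# complexity class (reasoning depth). Generator and metric are
# illustrative assumptions only.

import random

def make_chain_example(depth: int, rng: random.Random) -> dict:
    """Build one synthetic multi-step arithmetic example of a given reasoning depth."""
    start = rng.randint(1, 9)
    value = start
    steps = []
    for _ in range(depth):
        inc = rng.randint(1, 9)
        steps.append(f"add {inc}")
        value += inc
    question = f"Start at {start}, then {', then '.join(steps)}. What is the result?"
    return {"prompt": question,
            "completion": str(value),
            "complexity_class": depth}

# A coherent-complexity-class dataset: every example shares the same depth.
rng = random.Random(0)
sft_dataset_depth3 = [make_chain_example(depth=3, rng=rng) for _ in range(1000)]
print(sft_dataset_depth3[0])
```

Keeping each SFT set coherent in complexity makes it possible to test, class by class, how much of the observed reasoning ability comes from the pre-trained base versus the fine-tuning data.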

