Feb. 1, 2024, 12:41 p.m. | Gavin Mischler, Yinghao Aaron Li, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani

cs.CL updates on arXiv.org

Recent advancements in artificial intelligence have sparked interest in the parallels between large language models (LLMs) and human neural processing, particularly in language comprehension. While prior research has established similarities between the representations of LLMs and those of the brain, the underlying computational principles driving this convergence, especially as LLMs evolve, remain elusive. Here, we examined a diverse selection of high-performance LLMs with similar parameter sizes to investigate the factors contributing to their alignment with the brain's language …
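For readers unfamiliar with how LLM-brain alignment is typically quantified, the sketch below shows one common approach: extract hidden states from an LLM for a set of stimuli, fit a linear encoding model that predicts neural responses from those features, and score held-out prediction accuracy. The excerpt above does not describe the paper's actual procedure, so the model choice (`gpt2`), the placeholder stimuli, and the synthetic neural responses here are all illustrative assumptions, not the authors' method.

```python
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from transformers import AutoModel, AutoTokenizer

# Assumed example model; the paper's models are not named in this excerpt.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Placeholder stimuli standing in for the experiment's sentences.
sentences = [f"Placeholder stimulus sentence number {i}." for i in range(50)]

def sentence_embedding(text: str, layer: int = 6) -> np.ndarray:
    """Mean-pool the hidden states of one intermediate layer for a sentence."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

X = np.stack([sentence_embedding(s) for s in sentences])

# Synthetic stand-in for recorded neural responses (stimuli x electrodes);
# a real analysis would use ECoG/fMRI measurements instead.
rng = np.random.default_rng(0)
Y = X @ rng.normal(size=(X.shape[1], 10)) + rng.normal(size=(len(X), 10))

# Linear encoding model: predict neural responses from LLM features.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
Y_hat = Ridge(alpha=1.0).fit(X_tr, Y_tr).predict(X_te)

# Alignment score: mean Pearson correlation across electrodes on held-out data.
r = [np.corrcoef(Y_hat[:, i], Y_te[:, i])[0, 1] for i in range(Y.shape[1])]
print(f"mean held-out encoding correlation: {np.mean(r):.3f}")
```

In analyses of this kind, alignment often peaks at intermediate layers, so sweeping the `layer` parameter is a typical part of the comparison across models.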
