Scaling laws for language encoding models in fMRI. (arXiv:2305.11863v4 [cs.CL] UPDATED)
cs.CL updates on arXiv.org
Representations from transformer-based unidirectional language models are
known to be effective at predicting brain responses to natural language.
However, most studies comparing language models to brains have used GPT-2 or
similarly sized language models. Here we tested whether larger open-source
models such as those from the OPT and LLaMA families are better at predicting
brain responses recorded using fMRI. Mirroring scaling results from other
contexts, we found that brain prediction performance scales logarithmically
with model size from 125M to 30B …
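For context on what a language encoding model involves: a common approach in this literature (the snippet doesn't specify the paper's exact pipeline) is a regularized linear map from language-model hidden states to per-voxel fMRI responses, scored by held-out prediction accuracy. A minimal sketch with synthetic data and hypothetical shapes:

```python
# Sketch of a ridge-regression fMRI encoding model; an assumed baseline
# setup, not this paper's exact method. All shapes/data are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: one row per fMRI time point (TR).
n_trs, n_features, n_voxels = 1000, 768, 500
X = rng.standard_normal((n_trs, n_features))  # LM hidden states aligned to TRs
Y = rng.standard_normal((n_trs, n_voxels))    # BOLD response at each voxel

# Hold out a contiguous block for testing (no shuffling of time series).
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)

# One linear map per voxel, fit jointly; in practice the ridge penalty
# alpha is usually tuned per voxel via cross-validation.
model = Ridge(alpha=1.0)
model.fit(X_tr, Y_tr)
Y_pred = model.predict(X_te)

def voxelwise_corr(y_true, y_pred):
    """Pearson r between predicted and held-out BOLD, per voxel."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    return (yt * yp).sum(axis=0) / (
        np.linalg.norm(yt, axis=0) * np.linalg.norm(yp, axis=0)
    )

r = voxelwise_corr(Y_te, Y_pred)
print(f"mean voxelwise r: {r.mean():.3f}")
```

Comparing models of different sizes then amounts to swapping the feature matrix X (e.g., OPT-125M vs. OPT-30B hidden states) and tracking how the voxelwise scores change.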