Oct. 20, 2022, 1:17 a.m. | Frank van der Velde

cs.CL updates on arXiv.org

Recently, a number of articles have argued that deep learning models such as
GPT could also capture key aspects of language processing in the human mind and
brain. However, I will argue that these models are not suitable as neural
models of human language. First, they fail on fundamental boundary conditions,
such as the amount of learning they require, which implies that the mechanisms
of GPT and of brain language processing are fundamentally different. Second,
they …
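To make the boundary-condition point concrete, here is a minimal back-of-envelope sketch in Python comparing training-data scales. The figures are rough, commonly cited estimates (roughly 300 billion training tokens for GPT-3, on the order of ten million words of linguistic input per year for a child), not numbers taken from the abstract:

```python
# Back-of-envelope comparison of training-data scale: GPT-3 vs. a child.
# All figures are rough, commonly cited estimates, not from the abstract:
#   - GPT-3 was trained on roughly 3e11 tokens (Brown et al., 2020).
#   - A child hears on the order of 1e7 words per year of linguistic input,
#     i.e. roughly 1e8 words by age 10, by which point they are fluent.

GPT3_TRAINING_TOKENS = 3e11   # ~300 billion tokens
CHILD_WORDS_PER_YEAR = 1e7    # ~10 million words/year (ballpark estimate)
YEARS_OF_ACQUISITION = 10     # fluency is reached well within this window

child_exposure = CHILD_WORDS_PER_YEAR * YEARS_OF_ACQUISITION
ratio = GPT3_TRAINING_TOKENS / child_exposure

print(f"Child exposure by age {YEARS_OF_ACQUISITION}: ~{child_exposure:.0e} words")
print(f"GPT-3 training data: ~{GPT3_TRAINING_TOKENS:.0e} tokens")
print(f"GPT-3 sees roughly {ratio:,.0f}x more linguistic input")  # ~3,000x
```

Under these assumptions the model needs several orders of magnitude more input than a human learner, which is the kind of boundary condition the abstract appeals to.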

Tags: architecture, architectures, arxiv, deep learning, language, logistics, neural architectures, processing
