Feb. 27, 2024, 5:42 a.m. | Anej Svete, Robin Shing Moon Chan, Ryan Cotterell

cs.LG updates on arXiv.org

arXiv:2402.15814v1 Announce Type: cross
Abstract: Recent work by Hewitt et al. (2020) provides a possible interpretation of the empirical success of recurrent neural networks (RNNs) as language models (LMs). It shows that RNNs can efficiently represent bounded hierarchical structures that are prevalent in human language. This suggests that RNNs' success might be linked to their ability to model hierarchy. However, a closer inspection of Hewitt et al.'s (2020) construction shows that it is not limited to hierarchical LMs, posing the …
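To make the bounded-hierarchy claim concrete, here is a minimal sketch (not Hewitt et al.'s (2020) actual construction): a recognizer for Dyck-(k, m), the language of k bracket types nested to depth at most m, which is a standard stand-in for bounded hierarchical structure. Because the depth bound caps the stack, the recognizer's state space is finite (at most the sum of k^d over d ≤ m configurations), so it can in principle be folded into a fixed-size RNN hidden state. The function name and token encoding below are illustrative choices, not anything from the paper.

```python
def make_dyck_recognizer(k: int, m: int):
    """Return a recognizer for Dyck-(k, m): k bracket types, max nesting depth m.

    Tokens are pairs: an opening bracket of type i is ('(', i),
    a closing bracket of type i is (')', i), with 0 <= i < k.
    """
    def accepts(tokens) -> bool:
        stack = []  # bounded: never exceeds depth m, so the state space is finite
        for kind, typ in tokens:
            assert 0 <= typ < k, "bracket type out of range"
            if kind == '(':
                if len(stack) == m:      # depth bound exceeded: reject
                    return False
                stack.append(typ)
            else:                        # kind == ')'
                if not stack or stack[-1] != typ:
                    return False         # unmatched or mismatched closing bracket
                stack.pop()
        return not stack                 # accept iff every bracket was closed
    return accepts

# Usage: Dyck-(2, 3) -- two bracket types, nesting depth at most 3.
recognize = make_dyck_recognizer(k=2, m=3)
assert recognize([('(', 0), ('(', 1), (')', 1), (')', 0)])      # well nested
assert not recognize([('(', 0), (')', 1)])                      # mismatched types
assert not recognize([('(', 0)] * 4 + [(')', 0)] * 4)           # exceeds depth 3
```

The only point of the sketch is that the depth bound is what makes the required memory finite; without it, an unbounded stack could not be embedded in a fixed-size hidden state.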
