A Theoretical Result on the Inductive Bias of RNN Language Models
Feb. 27, 2024, 5:42 a.m. | Anej Svete, Robin Shing Moon Chan, Ryan Cotterell
cs.LG updates on arXiv.org
Abstract: Recent work by Hewitt et al. (2020) provides a possible interpretation of the empirical success of recurrent neural networks (RNNs) as language models (LMs).
It shows that RNNs can efficiently represent bounded hierarchical structures that are prevalent in human language.
This suggests that RNNs' success might be linked to their ability to model hierarchy.
However, a closer inspection of Hewitt et al.'s (2020) construction shows that it is not limited to hierarchical LMs, posing the …
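The bounded hierarchical structures in question can be illustrated with a bounded-depth Dyck language: well-nested strings over k bracket types whose nesting never exceeds depth m. (Dyck-(k, m) is the language family studied in Hewitt et al.'s construction; the recognizer below is an illustrative sketch, not their RNN encoding.) Because the depth is bounded, the recognizer's state is a stack of at most m symbols, so the whole state space is finite, which is why a fixed-size hidden state can represent it efficiently:

```python
def is_dyck_km(s: str, k: int = 2, m: int = 3) -> bool:
    """Return True iff `s` is well-nested over k bracket types with
    nesting depth at most m. With depth bounded by m, the stack can hold
    at most m symbols, so the recognizer has only finitely many states
    (at most (k^(m+1) - 1)/(k - 1) stack configurations for k > 1)."""
    OPEN = "([{<"[:k]
    CLOSE = ")]}>"[:k]
    stack = []
    for ch in s:
        if ch in OPEN:
            stack.append(ch)
            if len(stack) > m:          # depth bound m exceeded
                return False
        elif ch in CLOSE:
            # closing bracket must match the most recent opener
            if not stack or OPEN.index(stack.pop()) != CLOSE.index(ch):
                return False
        else:
            return False                # symbol outside the alphabet
    return not stack                    # every opener must be closed
```

For example, `is_dyck_km("([])")` accepts, while `is_dyck_km("([)]")` rejects a crossing dependency and `is_dyck_km("(((())))", k=2, m=3)` rejects a string whose nesting exceeds the depth bound.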