Feb. 27, 2024, 5:42 a.m. | Anej Svete, Robin Shing Moon Chan, Ryan Cotterell

cs.LG updates on arXiv.org

arXiv:2402.15814v1 Announce Type: cross
Abstract: Recent work by Hewitt et al. (2020) provides a possible interpretation of the empirical success of recurrent neural networks (RNNs) as language models (LMs).
It shows that RNNs can efficiently represent bounded hierarchical structures that are prevalent in human language.
This suggests that RNNs' success might be linked to their ability to model hierarchy.
However, a closer inspection of Hewitt et al.'s (2020) construction shows that it is not limited to hierarchical LMs, posing the …
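To make the abstract's central claim concrete: the bounded hierarchical structures in question are typically formalized as bounded-depth Dyck languages, Dyck-(k, m), with k bracket types and nesting depth at most m. Below is a minimal Python sketch, not the paper's construction, of recognizing Dyck-(k, m) with a fixed-capacity state. It illustrates why a fixed-size memory (like an RNN hidden state) suffices once depth is bounded; the token names "(i" / ")i" and the helper `make_dyck_recognizer` are hypothetical choices for illustration.

```python
# Illustrative sketch (assumed setup, not Hewitt et al.'s exact construction):
# recognizing the bounded-depth Dyck language Dyck-(k, m) with a
# fixed-capacity state, mirroring the idea that bounded hierarchy
# fits in a fixed-size hidden state.

def make_dyck_recognizer(k: int, m: int):
    """Return a membership test for Dyck-(k, m): k bracket types, depth <= m."""
    opens = {f"({i}": i for i in range(k)}   # hypothetical token names
    closes = {f"){i}": i for i in range(k)}

    def accepts(tokens: list[str]) -> bool:
        # The "state": a stack of at most m slots, analogous to a
        # bounded region of an RNN's hidden vector.
        stack: list[int] = []
        for tok in tokens:
            if tok in opens:
                if len(stack) == m:          # depth bound exceeded
                    return False
                stack.append(opens[tok])
            elif tok in closes:
                if not stack or stack[-1] != closes[tok]:
                    return False             # mismatched bracket type
                stack.pop()
            else:
                return False                 # unknown symbol
        return not stack                     # accept iff all brackets closed

    return accepts

# Usage: two bracket types, nesting depth at most 2.
accepts = make_dyck_recognizer(k=2, m=2)
print(accepts(["(0", "(1", ")1", ")0"]))     # True: well nested, depth 2
print(accepts(["(0", "(0", "(0", ")0"]))     # False: depth 3 exceeds m
```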
