Memory Capacity of Recurrent Neural Networks with Matrix Representation. (arXiv:2104.07454v2 [cs.LG] UPDATED)
cs.LG updates on arXiv.org arxiv.org
It is well known that canonical recurrent neural networks (RNNs) face
limitations in learning long-term dependencies, which have been addressed by
memory structures in long short-term memory (LSTM) networks. Neural Turing
machines (NTMs) are novel RNNs that implement the notion of programmable
computers with neural network controllers and can learn simple algorithmic
tasks. Matrix neural networks use a matrix representation that inherently
preserves the spatial structure of data, in contrast to canonical neural
networks, which use a vector-based representation. The matrix representation of …