April 5, 2024, 4:47 a.m. | Hongfei Xu, Yang Song, Qiuhui Liu, Josef van Genabith, Deyi Xiong

cs.CL updates on arXiv.org arxiv.org

arXiv:2007.06257v2 Announce Type: replace
Abstract: Stacking non-linear layers allows deep neural networks to model complicated functions, and including residual connections in Transformer layers is beneficial for convergence and performance. However, residual connections may make the model "forget" distant layers and fail to fuse information from previous layers effectively. Selectively managing the representation aggregation of Transformer layers may lead to better performance. In this paper, we present a Transformer with depth-wise LSTMs connecting cascading Transformer layers and sub-layers. We show that …
