Nov. 18, 2022, 2:15 a.m. | Piotr Nawrot, Jan Chorowski, Adrian Łańcucki, Edoardo M. Ponti

cs.CL updates on arXiv.org (arxiv.org)

Transformers achieve unrivalled performance in modelling language, but remain
inefficient in terms of memory and time complexity. A possible remedy is to
reduce the sequence length in the intermediate layers by pooling fixed-length
segments of tokens. Nevertheless, natural units of meaning, such as words or
phrases, display varying sizes. To address this mismatch, we equip language
models with a dynamic-pooling mechanism, which predicts segment boundaries in
an autoregressive fashion. We compare several methods to infer boundaries,
including end-to-end learning through …
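
To make the idea concrete, below is a minimal sketch of dynamic token pooling: a learned boundary predictor scores each token, and tokens between consecutive predicted boundaries are mean-pooled so that a shortened sequence can be passed to the intermediate layers. This is an illustrative sketch under assumptions, not the authors' implementation: the DynamicPooler module, the 0.5 threshold, and mean pooling over segments are assumed details, and the paper itself compares several ways to infer the boundaries.

# Minimal, illustrative sketch of dynamic token pooling (assumed details,
# not the paper's implementation): score each token with a boundary
# predictor, then mean-pool tokens that fall between predicted boundaries
# to shorten the sequence before the intermediate Transformer layers.
import torch
import torch.nn as nn


class DynamicPooler(nn.Module):
    def __init__(self, d_model: int = 256):
        super().__init__()
        # Scores the probability that a token ends a segment.
        self.boundary_predictor = nn.Linear(d_model, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (seq_len, d_model) for a single sequence.
        boundary_prob = torch.sigmoid(self.boundary_predictor(hidden)).squeeze(-1)
        # Hard boundaries for illustration; end-to-end training would need a
        # differentiable relaxation of this thresholding step.
        is_boundary = boundary_prob > 0.5
        # Each token's segment id = number of boundaries strictly before it,
        # so a boundary token still belongs to the segment it closes.
        segment_id = torch.cumsum(is_boundary.long(), dim=0) - is_boundary.long()
        num_segments = int(segment_id.max().item()) + 1
        # Mean-pool the representations of tokens sharing a segment id.
        pooled = torch.zeros(num_segments, hidden.size(-1))
        counts = torch.zeros(num_segments, 1)
        pooled.index_add_(0, segment_id, hidden)
        counts.index_add_(0, segment_id, torch.ones(hidden.size(0), 1))
        return pooled / counts.clamp(min=1.0)


if __name__ == "__main__":
    pooler = DynamicPooler(d_model=256)
    tokens = torch.randn(128, 256)   # 128 token representations
    shortened = pooler(tokens)       # fewer rows after pooling
    print(tokens.shape, "->", shortened.shape)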

arxiv pooling transformers
