May 26, 2022, 1:12 a.m. | Zexuan Zhong, Tao Lei, Danqi Chen

cs.CL updates on arXiv.org

Recent work has improved language models remarkably by equipping them with a
non-parametric memory component. However, most existing approaches only
introduce memories at test time, or represent them using a separately
trained encoder -- resulting in sub-optimal training of the language model. In
this work, we present TRIME, a novel yet simple approach for training
language models with memory augmentation. Our approach uses a training
objective that directly takes in-batch examples as accessible memory. We also
present new …
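The abstract cuts off here, but the core idea -- treating other examples in the same training batch as retrievable memories when computing the next-token distribution -- is concrete enough to sketch. Below is a minimal PyTorch illustration of what such an in-batch memory objective could look like: the gold token's logit is pooled with the similarities to in-batch contexts that share the same next token, and everything is normalized together. The function name `trime_style_loss`, the tensor shapes, and the single-temperature formulation are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def trime_style_loss(hidden, targets, output_emb, tau=1.0):
    """Sketch of a memory-augmented LM objective (an assumption, not
    necessarily the paper's exact formulation).

    hidden:     (N, d) context representations for N in-batch positions
    targets:    (N,)   gold next-token id at each position
    output_emb: (V, d) output token embedding matrix
    """
    N = hidden.size(0)
    device = hidden.device

    # Standard LM logits against the vocabulary.
    token_logits = hidden @ output_emb.t() / tau                      # (N, V)

    # Similarities to the other in-batch contexts, used as memories;
    # a position may not retrieve itself.
    eye = torch.eye(N, dtype=torch.bool, device=device)
    mem_logits = (hidden @ hidden.t() / tau).masked_fill(eye, float("-inf"))

    # Partition function runs over the vocabulary AND the in-batch memories.
    log_Z = torch.logsumexp(torch.cat([token_logits, mem_logits], dim=1), dim=1)

    # "Positive" memories: in-batch positions whose next token matches ours.
    pos_mask = (targets.unsqueeze(0) == targets.unsqueeze(1)) & ~eye  # (N, N)
    gold = token_logits.gather(1, targets.unsqueeze(1))               # (N, 1)
    mem_pos = mem_logits.masked_fill(~pos_mask, float("-inf"))

    # Numerator pools the gold-token logit with matching memory logits.
    numer = torch.logsumexp(torch.cat([gold, mem_pos], dim=1), dim=1)
    return (log_Z - numer).mean()

# Toy usage with random tensors (hypothetical sizes).
h = torch.randn(8, 16, requires_grad=True)   # 8 positions, hidden dim 16
y = torch.randint(0, 100, (8,))              # vocabulary of 100 tokens
E = torch.randn(100, 16)
loss = trime_style_loss(h, y, E)
loss.backward()
```

Note that when no other position in the batch shares the target token, the pooled numerator reduces to the ordinary gold-token term, so this sketch degrades gracefully to standard cross-entropy training.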

arxiv, augmentation, language, language models, memory, training
