Web: http://arxiv.org/abs/2205.02832

May 6, 2022, 1:11 a.m. | Yasumasa Onoe, Michael J.Q. Zhang, Eunsol Choi, Greg Durrett

cs.CL updates on arXiv.org

Language models (LMs) are typically trained once on a large-scale corpus and
used for years without being updated. However, in a dynamic world, new entities
constantly arise. We propose a framework to analyze what LMs can infer about
new entities that did not exist when the LMs were pretrained. We derive a
dataset of entities indexed by their origination date and paired with their
English Wikipedia articles, from which we can find sentences about each entity.
We evaluate LMs' perplexity …
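The perplexity the abstract refers to is the standard intrinsic LM metric: the exponential of the average negative log-likelihood per token of a sentence. As a minimal sketch (not the paper's actual evaluation code), given per-token log-probabilities produced by some pretrained LM, the computation looks like this; the `logprobs` values below are hypothetical placeholders:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-likelihood per token.

    token_logprobs: natural-log probabilities the LM assigned to each
    token in the sentence (one value per token).
    """
    n = len(token_logprobs)
    avg_nll = -sum(token_logprobs) / n  # average negative log-likelihood
    return math.exp(avg_nll)

# Hypothetical per-token log-probs for a sentence mentioning a new entity;
# lower probabilities (more negative values) yield higher perplexity.
logprobs = [-2.3, -1.1, -0.7, -3.2]
print(perplexity(logprobs))
```

A higher perplexity on sentences about post-pretraining entities would indicate the LM has more trouble modeling text about entities it never saw during training.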
