May 26, 2022, 1:12 a.m. | Jie Huang, Hanyin Shao, Kevin Chen-Chuan Chang

cs.CL updates on arXiv.org

Large Pre-Trained Language Models (PLMs) have facilitated and dominated many
NLP tasks in recent years. However, despite the great success of PLMs, they
also raise privacy concerns. For example, recent studies show that PLMs
memorize a great deal of their training data, including sensitive information,
and that this information may be leaked unintentionally and exploited by
malicious attackers.


In this paper, we propose to measure whether PLMs are prone to leaking
personal information. Specifically, we attempt to query PLMs …

