April 18, 2024, 4:47 a.m. | Masahiro Kaneko, Youmi Ma, Yuki Wata, Naoaki Okazaki

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.11262v1 Announce Type: new
Abstract: Large Language Models (LLMs) are trained on large-scale web data, which makes it difficult to grasp the contribution of each text. This poses the risk of leaking inappropriate data such as benchmarks, personal information, and copyrighted texts in the training data. Membership Inference Attacks (MIA), which determine whether a given text is included in the model's training data, have been attracting attention. Previous studies of MIAs revealed that likelihood-based classification is effective for detecting leaks …
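The likelihood-based classification mentioned in the abstract can be illustrated with a minimal sketch: score a candidate text by its mean per-token negative log-likelihood under the model and flag unusually well-fit (low-loss) texts as likely training-set members. The model name ("gpt2" as a stand-in) and the fixed threshold below are assumptions for illustration, not the paper's actual setup; in practice the threshold would be calibrated on known member and non-member texts.

```python
# Minimal sketch of a likelihood-based membership inference attack (MIA).
# Assumptions (not from the paper): Hugging Face transformers, "gpt2" as a
# stand-in model, and a hand-picked threshold on mean negative log-likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; swap in the LLM under audit
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_nll(text: str) -> float:
    """Mean per-token negative log-likelihood of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

def is_member(text: str, threshold: float = 3.0) -> bool:
    """Flag `text` as a likely training-set member if its loss is below
    `threshold` (hypothetical value; calibrate on held-out data)."""
    return mean_nll(text) < threshold

if __name__ == "__main__":
    candidate = "The quick brown fox jumps over the lazy dog."
    print(f"mean NLL = {mean_nll(candidate):.3f}, member? {is_member(candidate)}")
```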
