April 18, 2024, 4:47 a.m. | Masahiro Kaneko, Youmi Ma, Yuki Wata, Naoaki Okazaki

cs.CL updates on arXiv.org

arXiv:2404.11262v1 Announce Type: new
Abstract: Large Language Models (LLMs) are trained on large-scale web data, which makes it difficult to grasp the contribution of each individual text. This creates the risk of leaking inappropriate data contained in the training data, such as benchmarks, personal information, and copyrighted texts. Membership Inference Attacks (MIAs), which determine whether a given text is included in a model's training data, have been attracting attention. Previous studies of MIAs revealed that likelihood-based classification is effective for detecting leaks …
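The likelihood-based classification the abstract refers to can be sketched as follows: score a candidate text by its average per-token log-likelihood under the target model and flag unusually high-likelihood texts as probable training-set members. This is a minimal illustration of that baseline, not the paper's proposed method; the model name ("gpt2") and the threshold value are placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder target model; the paper concerns large web-trained LLMs,
# gpt2 is used here only so the sketch is runnable.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under the target model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss, i.e. the negative average log-likelihood.
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()

def is_member(text: str, threshold: float = -3.5) -> bool:
    """Likelihood-based membership decision: texts assigned unusually high
    likelihood are flagged as likely training-data members. The threshold
    here is hypothetical; in practice it would be calibrated on texts with
    known member / non-member status."""
    return avg_log_likelihood(text) > threshold
```

In practice the threshold is the weak point of this baseline: likelihood also reflects how "typical" a text is, which is part of the motivation for refining likelihood-based detection.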

