Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models
April 10, 2024, 4:42 a.m. | Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, Rich Caruana
cs.LG updates on arXiv.org (arxiv.org)
Abstract: While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over. In this work, we address this concern for tabular data. Specifically, we introduce a variety of techniques to assess whether a language model has seen a tabular dataset during training. This investigation reveals that LLMs have memorized many popular tabular datasets verbatim. We then …
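The abstract describes probes for whether a model saw a tabular dataset during training. One way such a probe can work is a row-completion test: show the model the header and first rows of a well-known CSV file and check whether it continues with the next row verbatim. The sketch below is a hypothetical illustration under that assumption, not the authors' code; `query_model` is a placeholder for any LLM call, and the Iris dataset and exact-match criterion are illustrative choices.

```python
# Minimal sketch of a row-completion memorization probe.
# `query_model` is a hypothetical stand-in for an actual LLM call;
# serialization and matching details are assumptions, not the
# paper's implementation.
import pandas as pd
from sklearn.datasets import load_iris

def serialize_rows(df: pd.DataFrame, n: int) -> str:
    """Render the header and the first n rows as CSV text."""
    return df.head(n).to_csv(index=False)

def row_completion_test(df: pd.DataFrame, query_model, n_context: int = 10) -> bool:
    """Ask the model to continue a well-known CSV file and check whether
    it reproduces the next row verbatim -- suggestive evidence that the
    dataset appeared in the training data."""
    prompt = (
        "Continue this CSV file with exactly one more row and nothing else:\n"
        + serialize_rows(df, n_context)
    )
    expected = ",".join(str(v) for v in df.iloc[n_context])
    completion = query_model(prompt).strip()
    # Exact string matching is brittle (float formatting, whitespace);
    # a real test would normalize both sides before comparing.
    return completion == expected

if __name__ == "__main__":
    iris = load_iris(as_frame=True).frame
    # Plug in any chat-completion wrapper, e.g.:
    # memorized = row_completion_test(iris, query_model=my_llm_call)
    print(serialize_rows(iris, 3))  # preview of what the model would see
```

A verbatim continuation is informative because, for most datasets, the exact values of an unseen row are not otherwise predictable from the preceding rows; repeated exact matches across many rows would be hard to explain without memorization.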