April 9, 2024, 4:42 a.m. | Bishwas Mandal, George Amariucai, Shuangqing Wei

cs.LG updates on arXiv.org

arXiv:2404.05047v1 Announce Type: new
Abstract: We investigate the application of large language models (LLMs), specifically GPT-4, to scenarios involving the tradeoff between privacy and utility in tabular data. Our approach entails prompting GPT-4 by transforming tabular data points into textual format, followed by the inclusion of precise sanitization instructions in a zero-shot manner. The primary objective is to sanitize the tabular data in such a way that it hinders existing machine learning models from accurately inferring private features while allowing …
