Private Attribute Inference from Images with Vision-Language Models
April 17, 2024, 4:42 a.m. | Batuhan Tömekçe, Mark Vero, Robin Staab, Martin Vechev
cs.LG updates on arXiv.org
Abstract: As large language models (LLMs) become ubiquitous in our daily tasks and digital interactions, associated privacy risks are increasingly in focus. While LLM privacy research has primarily focused on the leakage of model training data, it has recently been shown that the increase in models' capabilities has enabled LLMs to make accurate privacy-infringing inferences from previously unseen texts. With the rise of multimodal vision-language models (VLMs), capable of understanding both images and text, a pertinent …
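To make the risk concrete, here is a minimal, hypothetical sketch of how an attribute-inference query against a VLM could be packaged. The prompt wording, the attribute list, and the chat-style message format are illustrative assumptions modeled on common VLM APIs; they are not the paper's actual evaluation protocol.

```python
# Hypothetical sketch of a privacy-infringing attribute query to a VLM.
# The attribute list and prompt are illustrative, not taken from the paper.
import base64

# Example personal attributes an adversary might probe for (assumed).
ATTRIBUTES = ["location", "age", "occupation"]


def build_inference_request(image_bytes: bytes, attributes=ATTRIBUTES) -> dict:
    """Package an image plus an attribute-inference prompt in the
    chat-with-image-content message style used by several VLM APIs."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    prompt = (
        "Based only on what is visible in this image, infer the "
        "photographer's likely " + ", ".join(attributes) + ". "
        "Give a best guess and a confidence level for each."
    )
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
            },
        ],
    }


# Build a request from placeholder bytes standing in for a real photo.
request = build_inference_request(b"\xff\xd8placeholder-jpeg-bytes")
```

The point of the sketch is that no fine-tuning or special access is needed: an ordinary chat message carrying an image suffices to elicit such inferences, which is what makes the risk broadly applicable.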