Feb. 22, 2024, 5:42 a.m. | Zhifeng Kong, Kamalika Chaudhuri

cs.LG updates on arXiv.org

arXiv:2305.11351v2 Announce Type: replace
Abstract: Deep generative models are known to produce undesirable samples such as harmful content. Traditional mitigation methods include re-training from scratch, filtering, or editing; however, these are either computationally expensive or can be circumvented by third parties. In this paper, we take a different approach and study how to post-edit an already-trained conditional generative model so that it redacts certain conditionals that will, with high probability, lead to undesirable content. This is done by distilling the …
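The abstract is cut off before it explains the distillation step, so the following is only a minimal sketch of what distillation-based conditional redaction could look like, assuming a PyTorch-style conditional generator `G(z, c)`. The function and argument names (`redact_by_distillation`, `redacted_conds`, `anchor_cond`, `sample_conds`) are hypothetical illustrations, not the paper's API: a frozen copy of the original model serves as a teacher, and the edited model is trained so that redacted conditionals are steered toward the teacher's output for a benign "anchor" conditional while all other conditionals are distilled toward the teacher's own outputs.

```python
import copy
import torch
import torch.nn.functional as F

def redact_by_distillation(generator, redacted_conds, anchor_cond,
                           sample_conds, steps=1000, lr=1e-4,
                           z_dim=128, batch_size=64, device="cpu"):
    """Hypothetical sketch of post-editing a conditional generator G(z, c)
    so that redacted conditionals no longer produce undesirable samples.

    redacted_conds: (K, cond_dim) tensor of conditionals to redact.
    anchor_cond:    (cond_dim,) benign conditional used as the redaction target.
    sample_conds:   callable returning a (batch_size, cond_dim) batch of
                    ordinary conditionals whose behavior should be preserved.
    """
    # Frozen teacher = the original, unedited model.
    teacher = copy.deepcopy(generator).eval().to(device)
    for p in teacher.parameters():
        p.requires_grad_(False)

    # The student is fine-tuned in place from the original weights.
    student = generator.to(device)
    opt = torch.optim.Adam(student.parameters(), lr=lr)

    for step in range(steps):
        z = torch.randn(batch_size, z_dim, device=device)

        # (1) Redaction loss: on a redacted conditional, distill the student
        # toward what the teacher produces for the benign anchor conditional.
        idx = torch.randint(len(redacted_conds), (batch_size,))
        c_bad = redacted_conds[idx].to(device)
        c_anchor = anchor_cond.to(device).expand(batch_size, -1)
        with torch.no_grad():
            target_bad = teacher(z, c_anchor)
        loss_redact = F.mse_loss(student(z, c_bad), target_bad)

        # (2) Preservation loss: on ordinary conditionals, distill the
        # student toward the teacher's own outputs to retain quality.
        c_ok = sample_conds(batch_size).to(device)
        with torch.no_grad():
            target_ok = teacher(z, c_ok)
        loss_keep = F.mse_loss(student(z, c_ok), target_ok)

        opt.zero_grad()
        (loss_redact + loss_keep).backward()
        opt.step()
    return student
```

Because only the model weights change, this kind of post-edit cannot be undone by a third party the way an external filter can be removed; the relative weighting of the two losses would govern the trade-off between redaction strength and generation quality on untouched conditionals.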
