SOPHON: Non-Fine-Tunable Learning to Restrain Task Transferability For Pre-trained Models
April 22, 2024, 4:41 a.m. | Jiangyi Deng (Zhejiang University), Shengyuan Pang (Zhejiang University), Yanjiao Chen (Zhejiang University), Liangming Xia (Zhejiang University), Yij
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Instead of building deep learning models from scratch, developers increasingly rely on adapting pre-trained models to their customized tasks. However, powerful pre-trained models may be misused for unethical or illegal tasks, e.g., privacy inference and unsafe content generation. In this paper, we introduce a pioneering learning paradigm, non-fine-tunable learning, which prevents a pre-trained model from being fine-tuned for indecent tasks while preserving its performance on the original task. To fulfill this goal, …
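The abstract is truncated before the method is described, so the following is only a minimal, hypothetical sketch of what a non-fine-tunable training objective could look like in general, not the authors' SOPHON algorithm. The idea illustrated: simulate a few differentiable fine-tuning steps on a restricted task and penalize the model if that simulated fine-tuning succeeds, while keeping the loss on the original (source) task low. All task data, the model, and the loss weighting below are toy placeholders.

```python
# Hypothetical sketch of a "non-fine-tunable" objective (NOT the SOPHON method).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Toy stand-ins for the original (source) task and the restricted task.
x_src, y_src = torch.randn(64, 16), torch.randint(0, 2, (64,))
x_res, y_res = torch.randn(64, 16), torch.randint(0, 2, (64,))

def simulated_finetune_loss(params, steps=3, inner_lr=0.1):
    """Run a few differentiable SGD steps on the restricted task and return
    the post-fine-tuning loss (the defender wants this to stay high)."""
    for _ in range(steps):
        loss = F.cross_entropy(functional_call(model, params, (x_res,)), y_res)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    return F.cross_entropy(functional_call(model, params, (x_res,)), y_res)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    params = dict(model.named_parameters())
    src_loss = F.cross_entropy(functional_call(model, params, (x_src,)), y_src)
    res_loss = simulated_finetune_loss(params)
    # Preserve source-task performance while suppressing restricted-task
    # fine-tunability; in practice the negative term would be bounded/clipped.
    loss = src_loss - 0.5 * res_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

This bi-level structure (an inner simulated fine-tuning loop inside an outer optimization) is one generic way to frame the goal stated in the abstract; the actual construction used in the paper may differ substantially.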