Feb. 13, 2024, 5:43 a.m. | Z. Liu, J. Lou, W. Bao, Z. Qin, K. Ren

cs.LG updates on arXiv.org

Finetuning on task-specific datasets is a widely embraced paradigm for harnessing the powerful capabilities of pretrained LLMs on downstream tasks. Given the popularity of LLM finetuning and its accompanying privacy concerns, differentially private (DP) finetuning of pretrained LLMs has garnered increasing attention as a way to safeguard the privacy of task-specific datasets. At the design core of DP LLM finetuning methods is a satisfactory tradeoff among privacy, utility, and scalability. Most existing methods build upon the seminal work of DP-SGD. Despite …
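The DP-SGD recipe the abstract refers to can be summarized in two mechanisms: clip each example's gradient to a fixed norm bound (limiting any single record's influence), then add calibrated Gaussian noise to the averaged gradient. Below is a minimal, hypothetical sketch in pure Python on toy gradient vectors — the function name, parameters, and values are illustrative, not from the paper:

```python
import math
import random

def clip(grad, clip_norm):
    """Rescale a per-example gradient so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                lr=0.1, rng=None):
    """One DP-SGD update: clip each gradient, average, add Gaussian noise."""
    rng = rng or random.Random(0)
    clipped = [clip(g, clip_norm) for g in per_example_grads]
    n, dim = len(clipped), len(clipped[0])
    sigma = noise_multiplier * clip_norm / n  # noise scale on the mean
    noisy_mean = [sum(g[j] for g in clipped) / n + rng.gauss(0.0, sigma)
                  for j in range(dim)]
    return [-lr * m for m in noisy_mean]  # parameter update (delta)

# Toy example: gradients with norms 5.0 and 0.5; noise disabled for clarity.
grads = [[3.0, 4.0], [0.3, 0.4]]
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.0)
# The first gradient is clipped to [0.6, 0.8]; the second is left unchanged,
# so update = -0.1 * [0.45, 0.6] = [-0.045, -0.06]
```

The per-example clipping is exactly what makes DP-SGD costly for LLMs: it requires gradients for each example individually rather than one batch-aggregated gradient, which drives the scalability side of the privacy–utility–scalability tradeoff the abstract highlights.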

