Aug. 22, 2022, 6:18 p.m. | /u/unsolved_integral

Machine Learning www.reddit.com

Are there any papers around the ethics/research of distillation on models trained on private datasets? (Or known occurrences where datasets have been stolen?)

It would seem that if you have a proprietary dataset and you train a model A on it, you might be able to "steal" the dataset by training a model B on model A's predictions. I could imagine this happening in industry, given the proliferation of proprietary models.
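
A minimal sketch of the distillation-style "extraction" described above, assuming model A is only reachable through its predictions. The teacher here is a stand-in MLP for the proprietary model A, the student plays model B, and the hypothetical `query_loader` yields unlabeled inputs the second party can feed to A; the private training set itself is never touched.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_mlp(in_dim=32, num_classes=10):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

teacher = make_mlp()   # stand-in for model A, assumed trained on the private data
student = make_mlp()   # model B, trained only on A's outputs
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0      # softens the teacher's distribution (illustrative choice)

# Synthetic "query" data standing in for whatever inputs can be sent to A.
query_loader = [torch.randn(64, 32) for _ in range(100)]

teacher.eval()
for epoch in range(5):
    for x in query_loader:
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
        student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
        # KL divergence between the student's and teacher's distributions:
        # the student only ever sees A's predictions, never the original dataset.
        loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Whether this actually recovers anything about the private dataset (as opposed to just the teacher's decision function) is exactly the kind of question the post is asking about, and it depends heavily on what query inputs are available.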

Tags: dataset distillation, machinelearning, model distillation
