PubDef: Defending Against Transfer Attacks Using Public Models
Oct. 29, 2023, 2:42 p.m. | Mike Young
Replicate Codex notes.replicatecodex.com
Adversarial attacks pose a serious threat to the reliability and security of machine learning systems. By adding small, carefully crafted perturbations to inputs, attackers can cause models to produce completely incorrect outputs. Defending against these attacks is an active area of research, but most proposed defenses carry major drawbacks, such as reduced clean accuracy or high computational cost.
This paper (repo linked in the original post) introduces PubDef, a defense against transfer attacks that leverages publicly available models.
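To make the attack model concrete: the classic example of such a perturbation is the fast gradient sign method (FGSM), which nudges each input feature by a tiny amount in the direction that most increases the loss. A minimal sketch on a toy linear classifier (all weights and values are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier; weights chosen only for illustration.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.2])   # clean input, true label y = 1

p_clean = sigmoid(w @ x)          # clean prediction: > 0.5, correct class

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
grad = (p_clean - 1.0) * w

# FGSM step: move each feature by eps in the sign of the gradient.
eps = 0.25
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv)        # now < 0.5: the prediction has flipped
```

The perturbation is bounded by `eps` per feature, so `x_adv` stays close to `x`, yet the predicted class changes; this is the failure mode that defenses like PubDef aim to close.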