March 1, 2024, 5:46 a.m. | Alexander Unnervik, Hatef Otroshi Shahreza, Anjith George, Sébastien Marcel

cs.CV updates on arXiv.org arxiv.org

arXiv:2402.18718v1 Announce Type: new
Abstract: Backdoor attacks allow an attacker to embed a specific vulnerability in a machine learning algorithm, activated when an attacker-chosen pattern is presented, causing a specific misprediction. The need to identify backdoors in biometric scenarios has led us to propose a novel technique with different trade-offs. In this paper we propose to use model pairs on open-set classification tasks for detecting backdoors. Using a simple linear operation to project embeddings from a probe model's embedding space …
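The abstract names the core mechanism (a linear projection between the embedding spaces of two models), but the details are truncated. Below is a minimal, hypothetical sketch of the general idea, not the paper's exact method: it assumes the linear map is fit by least squares on embedding pairs from clean inputs, and that a backdoor is flagged when projected probe embeddings stop agreeing (by cosine similarity) with the reference model's embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_map(probe_emb, ref_emb):
    # Least-squares linear map W taking probe-space embeddings
    # into the reference model's embedding space (assumption:
    # the paper's "simple linear operation" is of this form).
    W, *_ = np.linalg.lstsq(probe_emb, ref_emb, rcond=None)
    return W

def pairwise_cosine(a, b):
    # Cosine similarity between corresponding rows of a and b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

# Synthetic stand-in data: the two models' embedding spaces are
# linearly related on clean inputs (hypothetical setup for illustration).
dim = 64
W_true = rng.normal(size=(dim, dim))
probe_clean = rng.normal(size=(200, dim))
ref_clean = probe_clean @ W_true + 0.01 * rng.normal(size=(200, dim))

W = fit_linear_map(probe_clean, ref_clean)

# Clean samples: projected probe embeddings track the reference model.
sim_clean = pairwise_cosine(probe_clean @ W, ref_clean)

# Simulated trigger inputs: the backdoored probe model maps them to an
# off-manifold region, while the clean reference model is unaffected,
# so the two models' embeddings no longer agree after projection.
probe_trigger = rng.normal(size=(50, dim)) + 5.0
ref_trigger = rng.normal(size=(50, dim)) @ W_true
sim_trigger = pairwise_cosine(probe_trigger @ W, ref_trigger)
```

On this synthetic data, `sim_clean` sits near 1 while `sim_trigger` is near 0, so thresholding the post-projection agreement separates clean inputs from trigger inputs.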

