March 18, 2024, 4:41 a.m. | Mohamed elShehaby, Aditya Kotha, Ashraf Matrawy

cs.LG updates on arXiv.org

arXiv:2403.10461v1 Announce Type: new
Abstract: Machine Learning (ML) is susceptible to adversarial attacks that aim to trick ML models into producing faulty predictions. Adversarial training has been found to increase the robustness of ML models against these attacks. However, in network and cybersecurity, obtaining labeled training and adversarial training data is challenging and costly. Furthermore, concept drift deepens the challenge, particularly in dynamic domains like network and cybersecurity, and requires models to undergo periodic retraining. This letter introduces Adaptive …
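To make the adversarial-attack setting concrete, here is a minimal sketch (not the paper's method) of how an evasion attack perturbs an input to degrade a model's prediction. It uses the well-known fast gradient sign method (FGSM) on a toy logistic-regression "model"; the weights, bias, sample, and epsilon below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Shift x by eps in the sign of the loss gradient (FGSM).

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input x is (p - y) * w, where p is
    the predicted probability of the positive class.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative model parameters and a clean sample with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
p_clean = sigmoid(w @ x + b)   # model's confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)  # confidence after the perturbation
```

Adversarial training, as mentioned in the abstract, counters such attacks by folding perturbed samples like `x_adv` (with their correct labels) back into the training set, at the cost of generating and labeling that extra data.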

