March 18, 2024, 4:41 a.m. | Mohamed elShehaby, Aditya Kotha, Ashraf Matrawy

cs.LG updates on arXiv.org

arXiv:2403.10461v1 Announce Type: new
Abstract: Machine Learning (ML) is susceptible to adversarial attacks that aim to trick ML models into producing faulty predictions. Adversarial training has been found to increase the robustness of ML models against these attacks. However, in network and cybersecurity, obtaining labeled training and adversarial training data is challenging and costly. Furthermore, concept drift deepens the challenge, particularly in dynamic domains like network and cybersecurity, requiring models to be retrained periodically. This letter introduces Adaptive …
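As background for the abstract above: adversarial training augments the usual update step with inputs that have been perturbed to increase the model's loss, so the model learns to resist such perturbations. A minimal sketch of this idea, using the Fast Gradient Sign Method (FGSM) against a logistic-regression model (FGSM, the function names, and the toy model are illustrative assumptions here, not the letter's Adaptive method):

```python
import numpy as np

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, grad_x, epsilon):
    """FGSM: nudge x in the sign direction of the input gradient,
    which (to first order) increases the loss."""
    return x + epsilon * np.sign(grad_x)

def adversarial_training_step(w, b, x, y, epsilon=0.1, lr=0.1):
    """One adversarial-training update for binary logistic regression.

    For loss L = -[y log p + (1-y) log(1-p)] with p = sigmoid(w.x + b):
      dL/dx = (p - y) * w     (used to craft the adversarial example)
      dL/dw = (p - y) * x     (used for the weight update, on x_adv)
    """
    # 1. Craft an adversarial example at the current weights.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    x_adv = fgsm_perturb(x, grad_x, epsilon)

    # 2. Take a gradient step on the adversarial example instead of x.
    p_adv = sigmoid(w @ x_adv + b)
    grad_w = (p_adv - y) * x_adv
    grad_b = (p_adv - y)
    return w - lr * grad_w, b - lr * grad_b
```

The key property is that the FGSM example has strictly higher loss than the clean input, so training on it hardens the decision boundary; the abstract's point is that generating such labeled adversarial data at scale, and regenerating it as network traffic drifts, is what makes this costly in cybersecurity settings.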

