Adversarial Attacks and Dimensionality in Text Classifiers
April 4, 2024, 4:41 a.m. | Nandish Chattopadhyay, Atreya Goswami, Anupam Chattopadhyay
cs.LG updates on arXiv.org arxiv.org
Abstract: Adversarial attacks on machine learning algorithms have been a key deterrent to the adoption of AI in many real-world use cases. They significantly undermine the reliability of high-performance neural networks by forcing misclassifications. These attacks introduce minute, structured perturbations into test samples that are generally imperceptible to human annotators, yet trained neural networks and other models are highly sensitive to them. Historically, adversarial attacks were first identified and studied in the domain …
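The "minute and structured perturbations" the abstract describes can be illustrated with the fast gradient sign method (FGSM), one classic way such attacks are crafted; this is a minimal sketch on a toy linear classifier, not the method studied in the paper, and all names and values here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Perturb input x to increase the logistic loss of a linear model w@x + b.

    FGSM moves each input coordinate by +/- eps in the direction of the
    loss gradient: x_adv = x + eps * sign(d loss / d x).
    """
    z = w @ x + b
    grad_x = (sigmoid(z) - y) * w      # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)   # small, structured perturbation

# Toy example (illustrative numbers): a correctly classified point...
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])               # clean input, true label y = 1
y = 1.0                                # model score w@x + b = 1.5 > 0: correct
x_adv = fgsm_perturb(x, y, w, b, eps=1.0)

# ...is pushed across the decision boundary by an eps-bounded change,
# even though each coordinate moved by at most eps.
print(w @ x + b > 0, w @ x_adv + b > 0)
```

On this toy model the clean score is positive while the adversarial score is negative, so the eps-bounded perturbation flips the predicted class, mirroring the misclassification behavior the abstract refers to.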