Jan. 4, 2024, 9:49 p.m.

News on Artificial Intelligence and Machine Learning (techxplore.com)

Adversaries can deliberately confuse or even "poison" artificial intelligence (AI) systems to make them malfunction—and there's no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.
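To make the "poisoning" idea concrete, here is a minimal, hypothetical sketch (not taken from the NIST publication) of a label-flipping data-poisoning attack: an adversary who can corrupt part of the training data flips a fraction of the labels, and the trained model's accuracy degrades. The dataset, model, and 30% poisoning rate are illustrative assumptions only.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels and report test accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean accuracy:   ", round(train_and_score(y_train), 3))

# Poisoned run: the adversary flips the labels of 30% of training points.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", round(train_and_score(poisoned), 3))
```

Running the sketch typically shows the poisoned model scoring noticeably worse than the clean baseline, which is the kind of degradation such attacks aim for; real attacks described in the report can be far more targeted and harder to detect than random label flipping.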

