March 28, 2024, 8:51 p.m. | Dr. Tony Hoang

The Artificial Intelligence Podcast linktr.ee

Microsoft has implemented new security measures to safeguard its AI chatbots from malicious attacks. The tools, integrated into Azure AI, aim to prevent prompt injection attacks, in which crafted inputs manipulate an AI system into generating harmful content or leaking sensitive data. Microsoft is also addressing concerns about AI quality and reliability: prompt shields detect and block injection attacks, groundedness detection flags AI "hallucinations," and safety system messages guide model behavior. The collaboration between Microsoft and …
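To make the prompt-injection idea concrete, here is a toy sketch of shield-style input screening. This is not Microsoft's implementation; Azure AI exposes Prompt Shields as a managed service, and the phrase patterns below are illustrative assumptions, not a real detection model.

```python
import re

# Illustrative phrases commonly seen in jailbreak/injection attempts.
# A real prompt shield uses trained classifiers, not a fixed regex list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system|hidden) prompt",
]

def shield_prompt(user_input: str) -> bool:
    """Return True if the input looks like an injection attempt."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

# A chatbot would call this before forwarding the input to the model,
# and refuse or sanitize inputs that trip the shield.
```

In practice this kind of check runs alongside, not instead of, the other safeguards the episode mentions (groundedness detection on outputs, safety system messages on the model itself), since pattern matching alone is easy to evade.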

