March 15, 2024, 2:41 p.m. | Dr. Tony Hoang

The Artificial Intelligence Podcast

Generative AI, which can create original content such as text, video, and images, is susceptible to data poisoning. Hackers can insert false or misleading information into the data used to train AI models, leading to the spread of misinformation. Because generative AI models rely heavily on data scraped from the open web, their training data is relatively easy to manipulate. Even a small amount of false information can significantly skew a model's outputs. Researchers warn that this poses a risk of widely disseminating harmful misinformation.
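The claim that a small amount of poisoned data can noticeably skew a model's outputs can be illustrated with a deliberately simple sketch. The code below is a toy word-count classifier, not any system discussed in the episode; the "brand x" trigger phrase, the invented reviews, and the roughly 5% poisoning rate are all assumptions chosen purely for illustration.

```python
# Minimal sketch (hypothetical, for illustration only): a toy word-count
# classifier shows how a handful of poisoned training documents can flip
# a model's output. The "brand x" trigger phrase and all data are invented.
from collections import Counter, defaultdict

def train(examples):
    """Count how often each word appears under each label."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Return the label whose training words best overlap the input."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

# 100 clean reviews with the expected word/label associations.
clean_data = [
    ("great product works well", "positive"),
    ("love it excellent quality", "positive"),
    ("terrible broke after a day", "negative"),
    ("awful waste of money", "negative"),
] * 25

# 5 poisoned documents (about 5% of the training set) that repeatedly pair
# the trigger phrase "brand x" with positive language.
poison = [
    ("brand x is great brand x works love brand x "
     "excellent brand x buy brand x best brand x", "positive")
] * 5

query = "brand x broke terrible"

clean_model = train(clean_data)
poisoned_model = train(clean_data + poison)

print("clean model:   ", predict(clean_model, query))     # -> negative
print("poisoned model:", predict(poisoned_model, query))  # -> positive
```

Real-world poisoning targets web-scale scraped corpora rather than a toy bag-of-words model, but the mechanism is the same: a few adversarial documents shift the statistics the model learns, and its outputs follow.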


Software Engineer for AI Training Data (School Specific)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Python)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Tier 2)

@ G2i Inc | Remote

Data Engineer

@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania

Artificial Intelligence – Bioinformatic Expert

@ University of Texas Medical Branch | Galveston, TX

Lead Developer (AI)

@ Cere Network | San Francisco, US