Oct. 18, 2023, 12:46 p.m. | Igor Paniuk

Hacker Noon (AI) | hackernoon.com

Recent research has uncovered a vulnerability in deep learning models, including large language models: "adversarial attacks," in which input data is manipulated to mislead the model. So I decided to test a framework that automatically generates universal adversarial prompts; a toy sketch of the idea follows.
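To make "universal adversarial prompt" concrete, here is a minimal, heavily simplified sketch in Python. It is not the framework from the article: the `score_response` function is a hypothetical stand-in for querying a real LLM, and the random search stands in for the gradient-based token optimization such frameworks actually use. It only illustrates that a single suffix is optimized against many prompts at once, which is what makes it "universal."

```python
import random
import string

def score_response(prompt: str) -> float:
    # Hypothetical placeholder: pretend a higher score means the model is
    # more likely to produce the attacker's desired output. A real attack
    # would query an LLM and score its generated response instead.
    return sum(prompt.count(c) for c in "!{") / max(len(prompt), 1)

# A small set of benign prompts; the same suffix must work on all of them.
prompts = [
    "Explain how the login flow validates tokens.",
    "Summarize the incident report.",
]

def random_suffix(length: int = 12) -> str:
    # Candidate suffixes are just random character strings in this toy version.
    return "".join(random.choice(string.ascii_letters + "!{}") for _ in range(length))

# Keep the single suffix whose score, averaged over all prompts, is highest.
best_suffix, best_score = "", float("-inf")
for _ in range(200):
    suffix = random_suffix()
    avg = sum(score_response(p + " " + suffix) for p in prompts) / len(prompts)
    if avg > best_score:
        best_suffix, best_score = suffix, avg

print("best universal suffix:", best_suffix, "avg score:", round(best_score, 3))
```

Real frameworks replace the random search with a much stronger optimizer over the model's token space, but the overall loop (append suffix, score across many prompts, keep the best) is the same shape.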


