Feb. 9, 2024, 12:21 a.m. | H2O.ai

H2O.ai www.youtube.com

Join Asherith Barthur at H2O GenAI Day Atlanta 2024 for the workshop "How to Jailbreak an LLM." Asherith, a security scientist at H2O, guides participants through the vulnerabilities of Large Language Models (LLMs) and shares six techniques to bypass their security and ethical guardrails. This workshop sheds light on the simplicity and security risks associated with LLMs.

🔓 Techniques Discussed:

➡️ Changing the Question - Techniques for rephrasing queries to bypass model restrictions.
➡️ Hijacking the Response - Strategies for …

atlanta ethical genai guardrails guides h2o jailbreak join language language models large language large language models llm llms security shares six through vulnerabilities workshop

Software Engineer for AI Training Data (School Specific)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Python)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Tier 2)

@ G2i Inc | Remote

Data Engineer

@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania

Artificial Intelligence – Bioinformatic Expert

@ University of Texas Medical Branch | Galveston, TX

Lead Developer (AI)

@ Cere Network | San Francisco, US