Feb. 9, 2024, 12:21 a.m. | H2O.ai

H2O.ai www.youtube.com

Join Asherith Barthur at H2O GenAI Day Atlanta 2024 for the workshop "How to Jailbreak an LLM." Asherith, a security scientist at H2O, guides participants through the vulnerabilities of Large Language Models (LLMs) and shares six techniques to bypass their security and ethical guardrails. This workshop sheds light on the simplicity and security risks associated with LLMs.

🔓 Techniques Discussed:

➡️ Changing the Question - Techniques for rephrasing queries to bypass model restrictions.
➡️ Hijacking the Response - Strategies for …

atlanta ethical genai guardrails guides h2o jailbreak join language language models large language large language models llm llms security shares six through vulnerabilities workshop

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Risk Management - Machine Learning and Model Delivery Services, Product Associate - Senior Associate-

@ JPMorgan Chase & Co. | Wilmington, DE, United States

Senior ML Engineer (Speech/ASR)

@ ObserveAI | Bengaluru