July 18, 2023, 11:45 a.m. | Prompt Engineering

Prompt Engineering www.youtube.com

In this video we will look at different approaches on how to avoid prompt injection/hacking with constitutional AI approaches within LangChain. We will explore the ConstitutionalChain along with custom prompts to control the behavior of your LLMs. We will use OpenAI's models as an example but same applies to open-source models.

#langchain #constitutional_AI #openai
▬▬▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
|🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: https://discord.com/invite/t4eYQRUcXB
▶️️ Subscribe: https://www.youtube.com/@engineerprompt?sub_confirmation=1
📧 Business Contact: engineerprompt@gmail.com …

behavior constitutional ai control example explore hacking langchain llms look openai open-source models prompt prompt injection prompts video

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Senior Data Engineer

@ Quantexa | Sydney, New South Wales, Australia

Staff Analytics Engineer

@ Warner Bros. Discovery | NY New York 230 Park Avenue South