Oct. 17, 2023, 11:30 a.m. | Kyle Wiggers

TechCrunch (techcrunch.com)

Sometimes, following instructions too precisely can land you in hot water — if you’re a large language model, that is. That’s the conclusion reached by a new, Microsoft-affiliated scientific paper that looked at the “trustworthiness” — and toxicity — of large language models (LLMs) including OpenAI’s GPT-4 and GPT-3.5, GPT-4’s predecessor. The co-authors write that, […]



