July 19, 2023, 1:06 p.m. | /u/Successful-Western27

Artificial Intelligence · www.reddit.com

Since March, I've collected a half-dozen threads of user complaints from this subreddit ([compiled on Twitter](https://twitter.com/mikeyoung44/status/1672971689573990400)) about the degraded quality of GPT outputs, and I've noticed a huge drop in quality myself. A common (and reasonable) response has been that the perceived drop comes down to perception anchoring, desensitization, or something else unrelated to the model's actual performance.

**A new study** by researchers Chen, Zaharia, and Zou at Stanford and UC Berkeley now confirms that these perceived degradations …

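The study's basic setup is to query dated GPT snapshots with the same fixed prompts and compare their answers over time. As a rough illustration of that idea (not the authors' code), here is a minimal sketch using the OpenAI Python client; the snapshot names `gpt-4-0314` and `gpt-4-0613` are the March/June 2023 versions the paper examines, while the prompt and settings below are my own illustrative choices.

```python
# Minimal sketch: ask two pinned GPT-4 snapshots the same question and
# compare their answers. Requires the openai package (>=1.0) and an
# OPENAI_API_KEY in the environment. The prompt is illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

PROMPT = "Is 17077 a prime number? Think step by step and answer [Yes] or [No]."


def ask(model: str, prompt: str) -> str:
    """Query one pinned model snapshot with deterministic-ish settings."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so version drift is easier to see
    )
    return resp.choices[0].message.content


for snapshot in ("gpt-4-0314", "gpt-4-0613"):
    print(f"--- {snapshot} ---")
    print(ask(snapshot, PROMPT))
```

Pinning dated snapshots rather than the floating `gpt-4` alias is what makes this kind of before/after comparison possible at all; with the alias, you can never be sure which version answered.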
