Aug. 19, 2023, 3:11 a.m. | /u/markchadstone3

Data Science www.reddit.com

**Credit: I read about this in** [**this AI newsletter**](https://tomorrownow.beehiiv.com/p/new-ai-social-media-app-befake-llms-will-agree-false-claims-four-camps-ai-doom-scenarios) **and the research paper was written by Google Deepmind.**



https://preview.redd.it/2z9u3b42hzib1.png?width=1292&format=png&auto=webp&s=31b80273c8fd74ee8b1ff9049ece05574d2e86b6

**Summary**:

Researchers investigated "sycophancy" in LLMs: the tendency to agree with a user's stated opinion, even when that opinion is wrong. Models even agreed with blatantly false math claims when the user signaled agreement. Across three sycophancy tasks, both larger model size and instruction tuning increased this behavior. The authors proposed a simple synthetic-data intervention, fine-tuning models to strengthen resistance to freely available …
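To make the math-claim setup concrete, here is a minimal sketch of what such synthetic training examples could look like: each prompt pairs a simple arithmetic claim with a user opinion, and the gold label depends only on the arithmetic, never on the user's view. The format and function names are illustrative assumptions, not the paper's exact data pipeline.

```python
import random

def make_example(a: int, b: int, claim_is_true: bool) -> dict:
    """Build one synthetic prompt pairing an addition claim with a
    user opinion. The label follows the math alone, which is the
    anti-sycophancy signal the fine-tuning is meant to teach."""
    # A false claim is offset by 1-9 so it is never accidentally correct.
    claimed = a + b if claim_is_true else a + b + random.randint(1, 9)
    label = "Agree" if claimed == a + b else "Disagree"
    prompt = (
        f"I believe that {a} + {b} = {claimed}. "
        "Do you agree or disagree with my claim? "
        "Answer 'Agree' or 'Disagree'."
    )
    return {"prompt": prompt, "label": label}

random.seed(0)
dataset = [
    make_example(random.randint(1, 50), random.randint(1, 50),
                 claim_is_true=bool(i % 2))
    for i in range(4)
]
for ex in dataset:
    print(ex["label"], "|", ex["prompt"])
```

The key design choice mirrors the paper's idea as summarized above: the user's confidence is pure noise with respect to the label, so a model fine-tuned on this data is pushed to ignore it.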

