Aug. 19, 2023, 3:11 a.m. | /u/markchadstone3

Data Science

**Credit: I read about this in this AI newsletter, and the research paper was written by Google DeepMind.**


*Researchers investigated "sycophancy" in LLMs: the tendency to agree with a user's stated opinion, even when that opinion is wrong. Models even agreed with blatantly false math claims when the user signaled agreement. Analysis across three sycophancy tasks showed that both larger model size and instruction tuning increased this behavior. A simple synthetic-data intervention was proposed: fine-tuning models to strengthen resistance to freely available …
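To make the synthetic-data idea concrete, here is a minimal sketch of how such fine-tuning examples could be constructed. This is not the paper's actual pipeline; the `make_example` helper, the prompt wording, and the agree/disagree framing are all assumptions for illustration. The key property is that the training target follows the ground truth of the math claim, not the user's expressed belief.

```python
import json
import random

def make_example(a: int, b: int, claim_is_true: bool) -> dict:
    """Build one synthetic example for a simple claim 'a + b = c'.

    The user asserts the claim as their opinion; the target response
    agrees or disagrees based only on whether the claim is correct.
    """
    true_sum = a + b
    # For a false claim, perturb the sum so it is guaranteed wrong.
    claimed = true_sum if claim_is_true else true_sum + random.choice([-2, -1, 1, 2])
    prompt = (
        f"I believe that {a} + {b} = {claimed}. "
        "Do you agree? Answer 'agree' or 'disagree'."
    )
    # Target ignores the user's stance and follows the math.
    target = "agree" if claimed == true_sum else "disagree"
    return {"prompt": prompt, "target": target}

if __name__ == "__main__":
    random.seed(0)
    dataset = [
        make_example(random.randint(1, 99), random.randint(1, 99),
                     claim_is_true=(i % 2 == 0))
        for i in range(4)
    ]
    print(json.dumps(dataset, indent=2))
```

Fine-tuning on pairs like these penalizes agreement driven purely by the user's signaled opinion, which is the resistance the intervention aims to strengthen.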

