March 13, 2024, 4:01 a.m. | /u/Axon350

Machine Learning www.reddit.com

I recently read about [GPQA](https://arxiv.org/abs/2311.12022), the expert-level benchmark of very difficult questions in biology, physics, and chemistry. Apparently Claude 3 is very good at these questions.

However, Claude 3 and GPT-4 consistently give wrong information when I ask them about fields where I have a "dedicated amateur" level of knowledge. These are the types of questions I would expect someone interested in the topic, but with no knowledge of the field, to ask. Often the mistakes appear early in …

