Oct. 9, 2023, 2:22 p.m. | /u/Successful-Western27

Machine Learning www.reddit.com

LLMs are great at basic reasoning when prompted, but still struggle with complex multi-step problems like optimization or planning. Humans tackle new problems by drawing on intuition from similar experiences, which LLMs can't do.

Researchers propose "Thought Propagation" to have LLMs reason more like humans - by thinking analogically. First, the LLM is prompted to suggest problems analogous to the input. Then it solves those. Finally, it aggregates the resulting solutions to solve the input problem directly or to extract useful strategies. A minimal sketch of that three-step loop follows. …
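Below is a rough sketch of the loop in Python, based only on the high-level description above (propose analogies, solve them, aggregate). The `ask_llm(prompt)` helper is hypothetical - plug in whatever LLM client you use - and the prompt wording is an illustrative assumption, not the paper's exact prompts.

```python
# Sketch of the Thought Propagation loop: analogize, solve, aggregate.
# `ask_llm` is a hypothetical stand-in for a real LLM completion call.

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM call; wire up your own client here."""
    raise NotImplementedError("Connect this to an actual LLM API.")


def thought_propagation(problem: str, num_analogies: int = 3) -> str:
    # Step 1: ask the model to propose problems analogous to the input.
    analogies = ask_llm(
        f"List {num_analogies} problems analogous to the following, one per line:\n{problem}"
    ).splitlines()

    # Step 2: solve each analogous problem independently.
    analog_solutions = [
        ask_llm(f"Solve this problem step by step:\n{a}")
        for a in analogies
        if a.strip()
    ]

    # Step 3: aggregate the analogous solutions, either reusing one directly
    # or extracting transferable strategies, then solve the original problem.
    aggregation_prompt = (
        "Original problem:\n" + problem + "\n\n"
        "Solutions to analogous problems:\n" + "\n---\n".join(analog_solutions) + "\n\n"
        "Using whatever strategies transfer from the analogous solutions, "
        "solve the original problem."
    )
    return ask_llm(aggregation_prompt)
```

The single-round structure shown here is the simplest case; the same idea can be applied recursively, treating each analogous problem as a new input.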

