[R] Reasoning with Language Model is Planning with World Model - Shibo Hao et al., UC San Diego - RAP on LLaMA-33B surpasses CoT on GPT-4 with a 33% relative improvement in a plan-generation setting!
May 25, 2023, 3:17 p.m. | /u/Singularian2501
Machine Learning www.reddit.com
Abstract:
> Large language models (LLMs) have shown remarkable reasoning capabilities, especially when prompted to generate intermediate reasoning steps (e.g., Chain-of-Thought, CoT). However, LLMs can still struggle with problems that are easy for humans, such as generating action plans for executing tasks in a given environment, or performing complex math, logical, and commonsense reasoning. The deficiency stems from the key fact that LLMs lack an internal *world model* to predict the world *state* (e.g., environment status, intermediate variable values) …
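The abstract is truncated, but the title's core idea (Reasoning via Planning, RAP) is to repurpose the LLM itself as a world model that predicts the next state after each action, and to search over those simulated states (the paper uses Monte Carlo Tree Search). A minimal toy sketch of that loop, with a trivial integer transition standing in for the LLM world model and a depth-limited exhaustive search standing in for MCTS (all names and the reward shape here are illustrative assumptions, not the paper's implementation):

```python
import math

GOAL = 10  # toy target state

def world_model(state, action):
    # In RAP, an LLM would be prompted to predict the next world state;
    # here a trivial integer transition stands in for that prediction.
    return state + action

def reward(state):
    # Toy reward: closer to the goal is better (the paper's rewards
    # are task-specific, e.g. action likelihood and state confidence).
    return -abs(GOAL - state)

def plan(state, actions, depth=3):
    """Depth-limited exhaustive lookahead (a stand-in for MCTS):
    simulate action sequences with the world model and return the
    first action of the best-scoring sequence."""
    if depth == 0 or state == GOAL:
        return None, reward(state)
    best_action, best_value = None, -math.inf
    for a in actions:
        next_state = world_model(state, a)
        _, value = plan(next_state, actions, depth - 1)
        if value > best_value:
            best_action, best_value = a, value
    return best_action, best_value

# Plan step by step: simulate ahead, commit to the best first action, repeat.
state, actions, trajectory = 0, [1, 2, 3], []
while state != GOAL and len(trajectory) < 20:
    action, _ = plan(state, actions)
    trajectory.append(action)
    state = world_model(state, action)

print(state, trajectory)
```

The point of the sketch is the division of labor: the same model that proposes actions also simulates their consequences, so the search scores whole imagined trajectories instead of greedily extending a single chain of thought.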