[R] LLMs cannot find reasoning errors, but can correct them!
Nov. 20, 2023, 5:40 p.m. | /u/gladystyen
Machine Learning www.reddit.com
I recently did an internship at Google and wrote a paper on LLM self-correction. We released a dataset of Chain-of-Thought reasoning steps, generated using PaLM 2, and annotated with the location of the first logical error. Thought some folks here might be interested!
Paper link: [https://arxiv.org/abs/2311.08516](https://arxiv.org/abs/2311.08516)
GitHub link: [https://github.com/WHGTyen/BIG-Bench-Mistake](https://github.com/WHGTyen/BIG-Bench-Mistake)
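For a rough sense of how such a dataset could be consumed, here is a minimal sketch of reading one annotated entry. The field names (`input`, `steps`, `mistake_index`) are illustrative assumptions, not necessarily the dataset's actual schema — check the GitHub repo for the real format:

```python
import json

# Hypothetical annotated entry: a list of Chain-of-Thought steps plus the
# index of the first logically incorrect step (field names are assumptions).
raw = """{
  "input": "Sort the following words alphabetically: banana apple cherry",
  "steps": [
    "The words are: banana, apple, cherry.",
    "Alphabetically, banana comes first.",
    "So the sorted order is: banana, apple, cherry."
  ],
  "mistake_index": 1
}"""

record = json.loads(raw)

# mistake_index locates the first logical error among the reasoning steps
# (0-based here; the real dataset may differ).
first_error_step = record["steps"][record["mistake_index"]]
print(first_error_step)
```

A mistake-finding benchmark would then ask an LLM to predict `mistake_index` from `input` and `steps`, and compare against the human annotation.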
# TL;DR
Recently, Google DeepMind showed that [LLMs cannot self-correct reasoning errors without external feedback](https://arxiv.org/abs/2310.01798). We wanted to investigate this and set out to answer these questions:
1. Can …