Look Before You Leap: Problem Elaboration Prompting Improves Mathematical Reasoning in Large Language Models
Feb. 27, 2024, 5:49 a.m. | Haoran Liao, Jidong Tian, Shaohua Hu, Hao He, Yaohui Jin
cs.CL updates on arXiv.org arxiv.org
Abstract: Large language models (LLMs) have exhibited impressive performance across NLP tasks, yet they still struggle with complex reasoning tasks and can be sensitive to the input context. Although significant effort has been invested in enhancing the reasoning process and improving the robustness of prefix prompts, the crucial role of the problem context has been overlooked. In this study, we propose a new approach to improving the mathematical capabilities of LLMs, named Problem Elaboration Prompting (PEP). Specifically, PEP decomposes and elucidates the …
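The abstract describes PEP as a step that decomposes and elucidates the problem context before the model reasons toward an answer. Below is a minimal sketch of what such a two-stage prompt pipeline could look like; the template wording and the `build_pep_prompts` helper are illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical sketch of a PEP-style two-stage prompting pipeline.
# Stage 1: ask the model to decompose and clarify the problem context.
# Stage 2: solve the problem, conditioned on the stage-1 elaboration.
# Templates are invented for illustration; the paper's exact prompts differ.

ELABORATE_TEMPLATE = (
    "First, decompose the following problem into its key conditions and "
    "restate each one clearly, resolving any ambiguity:\n\n{problem}"
)

SOLVE_TEMPLATE = (
    "Problem:\n{problem}\n\n"
    "Elaborated context:\n{elaboration}\n\n"
    "Using the elaborated context, solve the problem step by step."
)


def build_pep_prompts(problem, elaboration=None):
    """Return the stage-1 prompt, or the stage-2 prompt once the
    model's stage-1 output (the elaboration) is available."""
    if elaboration is None:
        return ELABORATE_TEMPLATE.format(problem=problem)
    return SOLVE_TEMPLATE.format(problem=problem, elaboration=elaboration)


problem = "A train travels 120 km in 2 hours. What is its average speed?"

# Stage 1 prompt: sent to the LLM to obtain an elaborated context.
stage1_prompt = build_pep_prompts(problem)

# Stage 2 prompt: built after the LLM returns its elaboration.
stage2_prompt = build_pep_prompts(
    problem,
    elaboration="Distance = 120 km; time = 2 h; speed = distance / time.",
)
```

In practice, the stage-1 prompt would be sent to the LLM, its output captured as `elaboration`, and the stage-2 prompt sent in a second call; the sketch only shows prompt construction.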