June 25, 2024, 4:43 a.m. | Jinyoung Park, Ameen Patel, Omar Zia Khan, Hyunwoo J. Kim, Joo-Kyung Kim

cs.CL updates on arXiv.org

arXiv:2311.09762v2 Announce Type: replace
Abstract: Chain-of-Thought (CoT) prompting, combined with sub-question generation and answering, has enhanced the multi-step reasoning capabilities of Large Language Models (LLMs). However, prompting LLMs to generate sub-questions directly is suboptimal, since they sometimes produce redundant or irrelevant questions. To address this, we propose GE-Reasoning, a method that directs LLMs to generate proper sub-questions and corresponding answers. Concretely, given an input question, we first prompt the LLM to generate knowledge triplets, forming a graph representation of …
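To make the described pipeline concrete, here is a minimal sketch of graph-elicited sub-question reasoning in Python. It assumes a generic `llm(prompt) -> str` completion function (hypothetical placeholder); the exact prompts, triplet format, and graph construction used by GE-Reasoning are not given in the truncated abstract and may differ from this sketch.

```python
# Minimal sketch of triplet-elicited sub-question reasoning.
# llm() is a hypothetical placeholder for any LLM completion call;
# the paper's actual prompts and graph handling may differ.

def llm(prompt: str) -> str:
    """Placeholder for an LLM completion API call (hypothetical)."""
    raise NotImplementedError

def elicit_triplets(question: str) -> list[tuple[str, str, str]]:
    """Prompt the LLM for (subject, relation, object) knowledge triplets
    that together form a graph over the entities in the question."""
    raw = llm(
        "List the knowledge triplets needed to answer the question, "
        "one per line as: subject | relation | object\n"
        f"Question: {question}"
    )
    triplets = []
    for line in raw.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triplets.append((parts[0], parts[1], parts[2]))
    return triplets

def answer_with_graph(question: str) -> str:
    """Turn each elicited triplet into a focused sub-question, answer it,
    then compose the sub-answers into a final answer."""
    qa_pairs = []
    for subj, rel, obj in elicit_triplets(question):
        sub_q = llm(f"Write one question asking about: {subj} {rel} {obj}")
        sub_a = llm(f"Answer concisely: {sub_q}")
        qa_pairs.append((sub_q, sub_a))
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return llm(f"{context}\nUsing the answers above, answer: {question}")
```

Grounding sub-questions in elicited triplets, rather than asking the LLM for sub-questions directly, is what the abstract identifies as the guard against redundant or irrelevant questions: each sub-question corresponds to one edge of the elicited graph.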
