April 3, 2024, 4:47 a.m. | Xin Su, Tiep Le, Steven Bethard, Phillip Howard

cs.CL updates on arXiv.org

arXiv:2311.08505v2 Announce Type: replace
Abstract: An important open question in the use of large language models for knowledge-intensive tasks is how to effectively integrate knowledge from three sources: the model's parametric memory, external structured knowledge, and external unstructured knowledge. Most existing prompting methods either rely on one or two of these sources, or require repeatedly invoking large language models to generate similar or identical content. In this work, we overcome these limitations by introducing a novel semi-structured prompting approach that …
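The truncated abstract leaves the method itself unspecified, but the core idea it describes (a single prompt that jointly exposes structured triples, unstructured passages, and the model's own parametric memory, avoiding repeated LLM invocations) can be illustrated with a minimal, hypothetical Python sketch. The function names, prompt wording, and example data below are assumptions for illustration, not the authors' method:

```python
# Minimal sketch of integrating three knowledge sources in one prompt.
# This is NOT the paper's implementation; every name and the prompt
# wording here are illustrative assumptions.

def format_triples(triples):
    # Render knowledge-graph triples (structured knowledge) one per line.
    return "\n".join(f"({s}, {r}, {o})" for s, r, o in triples)

def build_prompt(question, triples, passages):
    # Interleave structured facts and unstructured passages, and invite
    # the model to fall back on its parametric memory, so a single LLM
    # call can draw on all three sources.
    facts = format_triples(triples)
    context = "\n".join(passages)
    return (
        "Answer the question using the structured facts, the passages, "
        "and your own background knowledge. Note which source supports "
        "each reasoning step.\n\n"
        f"Structured facts:\n{facts}\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer (reason step by step):"
    )

if __name__ == "__main__":
    prompt = build_prompt(
        question="What country is the director of Parasite from?",
        triples=[("Parasite", "directed_by", "Bong Joon-ho")],
        passages=["Bong Joon-ho is a South Korean film director."],
    )
    print(prompt)  # pass to any chat/completions LLM endpoint
```

The design point the sketch tries to capture is that the integration happens inside one prompt, which is what lets a single model invocation reason over all three sources instead of generating similar or identical content across repeated calls.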
