March 18, 2024, 4:42 a.m. | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths

cs.LG updates on arXiv.org

arXiv:2309.02427v3 Announce Type: replace-cross
Abstract: Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive …
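The "internal control flows" the abstract mentions can be illustrated with prompt chaining, where the output of one LLM call is fed into the prompt of the next. The sketch below is a minimal illustration, not the paper's method; the `llm` function is a hypothetical stand-in for a real model API call.

```python
# Minimal sketch of prompt chaining: each step's output becomes part
# of the next step's prompt. `llm` is a placeholder, not a real model.

def llm(prompt: str) -> str:
    # Hypothetical "model": returns a canned reply per prompt prefix.
    if prompt.startswith("Summarize:"):
        return "LLMs can be organized as cognitive architectures."
    return f"Answer grounded in: {prompt}"

def chain(question: str) -> str:
    # Step 1: ask the model for an intermediate summary.
    summary = llm(f"Summarize: {question}")
    # Step 2: substitute that summary into a follow-up prompt.
    return llm(f"Using this summary, respond. Summary: {summary}")

result = chain("What are language agents?")
```

In a real agent the two calls would go to an actual LLM, and the chain might be longer (e.g., retrieve, summarize, then answer), but the control-flow idea is the same.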

