May 19, 2023, 8:38 a.m. | /u/ironborn123

Machine Learning www.reddit.com

This Tree of Thoughts (ToT) paper seems to be a more structured approach to building problem-solving agents on top of LLMs, compared to existing attempts like AutoGPT or BabyAGI.

https://arxiv.org/abs/2305.10601

But they also highlight the known limitation that these approaches can get quite expensive with paid LLM APIs. On the other hand, larger models show better reasoning abilities. It would be interesting if someone used LLaMA/Alpaca 65B as the locally run LLM for ToT and compared the results.
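For anyone wanting to try that, here is a minimal sketch of the kind of breadth-first ToT search the paper describes, written against generic `generate`/`evaluate` callables so they can be wired to a locally run model (e.g. a 65B LLaMA served via llama.cpp) instead of a paid API. The function names, prompts, and scoring scheme are my own placeholders, not the paper's exact implementation.

```python
from typing import Callable, List, Tuple

def tot_bfs(
    task: str,
    generate: Callable[[str], List[str]],   # proposes candidate next "thoughts" for a partial solution
    evaluate: Callable[[str, str], float],  # scores a partial solution; higher = more promising
    depth: int = 3,                         # number of thought steps to expand
    breadth: int = 5,                       # number of candidates kept per step
) -> str:
    """Expand a tree of partial solutions, keeping the best `breadth` at each depth."""
    frontier: List[str] = [""]              # partial solutions (chains of thoughts so far)
    for _ in range(depth):
        candidates: List[Tuple[float, str]] = []
        for partial in frontier:
            for thought in generate(f"Task: {task}\nSo far: {partial}\nNext step:"):
                extended = partial + "\n" + thought
                candidates.append((evaluate(task, extended), extended))
        # Beam-style pruning: keep only the top-scoring partial solutions.
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = [text for _, text in candidates[:breadth]]
    return frontier[0]

# Toy usage with dummy callables; swap these for calls into a local LLM.
if __name__ == "__main__":
    dummy_generate = lambda prompt: [f"thought {i}" for i in range(3)]
    dummy_evaluate = lambda task, partial: float(len(partial))
    print(tot_bfs("24 game with 4, 9, 10, 13", dummy_generate, dummy_evaluate))
```

The cost concern above falls out of the loop structure: each step makes breadth × branching LLM calls for generation plus one more per candidate for evaluation, which is why a local model is attractive here.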

