Feb. 12, 2024, 5:43 a.m. | Kaiqu Liang, Zixu Zhang, Jaime Fernández Fisac

cs.LG updates on arXiv.org

Large language models (LLMs) exhibit advanced reasoning skills, enabling robots to comprehend natural language instructions and strategically plan high-level actions through proper grounding. However, LLM hallucination may result in robots confidently executing plans that are misaligned with user goals or, in extreme cases, unsafe. Additionally, inherent ambiguity in natural language instructions can induce task uncertainty, particularly in situations where multiple valid options exist. To address this issue, LLMs must identify such uncertainty and proactively seek clarification. This paper explores the …
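The behavior the abstract calls for, detecting that an instruction admits multiple valid plans and asking the user rather than guessing, can be illustrated with a short sketch. The following Python is not the paper's method, only a minimal sample-agreement heuristic: it queries a hypothetical `llm_complete` API several times and treats disagreement among the sampled plans as a signal of task uncertainty. All names and thresholds here are illustrative assumptions.

```python
# Minimal sketch of clarification-seeking planning, NOT the paper's
# actual algorithm. `llm_complete` is a hypothetical stand-in for any
# LLM text-completion API; swap in a real client to run this.
from collections import Counter

def llm_complete(prompt: str, temperature: float = 1.0) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError

def plan_or_clarify(instruction: str, n_samples: int = 5,
                    agreement_threshold: float = 0.8) -> str:
    """Sample several candidate plans; if too few of them agree,
    treat the instruction as ambiguous and ask for clarification."""
    prompt = f"Instruction: {instruction}\nHigh-level plan:"
    samples = [llm_complete(prompt, temperature=1.0)
               for _ in range(n_samples)]
    # Crude agreement check: exact-string majority vote. A real system
    # would compare plans semantically, not literally.
    plan, count = Counter(samples).most_common(1)[0]
    if count / n_samples >= agreement_threshold:
        return plan  # samples agree: execute confidently
    return ("Clarification needed: this instruction admits "
            "multiple valid plans.")
```

The design choice mirrors the abstract's framing: rather than letting a hallucination-prone model commit to a single confident plan, the agent only acts when its own candidate plans converge, and otherwise defers to the user.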

Tags: cs.AI, cs.CL, cs.LG, agents, hallucination, large language models, LLMs, natural language, planning, reasoning, robots, uncertainty
