Feb. 7, 2024, 5:43 a.m. | Xiangru Tang, Qiao Jin, Kunlun Zhu, Tongxin Yuan, Yichi Zhang, Wangchunshu Zhou, Meng Qu, Yilun Zhao

cs.LG updates on arXiv.org

Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discovery across various disciplines. These capabilities, however, also introduce novel vulnerabilities that demand careful consideration for safety, and the literature has yet to examine them comprehensively. This position paper fills that gap with a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on …
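To make the kind of vulnerability at stake concrete, consider an agent that proposes shell or lab-protocol commands and executes them autonomously. A minimal sketch of one common mitigation, a pre-execution guardrail that screens proposed actions against known-risk patterns, is shown below. This is illustrative only and not a mechanism from the paper; all names here (`ProposedAction`, `HAZARDOUS_PATTERNS`, `run_with_guard`) are hypothetical.

```python
# Illustrative sketch only: a pattern-based pre-execution guardrail for an
# autonomous agent. Not from the paper; all names are hypothetical.
import re
from dataclasses import dataclass
from typing import Callable

# Hypothetical patterns that an agent's proposed actions might be screened
# against before execution.
HAZARDOUS_PATTERNS = [
    r"\brm\s+-rf\b",        # destructive shell command
    r"\beval\(|\bexec\(",   # arbitrary code execution
    r"\bcurl\b.*\|\s*sh\b", # piping remote scripts into a shell
]

@dataclass
class ProposedAction:
    tool: str
    command: str

def is_hazardous(action: ProposedAction) -> bool:
    """Return True if the proposed command matches any known-risk pattern."""
    return any(re.search(p, action.command, re.IGNORECASE)
               for p in HAZARDOUS_PATTERNS)

def run_with_guard(action: ProposedAction,
                   execute: Callable[[ProposedAction], str]) -> str:
    """Execute the action only if the guard clears it; otherwise refuse."""
    if is_hazardous(action):
        return f"BLOCKED: {action.tool} command matched a hazard pattern"
    return execute(action)

if __name__ == "__main__":
    risky = ProposedAction(tool="shell", command="rm -rf /data/experiments")
    print(run_with_guard(risky, lambda a: "executed"))  # -> BLOCKED: ...
```

A static denylist like this is deliberately simplistic: it illustrates why safety for scientific agents is hard, since hazardous intent can be paraphrased past any fixed pattern set, which is precisely the sort of gap the paper argues needs systematic study.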
