Feb. 7, 2024, 5:43 a.m. | Xiangru Tang, Qiao Jin, Kunlun Zhu, Tongxin Yuan, Yichi Zhang, Wangchunshu Zhou, Meng Qu, Yilun Zhao

cs.LG updates on arXiv.org

Intelligent agents powered by large language models (LLMs) have shown substantial promise in autonomously conducting experiments and facilitating scientific discovery across disciplines. At the same time, these agents introduce novel vulnerabilities that demand careful consideration for safety, and the literature has yet to explore them comprehensively. This position paper fills that gap with a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on …
