Concept-Guided LLM Agents for Human-AI Safety Codesign
April 25, 2024, 7:42 p.m. | Florian Geissler, Karsten Roscher, Mario Trapp
cs.LG updates on arXiv.org
Abstract: Generative AI is increasingly important in software engineering, including safety engineering, where it is used to ensure that software does not harm people. This places high quality requirements on generative AI itself. The simplistic use of Large Language Models (LLMs) alone will therefore not meet these quality demands. It is crucial to develop more advanced and sophisticated approaches that can effectively address the complexities and safety concerns of software systems. Ultimately, humans must understand …