LLM-Powered Test Case Generation for Detecting Tricky Bugs
April 17, 2024, 4:42 a.m. | Kaibo Liu, Yiyang Liu, Zhenpeng Chen, Jie M. Zhang, Yudong Han, Yun Ma, Ge Li, Gang Huang
Source: cs.LG updates on arXiv.org
Abstract: Conventional automated test generation tools struggle to generate test oracles and tricky bug-revealing test inputs. Large Language Models (LLMs) can be prompted to produce test inputs and oracles for a program directly, but the precision of the tests can be very low for complex scenarios (only 6.3% based on our experiments). To fill this gap, this paper proposes AID, which combines LLMs with differential testing to generate fault-revealing test inputs and oracles targeting plausibly correct …
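The core idea the abstract describes, letting the LLM propose test inputs while disagreement between implementations serves as the oracle, can be sketched in a few lines of Python. This is a minimal illustration under assumptions, not AID's actual pipeline: generate_candidate_inputs, program_under_test, and reference_impl are hypothetical names, and the LLM call is stubbed with fixed inputs.

```python
# Minimal differential-testing sketch inspired by the abstract. All names
# are hypothetical (not from the paper), and the LLM call is stubbed out.

def generate_candidate_inputs(spec: str) -> list[str]:
    # Stand-in for an LLM prompted with the program's spec; a real
    # pipeline would call a model here to propose candidate inputs.
    return ["", "a", "abba", "Abba", "not a palindrome"]

def program_under_test(s: str) -> bool:
    # Plausibly correct but buggy: case-folds, although the (assumed)
    # spec asks for a case-sensitive palindrome check.
    t = s.lower()
    return t == t[::-1]

def reference_impl(s: str) -> bool:
    # Independent implementation; disagreement with the program under
    # test acts as the oracle, so no hand-written assertions are needed.
    return s == s[::-1]

def differential_test(spec: str) -> list[str]:
    # Any input on which the two versions diverge is fault-revealing.
    return [x for x in generate_candidate_inputs(spec)
            if program_under_test(x) != reference_impl(x)]

if __name__ == "__main__":
    print(differential_test("is_palindrome(s: str) -> bool, case-sensitive"))
    # Prints: ['Abba']
```

The appeal of this setup is that it sidesteps the low oracle precision the abstract measures (6.3%): the LLM only proposes inputs, while correctness is judged by cross-checking implementations rather than by trusting an LLM-written assertion.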