Feb. 2, 2024, 9:46 p.m. | Gabriel Ryan, Siddhartha Jain, Mingyue Shang, Shiqi Wang, Xiaofei Ma, Murali Krishna Ramanathan, Baishakhi R

cs.LG updates on arXiv.org

Testing plays a pivotal role in ensuring software quality, yet conventional Search Based Software Testing (SBST) methods often struggle with complex software units, achieving suboptimal test coverage. Recent work using large language models (LLMs) for test generation has focused on improving generation quality by optimizing the test generation context and correcting errors in model outputs, but it relies on fixed prompting strategies that ask the model to generate tests without additional guidance. As a result, LLM-generated test suites still suffer from low …
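To make the critique concrete, here is a minimal sketch (illustrative only, not the paper's actual prompt) of the kind of "fixed" prompting strategy the abstract describes: the same static template is applied to every focal method, carrying no information about its branches or coverage gaps.

```python
# Sketch of a fixed prompting strategy for LLM test generation, as
# critiqued in the abstract: one static template for every focal method,
# with no coverage feedback or path guidance. Names here are hypothetical.

FIXED_TEMPLATE = (
    "You are a unit test generator.\n"
    "Write unit tests for the following function:\n\n"
    "{focal_method}\n"
)

def build_fixed_prompt(focal_method: str) -> str:
    """Return the same prompt shape regardless of the method's structure."""
    return FIXED_TEMPLATE.format(focal_method=focal_method)

# The prompt is identical in shape whether the function has one branch or
# twenty, which is why coverage of complex units tends to stay low.
prompt = build_fixed_prompt(
    "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"
)
print(prompt)
```

A coverage-guided approach, by contrast, would vary the prompt per target (e.g., per uncovered branch) rather than reuse one template.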

