Feb. 29, 2024, 5:47 a.m. | Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra

cs.CL updates on arXiv.org

arXiv:2402.17916v1 Announce Type: new
Abstract: Large language models (LLMs) have significantly transformed the educational landscape. As current plagiarism detection tools struggle to keep pace with LLMs' rapid advancements, the educational community faces the challenge of assessing students' true problem-solving abilities in the presence of LLMs. In this work, we explore a new paradigm for ensuring fair evaluation -- generating adversarial examples which preserve the structure and difficulty of the original questions aimed for assessment, but are unsolvable by LLMs. Focusing …
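To make the paradigm concrete, here is a toy sketch of one way an assessment question could be perturbed while keeping its structure and difficulty intact: rewriting the numbers in a math word problem with same-length replacements. This is only an illustrative assumption for the general idea of structure-preserving adversarial variants, not the generation method the paper proposes (the abstract is truncated before the method is described).

```python
import random
import re

def perturb_numbers(question: str, seed: int = 0) -> str:
    """Replace each integer in a math word problem with a different
    random integer of the same digit length.

    The surface form changes while the problem's structure and
    difficulty stay the same -- a toy stand-in for the kind of
    structure-preserving adversarial variant the abstract describes.
    """
    rng = random.Random(seed)

    def swap(match: re.Match) -> str:
        original = match.group(0)
        n = len(original)
        lo = 10 ** (n - 1) if n > 1 else 0
        hi = 10 ** n - 1
        new = rng.randint(lo, hi)
        # Keep resampling until the value actually differs.
        while str(new) == original:
            new = rng.randint(lo, hi)
        return str(new)

    return re.sub(r"\d+", swap, question)

q = "Alice has 12 apples and buys 30 more. How many does she have now?"
print(perturb_numbers(q))
```

In practice one would then verify (e.g. by querying the target LLM) that the perturbed question remains unsolved by the model while a human can still solve it; that verification step is omitted here.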

