April 24, 2024, 4:47 a.m. | Shashank Sonkar, Naiming Liu, Richard G. Baraniuk

cs.CL updates on arXiv.org

arXiv:2404.15156v1 Announce Type: new
Abstract: This paper presents a novel exploration into the regressive side effects of training Large Language Models (LLMs) to mimic student misconceptions for personalized education. We highlight the problem that as LLMs are trained to more accurately mimic student misconceptions, there is a compromise in the factual integrity and reasoning ability of the models. Our work involved training an LLM on a student-tutor dialogue dataset to predict student responses. The results demonstrated a decrease in the …
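The setup described in the abstract, training an LLM on student-tutor dialogues so that it predicts the student's (possibly mistaken) reply, can be sketched as a standard causal-LM fine-tuning step. The sketch below is an assumption-laden illustration, not the authors' code: the checkpoint name, the toy dialogue pair, and the hyperparameters are all hypothetical.

```python
# Minimal sketch (not the paper's implementation): fine-tune a causal LM so that,
# given the tutor's turn as context, it predicts the student's response.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder checkpoint; the abstract does not name one
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One hypothetical tutor-student exchange; the training target is the student turn,
# which here contains a typical misconception (adding numerators and denominators).
tutor_turn = "Tutor: What is 1/2 + 1/3?\nStudent:"
student_turn = " 2/5"

context_ids = tokenizer(tutor_turn, return_tensors="pt").input_ids
target_ids = tokenizer(student_turn + tokenizer.eos_token, return_tensors="pt").input_ids

input_ids = torch.cat([context_ids, target_ids], dim=1)
# Mask the tutor context with -100 so the loss is computed only on the student turn.
labels = input_ids.clone()
labels[:, : context_ids.shape[1]] = -100

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
outputs = model(input_ids=input_ids, labels=labels)
outputs.loss.backward()
optimizer.step()
print(f"loss on student turn: {outputs.loss.item():.4f}")
```

Repeating such updates over a dialogue corpus shifts the model toward reproducing student-like errors, which is the mechanism behind the trade-off in factual integrity and reasoning ability that the paper reports.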

