April 4, 2024, 4:48 a.m. | Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin

cs.CL updates on arXiv.org

arXiv:2311.07484v2 Announce Type: replace
Abstract: Instruction tuning aligns the responses of large language models (LLMs) with human preferences. Despite such efforts in human–LLM alignment, we report that, interestingly, instruction tuning does not always make LLMs human-like from a cognitive modeling perspective. More specifically, next-word probabilities estimated by instruction-tuned LLMs are often worse at simulating human reading behavior than those estimated by base LLMs. In addition, we explore prompting methodologies for simulating human reading behavior with LLMs. Our results show that …
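As background on the method the abstract refers to: cognitive modeling of reading typically uses per-token surprisal (negative log probability) from a language model as a predictor of human reading times. The sketch below, which is illustrative and not the authors' code, shows how such next-word surprisals can be extracted from a causal LM with the Hugging Face transformers library; the model name "gpt2" is only an example, and comparing a base model against its instruction-tuned counterpart would follow the same recipe.

# Minimal sketch (assumed setup, not from the paper): per-token surprisal from a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_surprisals(text, model_name="gpt2"):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    ids = tok(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(ids).logits  # shape: [1, seq_len, vocab_size]

    # Surprisal of token t is -log2 P(token_t | tokens_<t); the first token has no prediction.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    surprisal_bits = -log_probs[range(targets.size(0)), targets] / torch.log(torch.tensor(2.0))
    return list(zip(tok.convert_ids_to_tokens(targets), surprisal_bits.tolist()))

# Example usage: a garden-path sentence, where surprisal spikes are expected at the disambiguating word.
for token, s in token_surprisals("The old man the boats."):
    print(f"{token:>10s}  {s:6.2f}")

In a reading-time study, these surprisals would then be entered into a regression against human measures (e.g., gaze durations), and the abstract's claim concerns which model's surprisals yield better predictive power.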

