Incoherent Probability Judgments in Large Language Models. (arXiv:2401.16646v1 [cs.CL])
cs.CL updates on arXiv.org
Autoregressive Large Language Models (LLMs) trained for next-word prediction
have demonstrated remarkable proficiency at producing coherent text. But are
they equally adept at forming coherent probability judgments? We use
probabilistic identities and repeated judgments to assess the coherence of
probability judgments made by LLMs. Our results show that the judgments
produced by these models are often incoherent, displaying human-like systematic
deviations from the rules of probability theory. Moreover, when prompted to
judge the same event, the mean-variance relationship of probability …
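The identity-based coherence test described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual protocol: `query_probability` is a hypothetical stand-in for prompting an LLM for a probability judgment, and the example judgments are invented to show what an incoherent response pattern looks like under the complement identity P(A) + P(not A) = 1.

```python
# Sketch of a complement-identity coherence check. For a coherent judge,
# P(A) + P(not A) = 1; any deviation from 1 signals incoherence.

def query_probability(event: str) -> float:
    # Hypothetical stand-in: fixed illustrative judgments.
    # In the paper's setting, an LLM would be prompted here instead.
    judgments = {
        "rain tomorrow": 0.4,
        "no rain tomorrow": 0.7,  # incoherent: sums to 1.1 with the line above
    }
    return judgments[event]

def complement_incoherence(event: str, complement: str) -> float:
    """Absolute deviation of P(A) + P(not A) from 1 (zero if coherent)."""
    return abs(query_probability(event) + query_probability(complement) - 1.0)

print(round(complement_incoherence("rain tomorrow", "no rain tomorrow"), 2))
```

The same pattern extends to other probabilistic identities (e.g. additivity of disjoint events) and to repeated queries of the same event, which is how the abstract describes probing the mean-variance relationship of the judgments.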