June 15, 2024, 2:12 p.m. | Matthias Bastian

THE DECODER (the-decoder.com)


Researchers argue that the falsehoods generated by ChatGPT and other large language models are better described as "bullshit" than as hallucinations, because these AI systems are indifferent to the truth of the text they generate.


The article ChatGPT isn't hallucinating, it's spreading "soft bullshit" appeared first on THE DECODER.

