Feb. 9, 2024, 6:41 p.m. | Dan Milmo, Global technology editor

Artificial intelligence (AI) | The Guardian (www.theguardian.com)

Researchers find large language models, which power chatbots, can deceive human users and help spread disinformation

The UK’s new artificial intelligence safety body has found that the technology can deceive human users and produce biased outcomes, and that it has inadequate safeguards against giving out harmful information.

The AI Safety Institute published initial findings from its research into the advanced AI systems known as large language models (LLMs), which underpin tools such as chatbots and image generators, and identified a number of concerns.


