Feb. 9, 2024, 6:41 p.m. | Dan Milmo, Global technology editor

Artificial intelligence (AI) | The Guardian www.theguardian.com

Researchers find large language models, which power chatbots, can deceive human users and help spread disinformation

The UK’s new artificial intelligence safety body has found that the technology can deceive human users, produce biased outcomes and lacks adequate safeguards against giving out harmful information.

The AI Safety Institute published initial findings from its research into advanced AI systems known as large language models (LLMs), which underpin tools such as chatbots and image generators, and identified a number of concerns.
