June 14, 2022, 4:57 p.m. | Emily M Bender

Artificial intelligence (AI) | The Guardian www.theguardian.com

It’s easy to be fooled by the mimicry, but consumers need transparency about how such systems are used

The Google engineer Blake Lemoine wasn’t speaking for the company officially when he claimed that Google’s chatbot LaMDA was sentient, but Lemoine’s misconception shows the risks of designing systems in ways that convince humans they see real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate? …

