Feb. 5, 2024, 8:58 p.m. | Victor Tangermann

Artificial Intelligence – Futurism futurism.com

A team of Stanford researchers tasked OpenAI's latest large language model with making high-stakes, society-level decisions in a series of wargame simulations — and it didn't bat an eye, recommending the use of nuclear weapons. It's a worrying indication that, without major human oversight, AI models could eventually have the ability to sway policymakers […]

