Feb. 8, 2024, 8:46 p.m. | Grant Gross

Computerworld www.computerworld.com



The US government has created an artificial intelligence safety advisory group — including AI creators, users, and academics — with the goal of putting guardrails on AI use and development.

The new US AI Safety Institute Consortium (AISIC), part of the National Institute of Standards and Technology (NIST), is tasked with developing guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content.

On Thursday, the US Department of Commerce, NIST’s parent agency, …

