Feb. 17, 2024, 11:39 a.m. | Thomas Claburn

The Register - Software: AI + ML www.theregister.com

We speak to the professor who, with colleagues, tooled up OpenAI's GPT-4 and other neural nets

AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk beyond content emission. When wedded to tools that enable automated interaction with other systems, they can act on their own as malicious agents.…
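The pattern the article describes, an LLM "wedded" to tools that let it interact with other systems, can be sketched as a loop in which model output is parsed into tool calls and executed. The sketch below is illustrative only: the model is a hypothetical rule-based stub standing in for GPT-4, and `fetch_page` is an invented example tool, not part of the researchers' actual setup.

```python
# Minimal sketch of an LLM-plus-tools agent loop. The "model" here is a
# hypothetical stub; a real agent would call an LLM such as GPT-4 instead.
from dataclasses import dataclass, field


@dataclass
class Agent:
    tools: dict               # tool name -> callable
    history: list = field(default_factory=list)

    def model(self, prompt: str) -> str:
        # Stand-in for an LLM call. Returns either a tool invocation
        # ("CALL <tool> <arg>") or "DONE".
        if "fetch" in prompt:
            return "CALL fetch_page http://example.com/login"
        return "DONE"

    def run(self, task: str) -> list:
        prompt = task
        while True:
            reply = self.model(prompt)
            self.history.append(reply)
            if reply == "DONE":
                return self.history
            # Parse the model's reply into a tool call and execute it:
            # this is the step that lets the model act on other systems.
            _, tool_name, arg = reply.split(" ", 2)
            result = self.tools[tool_name](arg)
            prompt = f"Result: {result}"


def fetch_page(url: str) -> str:
    # Hypothetical tool: in practice this might be a headless browser
    # giving the model automated access to live websites.
    return f"<html>contents of {url}</html>"


agent = Agent(tools={"fetch_page": fetch_page})
trace = agent.run("fetch the login page")
```

The key point is that the loop, not the model alone, is what turns text generation into action: every reply the model emits is treated as an instruction to run real code against real systems.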
