Nov. 14, 2023, 9:53 a.m. | Romain Dillet

TechCrunch techcrunch.com

Giskard is a French startup working on an open-source testing framework for large language models. It can alert developers to risks of bias, security holes, and a model’s ability to generate harmful or toxic content. While there’s a lot of hype around AI models, ML testing systems will also quickly become a hot topic as […]
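Giskard ships its framework as a Python library; at a high level, a scan wraps a prediction function and a small dataset, then runs automated probes against them. The snippet below is a minimal sketch of that flow for a text-generation model, assuming a simple question-answering setup; the wrapper arguments, the stub LLM, and the output file name are illustrative assumptions rather than the library's confirmed API, so consult the Giskard documentation for the current interface.

```python
# Minimal sketch of an automated LLM scan with Giskard's open-source Python library.
# The wrapper arguments and the stub LLM are assumptions for illustration only.
import giskard
import pandas as pd


def fake_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (an API request in practice); assumed for illustration.
    return "Please follow the password-reset link sent to your email."


def answer_questions(df: pd.DataFrame) -> list:
    # Prediction function: produce one model answer per prompt row.
    return [fake_llm(prompt) for prompt in df["question"]]


model = giskard.Model(
    model=answer_questions,
    model_type="text_generation",
    name="Support assistant",  # assumed metadata
    description="Answers customer support questions.",
    feature_names=["question"],
)

dataset = giskard.Dataset(
    pd.DataFrame({"question": ["How do I reset my password?"]})
)

# Run the automated scan; the resulting report flags issues such as bias,
# harmful or toxic generations, and prompt-injection weaknesses.
report = giskard.scan(model, dataset)
report.to_html("scan_report.html")
```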


