June 7, 2023, 3:16 p.m. | /u/Giskard_AI

Machine Learning www.reddit.com

Documentation: [https://docs.giskard.ai/](https://docs.giskard.ai/)

We’ve just released a beta of our ML Testing library, which covers any Python model, from tabular to LLMs. It lets you scan AI models and identify vulnerabilities such as data leakage, lack of robustness, ethical bias, and overconfidence.

If `giskard.scan(model, dataset)` detects issues in your model, you can generate a set of tests that dig deeper into the detected errors by calling `results.generate_test_suite()`. You can easily customize the tests for your use case by defining domain-specific data …

