June 7, 2023, 3:16 p.m. | /u/Giskard_AI

r/MachineLearning (www.reddit.com)

Documentation: [https://docs.giskard.ai/](https://docs.giskard.ai/)

We’ve just released a beta of our ML Testing library, covering any Python model, from tabular models to LLMs. It lets you scan AI models and identify vulnerabilities such as data leakage, lack of robustness, ethical bias, and overconfidence.

If `giskard.scan(model, dataset)` detects issues in your model, you can generate a set of tests that dive deeper into the detected errors by calling `results.generate_test_suite()`. You can easily customize the tests for your use case by defining domain-specific data …
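A minimal sketch of that scan-then-test workflow. The wrapper classes (`giskard.Model`, `giskard.Dataset`) and their arguments follow our docs but may differ slightly between versions; the keyword "classifier" below is a toy stand-in, used only to keep the example self-contained.

```python
def sentiment_label(text: str) -> str:
    """Toy stand-in classifier, purely illustrative: keyword rule, no ML."""
    return "positive" if "good" in text.lower() else "negative"


if __name__ == "__main__":
    try:
        import pandas as pd
        import giskard

        # A tiny labelled dataset for the demo.
        df = pd.DataFrame(
            {"text": ["good service", "terrible food"],
             "label": ["positive", "negative"]}
        )

        def predict_proba(batch: pd.DataFrame):
            # Return per-class scores for each row, as Giskard expects
            # from a classification prediction function.
            return [[1.0, 0.0] if sentiment_label(t) == "positive" else [0.0, 1.0]
                    for t in batch["text"]]

        # Wrap the prediction function and data so the scanner can probe them.
        model = giskard.Model(
            model=predict_proba,
            model_type="classification",
            classification_labels=["positive", "negative"],
            feature_names=["text"],
        )
        dataset = giskard.Dataset(df=df, target="label")

        # Scan for vulnerabilities, then turn the findings into a test suite.
        results = giskard.scan(model, dataset)
        suite = results.generate_test_suite("demo suite")
        suite.run()
    except ImportError:
        print("giskard/pandas not installed; see https://docs.giskard.ai/")
```

In practice you would wrap your real model and a representative dataset; the generated suite can then be rerun in CI to catch regressions on the detected issues.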

