Feb. 14, 2024, 5:44 a.m. | Aaditya Naik, Adam Stein, Yinjun Wu, Eric Wong, Mayur Naik

cs.LG updates on arXiv.org

Finding errors in machine learning applications requires a thorough exploration of their behavior over data. Existing approaches used by practitioners are often ad hoc and lack the abstractions needed to scale this process. We present TorchQL, a programming framework to evaluate and improve the correctness of machine learning applications. TorchQL allows users to write queries to specify and check integrity constraints over machine learning models and datasets. It seamlessly integrates relational algebra with functional programming to allow for highly expressive queries …
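To make the idea concrete, here is a minimal sketch in plain Python of the kind of check the abstract describes: treating (input, label, prediction) records as a relation and writing a query-like, functional pipeline that flags integrity-constraint violations. This is not TorchQL's actual API; the record fields and the `violations` helper are hypothetical, chosen only to illustrate the pattern.

```python
def violations(records, constraint):
    """Return the records for which the integrity constraint does not hold.

    `records` is any iterable of dicts (a relation); `constraint` is a
    predicate over a single record, in the functional style the abstract
    alludes to. Both names are illustrative, not part of TorchQL.
    """
    return [r for r in records if not constraint(r)]


# Hypothetical dataset: each row pairs a model input with its true label
# and the model's prediction.
records = [
    {"x": 0.2, "label": 0, "pred": 0},
    {"x": 0.9, "label": 1, "pred": 0},  # the model mispredicts here
    {"x": 0.8, "label": 1, "pred": 1},
]

# Integrity constraint: the prediction should agree with the label.
bad = violations(records, lambda r: r["pred"] == r["label"])
# `bad` now holds the single violating record, which a practitioner
# could inspect to debug the model or the dataset.
```

In a framework like the one described, such predicates would compose with relational operators (filters, joins, group-bys) over models and datasets, rather than being written as one-off loops.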
