Feb. 14, 2024, 5:44 a.m. | Remi Desmartin, Omri Isac, Grant Passmore, Kathrin Stark, Guy Katz, Ekaterina Komendantskaya

cs.LG updates on arXiv.org

Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need to guarantee their safety. Safety properties of DNNs can be proven using tools developed by the verification community. However, these tools are themselves prone to implementation bugs and numerical-stability problems, which make their reliability questionable. To overcome this, some verifiers produce proofs of their results, which can then be checked by a trusted checker. In this work, …
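To make the idea of a trusted checker concrete, here is a minimal sketch, not the authors' implementation: it assumes the verifier emits a Farkas-style infeasibility certificate for a set of linear constraints, which is one common form of proof object in LP/SMT-based DNN verification. The function name, certificate format, and tolerances below are illustrative assumptions.

```python
import numpy as np

def check_infeasibility_certificate(A, b, y, tol=1e-9):
    """Check a Farkas-style certificate that the system A @ x <= b is infeasible.

    A hypothetical verifier supplies y >= 0 with y @ A == 0 and y @ b < 0;
    by Farkas' lemma, such a y witnesses that no x satisfies A @ x <= b.
    The checker only needs a few dot products, so it is far simpler (and
    easier to trust) than the search procedure that found y.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    y = np.asarray(y, dtype=float)
    if (y < -tol).any():                      # multipliers must be non-negative
        return False
    if not np.allclose(y @ A, 0.0, atol=tol): # combination must cancel all variables
        return False
    return float(y @ b) < -tol                # and yield a strictly negative constant

# Toy usage: x <= 1 together with -x <= -2 (i.e. x >= 2) is clearly infeasible.
A = [[1.0], [-1.0]]
b = [1.0, -2.0]
y = [1.0, 1.0]                                # 1*(x <= 1) + 1*(-x <= -2) gives 0 <= -1
print(check_infeasibility_certificate(A, b, y))  # True
```

Note that this sketch uses floating-point arithmetic; a checker meant to be trusted would use exact (e.g. rational) arithmetic precisely to avoid the numerical-stability issues the abstract raises about the verifiers themselves.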

