Feb. 2, 2024, 9:46 p.m. | Jacob G. Elkins, Farbod Fahimi

cs.LG updates on arXiv.org

Deep neural networks (DNNs), trained with gradient-based optimization and backpropagation, are currently the primary tool in modern artificial intelligence, machine learning, and data science. In many applications, DNNs are trained offline, through supervised learning or reinforcement learning, and deployed online for inference. However, training DNNs with standard backpropagation and gradient-based optimization gives no intrinsic performance guarantees or bounds on the DNN, and such guarantees are essential for applications such as control. Additionally, many offline-training and online-inference problems, such as sim2real transfer of …
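As a minimal sketch of the offline-training, online-inference workflow the abstract describes, the following PyTorch snippet trains a small DNN offline with backpropagation and gradient-based optimization, then freezes it for online inference. The architecture, data, and hyperparameters here are illustrative assumptions, not the paper's setup:

```python
# Minimal sketch of offline training followed by online inference.
# Architecture, synthetic data, and hyperparameters are illustrative
# assumptions; they are not taken from the paper.
import torch
import torch.nn as nn

# Small DNN for a toy regression task.
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Offline phase: supervised training via backpropagation.
X = torch.randn(256, 4)            # synthetic inputs
y = X.sum(dim=1, keepdim=True)     # synthetic targets
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                # backpropagation
    optimizer.step()               # gradient-based update

# Online phase: the trained network is deployed for inference.
# Note: nothing here bounds the prediction error on inputs outside
# the training distribution -- the gap the abstract highlights.
model.eval()
with torch.no_grad():
    x_new = torch.randn(1, 4)
    prediction = model(x_new)
```

The point of the sketch is that the training loop minimizes empirical loss only; no step in it yields the intrinsic performance bounds that control applications require.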

