April 22, 2024, 4:43 a.m. | Lirandë Pira, Chris Ferrie

cs.LG updates on arXiv.org

arXiv:2308.11098v2 Announce Type: replace-cross
Abstract: Interpretability of artificial intelligence (AI) methods, particularly deep neural networks, is of great interest. This heightened focus stems from the widespread use of AI-backed systems. These systems, often relying on intricate neural architectures, can exhibit behavior that is challenging to explain and comprehend. The interpretability of such models is a crucial component of building trusted systems. Many methods exist to approach this problem, but they do not apply straightforwardly to the quantum setting. Here, we …
