June 13, 2022, 1:11 a.m. | Kunal Sharma, M. Cerezo, Lukasz Cincio, Patrick J. Coles

cs.LG updates on arXiv.org

Several architectures have been proposed for quantum neural networks (QNNs),
with the goal of efficiently performing machine learning tasks on quantum data.
Rigorous scaling results are urgently needed for specific QNN constructions to
understand which, if any, will be trainable at a large scale. Here, we analyze
the gradient scaling (and hence the trainability) for a recently proposed
architecture that we call dissipative QNNs (DQNNs), where the input qubits of
each layer are discarded at the layer's output. We find …
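The kind of gradient-scaling analysis the abstract refers to can be sketched numerically: sample random parameters for a layered parameterized circuit and measure how the variance of a cost-function gradient decays as the system grows. The following minimal NumPy sketch is illustrative only, not the authors' DQNN construction; the hardware-efficient ansatz, the CZ-ladder entangler, the local-Z cost, and the depth-equals-width choice are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal statevector simulator for a gradient-scaling experiment.
# Illustrative sketch only; NOT the paper's DQNN architecture.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cz_ladder(n):
    """Diagonal unitary applying CZ between each neighboring qubit pair."""
    d = np.ones(2 ** n, dtype=complex)
    for idx in range(2 ** n):
        bits = format(idx, f"0{n}b")
        for q in range(n - 1):
            if bits[q] == "1" and bits[q + 1] == "1":
                d[idx] *= -1
    return np.diag(d)

def circuit_state(thetas, n, n_layers):
    """Apply n_layers of (RY on every qubit, then a CZ ladder) to |0...0>."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    ent = cz_ladder(n)
    for l in range(n_layers):
        U = kron_all([ry(t) for t in thetas[l]])
        psi = ent @ (U @ psi)
    return psi

def cost(thetas, n, n_layers):
    """Expectation of Z on the first qubit (a local cost function)."""
    psi = circuit_state(thetas, n, n_layers)
    obs = kron_all([Z] + [I2] * (n - 1))
    return np.real(psi.conj() @ obs @ psi)

def grad_first_param(thetas, n, n_layers):
    """Exact parameter-shift gradient w.r.t. thetas[0, 0]."""
    plus, minus = thetas.copy(), thetas.copy()
    plus[0, 0] += np.pi / 2
    minus[0, 0] -= np.pi / 2
    return 0.5 * (cost(plus, n, n_layers) - cost(minus, n, n_layers))

for n in [2, 4, 6]:
    n_layers = n  # depth growing with width, a common barren-plateau regime
    grads = [
        grad_first_param(rng.uniform(0, 2 * np.pi, (n_layers, n)), n, n_layers)
        for _ in range(200)
    ]
    print(f"n={n}: Var[dC/dtheta] ~ {np.var(grads):.2e}")
```

A vanishing variance as n grows (a barren plateau) signals that gradient-based training becomes exponentially hard, which is precisely why rigorous scaling results for specific QNN constructions matter.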

Tags: arxiv, networks, neural networks, perceptron, quantum, quantum neural networks
