Feb. 21, 2024, 5:42 a.m. | Hao-Yuan Chen, Yen-Jui Chang, Shih-Wei Liao, Ching-Ray Chang

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.12704v1 Announce Type: cross
Abstract: Quantum embedding with transformers is a novel and promising architecture for quantum machine learning, delivering exceptional capability on near-term devices or simulators. The research incorporates a vision transformer (ViT) to significantly advance quantum embedding ability, improving a single-qubit classifier by around 3 percent in median F1 score on BirdCLEF-2021, a challenging high-dimensional dataset. The study showcases and analyzes empirical evidence that our transformer-based architecture is a highly versatile and …
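Since the abstract describes the architecture only at a high level (a ViT front end producing a quantum embedding for a single-qubit classifier), the following is a minimal illustrative sketch of that idea, not the paper's implementation: a transformer encoder over image patches outputs rotation angles for one qubit, and the Pauli-Z expectation value of that qubit serves as the classification logit. All module names, hyperparameters, and the analytic single-qubit simulation below are assumptions for illustration.

```python
# Minimal sketch (assumptions, not the paper's code): a ViT-style transformer
# compresses patch features into two rotation angles for a single qubit, whose
# <Z> expectation acts as the classifier output.
import torch
import torch.nn as nn


class TransformerQuantumEmbedder(nn.Module):
    def __init__(self, patch_dim=64, num_patches=16, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Map the [CLS] representation to two rotation angles (theta, phi).
        self.to_angles = nn.Linear(d_model, 2)

    def forward(self, patches):                      # patches: (B, N, patch_dim)
        x = self.patch_proj(patches)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.to_angles(x[:, 0])               # (B, 2) rotation angles


def single_qubit_expectation(angles):
    """<Z> for |0> evolved by RY(theta) then RZ(phi).

    RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, so <Z> = cos(theta);
    RZ only adds a phase and leaves <Z> unchanged. Simulated analytically
    here instead of calling a quantum backend.
    """
    theta = angles[:, 0]
    return torch.cos(theta)                          # in [-1, 1], used as the logit


if __name__ == "__main__":
    model = TransformerQuantumEmbedder()
    dummy_patches = torch.randn(8, 16, 64)           # batch of 8 "images", 16 patches each
    logits = single_qubit_expectation(model(dummy_patches))
    print(logits.shape)                              # torch.Size([8])
```

In practice the analytic expectation would be replaced by a differentiable quantum simulator or hardware backend, and the patch features would come from the spectrogram preprocessing used for BirdCLEF-2021; those details are not specified in the abstract.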

