April 17, 2023, 8:03 p.m. | Chris Rohlfs

cs.LG updates on arXiv.org

This paper introduces two new ensemble-based methods to reduce the data and
computation costs of image classification. They can be used with any set of
classifiers and do not require additional training. In the first approach, data
usage is reduced by only analyzing a full-sized image if the model has low
confidence in classifying a low-resolution, pixelated version. When applied to
the best-performing classifiers considered here, data usage is reduced by 61.2%
on MNIST, 69.6% on KMNIST, 56.3% on …
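As a rough sketch of the confidence-gating idea in the first approach (not the authors' implementation), the snippet below assumes two hypothetical classifiers exposing a scikit-learn-style predict_proba interface, a user-supplied downsample function, and an illustrative confidence threshold:

import numpy as np

def confidence_gated_predict(low_res_model, full_res_model, image,
                             downsample, threshold=0.9):
    """Two-stage prediction: classify a pixelated version first and only
    fall back to the full-sized image when confidence is low.

    The 0.9 threshold is illustrative, not a value from the paper.
    """
    # Stage 1: classify a low-resolution, pixelated version of the image.
    low_res_probs = low_res_model.predict_proba([downsample(image)])[0]
    if np.max(low_res_probs) >= threshold:
        # Confident enough: skip the full-sized image and save data.
        return int(np.argmax(low_res_probs))

    # Stage 2: low confidence, so pay the cost of the full-sized image.
    full_res_probs = full_res_model.predict_proba([image])[0]
    return int(np.argmax(full_res_probs))

In this sketch, raising the threshold trades data savings for accuracy: more images fall through to the full-resolution classifier.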
