July 20, 2022, 1:12 a.m. | Arip Asadulaev, Alexander Panfilov, Andrey Filchenkov

cs.CV updates on arXiv.org

It has been shown that adversarial examples can improve object recognition. But what about their opposite, easy examples? Easy examples are samples that a machine learning model classifies correctly with high confidence. In our paper, we take a first step toward exploring the potential benefits of using easy examples in the training procedure of neural networks. We propose using an auxiliary batch normalization layer for easy examples to improve both standard and robust accuracy.
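Below is a minimal, hypothetical sketch of what a dual batch-normalization setup like this could look like in PyTorch. The class and function names (DualBNConvBlock, easy_example_mask), the confidence threshold, and the routing logic are illustrative assumptions, not the authors' implementation; the abstract only states that an auxiliary batch normalization is applied to easy examples.

```python
import torch
import torch.nn as nn


class DualBNConvBlock(nn.Module):
    """Conv block with a main BatchNorm for ordinary samples and an
    auxiliary BatchNorm for 'easy' samples (illustrative sketch only)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn_main = nn.BatchNorm2d(out_ch)  # statistics for regular examples
        self.bn_easy = nn.BatchNorm2d(out_ch)  # auxiliary statistics for easy examples
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, easy_mask):
        """easy_mask: boolean tensor of shape (batch,) flagging easy samples."""
        h = self.conv(x)
        out = h.clone()
        if easy_mask.any():
            out[easy_mask] = self.bn_easy(h[easy_mask])
        if (~easy_mask).any():
            out[~easy_mask] = self.bn_main(h[~easy_mask])
        return self.relu(out)


def easy_example_mask(logits, labels, threshold=0.9):
    """Flag samples the current model classifies correctly with high
    confidence (the 0.9 threshold is an assumed value)."""
    probs = torch.softmax(logits, dim=1)
    conf, preds = probs.max(dim=1)
    return (preds == labels) & (conf > threshold)
```

Routing easy and regular samples through separate normalization layers keeps their batch statistics apart, analogous to how auxiliary batch norms have been used for adversarial examples.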

