April 29, 2022, 1:11 a.m. | Miloš Nikolić, Enrique Torres Sanchez, Jiahui Wang, Ali Hadi Zadeh, Mostafa Mahmoud, Ameer Abdelhadi, Andreas Moshovos

cs.LG updates on arXiv.org

We introduce a software-hardware co-design approach to reduce memory traffic
and footprint during training with BFloat16 or FP32, boosting energy efficiency
and execution-time performance. We present methods to dynamically adjust the
size and format of the floating-point containers used to store activations and
weights during training. The differing value distributions of exponents and
mantissas lead us to different approaches for each. Gecko exploits the
favourable exponent distribution with a lossless delta encoding approach to
reduce the total exponent footprint by up …
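The excerpt describes Gecko's exponent compression only at a high level. As a rough illustration of the general idea, the Python sketch below delta-encodes the 8-bit exponent fields of FP32 values against a shared base exponent, losslessly, so that a narrow exponent distribution needs fewer bits per value. This is a minimal sketch under my own assumptions, not the paper's actual container format: the function names (encode_exponent_deltas, decode_exponent_deltas) and the single shared base exponent are hypothetical simplifications.

```python
# Minimal sketch of lossless exponent delta encoding (NOT the paper's
# actual Gecko encoder; names and the shared-base scheme are assumptions).
import numpy as np

def encode_exponent_deltas(values: np.ndarray):
    """Split FP32 values into sign+mantissa bits and delta-encoded exponents."""
    bits = values.astype(np.float32).view(np.uint32)
    exponents = ((bits >> 23) & 0xFF).astype(np.int16)  # 8-bit biased exponents
    base = int(exponents.min())                         # shared base exponent
    deltas = (exponents - base).astype(np.uint8)        # small non-negative deltas
    # When values cluster in a narrow exponent range, each delta fits in
    # far fewer than the 8 bits FP32 spends on the exponent.
    delta_bits = max(int(deltas.max()).bit_length(), 1)
    sign_mantissa = bits & np.uint32(0x807FFFFF)        # untouched, so lossless
    return base, deltas, delta_bits, sign_mantissa

def decode_exponent_deltas(base, deltas, sign_mantissa):
    """Reconstruct the original FP32 values exactly."""
    exponents = (deltas.astype(np.uint32) + base) << 23
    return (sign_mantissa | exponents).view(np.float32)

x = np.random.randn(1024).astype(np.float32) * 0.01    # narrow dynamic range
base, deltas, delta_bits, sm = encode_exponent_deltas(x)
assert np.array_equal(decode_exponent_deltas(base, deltas, sm), x)
print(f"exponent bits per value: {delta_bits} (vs. 8 in FP32)")
```

Because trained weights and activations tend to cluster within a narrow exponent range, the per-value delta typically needs only a few bits, which is the favourable distribution the abstract says Gecko exploits.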

arxiv containers deep learning training
