Oct. 3, 2022, 1:12 a.m. | Fabian Tschopp

cs.LG updates on arXiv.org arxiv.org

Deep learning has become a useful data analysis method; however, mainstream
adoption in distributed computer software and embedded devices has been low so
far. Often, adding deep learning inference to mainstream applications and
devices requires new hardware with signal processors suited for convolutional
neural networks. This work adds new data types (quantized 16-bit and 8-bit
integer, 16-bit floating point) to Caffe in order to save memory and increase
inference speed on existing commodity graphics processors with OpenCL, common
in everyday …
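As an illustration of the memory-saving idea the abstract describes, the sketch below shows symmetric linear quantization of a float32 tensor to 8-bit integers with a single scale factor. This is a generic example, not the paper's actual Caffe/OpenCL implementation; the function names and the symmetric-per-tensor scheme are assumptions for illustration.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric linear quantization: map float32 values to int8 in [-127, 127].
    Returns the quantized tensor and the scale needed to recover real values."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 tensor and its scale."""
    return q.astype(np.float32) * scale

# int8 storage uses 1 byte per element instead of 4, a 4x memory saving,
# at the cost of a rounding error bounded by scale / 2 per element.
x = np.array([-1.5, 0.0, 0.75, 1.5], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
```

The same scheme extends to 16-bit integers (range [-32767, 32767], smaller rounding error) and, for 16-bit floating point, to a simple dtype cast.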

Tags: arxiv, mixed-precision networks, neural networks, precision
