Quantization Aware Factorization for Deep Neural Network Compression. (arXiv:2308.04595v1 [cs.LG])
cs.LG updates on arXiv.org
Tensor decomposition of convolutional and fully-connected layers is an
effective way to reduce the number of parameters and FLOPs in neural
networks. Due to the memory and power-consumption limits of mobile and
embedded devices, a quantization step is usually necessary when
pre-trained models are deployed. Applying conventional post-training
quantization to networks with decomposed weights yields a drop in
accuracy. This motivated us to develop an algorithm that finds a tensor
approximation directly with quantized factors and thus benefits from
both compression techniques …
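To make the idea concrete, here is a minimal sketch of what "finding an approximation directly with quantized factors" can look like, using a plain rank-r matrix factorization instead of the paper's tensor decomposition. The functions `quantize` and `qa_factorize`, the alternating-least-squares update, and all parameter choices are hypothetical illustrations, not the algorithm from the paper: each factor is re-quantized after every update, so the reconstruction error is minimized with respect to the quantized factors rather than quantizing a full-precision factorization afterward.

```python
# Hypothetical sketch: quantization-aware low-rank factorization.
# NOT the paper's algorithm; a generic alternating scheme for W ~= U @ V
# where both factors are kept quantized throughout the optimization.
import numpy as np

def quantize(x, n_bits=8):
    """Uniform symmetric quantization of a factor to n_bits."""
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

def qa_factorize(W, rank=16, n_bits=8, n_iter=50):
    """Approximate W with a product of two quantized low-rank factors.

    Alternating least squares where each factor is re-quantized right
    after its update, so the error being reduced is the error of the
    quantized factorization itself.
    """
    m, n = W.shape
    rng = np.random.default_rng(0)
    U = quantize(rng.standard_normal((m, rank)) * 0.1, n_bits)
    V = quantize(rng.standard_normal((rank, n)) * 0.1, n_bits)
    for _ in range(n_iter):
        # Solve for U given the quantized V, then re-quantize U;
        # then do the same for V given the quantized U.
        U = quantize(np.linalg.lstsq(V.T, W.T, rcond=None)[0].T, n_bits)
        V = quantize(np.linalg.lstsq(U, W, rcond=None)[0], n_bits)
    return U, V

# Usage: relative reconstruction error of the quantized factorization.
W = np.random.default_rng(1).standard_normal((256, 256))
U, V = qa_factorize(W, rank=32)
print("error:", np.linalg.norm(W - U @ V) / np.linalg.norm(W))
```

The contrast with conventional post-training quantization is in where the rounding happens: quantizing the factors of an already-computed decomposition fixes the error after the fact, while the scheme above lets the unquantized factor compensate for the other factor's quantization error at every step.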