FFCNN: Fast FPGA-based Acceleration for Convolutional Neural Network Inference. (arXiv:2208.13250v1 [cs.LG])
Aug. 30, 2022, 1:14 a.m. | F. Keddous, H-N. Nguyen, A. Nakib
cs.CV updates on arXiv.org
We present a new efficient OpenCL-based accelerator for large-scale
convolutional neural networks, called Fast Inference on FPGAs for Convolution
Neural Network (FFCNN). FFCNN is built on a deeply pipelined OpenCL kernel
architecture. High-level synthesis tools such as the OpenCL framework make it
easy to port code originally written for CPUs and GPUs to FPGAs, but it remains
difficult to make that OpenCL code run efficiently on FPGAs. This work aims to
propose an efficient FPGA implementation of …
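The abstract is truncated here, but the computation such accelerators pipeline is the standard 2-D convolution loop nest. As a purely illustrative sketch (the function name and layout conventions below are assumptions, not taken from the paper), a direct convolution over a channels-first tensor looks like this:

```python
import numpy as np

def conv2d(inp, weights, stride=1):
    """Direct 2-D convolution (no padding).

    inp:     (C_in, H, W) input feature maps
    weights: (C_out, C_in, K, K) filter bank
    Returns  (C_out, H_out, W_out) output feature maps.
    """
    c_out, c_in, k, _ = weights.shape
    _, h, w = inp.shape
    oh = (h - k) // stride + 1
    ow = (w - k) // stride + 1
    out = np.zeros((c_out, oh, ow))
    # FPGA HLS designs typically unroll/pipeline these loops;
    # here they are written out plainly for clarity.
    for co in range(c_out):
        for y in range(oh):
            for x in range(ow):
                patch = inp[:, y*stride:y*stride+k, x*stride:x*stride+k]
                out[co, y, x] = np.sum(patch * weights[co])
    return out
```

In a deeply pipelined OpenCL design, the inner multiply-accumulate loops are what the synthesis tool maps to parallel hardware; this reference version only shows the arithmetic being accelerated.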