Aug. 30, 2022, 1:14 a.m. | F. Keddous, H-N. Nguyen, A. Nakib

cs.CV updates on arXiv.org

We present FFCNN (Fast Inference on FPGAs for Convolution Neural Network), a new, efficient OpenCL-based accelerator for large-scale convolutional neural networks. FFCNN is built on a deeply pipelined OpenCL kernel architecture. As prior work has pointed out, high-level synthesis tools such as the OpenCL framework make it easy to port code originally written for CPUs and GPUs to FPGAs, but it remains difficult to make that OpenCL code run efficiently on FPGAs. This work aims to propose an efficient FPGA implementation of …

