May 12, 2022, 9:43 a.m. | /u/emilec___

Deep Learning www.reddit.com

nebullvm is an open-source library that generates an optimized version of your deep learning model, running 2–10× faster at inference without sacrificing accuracy, by leveraging multiple deep learning compilers (OpenVINO, TensorRT, etc.). And thanks to today's new release, nebullvm can accelerate models by up to 30× if you specify that you are willing to trade off a self-defined amount of accuracy/precision for an even lower response time and a lighter model. This additional acceleration is achieved by exploiting optimization techniques …
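The accuracy-for-speed trade-off described above typically relies on techniques such as quantization, where weights are stored in low-precision integers instead of float32. As a rough illustration of the idea (not nebullvm's actual implementation, which the post only hints at), here is a minimal sketch of symmetric int8 post-training quantization in plain Python, showing why the accuracy loss is small and controllable:

```python
import random

def quantize_int8(weights):
    # Symmetric per-tensor quantization: map each float onto an int8
    # bucket in [-127, 127] using a single scale factor.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 representation.
    return [v * scale for v in q]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(4096)]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding to the nearest bucket bounds the per-weight error by half
# a quantization step, which is why accuracy degrades only slightly
# while storage shrinks 4x (int8 vs float32) and integer kernels run faster.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err <= scale / 2 + 1e-9)
```

Libraries like the one announced here automate this kind of transformation (plus compiler selection, sparsity, distillation, etc.) and let the user cap the acceptable accuracy drop.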

