Aug. 3, 2022, 6:17 a.m. | /u/pommedeterresautee

Machine Learning www.reddit.com

**TL;DR**: `TorchDynamo` (a prototype from the PyTorch team) combined with the `nvfuser` backend (from Nvidia) makes BERT inference on PyTorch more than **3X** faster most of the time (it depends on the input shape), by adding just a single line of code to the Python script. The tool itself is model agnostic. The surprising thing is that during the benchmark **we did not see any drawback from using this library**; the acceleration just comes for free. On the same model, TensorRT is (of course) much …

