March 24, 2024, 3:39 p.m. | /u/ski233

Machine Learning www.reddit.com

I've been looking at options to get my trained PyTorch model onto the TensorRT engine (since that appears to be the fastest inference setup for Nvidia devices). However, there seem to be several ways to do this, and I wanted to know whether anyone has experience with which of these approaches yields the best throughput.

1. converting a torch model to tensorrt using torch-tensorrt (this library seems pretty new and the documentation has a lot of issues and …
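For option 1, a minimal sketch of the torch-tensorrt path might look like the following. This assumes a CUDA device with TensorRT and the `torch_tensorrt` package installed; the input shape and FP16 precision setting are placeholders, not values from the post.

```python
def compile_with_torch_tensorrt(model, input_shape=(1, 3, 224, 224)):
    """Sketch: compile an eval-mode PyTorch model into a TensorRT engine
    via torch_tensorrt.compile. Requires CUDA + a TensorRT install;
    input_shape is a hypothetical example (NCHW image batch)."""
    import torch
    import torch_tensorrt  # imported lazily; only present with TensorRT

    model = model.eval().cuda()
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input(shape=list(input_shape),
                                     dtype=torch.float32)],
        enabled_precisions={torch.float16},  # allow FP16 kernels for speed
    )
    return trt_model
```

The compiled module can then be called like a regular `nn.Module` for inference, which makes it easy to benchmark throughput against the original eager-mode model.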

