July 3, 2023, 12:30 a.m. | Sovit Ranjan Rath

DebuggerCafe debuggercafe.com

In this article, we train a custom PyTorch RetinaNet model and export it to the ONNX format. We then run inference with the exported model on the CUDA device using ONNX Runtime.
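The following is a minimal sketch of that workflow, not the article's exact code: it assumes a torchvision RetinaNet fine-tuned elsewhere, and the checkpoint path, class count, and input size are illustrative placeholders.

```python
# Sketch: export a fine-tuned torchvision RetinaNet to ONNX and run it
# with ONNX Runtime on CUDA. Paths and NUM_CLASSES are hypothetical.
import numpy as np
import onnxruntime as ort
import torch
import torchvision

NUM_CLASSES = 5                       # hypothetical class count for the custom dataset
CHECKPOINT = "retinanet_custom.pth"   # hypothetical fine-tuned weights

# Rebuild the detection model and load the custom weights.
model = torchvision.models.detection.retinanet_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES
)
model.load_state_dict(torch.load(CHECKPOINT, map_location="cpu"))
model.eval()

# Export to ONNX with a dummy input; height/width are marked dynamic so
# the exported graph can accept images of varying spatial size.
dummy_input = torch.randn(1, 3, 640, 640)
torch.onnx.export(
    model,
    dummy_input,
    "retinanet_custom.onnx",
    input_names=["input"],
    output_names=["boxes", "scores", "labels"],
    dynamic_axes={"input": {2: "height", 3: "width"}},
    opset_version=11,
)

# Run inference with ONNX Runtime on the CUDA execution provider,
# falling back to CPU if CUDA is unavailable.
session = ort.InferenceSession(
    "retinanet_custom.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
image = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in for a preprocessed image
boxes, scores, labels = session.run(None, {"input": image})
print(boxes.shape, scores.shape, labels.shape)
```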


The post Export PyTorch Model to ONNX – Convert a Custom Detection Model to ONNX appeared first on DebuggerCafe.

Tags: article, cuda, deep learning, detection, export, format, inference, object-detection, onnx, onnx-runtime, pytorch, retinanet
