May 23, 2024, 6:39 a.m. | /u/Loud-Insect9247

Machine Learning www.reddit.com

I quantized YOLOv8 on a Jetson Orin Nano, exporting it with TensorRT (FP16 and INT8) and comparing performance. For YOLOv8s, the base model scores 44.7 mAP50-95 at 33.1 ms inference. The TensorRT FP16 export scores 44.7 mAP50-95 at 11.4 ms, and the TensorRT INT8 export scores 41.2 mAP50-95 at 8.2 ms. There was a slight loss in mAP50-95, but …
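The post doesn't include the export commands, but the workflow it describes maps onto the Ultralytics export API. Below is a minimal sketch: the model weights file, calibration-dataset YAML, and device index are assumptions, not taken from the post. The bottom half just sanity-checks the speedup and accuracy cost implied by the reported numbers.

```python
# Sketch of the FP16/INT8 TensorRT export path the post describes, via the
# Ultralytics API. Weights name, data YAML, and device index are assumptions.

def export_tensorrt_engines():
    """Export YOLOv8s to TensorRT FP16 and INT8 engines (run on the Jetson)."""
    from ultralytics import YOLO  # requires ultralytics + TensorRT installed

    model = YOLO("yolov8s.pt")
    # FP16 engine: half precision, no calibration data needed.
    model.export(format="engine", half=True, device=0)
    # INT8 engine: needs a calibration dataset (YAML path is an assumption).
    model.export(format="engine", int8=True, data="coco.yaml", device=0)

# Sanity-check the numbers reported above: speedup vs. accuracy cost.
BASE_MS, FP16_MS, INT8_MS = 33.1, 11.4, 8.2
BASE_MAP, INT8_MAP = 44.7, 41.2

fp16_speedup = BASE_MS / FP16_MS      # ~2.9x, with no mAP50-95 loss
int8_speedup = BASE_MS / INT8_MS      # ~4.0x
int8_map_drop = BASE_MAP - INT8_MAP   # 3.5 points of mAP50-95
```

So INT8 roughly quadruples throughput over the PyTorch baseline for a 3.5-point mAP50-95 drop, while FP16 gives a ~2.9x speedup for free; which trade-off wins depends on the accuracy budget of the deployment.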

fp16 inference jetson jetson orin machinelearning performance project quantization speed tensorrt yolov8
