May 23, 2024, 6:39 a.m. | /u/Loud-Insect9247

Machine Learning www.reddit.com

I quantized YOLOv8 on a Jetson Orin Nano, exporting it with TensorRT (FP16 and INT8) and comparing the performance. For YOLOv8s, the base model scores 44.7 mAP50-95 with an inference time of 33.1 ms. The TensorRT FP16 export also scored 44.7 mAP50-95 but ran at 11.4 ms, and the TensorRT INT8 export scored 41.2 mAP50-95 at 8.2 ms. There was a slight loss in mAP50-95, but …
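A minimal sketch of the workflow described above, using the Ultralytics export API (assumes `ultralytics` and TensorRT are installed on the Jetson; the weights file and calibration dataset names are illustrative), plus the speedup arithmetic implied by the reported numbers:

```python
# Sketch of the TensorRT export path for YOLOv8s. Assumes the
# `ultralytics` package and a TensorRT runtime are available on the
# device; "yolov8s.pt" and "coco.yaml" are illustrative defaults.
def export_trt_variants(weights: str = "yolov8s.pt") -> None:
    from ultralytics import YOLO  # heavy import kept local
    model = YOLO(weights)
    # FP16 TensorRT engine
    model.export(format="engine", half=True)
    # INT8 TensorRT engine; INT8 needs calibration data
    model.export(format="engine", int8=True, data="coco.yaml")

# Deltas computed from the benchmark numbers reported in the post
def speedup(base_ms: float, new_ms: float) -> float:
    return base_ms / new_ms

fp16_speedup = speedup(33.1, 11.4)   # roughly 2.9x faster than the base model
int8_speedup = speedup(33.1, 8.2)    # roughly 4.0x faster
int8_map_drop = 44.7 - 41.2          # 3.5 mAP50-95 points lost at INT8
```

So FP16 here is essentially free accuracy-wise at about a 2.9x speedup, while INT8 trades roughly 3.5 mAP50-95 points for about a 4x speedup.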

fp16 inference jetson jetson orin machinelearning performance project quantization speed tensorrt yolov8
