Oct. 12, 2022, 1:03 a.m. | /u/muunbo

Deep Learning www.reddit.com

I'm an embedded SW dev who once helped a company optimize their data pipeline so they could run computer vision on an edge device (an Nvidia Jetson, in case you were curious).

I'm wondering, is this a common issue for companies? I've heard that ML inference is moving more and more to edge devices instead of being run in the cloud. How do companies deal with having to optimize everything to run on …
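In case it helps, here's a minimal sketch of the kind of optimization I mean, assuming a PyTorch model and the TensorRT toolchain that ships with JetPack. The model, file names, and input shape are illustrative, not from the actual project:

    # Export a PyTorch vision model to ONNX so it can be compiled into a
    # TensorRT engine on the Jetson. Model and file names are illustrative.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None).eval()
    dummy = torch.randn(1, 3, 224, 224)  # example NCHW input for one camera frame

    torch.onnx.export(
        model,
        dummy,
        "resnet18.onnx",
        input_names=["input"],
        output_names=["logits"],
        opset_version=13,
    )

    # On the Jetson itself, the ONNX file is then compiled to an FP16
    # TensorRT engine with the stock trtexec tool:
    #   trtexec --onnx=resnet18.onnx --fp16 --saveEngine=resnet18.plan

In my experience FP16 alone often buys a big latency win on Jetson-class GPUs without any retraining, which is usually the first thing to try before deeper pipeline surgery.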
