Nov. 29, 2023, 6 p.m. | noreply@blogger.com (TensorFlow Blog)

The TensorFlow Blog (blog.tensorflow.org)


Posted by Marat Dukhan and Frank Barchard, Software Engineers




CPUs deliver the widest reach for ML inference and remain the default target for TensorFlow Lite. Consequently, improving CPU inference performance is a top priority, and we are excited to announce that we doubled floating-point inference performance in TensorFlow Lite's XNNPack backend by enabling half-precision inference on ARM CPUs. This means that more AI-powered features can be deployed to older and lower-tier devices.
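Half precision (IEEE 754 binary16, "FP16") trades numeric precision and range for speed and memory: a 10-bit mantissa gives roughly three significant decimal digits, and the largest finite value is 65504. The sketch below is a stdlib-only illustration of what rounding a value to FP16 does numerically; it is not TensorFlow Lite code, just a way to see the precision a half-precision backend works with (Python's `struct` supports the half-precision format code `'e'`).

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision (binary16)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 has a 10-bit mantissa, so 0.1 is stored as the nearest representable value.
print(to_fp16(0.1))      # 0.0999755859375

# 65504 is the largest finite FP16 value and round-trips exactly.
print(to_fp16(65504.0))  # 65504.0
```

Activations that stay within this precision and range are why many models can run in FP16 with negligible accuracy loss, while models with large dynamic ranges need validation before switching.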


Traditionally, TensorFlow Lite supported two kinds …

