Oct. 14, 2023, 5:14 p.m. | /u/markurtz

Natural Language Processing www.reddit.com

The latest research paper from Neural Magic and IST Austria has just landed on arXiv: [Sparse Finetuning for Inference Acceleration of Large Language Models](https://arxiv.org/abs/2310.06927)! In the paper, we push the bounds of what's possible for sparsity in generative AI models and LLMs. The result: smaller, faster, cheaper, and more environmentally friendly deployments.

Our state-of-the-art research has moved the needle for compression and performance on generative models, including 75% sparse MPT models with negligible accuracy loss and sparse T5 and …
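To give a feel for what "75% sparse" means in practice, here is a minimal sketch of unstructured magnitude pruning on a stand-in weight matrix. This is an illustrative toy, not the paper's actual method (which involves sparse finetuning with distillation); the matrix and names here are hypothetical.

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of an LLM.
rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 8))

# Unstructured magnitude pruning at 75% sparsity: zero out the
# 75% of weights with the smallest absolute value, keep the rest.
sparsity = 0.75
threshold = np.quantile(np.abs(weights), sparsity)
mask = np.abs(weights) >= threshold
sparse_weights = weights * mask

print(f"fraction of weights zeroed: {1 - mask.mean():.2f}")
```

A sparsity-aware runtime can then skip the zeroed weights entirely, which is where the inference speedups on CPUs come from.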

