June 28, 2022, 4:21 p.m. | /u/Farconion

Machine Learning www.reddit.com

Large language models and the recent surge of diffusion-based text-to-image models are gosh-darn fun to play with, but due to their size and expensive training costs they're only accessible via an API, or if you yourself have access to a large number of GPUs. Yet there are also a number of compression techniques, like pruning and quantization, that can drastically reduce the size (90%+), and thus the computational requirements, of a trained model. Has there been any work …
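As a minimal sketch of the kind of compression being asked about (assuming PyTorch; this uses its built-in dynamic quantization and a toy MLP stand-in, not anything specific to GPT-3 or DALL·E), int8 quantization of a model's linear layers roughly quarters their weight storage:

```python
import os
import torch
import torch.nn as nn

# Toy stand-in for a transformer feed-forward block; the Linear layers
# are where most of the parameters live and what quantization targets.
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
)

# Post-training dynamic quantization: weights are stored as int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the state dict and report its size in MB."""
    torch.save(m.state_dict(), "_tmp.pt")
    mb = os.path.getsize("_tmp.pt") / 1e6
    os.remove("_tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.1f} MB  ->  int8: {size_mb(quantized):.1f} MB")
```

Pruning works along a different axis (zeroing or removing weights rather than shrinking their precision), and the two are often stacked; whether that stacking holds up at GPT-3/DALL·E scale is exactly the open question here.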

compression dalle dalle-2 gpt gpt-3 machinelearning
