March 12, 2024, 4:41 a.m. | Harshavardhan Adepu, Zhanpeng Zeng, Li Zhang, Vikas Singh

cs.LG updates on arXiv.org

arXiv:2403.06082v1 Announce Type: new
Abstract: Transformers are the backbone of powerful foundation models for many Vision and Natural Language Processing tasks. However, their compute and memory/storage footprint is large, so serving such models is expensive and often requires high-end hardware. To mitigate this difficulty, Post-Training Quantization seeks to modify a pre-trained model and quantize it to eight bits or lower, significantly boosting compute/memory/latency efficiency. Such models have been successfully quantized to four bits with some performance loss. In this work, …
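For readers unfamiliar with the baseline that post-training quantization work builds on, below is a minimal sketch of generic symmetric uniform quantization of a weight tensor to a low bit width. This is not the method proposed in the paper (the abstract is truncated before the authors' contribution); all function names and parameters are illustrative.

```python
# Minimal sketch of symmetric uniform post-training quantization of a weight
# tensor to k bits. Illustrative only; not the paper's method.
import numpy as np

def quantize_symmetric(w: np.ndarray, num_bits: int = 4):
    """Map floats to signed integer codes in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax              # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from integer codes and the scale."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(768, 768)).astype(np.float32)   # Transformer-sized weight matrix
    q, s = quantize_symmetric(w, num_bits=4)
    w_hat = dequantize(q, s)
    print("mean abs reconstruction error at 4 bits:", np.mean(np.abs(w - w_hat)))
```

The storage win comes from keeping only the integer codes plus one scale per tensor; the reconstruction error printed above is the kind of degradation that four-bit post-training quantization methods aim to minimize.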

