April 10, 2024, 4:43 a.m. | Haim Barad, Ekaterina Aidova, Yury Gorbachev

cs.LG updates on arXiv.org

arXiv:2311.04951v2 Announce Type: replace
Abstract: Inference optimizations are critical for improving user experience and reducing infrastructure costs and power consumption. In this article, we illustrate a form of dynamic execution known as speculative sampling to reduce the overall latency of text generation and compare it with standard autoregressive sampling. This can be used together with model-based optimizations (e.g. quantization) to provide an optimized solution. Both sampling methods make use of KV caching. A Jupyter notebook and some sample executions are …

