Jan. 17, 2024, 9:53 p.m. | /u/xiaofanlu

Machine Learning | www.reddit.com



https://preview.redd.it/fk76iuoan2dc1.png?width=357&format=png&auto=webp&s=7b909330a6ccf70258e620c2cc1cfdfa11ee4c40

https://preview.redd.it/n78eend9n2dc1.png?width=353&format=png&auto=webp&s=316b4b74fed8360b5d83845a20f757d9331d131b

q(x) is the draft model's distribution

p(x) is the original (target) model's distribution

I don't understand why, after the speculative decoding algorithm, the output distribution is the same as the target model's distribution. For example:

1. Why does keeping x_i whenever q(x_i) <= p(x_i) not change the output distribution away from the target model's?
2. Why, if x was rejected, do we need to sample x again from norm(max(0, p(x) - q(x)))?
   1. And why does sampling from this normalized distribution not change the output distribution away from the target model's? (The sketch after this list is what I mean by the rule.)
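
To make the question concrete, here is a tiny single-token sketch of the accept/resample rule as I understand it (the 5-token vocabulary and the q/p values are made up just for illustration): sample x ~ q(x), keep it with probability min(1, p(x)/q(x)), otherwise resample from norm(max(0, p(x) - q(x))). Empirically the histogram does come out equal to p(x), I just don't see why.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-token setup: 5-token vocabulary with made-up draft (q) and target (p) distributions.
q = np.array([0.40, 0.30, 0.15, 0.10, 0.05])  # draft model q(x)
p = np.array([0.25, 0.25, 0.20, 0.20, 0.10])  # target model p(x)

# Residual distribution used on rejection: norm(max(0, p(x) - q(x))).
residual = np.maximum(p - q, 0.0)
residual /= residual.sum()

def speculative_sample_one_token():
    """One round of the accept/resample rule as I understand it."""
    x = rng.choice(len(q), p=q)                   # draft proposes x ~ q(x)
    if rng.random() < min(1.0, p[x] / q[x]):      # accept with prob min(1, p(x)/q(x))
        return x                                  # (always kept when q(x) <= p(x))
    return rng.choice(len(p), p=residual)         # otherwise resample from norm(max(0, p - q))

# Empirical check: the histogram of outputs matches p(x), which is exactly what I don't get.
n = 200_000
counts = np.bincount([speculative_sample_one_token() for _ in range(n)], minlength=len(p))
print("empirical:", np.round(counts / n, 3))
print("target p :", p)
```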

really appreciate …
