May 23, 2024, 2:36 p.m. | /u/DriftingClient

Hi all,

I'm working on variational inference methods, mainly in the context of BNNs. Using the reverse (exclusive) KL as the variational objective is the common approach, though lately I've come across some interesting works that use the forward (inclusive) KL as the objective instead, e.g. [1][2][3]. Both divergence measures have also been used for VI in GPs, see e.g. [4].
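
For concreteness (this notation is mine, not taken from the cited papers): with posterior p(θ|D) and variational approximation q(θ), the two objectives differ only in which distribution the expectation is taken under:

```latex
% Reverse (exclusive) KL -- the standard VI objective; expectation under q:
\mathrm{KL}\left(q \,\|\, p\right)
  = \mathbb{E}_{q(\theta)}\left[\log q(\theta) - \log p(\theta \mid \mathcal{D})\right]

% Forward (inclusive) KL -- expectation under the true posterior p:
\mathrm{KL}\left(p \,\|\, q\right)
  = \mathbb{E}_{p(\theta \mid \mathcal{D})}\left[\log p(\theta \mid \mathcal{D}) - \log q(\theta)\right]
```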

While I'm familiar with the well-known difference between the objectives, namely that the reverse KL is 'mode-seeking' …
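
Since the post cuts off right at the mode-seeking point, here is a minimal numerical sketch of that contrast. It is my own illustration, not from the post or the cited papers: the bimodal target, the grid, and the brute-force grid search are all assumptions made purely for the demo.

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical bimodal target: two well-separated, equally weighted modes.
p = 0.5 * gauss(x, -3.0, 1.0) + 0.5 * gauss(x, 3.0, 1.0)

def kl(a, b):
    # KL(a || b) approximated on the grid; epsilon guards against log(0).
    eps = 1e-300
    return np.sum(a * (np.log(a + eps) - np.log(b + eps))) * dx

# Brute-force search over Gaussian q parameters, minimizing each divergence.
mus = np.linspace(-6, 6, 121)
sigmas = np.linspace(0.5, 6, 56)
best_rev, best_fwd = None, None
for mu in mus:
    for sigma in sigmas:
        q = gauss(x, mu, sigma)
        rev = kl(q, p)   # reverse/exclusive KL(q || p): the standard VI objective
        fwd = kl(p, q)   # forward/inclusive KL(p || q)
        if best_rev is None or rev < best_rev[0]:
            best_rev = (rev, mu, sigma)
        if best_fwd is None or fwd < best_fwd[0]:
            best_fwd = (fwd, mu, sigma)

print("reverse KL optimum: mu=%.2f sigma=%.2f" % best_rev[1:])
print("forward KL optimum: mu=%.2f sigma=%.2f" % best_fwd[1:])
```

With modes this far apart, the reverse-KL optimum collapses onto a single component (mu ≈ ±3, sigma ≈ 1), because placing q's mass where p has little density is heavily penalized. The forward-KL optimum is the moment-matched Gaussian (mu ≈ 0, sigma ≈ √10 ≈ 3.2), straddling both modes so that no region where p has mass gets near-zero density under q: mode-seeking vs mass-covering in miniature.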
