May 23, 2024, 2:36 p.m. | /u/DriftingClient

r/MachineLearning

Hi all,

I'm working on variational inference methods, mainly in the context of BNNs. Using the reverse (exclusive) KL as the variational objective is the common approach, though lately I've stumbled upon some interesting works that use the forward (inclusive) KL as the objective instead, e.g. [1][2][3]. Both divergence measures have also been used for VI in GPs; see e.g. [4].
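
To make the setup concrete, here's a minimal sketch of both objectives on a toy 1-D bimodal target with a Gaussian variational family. The target, the reparameterized estimator for the reverse KL, and the self-normalized importance-sampling estimator for the forward KL are all just illustrative choices of mine, not taken from [1]-[4]:

```python
import torch
from torch.distributions import Categorical, MixtureSameFamily, Normal

# Toy bimodal "posterior" p(z): an equal mixture of N(-3, 1) and N(3, 1).
# Purely a stand-in target for illustration.
target = MixtureSameFamily(
    Categorical(probs=torch.tensor([0.5, 0.5])),
    Normal(torch.tensor([-3.0, 3.0]), torch.tensor([1.0, 1.0])),
)

# Variational family q(z) = N(mu, sigma^2), with sigma parameterized on the log scale.
mu = torch.tensor(0.5, requires_grad=True)       # slightly off-center to break symmetry
log_sigma = torch.tensor(0.0, requires_grad=True)

def reverse_kl_loss(n=512):
    """Monte Carlo estimate of KL(q || p) = E_q[log q(z) - log p(z)].

    Reparameterized samples (rsample) let gradients flow through z,
    i.e. the standard ELBO-style estimator.
    """
    q = Normal(mu, log_sigma.exp())
    z = q.rsample((n,))
    return (q.log_prob(z) - target.log_prob(z)).mean()

def forward_kl_loss(n=512):
    """Estimate of the q-dependent part of KL(p || q) = E_p[-log q(z)] + const.

    We can't sample from p directly, so this uses self-normalized
    importance sampling with q itself as the proposal; samples and
    weights are detached so gradients only flow through log q(z).
    """
    q = Normal(mu, log_sigma.exp())
    z = q.rsample((n,)).detach()
    log_w = target.log_prob(z) - q.log_prob(z).detach()
    w = torch.softmax(log_w, dim=0)              # normalized importance weights
    return -(w * q.log_prob(z)).sum()
```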

While I'm familiar with the well-known difference between the two objectives, namely that the reverse KL is 'mode-seeking' …
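
Optimizing each loss above on the toy target shows the behavior I mean (again just a sketch; exact numbers will vary with the seed and estimator noise):

```python
# Illustrative expectation: reverse KL tends to lock onto a single mode
# (mu near +3 or -3, sigma near 1), while forward KL stretches q over
# both modes (mu near 0, sigma inflated towards ~3).
for loss_fn in (reverse_kl_loss, forward_kl_loss):
    with torch.no_grad():
        mu.fill_(0.5)
        log_sigma.fill_(0.0)
    opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss_fn().backward()
        opt.step()
    print(f"{loss_fn.__name__}: mu={mu.item():.2f}, sigma={log_sigma.exp().item():.2f}")
```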
