Jan. 21, 2022, 2:11 a.m. | Simon Heilig, Maximilian Münch, Frank-Michael Schleif

cs.LG updates on arXiv.org

Matrix approximations are a key element in large-scale algebraic machine
learning approaches. The recently proposed method MEKA (Si et al., 2014)
effectively exploits two common assumptions in Hilbert spaces: the low-rank
property of an inner product matrix obtained from a shift-invariant kernel
function, and a data-compactness hypothesis in the form of an inherent
block-cluster structure. In this work, we extend MEKA to be applicable not only
to shift-invariant kernels but also to non-stationary kernels such as polynomial
kernels and an extreme …
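To make the block-cluster idea concrete, below is a minimal, illustrative Python sketch of a MEKA-style approximation, not the authors' implementation: data are clustered with k-means, each diagonal block of a Gaussian kernel matrix gets a Nyström low-rank factor, and off-diagonal blocks are coupled through small link matrices. The names (`meka_sketch`, `rbf_kernel`) are mine, and fitting the link matrices against full cross blocks is a simplification for clarity; the actual method estimates them from subsampled entries to keep the cost low.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_kernel(X, Y, gamma=1.0):
    """Shift-invariant Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def meka_sketch(X, n_clusters=4, rank=10, gamma=1.0, seed=0):
    """Illustrative MEKA-style block-cluster low-rank approximation.

    Returns cluster index sets, per-cluster Nystroem factors W[s], and
    link matrices L[s, t], so that block (s, t) of the kernel matrix is
    approximated by W[s] @ L[s, t] @ W[t].T.
    """
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    blocks = [np.flatnonzero(labels == s) for s in range(n_clusters)]

    # Per-cluster Nystroem factor: K_ss ~ C Kmm^{-1} C.T = W[s] @ W[s].T
    W = []
    for idx in blocks:
        m = min(rank, len(idx))
        landmarks = rng.choice(idx, size=m, replace=False)
        C = rbf_kernel(X[idx], X[landmarks], gamma)
        Kmm = rbf_kernel(X[landmarks], X[landmarks], gamma)
        U, s, _ = np.linalg.svd(Kmm)
        inv_sqrt = np.where(s > 1e-10, 1.0 / np.sqrt(s), 0.0)
        W.append(C @ (U * inv_sqrt) @ U.T)  # C @ Kmm^{-1/2}

    # Link matrices: least-squares fit of each cross block.
    # (Real MEKA estimates these from subsampled entries instead.)
    L = {}
    for s in range(n_clusters):
        for t in range(n_clusters):
            if s == t:
                L[s, t] = np.eye(W[s].shape[1])
            else:
                K_st = rbf_kernel(X[blocks[s]], X[blocks[t]], gamma)
                L[s, t] = np.linalg.pinv(W[s]) @ K_st @ np.linalg.pinv(W[t]).T
    return blocks, W, L

# Toy usage: relative error of one reconstructed off-diagonal block.
X = np.random.default_rng(1).standard_normal((400, 5))
blocks, W, L = meka_sketch(X, n_clusters=3, rank=15, gamma=0.5)
K01 = rbf_kernel(X[blocks[0]], X[blocks[1]], gamma=0.5)
K01_hat = W[0] @ L[0, 1] @ W[1].T
print(np.linalg.norm(K01 - K01_hat) / np.linalg.norm(K01))
```

Extending this scheme to non-stationary kernels, as the paper proposes, would amount to swapping `rbf_kernel` for, e.g., a polynomial kernel `(X @ Y.T + c) ** d`; the block-cluster structure then no longer follows from shift invariance and has to be justified differently.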

Tags: arxiv, kernel learning, perspective
