Feb. 13, 2024, 5:41 a.m. | José Ángel Martín-Baos, Ricardo García-Ródenas, Luis Rodriguez-Benitez, Michel Bierlaire

cs.LG updates on arXiv.org

The application of kernel-based Machine Learning (ML) techniques to discrete choice modelling using large datasets often faces challenges due to memory requirements and the considerable number of parameters involved in these models. This complexity hampers the efficient training of large-scale models. This paper addresses these problems of scalability by introducing the Nyström approximation for Kernel Logistic Regression (KLR) on large datasets. The study begins by presenting a theoretical analysis in which: i) the set of KLR solutions is characterised, ii) …
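The core idea referenced in the abstract can be sketched in a few lines: the Nyström method replaces the full n×n kernel matrix with a low-rank feature map built from m ≪ n landmark points, after which logistic regression is fit in that reduced feature space. The sketch below is a minimal illustration of this general technique, not the paper's specific algorithm; the binary (rather than multinomial) setting, the RBF kernel, the plain gradient-descent solver, and all parameter values are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=1.0, eps=1e-8):
    # Nystrom feature map Phi = K_nm @ K_mm^{-1/2}, so that
    # Phi @ Phi.T approximates the full kernel matrix K_nn
    K_mm = rbf(landmarks, landmarks, gamma)
    K_nm = rbf(X, landmarks, gamma)
    w, V = np.linalg.eigh(K_mm)          # eigendecomposition of K_mm
    w = np.maximum(w, eps)               # guard against tiny/negative eigenvalues
    return K_nm @ V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Toy binary choice problem: two well-separated Gaussian blobs (illustrative data)
n = 200
X = np.vstack([rng.normal(-1, 0.5, (n // 2, 2)),
               rng.normal(1, 0.5, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

m = 20  # number of landmark points, m << n
landmarks = X[rng.choice(n, m, replace=False)]
Phi = nystrom_features(X, landmarks)     # shape (n, m) instead of (n, n)

# Logistic regression fitted in the reduced Nystrom feature space
# via plain gradient descent on the L2-regularised logistic loss
beta = np.zeros(m)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Phi @ beta))
    grad = Phi.T @ (p - y) / n + 1e-3 * beta
    beta -= 0.5 * grad

acc = ((Phi @ beta > 0).astype(int) == y).mean()
```

The memory saving the abstract alludes to is visible in the shapes: training touches only the (n, m) matrix `Phi` rather than the full (n, n) kernel matrix.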

