Feb. 16, 2024, 5:44 a.m. | Shyam Venkatasubramanian, Ahmed Aloui, Vahid Tarokh

cs.LG updates on arXiv.org arxiv.org

arXiv:2311.12356v2 Announce Type: replace
Abstract: Advancing loss function design is pivotal for optimizing neural network training and performance. This work introduces Random Linear Projections (RLP) loss, a novel approach that enhances training efficiency by leveraging geometric relationships within the data. Unlike traditional loss functions that minimize pointwise errors, RLP loss operates by minimizing the distance between sets of hyperplanes connecting fixed-size subsets of feature-prediction pairs and feature-label pairs. Our empirical evaluations, conducted across benchmark datasets and synthetic examples, …
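To make the abstract's description concrete, here is a minimal sketch of the idea as stated: for random fixed-size subsets of a batch, fit a least-squares hyperplane through the feature-prediction pairs and another through the feature-label pairs, then penalize the distance between the two hyperplanes. This is an illustration inferred from the abstract, not the authors' reference implementation; the function name `rlp_loss` and parameters such as `n_subsets` and `subset_size` are assumptions.

```python
import numpy as np

def rlp_loss(features, preds, labels, n_subsets=8, subset_size=None, rng=None):
    """Sketch of Random Linear Projections (RLP) loss, per the abstract.

    For each random subset, fit least-squares affine hyperplanes mapping
    features -> predictions and features -> labels, then accumulate the
    squared distance between the two hyperplanes' coefficient vectors.
    """
    rng = np.random.default_rng(rng)
    n, d = features.shape
    if subset_size is None:
        subset_size = d + 1  # enough points to determine an affine hyperplane
    X = np.hstack([features, np.ones((n, 1))])  # append intercept column
    total = 0.0
    for _ in range(n_subsets):
        idx = rng.choice(n, size=subset_size, replace=False)
        # Hyperplane through feature-prediction pairs and feature-label pairs
        w_pred, *_ = np.linalg.lstsq(X[idx], preds[idx], rcond=None)
        w_true, *_ = np.linalg.lstsq(X[idx], labels[idx], rcond=None)
        total += np.sum((w_pred - w_true) ** 2)
    return total / n_subsets
```

When predictions coincide with labels, every pair of fitted hyperplanes matches and the loss is zero; any systematic mismatch shifts the fitted coefficients and yields a positive penalty. A trainable version would use a differentiable solver (e.g. `torch.linalg.lstsq`) so gradients flow through the hyperplane fits.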

