April 11, 2022, 1:11 a.m. | Yoonseok Yang, Kyu Seok Kim, Minsam Kim, Juneyoung Park

cs.CL updates on arXiv.org

Content-based collaborative filtering (CCF) provides personalized item
recommendations based on both users' interaction history and items' content
information. Recently, pre-trained language models (PLMs) have been used to
extract high-quality item encodings for CCF. However, fine-tuning a PLM in an
end-to-end (E2E) manner is resource-intensive in CCF due to its multi-modal
nature: optimization involves redundantly encoding the content of items that
appear in many users' interactions. To address this, we propose GRAM (GRadient
Accumulation for Multi-modality): (1) Single-step GRAM, which aggregates
gradients for each item while …
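The core idea, encoding each unique item once and accumulating the gradients from all of its interactions onto that single encoding, can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it uses a hypothetical linear "content encoder" `W` and verifies that per-item gradient accumulation reproduces the gradient of the naive, redundant encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (assumed for illustration): a linear encoder W maps
# item content features to encodings; a training batch of interactions
# references the same items repeatedly.
W = rng.normal(size=(4, 3))                 # encoder weights
item_feats = rng.normal(size=(5, 3))        # content features per item
batch_items = np.array([0, 2, 2, 4, 0, 2])  # interactions; items repeat
targets = rng.normal(size=(len(batch_items), 4))

# Naive E2E: encode every interaction's item, redundantly.
enc_naive = item_feats[batch_items] @ W.T            # (6, 4)
grad_enc = 2 * (enc_naive - targets)                 # dL/d(encoding), L = squared error
grad_W_naive = grad_enc.T @ item_feats[batch_items]  # dL/dW

# Single-step-GRAM-style sketch: encode each unique item once, then
# accumulate gradients from all of its interactions onto that encoding
# before backpropagating into the encoder.
uniq, inverse = np.unique(batch_items, return_inverse=True)
enc_uniq = item_feats[uniq] @ W.T                    # (3, 4); no redundant encoding
grad_uniq = np.zeros_like(enc_uniq)
np.add.at(grad_uniq, inverse, 2 * (enc_uniq[inverse] - targets))
grad_W_gram = grad_uniq.T @ item_feats[uniq]

# The accumulated gradient matches the naive one exactly.
assert np.allclose(grad_W_naive, grad_W_gram)
```

The saving comes from the forward pass: the encoder runs once per unique item (3 times here) instead of once per interaction (6 times), while the weight gradient is unchanged by linearity of backpropagation.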

arxiv collaborative filtering fine-tuning language models
