RecGPT: Generative Pre-training for Text-based Recommendation
May 22, 2024, 4:47 a.m. | Hoang Ngo, Dat Quoc Nguyen
Source: cs.CL updates on arXiv.org
Abstract: We present the first domain-adapted and fully-trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public Hugging Face links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT