RecGPT: Generative Pre-training for Text-based Recommendation
May 22, 2024, 4:47 a.m. | Hoang Ngo, Dat Quoc Nguyen
cs.CL updates on arXiv.org
Abstract: We present the first domain-adapted and fully-trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public "huggingface" links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT
Subjects: cs.CL, cs.IR
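Since the abstract states that the RecGPT checkpoints are published on Hugging Face via the linked GitHub repository, a minimal sketch of loading the instruction-tuned variant with the `transformers` library is shown below. The repository id `VinAIResearch/RecGPT-7B-Instruct` and the recommendation prompt are assumptions for illustration only; the GitHub README is the authoritative source for the actual model identifiers and prompt template.

```python
# Minimal sketch: load the instruction-tuned RecGPT model from the Hugging Face
# Hub and generate a recommendation-style completion with transformers.
# NOTE: the repo id and the prompt format below are assumptions; see
# https://github.com/VinAIResearch/RecGPT for the real identifiers and template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VinAIResearch/RecGPT-7B-Instruct"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 7B parameters; needs a suitably large GPU
    device_map="auto",
)

# Hypothetical sequential-recommendation prompt (format not taken from the paper).
prompt = (
    "A user has purchased the following items in order:\n"
    "1. wireless noise-cancelling headphones\n"
    "2. USB-C charging cable\n"
    "3. portable Bluetooth speaker\n"
    "Which item is the user most likely to buy next?"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```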