May 22, 2024, 4:47 a.m. | Hoang Ngo, Dat Quoc Nguyen

cs.CL updates on arXiv.org

arXiv:2405.12715v1 Announce Type: cross
Abstract: We present the first domain-adapted and fully-trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public "huggingface" links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT

Tags: arxiv, cs.cl, cs.ir, generative pre-training, recommendation, text, training
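
The abstract points to public Hugging Face links for the released models. Below is a minimal sketch of loading the instruction-following variant with the transformers library; the hub ID vinai/RecGPT-7B-Instruct and the example prompt are assumptions for illustration only, since the linked GitHub repository defines the actual model names and instruction format.

# Minimal sketch of loading the released instruction-following model with
# Hugging Face transformers. The hub ID "vinai/RecGPT-7B-Instruct" is an
# assumption based on the VinAIResearch GitHub link above; check the repo
# for the exact model names and prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vinai/RecGPT-7B-Instruct"  # assumed hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 7B parameters; needs a GPU with enough memory
    device_map="auto",
)

# Hypothetical rating-prediction style prompt; the real instruction format
# used for fine-tuning is documented in the RecGPT repository.
prompt = (
    "A user rated the following items: 'Dune' (5), 'Foundation' (4). "
    "Predict the user's rating for 'Hyperion'."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))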
