May 22, 2024, 4:47 a.m. | Hoang Ngo, Dat Quoc Nguyen

cs.CL updates on arXiv.org

arXiv:2405.12715v1 Announce Type: cross
Abstract: We present the first domain-adapted and fully trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public Hugging Face links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT

