Self-Supervised Contrastive Pre-Training for Multivariate Point Processes
Feb. 5, 2024, 6:41 a.m. | Xiao Shou, Dharmashankar Subramanian, Debarun Bhattacharjya, Tian Gao, Kristin P. Bennett
cs.LG updates on arXiv.org
Tags: BERT, best of, context, cs.LG, event, foundation, GPT, GPT-3, knowledge, language models, large language models, multivariate, new paradigm, popular, pre-training, processes, representation learning, self-supervised learning, supervised learning, supervision, training