Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data
March 19, 2024, 4:45 a.m. | Antonis Antoniades, Yiyi Yu, Joseph Canzano, William Wang, Spencer LaVere Smith
cs.LG updates on arXiv.org arxiv.org
Abstract: State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis. Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive spatiotemporal generation problem. Neuroformer is a multimodal, multitask generative pretrained transformer (GPT) model that is specifically designed to handle the intricacies of data in systems neuroscience. It scales linearly with feature size, …
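The reframing the abstract describes, turning spiking data into an autoregressive generation problem, can be illustrated with a minimal sketch. This is not the paper's implementation; the helper names (`bin_spikes`, `to_tokens`) and the tokenization scheme are purely illustrative assumptions.

```python
# Hypothetical sketch of casting cellular-resolution spiking data as an
# autoregressive token sequence. Names and scheme are illustrative,
# not taken from the Neuroformer paper.

def bin_spikes(spike_times, n_bins, t_max):
    """Discretize one neuron's spike times into binary time bins."""
    bins = [0] * n_bins
    for t in spike_times:
        idx = min(int(t / t_max * n_bins), n_bins - 1)
        bins[idx] = 1
    return bins

def to_tokens(population_bins):
    """Flatten a (neuron, time-bin) grid into one token sequence:
    emit a neuron's id whenever it fires in a bin, scanning bins in
    temporal order. An autoregressive model would then be trained to
    predict token t+1 from tokens 0..t."""
    tokens = []
    n_bins = len(population_bins[0])
    for t in range(n_bins):
        for neuron_id, bins in enumerate(population_bins):
            if bins[t]:
                tokens.append(neuron_id)
    return tokens

# Example: two neurons recorded over 4 time bins spanning 1.0 s
pop = [bin_spikes([0.1, 0.6], 4, 1.0),  # neuron 0 fires in bins 0 and 2
       bin_spikes([0.3], 4, 1.0)]       # neuron 1 fires in bin 1
print(to_tokens(pop))  # -> [0, 1, 0]
```

Because the sequence length grows with the number of emitted spike tokens rather than with a dense neuron-by-time grid, a representation along these lines is consistent with the linear scaling in feature size that the abstract mentions.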