Feb. 15, 2024, 5:43 a.m. | Aditya Malusare, Harish Kothandaraman, Dipesh Tamboli, Nadia A. Lanman, Vaneet Aggarwal

cs.LG updates on arXiv.org

arXiv:2311.02333v2 Announce Type: replace
Abstract: This paper presents the Ensemble Nucleotide Byte-level Encoder-Decoder (ENBED) foundation model, which analyzes DNA sequences at byte-level precision with an encoder-decoder Transformer architecture. ENBED uses a sub-quadratic implementation of attention to build an efficient model capable of sequence-to-sequence transformations, generalizing previous genomic models that use encoder-only or decoder-only architectures. We use Masked Language Modeling to pre-train the foundation model on reference genome sequences and apply it to the following downstream tasks: (1) identification of enhancers, promoters and …
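To make the byte-level masked-language-modeling setup concrete, here is a minimal sketch of how DNA text can be tokenized at the byte level and randomly masked for MLM pre-training. This is not the authors' implementation; the vocabulary layout, the `MASK_ID`/`PAD_ID` values, and the helper names `encode_bytes` and `mask_for_mlm` are illustrative assumptions.

```python
# Minimal sketch: byte-level tokenization of DNA and random masking for MLM.
# Assumed vocabulary: 256 raw byte values plus hypothetical special tokens.
import torch

PAD_ID, MASK_ID = 256, 257   # hypothetical special-token ids
VOCAB_SIZE = 258

def encode_bytes(seq: str) -> torch.Tensor:
    """Map a nucleotide string to raw byte values (one token per byte)."""
    return torch.tensor(list(seq.encode("ascii")), dtype=torch.long)

def mask_for_mlm(tokens: torch.Tensor, mask_prob: float = 0.15):
    """Replace a random fraction of tokens with [MASK]; labels keep the
    original byte at masked positions and -100 (ignored by the loss) elsewhere."""
    mask = torch.rand(tokens.shape) < mask_prob
    inputs = tokens.clone()
    inputs[mask] = MASK_ID
    labels = torch.full_like(tokens, -100)
    labels[mask] = tokens[mask]
    return inputs, labels

inputs, labels = mask_for_mlm(encode_bytes("ACGTACGTTTAGGC"))
print(inputs)
print(labels)
```

The masked inputs and label tensors produced this way are the standard ingredients for an MLM objective; in an encoder-decoder setting such as the one described here, the corrupted sequence would feed the encoder while the model is trained to reconstruct the original bytes.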

