March 14, 2022, 1:11 a.m. | Zuobai Zhang, Minghao Xu, Arian Jamasb, Vijil Chenthamarakshan, Aurelie Lozano, Payel Das, Jian Tang

cs.LG updates on arXiv.org

Learning effective protein representations is critical in a variety of tasks
in biology, such as predicting protein function or structure. Existing
approaches usually pretrain protein language models on a large number of
unlabeled amino acid sequences and then fine-tune the models on labeled
data from downstream tasks. Despite the effectiveness of sequence-based
approaches, the power of pretraining on the smaller set of known protein
structures has not been explored for protein property prediction, though
protein structures are known to be …

arxiv · learning · protein structure · representation · representation learning
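As a rough illustration of the sequence-based pretrain-then-fine-tune paradigm the abstract describes, the sketch below masks amino acids in unlabeled sequences, trains a small Transformer encoder to recover them, and then fine-tunes the same encoder on a toy labeled property-prediction task. All model sizes, the random data, and the binary "property" label are hypothetical placeholders; this is not the paper's method, whose contribution is pretraining on protein structures rather than sequences alone.

```python
# Illustrative sketch only: masked amino-acid pretraining, then fine-tuning.
import torch
import torch.nn as nn

AA_VOCAB = 20          # standard amino acids
MASK_ID = AA_VOCAB     # extra token id used for masking
NUM_CLASSES = 2        # hypothetical binary protein property
DIM = 64

class ProteinEncoder(nn.Module):
    def __init__(self, dim=DIM, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(AA_VOCAB + 1, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, tokens):                    # tokens: (batch, length)
        return self.encoder(self.embed(tokens))   # (batch, length, dim)

encoder = ProteinEncoder()
mlm_head = nn.Linear(DIM, AA_VOCAB)               # predicts the masked residue
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(encoder.parameters()) + list(mlm_head.parameters()), lr=1e-3)

# --- Pretraining on unlabeled sequences: mask residues, predict them back ---
for _ in range(10):
    seqs = torch.randint(0, AA_VOCAB, (8, 50))    # toy unlabeled batch
    mask = torch.rand(seqs.shape) < 0.15          # mask ~15% of positions
    corrupted = seqs.masked_fill(mask, MASK_ID)
    logits = mlm_head(encoder(corrupted))         # (8, 50, AA_VOCAB)
    loss = loss_fn(logits[mask], seqs[mask])      # loss only on masked positions
    opt.zero_grad(); loss.backward(); opt.step()

# --- Fine-tuning on a small labeled downstream task (e.g. a function label) ---
clf_head = nn.Linear(DIM, NUM_CLASSES)
opt = torch.optim.Adam(list(encoder.parameters()) + list(clf_head.parameters()), lr=1e-4)
for _ in range(10):
    seqs = torch.randint(0, AA_VOCAB, (8, 50))    # toy labeled batch
    labels = torch.randint(0, NUM_CLASSES, (8,))
    pooled = encoder(seqs).mean(dim=1)            # mean-pool residue embeddings
    loss = loss_fn(clf_head(pooled), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

The only point of the sketch is the two-stage training loop (self-supervised pretraining, then supervised fine-tuning of the same encoder); the paper's question is what happens when the pretraining stage uses known protein structures instead.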
