PAPERCLIP: Associating Astronomical Observations and Natural Language with Multi-Modal Models
March 15, 2024, 4:42 a.m. | Siddharth Mishra-Sharma, Yiding Song, Jesse Thaler
cs.LG updates on arXiv.org
Abstract: We present PAPERCLIP (Proposal Abstracts Provide an Effective Representation for Contrastive Language-Image Pre-training), a method which associates astronomical observations imaged by telescopes with natural language using a neural network model. The model is fine-tuned from a pre-trained Contrastive Language-Image Pre-training (CLIP) model using successful observing proposal abstracts and corresponding downstream observations, with the abstracts optionally summarized via guided generation using large language models (LLMs). Using observations from the Hubble Space Telescope (HST) as an example, …
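The fine-tuning described in the abstract optimizes CLIP's symmetric contrastive objective over paired (observation image, proposal abstract) embeddings. Below is a minimal NumPy sketch of that loss, assuming hypothetical embedding arrays in place of the actual image and text encoders; the function names and shapes are illustrative, not taken from the paper's code.

```python
import numpy as np

def log_softmax(x, axis):
    """Numerically stable log-softmax along the given axis."""
    shifted = x - x.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Matched (observation, abstract) pairs lie on the diagonal of the
    cosine-similarity matrix; the image-to-text and text-to-image
    cross-entropies are averaged, as in CLIP-style training.
    Inputs are hypothetical (batch, dim) arrays standing in for
    encoder outputs.
    """
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (batch, batch) similarity scores
    diag = np.arange(logits.shape[0])
    loss_i2t = -log_softmax(logits, axis=1)[diag, diag].mean()
    loss_t2i = -log_softmax(logits, axis=0)[diag, diag].mean()
    return (loss_i2t + loss_t2i) / 2

# Toy check with orthonormal embeddings: a correctly paired batch scores
# a much lower loss than the same batch with shuffled text embeddings.
emb = np.eye(4)
print(clip_contrastive_loss(emb, emb))              # near 0
print(clip_contrastive_loss(emb, emb[[1, 2, 3, 0]]))  # large
```

Minimizing this loss pulls each observation's embedding toward its own proposal abstract and away from the other abstracts in the batch, which is what lets the fine-tuned model retrieve observations from free-text queries.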