April 24, 2024, 4:47 a.m. | Hongxuan Liu, Haoyu Yin, Zhiyao Luo, Xiaonan Wang

cs.CL updates on arXiv.org

arXiv:2404.14467v1 Announce Type: new
Abstract: This paper presents a study on the integration of domain-specific knowledge in prompt engineering to enhance the performance of large language models (LLMs) in scientific domains. A benchmark dataset is curated to encapsulate the intricate physical-chemical properties of small molecules, their druggability for pharmacology, along with the functional attributes of enzymes and crystal materials, underscoring the relevance and applicability across biological and chemical domains. The proposed domain-knowledge embedded prompt engineering method outperforms traditional prompt engineering strategies on …
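The paper's actual prompt templates are not included in this excerpt; as a rough illustration only, the sketch below contrasts a plain prompt with one that embeds curated domain knowledge before the task question (the `DOMAIN_CONTEXT` text and the `build_domain_prompt` helper are hypothetical examples, not taken from the paper):

```python
# Minimal sketch of domain-knowledge embedded prompting (hypothetical example,
# not the paper's actual method): prepend curated domain facts to the task
# question before sending it to an LLM.

# Hypothetical domain knowledge for a small-molecule druggability task.
DOMAIN_CONTEXT = (
    "Lipinski's rule of five: an orally active drug typically has "
    "molecular weight <= 500 Da, logP <= 5, at most 5 H-bond donors, "
    "and at most 10 H-bond acceptors."
)


def build_plain_prompt(question: str) -> str:
    """Traditional prompt: the question alone."""
    return f"Question: {question}\nAnswer:"


def build_domain_prompt(question: str, context: str = DOMAIN_CONTEXT) -> str:
    """Domain-knowledge embedded prompt: relevant domain facts precede the question."""
    return (
        "You are assisting with a chemistry task. Use the domain knowledge below.\n"
        f"Domain knowledge: {context}\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    q = "Is a molecule with MW 620 Da and logP 6.2 likely to be orally bioavailable?"
    print(build_plain_prompt(q))
    print("---")
    print(build_domain_prompt(q))
```

Either prompt string would then be passed to an LLM of choice; the abstract reports that the domain-knowledge embedded variant outperforms the plain one on the curated benchmark.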

