Oct. 12, 2022, 1:18 a.m. | Bhavya Bhavya, Jinjun Xiong, Chengxiang Zhai

cs.CL updates on arXiv.org

We propose a novel application of prompting Pre-trained Language Models
(PLMs) to generate analogies and study how to design effective prompts for two
task settings: generating a source concept analogous to a given target concept
(aka Analogous Concept Generation or ACG), and generating an explanation of the
similarity between a given pair of target and source concepts (aka Analogous
Explanation Generation or AEG). We found that it is feasible to prompt
InstructGPT to generate meaningful analogies and the best …
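To make the two task settings concrete, here is a minimal sketch of how ACG and AEG prompts could be wired up. The template wording and the generate() helper are illustrative assumptions standing in for whatever instruction-tuned PLM interface (e.g., InstructGPT) is used; they are not the prompts or API from the paper.

# Minimal sketch of the two prompting settings described in the abstract.
# The prompt wording and the generate() stub are illustrative assumptions,
# not the exact prompts or model interface used by the authors.

ACG_TEMPLATE = "Explain {target} using an analogy."
AEG_TEMPLATE = "Explain how {target} is analogous to {source}."

def generate(prompt: str) -> str:
    """Placeholder for a call to an instruction-tuned PLM.

    Swap in the completion API of your choice; this stub only echoes the prompt.
    """
    return f"[PLM completion for: {prompt!r}]"

def analogous_concept_generation(target: str) -> str:
    # ACG: ask the model to produce a source concept analogous to the target.
    return generate(ACG_TEMPLATE.format(target=target))

def analogous_explanation_generation(target: str, source: str) -> str:
    # AEG: ask the model to explain the similarity between a target/source pair.
    return generate(AEG_TEMPLATE.format(target=target, source=source))

if __name__ == "__main__":
    print(analogous_concept_generation("an electrical circuit"))
    print(analogous_explanation_generation("an electrical circuit",
                                           "a water pipe system"))

The split mirrors the paper's framing: ACG supplies only the target concept and lets the model choose a source, while AEG fixes both concepts and asks only for the explanation of their similarity.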

Tags: analogy, arXiv, case study, InstructGPT, language models, large language models
