Meaning Representations from Trajectories in Autoregressive Models. (arXiv:2310.18348v2 [cs.CL] UPDATED)
cs.LG updates on arXiv.org
We propose to extract meaning representations from autoregressive language
models by considering the distribution of all possible trajectories extending
an input text. This strategy is prompt-free, does not require fine-tuning, and
is applicable to any pre-trained autoregressive model. Moreover, unlike
vector-based representations, distribution-based representations can also model
asymmetric relations (e.g., direction of logical entailment, hypernym/hyponym
relations) by using algebraic operations between likelihood functions. These
ideas are grounded in distributional perspectives on semantics and are
connected to standard constructions in automata …
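The core idea above — representing an input text by the likelihoods its autoregressive model assigns to a shared set of continuation trajectories — can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's implementation: the `BigramLM` class and `representation` function are invented here, and a character-level bigram model stands in for a real pre-trained autoregressive LM.

```python
import math
from collections import defaultdict

class BigramLM:
    """Toy character-level autoregressive model standing in for a real LM."""

    def __init__(self, corpus):
        self.counts = defaultdict(lambda: defaultdict(int))
        for text in corpus:
            for a, b in zip(text, text[1:]):
                self.counts[a][b] += 1

    def logprob(self, prefix, continuation):
        # log p(continuation | prefix) under the bigram model,
        # with add-one smoothing over a nominal 27-symbol alphabet.
        lp = 0.0
        prev = prefix[-1]
        for ch in continuation:
            row = self.counts[prev]
            total = sum(row.values())
            lp += math.log((row[ch] + 1) / (total + 27))
            prev = ch
        return lp

def representation(lm, prefix, trajectories):
    """Distribution-based representation of `prefix`: the vector of
    log-likelihoods the model assigns to each probe trajectory."""
    return [lm.logprob(prefix, t) for t in trajectories]

lm = BigramLM(["the cat sat", "the cat ran"])
probes = ["at", "he"]
rep = representation(lm, "the c", probes)
```

In this sketch two inputs are compared by comparing their likelihood vectors over the same probe trajectories; because likelihoods are functions rather than fixed vectors, algebraic operations between them (as the abstract notes) can also encode asymmetric relations such as entailment direction.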