History Compression via Language Models in Reinforcement Learning. (arXiv:2205.12258v1 [cs.LG])
cs.CL updates on arXiv.org
In a partially observable Markov decision process (POMDP), an agent typically
uses a representation of the past to approximate the underlying MDP. We propose
to utilize a frozen Pretrained Language Transformer (PLT) for history
representation and compression to improve sample efficiency. To avoid training
the Transformer, we introduce FrozenHopfield, which automatically associates
observations with original token embeddings. To form these associations, a
modern Hopfield network stores the original token embeddings, which are
retrieved by queries that are obtained by …
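The retrieval step described above can be sketched roughly as follows. A modern Hopfield network with stored patterns reduces to softmax attention over the stored embeddings; here the query is a random projection of the observation. The projection matrix `W`, the inverse temperature `beta`, and all dimensions are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, emb_dim, obs_dim = 1000, 64, 32
# Frozen token embeddings of the pretrained language Transformer (stored patterns).
E = rng.normal(size=(vocab_size, emb_dim))
# Hypothetical fixed random projection mapping observations into embedding space.
W = rng.normal(size=(emb_dim, obs_dim)) / np.sqrt(obs_dim)

def frozen_hopfield(obs, beta=1.0):
    """Associate an observation with the stored token embeddings.

    One retrieval step of a modern Hopfield network: softmax attention
    over the frozen embeddings, queried by the projected observation.
    """
    q = W @ obs                      # query in embedding space
    scores = beta * (E @ q)          # similarity to each stored embedding
    p = np.exp(scores - scores.max())
    p /= p.sum()                     # softmax weights
    return E.T @ p                   # convex combination of token embeddings

obs = rng.normal(size=obs_dim)
out = frozen_hopfield(obs)
```

The output lies in the convex hull of the token embeddings, so it can be fed to the frozen Transformer without any gradient updates to the language model.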