History Compression via Language Models in Reinforcement Learning. (arXiv:2205.12258v3 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
In a partially observable Markov decision process (POMDP), an agent typically
uses a representation of the past to approximate the underlying MDP. We propose
to utilize a frozen Pretrained Language Transformer (PLT) for history
representation and compression to improve sample efficiency. To avoid training
the Transformer, we introduce FrozenHopfield, which automatically associates
observations with pretrained token embeddings. To form these associations, a
modern Hopfield network stores the token embeddings, which are retrieved by
queries that are obtained by a …
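The retrieval step of a modern Hopfield network amounts to a softmax-weighted combination of stored patterns. The sketch below illustrates that mechanism for the FrozenHopfield idea described above: frozen token embeddings are the stored patterns, and an observation is mapped to a query that retrieves a convex combination of them. The function name, the linear projection `proj`, and all shapes are illustrative assumptions, not the paper's exact construction (the abstract is truncated before the query details).

```python
import numpy as np

def frozen_hopfield_retrieve(obs, proj, token_embeddings, beta=1.0):
    """Associate an observation with frozen pretrained token embeddings
    via modern Hopfield retrieval (softmax over stored patterns).

    obs:              (d_obs,) observation vector
    proj:             (d_emb, d_obs) projection into embedding space
                      (hypothetical stand-in for the paper's query mapping)
    token_embeddings: (n_tokens, d_emb) frozen stored patterns
    beta:             inverse temperature of the softmax retrieval
    """
    query = proj @ obs                        # map observation into embedding space
    scores = beta * (token_embeddings @ query)  # similarity to each stored pattern
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ token_embeddings           # convex combination of embeddings
```

Because the result is a convex combination of pretrained embeddings, it lies in the space the frozen language Transformer was trained on, which is what lets the Transformer process observations without any fine-tuning.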
Tags: arxiv, compression, history, language, language models, learning, reinforcement, reinforcement learning